The Industrial Electronics Handbook
Second Edition

Control and Mechatronics

© 2011 by Taylor and Francis Group, LLC

The Industrial Electronics Handbook
Second Edition

Fundamentals of Industrial Electronics
Power Electronics and Motor Drives
Control and Mechatronics
Industrial Communication Systems
Intelligent Systems


The Electrical Engineering Handbook Series
Series Editor

Richard C. Dorf

University of California, Davis

Titles Included in the Series
The Avionics Handbook, Second Edition, Cary R. Spitzer
The Biomedical Engineering Handbook, Third Edition, Joseph D. Bronzino
The Circuits and Filters Handbook, Third Edition, Wai-Kai Chen
The Communications Handbook, Second Edition, Jerry Gibson
The Computer Engineering Handbook, Vojin G. Oklobdzija
The Control Handbook, Second Edition, William S. Levine
CRC Handbook of Engineering Tables, Richard C. Dorf
Digital Avionics Handbook, Second Edition, Cary R. Spitzer
The Digital Signal Processing Handbook, Vijay K. Madisetti and Douglas Williams
The Electric Power Engineering Handbook, Second Edition, Leonard L. Grigsby
The Electrical Engineering Handbook, Third Edition, Richard C. Dorf
The Electronics Handbook, Second Edition, Jerry C. Whitaker
The Engineering Handbook, Third Edition, Richard C. Dorf
The Handbook of Ad Hoc Wireless Networks, Mohammad Ilyas
The Handbook of Formulas and Tables for Signal Processing, Alexander D. Poularikas
Handbook of Nanoscience, Engineering, and Technology, Second Edition, William A. Goddard, III, Donald W. Brenner, Sergey E. Lyshevski, and Gerald J. Iafrate
The Handbook of Optical Communication Networks, Mohammad Ilyas and Hussein T. Mouftah
The Industrial Electronics Handbook, Second Edition, Bogdan M. Wilamowski and J. David Irwin
The Measurement, Instrumentation, and Sensors Handbook, John G. Webster
The Mechanical Systems Design Handbook, Osita D.I. Nwokah and Yidirim Hurmuzlu
The Mechatronics Handbook, Second Edition, Robert H. Bishop
The Mobile Communications Handbook, Second Edition, Jerry D. Gibson
The Ocean Engineering Handbook, Ferial El-Hawary
The RF and Microwave Handbook, Second Edition, Mike Golio
The Technology Management Handbook, Richard C. Dorf
Transforms and Applications Handbook, Third Edition, Alexander D. Poularikas
The VLSI Handbook, Second Edition, Wai-Kai Chen


The Industrial Electronics Handbook
Second Edition

Control and Mechatronics

Edited by
Bogdan M. Wilamowski
J. David Irwin


MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2011 by Taylor and Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number: 978-1-4398-0287-8 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access (http:// or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data

Control and mechatronics / editors, Bogdan M. Wilamowski and J. David Irwin.
p. cm.
"A CRC title."
Includes bibliographical references and index.
ISBN 978-1-4398-0287-8 (alk. paper)
1. Mechatronics. 2. Electronic control. 3. Servomechanisms. I. Wilamowski, Bogdan M. II. Irwin, J. David. III. Title.
TJ163.12.C67 2010
629.8'043--dc22
2010020062

Visit the Taylor & Francis Web site at and the CRC Press Web site at


Contents

Preface
Acknowledgments
Editorial Board
Editors
Contributors

Part I  Control System Analysis

1 Nonlinear Dynamics
István Nagy and Zoltán Sütő

2 Basic Feedback Concept
Tong Heng Lee, Kok Zuea Tang, and Kok Kiong Tan

3 Stability Analysis
Naresh K. Sinha

4 Frequency-Domain Analysis of Relay Feedback Systems
Igor M. Boiko

5 Linear Matrix Inequalities in Automatic Control
Miguel Bernal and Thierry Marie Guerra

6 Motion Control Issues
Roberto Oboe, Makoto Iwasaki, Toshiyuki Murakami, and Seta Bogosyan

7 New Methodology for Chatter Stability Analysis in Simultaneous Machining
Nejat Olgac and Rifat Sipahi

Part II  Control System Design

8 Internal Model Control
James C. Hung

9 Dynamic Matrix Control
James C. Hung

10 PID Control
James C. Hung and Joel David Hewlett

11 Nyquist Criterion
James R. Rowland

12 Root Locus Method
Robert J. Veillette and J. Alexis De Abreu Garcia

13 Variable Structure Control Techniques
Asif Šabanović and Nadira Šabanović-Behlilović

14 Digital Control
Timothy N. Chang and John Y. Hung

15 Phase-Lock-Loop-Based Control
Guan-Chyun Hsieh

16 Optimal Control
Victor M. Becerra

17 Time-Delay Systems
Emilia Fridman

18 AC Servo Systems
Yong Feng, Liuping Wang, and Xinghuo Yu

19 Predictive Repetitive Control with Constraints
Liuping Wang, Shan Chai, and Eric Rogers

20 Backstepping Control
Jing Zhou and Changyun Wen

21 Sensors
Tiantian Xie and Bogdan M. Wilamowski

22 Soft Computing Methodologies in Sliding Mode Control
Xinghuo Yu and Okyay Kaynak

Part III  Estimation, Observation, and Identification

23 Adaptive Estimation
Seta Bogosyan, Metin Gokasan, and Fuat Gurleyen

24 Observers in Dynamic Engineering Systems
Christopher Edwards and Chee Pin Tan

25 Disturbance Observation–Cancellation Technique
Kouhei Ohnishi

26 Ultrasonic Sensors
Lindsay Kleeman

27 Robust Exact Observation and Identification via High-Order Sliding Modes
Leonid Fridman, Arie Levant, and Jorge Angel Davila Montoya




Part IV  Modeling and Control

28 Modeling for System Control
A. John Boye

29 Intelligent Mechatronics and Robotics
Satoshi Suzuki and Fumio Harashima

30 State-Space Approach to Simulating Dynamic Systems in SPICE
Joel David Hewlett and Bogdan M. Wilamowski

31 Iterative Learning Control for Torque Ripple Minimization of Switched Reluctance Motor Drive
Sanjib Kumar Sahoo, Sanjib Kumar Panda, and Jian-Xin Xu

32 Precise Position Control of Piezo Actuator
Jian-Xin Xu and Sanjib Kumar Panda

33 Hardware-in-the-Loop Simulation
Alain Bouscayrol

Part V  Mechatronics and Robotics

34 Introduction to Mechatronic Systems
Ren C. Luo and Chin F. Lin

35 Actuators in Robotics and Automation Systems
Choon-Seng Yee and Marcelo H. Ang Jr.

36 Robot Qualities
Raymond Jarvis

37 Robot Vision
Raymond Jarvis

38 Robot Path Planning
Raymond Jarvis

39 Mobile Robots
Miguel A. Salichs, Ramón Barber, and María Malfaz

Index


Preface

The field of industrial electronics covers a plethora of problems that must be solved in industrial practice. Electronic systems control many processes that begin with the control of relatively simple devices like electric motors, through more complicated devices such as robots, to the control of entire fabrication processes. An industrial electronics engineer deals with many physical phenomena as well as the sensors that are used to measure them. Thus, the knowledge required by this type of engineer is not only traditional electronics but also specialized electronics, for example, that required for high-power applications. The importance of electronic circuits extends well beyond their use as a final product in that they are also important building blocks in large systems, and thus the industrial electronics engineer must also possess knowledge of the areas of control and mechatronics. Since most fabrication processes are relatively complex, there is an inherent requirement for the use of communication systems that not only link the various elements of the industrial process but are also tailor-made for the specific industrial environment. Finally, the efficient control and supervision of factories require the application of intelligent systems in a hierarchical structure to address the needs of all components employed in the production process. This need is met through the use of intelligent systems such as neural networks, fuzzy systems, and evolutionary methods. The Industrial Electronics Handbook addresses all these issues and does so in five books outlined as follows:

1. Fundamentals of Industrial Electronics
2. Power Electronics and Motor Drives
3. Control and Mechatronics
4. Industrial Communication Systems
5. Intelligent Systems

The editors have gone to great lengths to ensure that this handbook is as current and up to date as possible. Thus, this book closely follows the current research and trends in applications that can be found in IEEE Transactions on Industrial Electronics. This journal is not only one of the largest engineering publications of its type in the world but also one of the most respected. In all technical categories in which this journal is evaluated, its worldwide ranking is either number 1 or number 2. As a result, we believe that this handbook, which is written by the world's leading researchers in the field, presents the global trends in the ubiquitous area commonly known as industrial electronics.

The successful construction of industrial systems requires an understanding of the various aspects of control theory. This area of engineering, like that of power electronics, is also seldom covered in depth in engineering curricula at the undergraduate level. In addition, the fact that much of the research in control theory focuses more on the mathematical aspects of control than on its practical applications makes matters worse. Therefore, the goal of Control and Mechatronics is to present many of the concepts of control theory in a manner that facilitates its understanding by practicing engineers or students who would like to learn about the applications of control systems.

Control and Mechatronics is divided into several parts. Part I is devoted to control system analysis, while Part II deals with control system design.



Various techniques used for the analysis and design of control systems are described and compared in these two parts. Part III deals with estimation, observation, and identification and is dedicated to the identification of the objects to be controlled. The importance of this part stems from the fact that in order to efficiently control a system, it must first be clearly identified. In an industrial environment, it is difficult to experiment with production lines. As a result, it is imperative that good models be developed to represent these systems. This modeling aspect of control is covered in Part IV. Many modern factories have more robots than humans. Therefore, the importance of mechatronics and robotics can never be overemphasized. The various aspects of robotics and mechatronics are described in Part V.

In all the material that has been presented, the underlying central theme has been to consciously avoid the typical theorems and proofs and use plain English and examples instead, which can be easily understood by students and practicing engineers alike.

For MATLAB® and Simulink® product information, please contact:

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA, 01760-2098 USA
Tel: 508-647-7000
Fax: 508-647-7001
E-mail:
Web:


Acknowledgments

The editors wish to express their heartfelt thanks to their wives, Barbara Wilamowski and Edie Irwin, for their help and support during the execution of this project.


Editorial Board
Timothy N. Chang, New Jersey Institute of Technology, Newark, New Jersey
Okyay Kaynak, Bogazici University, Istanbul, Turkey
Ren C. Luo, National Taiwan University, Taipei, Taiwan
István Nagy, Budapest University of Technology and Economics, Budapest, Hungary
Kouhei Ohnishi, Keio University, Yokohama, Japan
James R. Rowland, University of Kansas, Lawrence, Kansas
Xinghuo Yu, RMIT University, Melbourne, Victoria, Australia


Editors

Bogdan M. Wilamowski received his MS in computer engineering in 1966, his PhD in neural computing in 1970, and his Dr. habil. in integrated circuit design in 1977. He received the title of full professor from the president of Poland in 1987. He was the director of the Institute of Electronics (1979–1981) and the chair of the solid state electronics department (1987–1989) at the Technical University of Gdansk, Poland. He was a professor at the University of Wyoming, Laramie, from 1989 to 2000. From 2000 to 2003, he served as an associate director at the Microelectronics Research and Telecommunication Institute, University of Idaho, Moscow, and as a professor in the electrical and computer engineering department and in the computer science department at the same university. Currently, he is the director of ANMSTC—Alabama Nano/Micro Science and Technology Center, Auburn, and an alumna professor in the electrical and computer engineering department at Auburn University, Alabama. Dr. Wilamowski was with the Communication Institute at Tohoku University, Japan (1968–1970), and spent one year at the Semiconductor Research Institute, Sendai, Japan, as a JSPS fellow (1975–1976). He was also a visiting scholar at Auburn University (1981–1982 and 1995–1996) and a visiting professor at the University of Arizona, Tucson (1982–1984). He is the author of 4 textbooks and more than 300 refereed publications, and has 27 patents. He was the principal professor for about 130 graduate students. His main areas of interest include semiconductor devices and sensors, mixed signal and analog signal processing, and computational intelligence.

Dr. Wilamowski was the vice president of the IEEE Computational Intelligence Society (2000–2004) and the president of the IEEE Industrial Electronics Society (2004–2005). He served as an associate editor of IEEE Transactions on Neural Networks, IEEE Transactions on Education, IEEE Transactions on Industrial Electronics, the Journal of Intelligent and Fuzzy Systems, the Journal of Computing, and the International Journal of Circuit Systems and IES Newsletter. He is currently serving as the editor in chief of IEEE Transactions on Industrial Electronics. Professor Wilamowski is an IEEE fellow and an honorary member of the Hungarian Academy of Science. In 2008, he was awarded the Commander Cross of the Order of Merit of the Republic of Poland for outstanding service in the proliferation of international scientific collaborations and for achievements in the areas of microelectronics and computer science by the president of Poland.

J. David Irwin received his BEE from Auburn University, Alabama, in 1961, and his MS and PhD from the University of Tennessee, Knoxville, in 1962 and 1967, respectively. In 1967, he joined Bell Telephone Laboratories, Inc., Holmdel, New Jersey, as a member of the technical staff and was made a supervisor in 1968. He then joined Auburn University in 1969 as an assistant professor of electrical engineering. He was made an associate professor in 1972, associate professor and head of department in 1973, and professor and head in 1976. He served as head of the Department of Electrical and Computer Engineering from 1973 to 2009. From 1982 to 1984, he was also head of the Department of Computer Science and Engineering. He is currently the Earle C. Williams Eminent Scholar in Electrical and Computer Engineering at Auburn.

Dr. Irwin has served the Institute of Electrical and Electronic Engineers, Inc. (IEEE) Computer Society as a member of the Education Committee and as education editor of Computer. He served for two years as editor of IEEE Transactions on Industrial Electronics. He has served on the Executive Committee of the Southeastern Center for Electrical Engineering Education, Inc., and was president of the organization in 1983–1984. He has served as chairman of the Southeastern Association of Electrical Engineering Department Heads and the National Association of Electrical Engineering Department Heads and is past president of both the IEEE Industrial Electronics Society and the IEEE Education Society. He has served as an IEEE Adhoc Visitor for ABET Accreditation teams, as a member of the IEEE Educational Activities Board, and was the accreditation coordinator for IEEE in 1989. He has served as a member of numerous IEEE committees, including the Lamme Medal Award Committee, the Fellow Committee, the Nominations and Appointments Committee, and the Admission and Advancement Committee. He has served as a member of the board of directors of IEEE Press. He has also served as a member of the Secretary of the Army's Advisory Panel for ROTC Affairs, as a nominations chairman for the National Electrical Engineering Department Heads Association, as a member of the IEEE Education Society's McGraw-Hill/Jacob Millman Award Committee, and as chair of the IEEE Undergraduate and Graduate Teaching Award Committee. He is a member of the board of governors and past president of Eta Kappa Nu, the ECE Honor Society. He is a life member of the IEEE Industrial Electronics Society AdCom and has served as a member of the Oceanic Engineering Society AdCom. He has been and continues to be involved in the management of several international conferences sponsored by the IEEE Industrial Electronics Society, and served as general cochair for IECON'05.

Dr. Irwin is the author and coauthor of numerous publications, papers, patent applications, and presentations, including Basic Engineering Circuit Analysis, 9th edition, published by John Wiley & Sons, which is one among his 16 textbooks. His textbooks, which span a wide spectrum of engineering subjects, have been published by Macmillan Publishing Company, Prentice Hall Book Company, John Wiley & Sons Book Company, and IEEE Press. He is also the editor in chief of a large handbook published by CRC Press and is the series editor for the Industrial Electronics Handbook for CRC Press.

Dr. Irwin is a fellow of the American Association for the Advancement of Science, the American Society for Engineering Education, and the Institute of Electrical and Electronic Engineers. He received an IEEE Centennial Medal in 1984 and was awarded the Bliss Medal by the Society of American Military Engineers in 1985. He received the IEEE Industrial Electronics Society's Anthony J. Hornfeck Outstanding Service Award in 1986 and was named IEEE Region III (U.S. Southeastern Region) Outstanding Engineering Educator in 1989. In 1991, he received a Meritorious Service Citation from the IEEE Educational Activities Board, the 1991 Eugene Mittelmann Achievement Award from the IEEE Industrial Electronics Society, and the 1991 Achievement Award from the IEEE Education Society. In 1992, he was named a Distinguished Auburn Engineer. In 1993, he received the IEEE Education Society's McGraw-Hill/Jacob Millman Award and was named Earle C. Williams Eminent Scholar and Head, and in 1998 he was the recipient of the IEEE Undergraduate Teaching Award. In 2000, he received an IEEE Third Millennium Medal and the IEEE Richard M. Emberson Award. In 2001, he received the American Society for Engineering Education's (ASEE) ECE Distinguished Educator Award. In 2004, Dr. Irwin was made an honorary professor, Institute for Semiconductors, Chinese Academy of Science, Beijing, China. In 2005, he received the IEEE Education Society's Meritorious Service Award, and in 2006, he received the IEEE Educational Activities Board Vice President's Recognition Award. He received the Diplome of Honor from the University of Patras, Greece, in 2007, and in 2008 he was awarded the IEEE IES Technical Committee on Factory Automation's Lifetime Achievement Award. In 2010, he was awarded the electrical and computer engineering department head's Robert M. Janowiak Outstanding Leadership and Service Award. In addition, he is a member of the following honor societies: Sigma Xi, Phi Kappa Phi, Tau Beta Pi, Eta Kappa Nu, Pi Mu Epsilon, and Omicron Delta Kappa.

Contributors

Marcelo H. Ang Jr., Department of Mechanical Engineering, National University of Singapore, Singapore
Ramón Barber, Department of System Engineering and Automation, University Carlos III, Madrid, Spain
Victor M. Becerra, School of Systems Engineering, University of Reading, Reading, United Kingdom
Miguel Bernal, Centro Universitario de los Valles, University of Guadalajara, Jalisco, Mexico
Seta Bogosyan, Electrical and Computer Engineering Department, University of Alaska Fairbanks, Fairbanks, Alaska
Igor M. Boiko, Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada
Alain Bouscayrol, Laboratoire d'Electrotechnique et d'Electronique de Puissance de Lille, University of Lille 1, Lille, France
A. John Boye, Department of Electrical Engineering, University of Nebraska and Neurintel, LLC, Lincoln, Nebraska
Shan Chai, School of Electrical and Computer Engineering, RMIT University, Melbourne, Victoria, Australia
Timothy N. Chang, Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, New Jersey
J. Alexis De Abreu Garcia, Department of Electrical and Computer Engineering, The University of Akron, Akron, Ohio
Christopher Edwards, Department of Engineering, University of Leicester, Leicester, United Kingdom
Yong Feng, School of Electrical and Computer Engineering, RMIT University, Melbourne, Victoria, Australia
Emilia Fridman, Department of Electrical Engineering—Systems, Tel Aviv University, Tel Aviv, Israel
Leonid Fridman, Control and Advanced Robotics Department, National Autonomous University of Mexico, Mexico City, Mexico
Metin Gokasan, Control Engineering Department, Istanbul Technical University, Istanbul, Turkey
Thierry Marie Guerra, Laboratory of Industrial and Human Automation Control, Mechanical Engineering and Computer Science, University of Valenciennes and Hainaut-Cambresis, Valenciennes, France
Fuat Gurleyen, Control Engineering Department, Istanbul Technical University, Istanbul, Turkey
Fumio Harashima, Tokyo Metropolitan University, Tokyo, Japan
Joel David Hewlett, Department of Electrical and Computer Engineering, Auburn University, Auburn, Alabama
Guan-Chyun Hsieh, Department of Electrical Engineering, Chung Yuan Christian University, Chung-Li, Taiwan
James C. Hung, Department of Electrical Engineering and Computer Science, The University of Tennessee, Knoxville, Knoxville, Tennessee
John Y. Hung, Department of Electrical and Computer Engineering, Auburn University, Auburn, Alabama
Makoto Iwasaki, Department of Computer Science and Engineering, Nagoya Institute of Technology, Nagoya, Japan
Raymond Jarvis, Intelligent Robotics Research Centre, Monash University, Melbourne, Victoria, Australia
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey
Lindsay Kleeman, Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Victoria, Australia
Tong Heng Lee, Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Arie Levant, Applied Mathematics Department, Tel Aviv University, Tel Aviv, Israel
Chin F. Lin, Department of Electrical Engineering, National Chung Cheng University, Chia-Yi, Taiwan
Ren C. Luo, Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
María Malfaz, Department of System Engineering and Automation, University Carlos III, Madrid, Spain
Jorge Angel Davila Montoya, Aeronautic Engineering Department, National Polytechnic Institute, Mexico City, Mexico
Toshiyuki Murakami, Department of System Design Engineering, Keio University, Yokohama, Japan
István Nagy, Department of Automation and Applied Informatics, Budapest University of Technology and Economics, Budapest, Hungary
Roberto Oboe, Department of Management and Engineering, University of Padova, Vicenza, Italy
Kouhei Ohnishi, Department of System Design Engineering, Keio University, Yokohama, Japan
Nejat Olgac, Department of Mechanical Engineering, University of Connecticut, Storrs, Connecticut
Sanjib Kumar Panda, Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Eric Rogers, School of Electronics and Computer Science, University of Southampton, Southampton, United Kingdom
James R. Rowland, Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, Kansas
Asif Šabanović, Faculty of Engineering and Natural Sciences, Sabanci University, Istanbul, Turkey
Nadira Šabanović-Behlilović, Faculty of Engineering and Natural Sciences, Sabanci University, Istanbul, Turkey
Sanjib Kumar Sahoo, Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Miguel A. Salichs, Systems Engineering and Automation Department, University Carlos III, Madrid, Spain
Naresh K. Sinha, Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, Canada
Rifat Sipahi, Department of Mechanical and Industrial Engineering, Northeastern University, Boston, Massachusetts
Zoltán Sütő, Department of Automation and Applied Informatics, Budapest University of Technology and Economics, Budapest, Hungary
Satoshi Suzuki, Department of Robotics and Mechatronics, School of Science and Technology for Future Life, Tokyo Denki University, Tokyo, Japan
Kok Kiong Tan, Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Chee Pin Tan, School of Engineering, Monash University, Malaysia
Kok Zuea Tang, Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Robert J. Veillette, Department of Electrical and Computer Engineering, The University of Akron, Akron, Ohio
Liuping Wang, School of Electrical and Computer Engineering, RMIT University, Melbourne, Victoria, Australia
Changyun Wen, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Bogdan M. Wilamowski, Department of Electrical and Computer Engineering, Auburn University, Auburn, Alabama
Tiantian Xie, Department of Electrical and Computer Engineering, Auburn University, Auburn, Alabama
Jian-Xin Xu, Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Choon-Seng Yee, Department of Mechanical Engineering, National University of Singapore, Singapore
Xinghuo Yu, School of Electrical and Computer Engineering, RMIT University, Melbourne, Victoria, Australia
Jing Zhou, Petroleum Department, International Research Institute of Stavanger, Bergen, Norway

Part I  Control System Analysis .......... I-1

1  Nonlinear Dynamics  István Nagy and Zoltán Sütő .......... 1-1
   Introduction  •  Basics  •  Equilibrium Points  •  Limit Cycle  •  Quasi-Periodic and Frequency-Locked State  •  Dynamical Systems Described by Discrete-Time Variables: Maps  •  Invariant Manifolds: Homoclinic and Heteroclinic Orbits  •  Transitions to Chaos  •  Chaotic State  •  Examples from Power Electronics  •  Acknowledgments  •  References

2  Basic Feedback Concept  Tong Heng Lee, Kok Zuea Tang, and Kok Kiong Tan .......... 2-1
   Basic Feedback Concept  •  Bibliography

3  Stability Analysis  Naresh K. Sinha .......... 3-1
   Introduction  •  States of Equilibrium  •  Stability of Linear Time-Invariant Systems  •  Stability of Linear Discrete-Time Systems  •  Stability of Nonlinear Systems  •  References

4  Frequency-Domain Analysis of Relay Feedback Systems  Igor M. Boiko .......... 4-1
   Relay Feedback Systems  •  Locus of a Perturbed Relay System Theory  •  Design of Compensating Filters in Relay Feedback Systems  •  References

5  Linear Matrix Inequalities in Automatic Control  Miguel Bernal and Thierry Marie Guerra .......... 5-1
   What Are LMIs?  •  What Are LMIs Good For?  •  References

6  Motion Control Issues  Roberto Oboe, Makoto Iwasaki, Toshiyuki Murakami, and Seta Bogosyan .......... 6-1
   Introduction  •  High-Accuracy Motion Control  •  Motion Control and Interaction with the Environment  •  Remote Motion Control  •  Conclusions  •  References

7  New Methodology for Chatter Stability Analysis in Simultaneous Machining  Nejat Olgac and Rifat Sipahi .......... 7-1
   Introduction and a Review of Single Tool Chatter  •  Regenerative Chatter in Simultaneous Machining  •  CTCR Methodology  •  Example Case Studies  •  Optimization of the Process  •  Conclusion  •  Acknowledgments  •  References

1
Nonlinear Dynamics

István Nagy
Budapest University of Technology and Economics

Zoltán Sütő
Budapest University of Technology and Economics

1.1   Introduction .......................................................... 1-1
1.2   Basics ................................................................... 1-2
      Classification  •  Restrictions  •  Mathematical Description
1.3   Equilibrium Points ................................................ 1-5
      Introduction  •  Basin of Attraction  •  Linearizing around the EP  •  Stability  •  Classification of EPs, Three-Dimensional State Space (N = 3)  •  No-Intersection Theorem
1.4   Limit Cycle .......................................................... 1-9
      Introduction  •  Poincaré Map Function (PMF)  •  Stability
1.5   Quasi-Periodic and Frequency-Locked State ............ 1-12
      Introduction  •  Nonlinear Systems with Two Frequencies  •  Geometrical Interpretation  •  N-Frequency Quasi-Periodicity
1.6   Dynamical Systems Described by Discrete-Time Variables: Maps ........ 1-14
      Introduction  •  Fixed Points  •  Mathematical Approach  •  Graphical Approach  •  Study of Logistic Map  •  Stability of Cycles
1.7   Invariant Manifolds: Homoclinic and Heteroclinic Orbits ........ 1-21
      Introduction  •  Invariant Manifolds, CTM  •  Invariant Manifolds, DTM  •  Homoclinic and Heteroclinic Orbits, CTM
1.8   Transitions to Chaos ............................................. 1-24
      Introduction  •  Period-Doubling Bifurcation  •  Period-Doubling Scenario in General
1.9   Chaotic State ....................................................... 1-26
      Introduction  •  Lyapunov Exponent
1.10  Examples from Power Electronics ......................... 1-27
      Introduction  •  High-Frequency Time-Sharing Inverter  •  Dual Channel Resonant DC–DC Converter  •  Hysteresis Current-Controlled Three-Phase VSC  •  Space Vector Modulated VSC with Discrete-Time Current Control  •  Direct Torque Control
Acknowledgments ........................................................ 1-41
References .................................................................. 1-41

1.1  Introduction

A new class of phenomena has recently been discovered in nonlinear dynamics, three centuries after the publication of Newton's Principia (1687). New concepts and terms have entered the vocabulary to replace time functions and frequency spectra in describing their behavior, e.g., chaos, bifurcation, period doubling, Poincaré map, Lyapunov exponent, fractal, strange attractor, etc. Until recently, chaos and order have been viewed as mutually exclusive. Newton's laws describe the processes in classical mechanics. Maxwell's equations govern the electromagnetic phenomena.

They represent the world of order, which is predictable. Processes were called chaotic when they failed to obey laws and they were unpredictable. It was hoped that, knowing all the laws governing our global weather, nothing would prevent us from predicting accurately its future developments; this illusion has been fueled by the breathtaking advances in computers, promising ever-increasing computing power in information processing. Researchers on the frontier of natural science have recently proclaimed that this hope is unjustified, because a large number of phenomena in nature governed by known simple laws are or can be chaotic. Our global weather is one of them; therefore, we are unable to predict it.

It came as an unexpected discovery that deterministic systems obeying simple laws, belonging undoubtedly to the world of order and believed to be completely predictable, can turn chaotic. One of their principal properties is their sensitive dependence on initial conditions. Although the most precise measurement indicates that two paths have been launched from the same initial condition, there are always some tiny, impossible-to-measure discrepancies that shift the paths along very different trajectories. The uncertainty in the initial measurements will be amplified and become overwhelming after a short time. A mechanical or a fluid system, governed by simple laws, can turn easily from order to chaos, e.g., from laminar flow into turbulent flow.

One of the first chaotic processes discovered in electronics can be shown in the diode resonator, consisting of a series connection of a p–n junction diode and a 10–100 mH inductor driven by a sine wave generator of 50–100 kHz. Another very early example, in physics, came from the atmospheric science in 1963: Lorenz's three differential equations, derived from the Navier–Stokes equations of fluid mechanics, describing the thermally induced fluid convection in the atmosphere. In mathematics, the study of the quadratic iterator (logistic equation or population growth model) [xn+1 = a xn(1 − xn), n = 0, 1, 2, …] revealed the close link between chaos and order [5]. These two can be viewed as the two principal paradigms of the theory of chaos.

Although chaos and order have been believed to be quite distinct faces of our world, just the opposite has happened, and there were tricky questions to be answered. The existence of well-defined routes leading from order to chaos was the second great discovery, and again a big surprise like the first one, showing that a deterministic system can be chaotic. The chaos theory, although admittedly still young, has spread like wildfire into all branches of science. It has overturned the classic view held since Newton and Laplace, stating that our universe is predictable. The irony of fate is that, without the aid of computers, the modern theory of chaos and its geometry, the fractals (Peitgen et al. [9]), could have never been developed.

The overview of nonlinear dynamics here has two parts. The main objective in the first part is to summarize the state of the art in the advanced theory of nonlinear dynamical systems. Within the overview, five basic states or scenarios of nonlinear systems are treated: equilibrium point, limit cycle, quasi-periodic (frequency-locked) state, routes to chaos, and chaotic state. There will be some words about the connection between the chaotic state and fractal geometry. In the second part, the application of the theory is illustrated in five examples from the field of power electronics. They are as follows: the high-frequency time-sharing inverter, the voltage control of a dual-channel resonant DC–DC converter, a sophisticated hysteresis current control, a discrete-time current control equipped with space vector modulation (SVM), and the direct torque control (DTC) applied widely in AC drives.

The theory of nonlinear dynamics is strongly associated with the bifurcation theory. Modifying the parameters of a nonlinear system, the location and the number of equilibrium points can change. The study of these problems is the subject of bifurcation theory.

1.2  Basics

1.2.1  Classification

The nonlinear dynamical systems have two broad classes: (1) autonomous systems and (2) nonautonomous systems. Both are described by a set of first-order nonlinear differential equations and can be represented in state (phase) space. The number of differential equations equals the degree of freedom (or dimension) of the system, which is the number of independent state variables needed to determine uniquely the dynamical state of the system.
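The quadratic iterator cited earlier as one of the two paradigms of chaos theory is easy to experiment with numerically before any formal machinery is introduced. A minimal Python sketch (our own illustration, not part of the chapter; a = 4 is a standard parameter choice for which the map is chaotic on [0, 1]):

```python
def logistic(a, x):
    """One step of the quadratic iterator x_{n+1} = a * x_n * (1 - x_n)."""
    return a * x * (1.0 - x)

def orbit(a, x0, n):
    """First n iterates of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(logistic(a, xs[-1]))
    return xs

# Sensitive dependence on initial conditions: two orbits launched from
# almost the same point separate completely after a few dozen iterations.
o1 = orbit(4.0, 0.100000, 60)
o2 = orbit(4.0, 0.100001, 60)
separation = max(abs(u - v) for u, v in zip(o1, o2))
```

Printing `separation` shows a gap of order one even though the initial gap was only 10⁻⁶, which is exactly the amplification of measurement uncertainty described above.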

1.2.1.1  Autonomous Systems

There are no external input or forcing functions applied to the system. The set of nonlinear differential equations describing the system is

    dx/dt = v = f(x, 𝛍)    (1.1)

where
    xT = [x1, x2, …, xN] is the state vector
    vT = [v1, v2, …, vN] is the velocity vector
    f T = [f1, f2, …, fN] is the nonlinear vector function
    𝛍T = [μ1, μ2, …, μN] is the parameter vector
    T denotes the transpose of a vector
    t is the time
    N is the dimension of the system

The time t does not appear explicitly.

1.2.1.2  Non-Autonomous Systems

Time-dependent external inputs or forcing functions u(t) are applied to the system. It is a set of nonlinear differential equations:

    dx/dt = v = f(x, u(t), 𝛍)    (1.2)

Time t explicitly appears in u(t).

1.2.2  Restrictions

The non-autonomous system can always be transformed to an autonomous system by introducing a new state variable xN+1 = t. Now the last differential equation in (1.1) is

    dxN+1/dt = dt/dt = 1    (1.3)

The number of dimensions of the state space was enlarged by one by including the time as a state variable. From now on, we confine our consideration to autonomous systems unless it is stated otherwise. By this restriction, there is no loss of generality.

Equations (1.1) and (1.2) can be solved analytically or numerically for a given initial condition x0 and parameter vector 𝛍. The solution describes the state of the system as a function of time. The solution can be visualized in a reference frame where the state variables are the coordinates; it is called the state space or phase space. At any instant, a point in the state space represents the state of the system. As the time evolves, the state point is moving along a path called trajectory or orbit, starting from the initial condition.

The discussion is confined to real, dissipative systems. The trajectories are assumed to be bounded, as most physical systems are; they cannot go to infinity. We focus our considerations on the long-term behavior of the system rather than analyzing the start-up and transient processes. As time evolves, the state variables will head for some final point, curve, area, or whatever geometric object in the state space. They are called the attractor for the particular system, since certain dedicated trajectories will be attracted to these objects.

1.2.3  Mathematical Description

Basically, two different concepts applying different approaches are used. The first concept considers all the state variables as continuous quantities, applying the continuous-time model (CTM). The system behavior is described by a moving point in state space, resulting in a trajectory (or flow) obtained by the solution of the set of differential equations (1.1) [or (1.2)]. Figure 1.1 shows the continuous trajectory of function ϕ(x0, t, 𝛍) for a three-dimensional system, where x0 is the initial point.

The second concept takes samples from the continuously changing state variables and describes the system behavior by a discrete vector function, applying the Poincaré concept. A so-called Poincaré plane, in general a Poincaré section, is chosen, and the intersection points cut by the trajectory are recorded as samples. The Poincaré section is a hyperplane for systems with dimension higher than three, while it is a point and a straight line for a one- and a two-dimensional system, respectively. The selection of the Poincaré plane is not crucial as long as the trajectory cuts the surface transversely. Figure 1.1 shows the way the samples are taken for a three-dimensional autonomous system. Here, xn and xn+1 are the intersection points of the trajectory with the Poincaré plane. The relation between subsequent intersection points generated always from the same direction is described by the so-called Poincaré map function (PMF)

    xn+1 = P(xn)    (1.4)

Pay attention: the subscript of x denotes the time instant, not a component of vector x. For a non-autonomous system having a periodic forcing function, the samples are taken at a definite phase of the forcing function, e.g., at the beginning of the period. It is a stroboscopic sampling; e.g., the state variables for a mechanical system are recorded with a flash lamp fired once in every period of the forcing function [8]. Knowing that the trajectories are the solution of the differential equation system, which are unique and deterministic, it implies the existence of a mathematical relation between xn and xn+1, i.e., the existence of the PMF.
the subscript of x denotes the time instant.and a two-dimensional system. t. respectively. It is a stroboscopic sampling. or whatever geometric object in the state space. As time evolves. The Poincaré section is a hyperplane for systems with dimension higher than three. 𝛍) for three-dimensional system.3  Mathematical Description Basically. We focus our considerations on the long-term behavior of the system rather than analyzing the start-up and transient processes. the system behavior is described by a moving point in state space resulting in a trajectory (or flow) obtained by the solution of the set of differential equations (1..1  Trajectory described by 𝛟(x0. the PMF describes the relation between sampled values of the state variables. it implies the existence of a mathematical relation between xn and xn+1. between subsequent intersection points generated always from the same direction are described by the so-called Poincaré map function (PMF) x n +1 = P ( x n ) (1. not a component of vector x. Figure 1. The selection of the Poincaré plane is not crucial as long as the trajectory cuts the surface transversely. © 2011 by Taylor and Francis Group. xn. Figure 1. is chosen and the intersection points cut by the trajectory are recorded as samples. The first concept considers all the state variables as continuous quantities applying continuous-time model (CTM). dissipative systems. As time evolves. where x 0 is the initial point. curve. They are called the attractor for the particular system since certain dedicated trajectories will be attracted to these objects. the state variables for a mechanical system are recorded with a flash lamp fired once in every period of the forcing function [8]. t. t. The relation between xn and xn+1.4) Pay attention. The second concept takes samples from the continuously changing state variables and describes the system behavior by discrete vector function applying the Poincaré concept. i. 𝛍). μ) FIGURE 1. 
FIGURE 1.1  Trajectory described by ϕ(x0, t, 𝛍), with its intersection points xn and xn+1 = P(xn) on the Poincaré plane (N = 3).
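For a non-autonomous system with periodic forcing, the stroboscopic sampling described above takes one sample per forcing period. A minimal sketch (the forced oscillator and all names here are our own illustrative choices): for a damped linear oscillator, the samples settle onto a single point, reflecting the periodic steady state.

```python
import math

def rk4_step(f, t, x, dt):
    """One Runge-Kutta step for a time-dependent system dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, [xi + dt / 2 * ki for xi, ki in zip(x, k1)])
    k3 = f(t + dt / 2, [xi + dt / 2 * ki for xi, ki in zip(x, k2)])
    k4 = f(t + dt, [xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def forced_oscillator(t, x):
    """x'' + 0.3 x' + x = cos(1.5 t) as a first-order system, cf. (1.2)."""
    return [x[1], -x[0] - 0.3 * x[1] + math.cos(1.5 * t)]

T = 2 * math.pi / 1.5          # forcing period: one stroboscopic sample per T
steps = 500                    # integration steps per period
samples, x, t = [], [0.0, 0.0], 0.0
for _ in range(30):            # 30 forcing periods
    for _ in range(steps):
        x = rk4_step(forced_oscillator, t, x, T / steps)
        t += T / steps
    samples.append(tuple(x))

gap_first = math.dist(samples[0], samples[1])
gap_last = math.dist(samples[-2], samples[-1])
```

The gap between consecutive samples shrinks geometrically: the sampled sequence converges to a fixed point of the stroboscopic map.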

In the second concept, the discrete-time model (DTM) is used. Even though the state variables are changing continuously in many systems, like in power electronics, they can advantageously be modeled by the discrete-iteration function (1.4). In some other cases, the system is inherently discrete: its state variables are not changing continuously, like in digital systems or models describing the evolution of population of species. The Poincaré section reduces the dimensionality of the system by one and describes it by an iterative, finite-size time step function rather than a differential, infinitesimal time step. The PMF retains the essential information of the system dynamics. Moreover, the discrete-time equation (1.4) can be solved analytically or numerically independent of the differential equation.

1.3  Equilibrium Points

1.3.1  Introduction

The nonlinear world is much more colorful than the linear one. The nonlinear systems can be in various states; one of them is the equilibrium point (EP). It is a point in the state space approached by the trajectory of a continuous, nonlinear dynamical system as its transients decay. The velocity of the state variables v = ẋ is zero in the EP:

    dx/dt = v = f(x, 𝛍) = 0    (1.5)

The solution of the nonlinear algebraic function (1.5) can result in more than one EP. The stable EPs are attractors.

1.3.2  Basin of Attraction

The natural consequence of the existence of multiple attractors is the partition of the state space into different regions called basins of attraction. Any of the initial conditions within a basin of attraction launches a trajectory that is finally attracted by the particular EP belonging to the basin of attraction (Figure 1.2). Here, x*k denotes an EP, i.e., a particular value of the state space vector x; xk(0) is an initial condition in the basin of attraction of x*k; and x1, x2, and x3 are the coordinates of the state space vector x. The basins organize the state space in the sense that a trajectory born in a basin of attraction will never leave it. The border between two neighboring basins of attraction is called separatrix.

FIGURE 1.2  Basins of attraction and their EPs (N = 3). The region around each EP x*k can be linearized by the Jacobian matrix Jk.

1.3.3  Linearizing around the EP

Introducing the small perturbation Δx = x − x*k, (1.1) can be linearized in the close neighborhood of the EP x*k: f(x*k + Δx, 𝛍) = f(x*k, 𝛍) + Jk Δx + ⋯. Neglecting the terms of higher order than Δx, observing that f(x*k, 𝛍) = 0 from (1.5), and substituting back into (1.1):

    dΔx/dt = Δv ≅ Jk Δx    (1.6)

where

    Jk = [∂fi/∂xj],  i, j = 1, …, N    (1.7)

is the Jacobian matrix. The partial derivatives have to be evaluated at x*k. Jk is a real, time-independent N × N matrix.
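In practice, the EPs defined by (1.5) are located numerically before the linearization is carried out. A hedged one-dimensional sketch (the scalar system and all names are our own): Newton's method started from different initial guesses converges to different EPs.

```python
def f(x):
    """Right-hand side of the scalar system dx/dt = x * (1 - x); EPs at 0 and 1."""
    return x * (1.0 - x)

def df(x):
    """Derivative of f; in one dimension it also decides the stability of an EP."""
    return 1.0 - 2.0 * x

def newton(x, tol=1e-12, itmax=100):
    """Solve f(x) = 0 by Newton's method to locate an EP."""
    for _ in range(itmax):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

ep_a = newton(0.2)   # lands on the EP at x* = 0 (df(0) = +1 > 0: unstable)
ep_b = newton(0.8)   # lands on the EP at x* = 1 (df(1) = -1 < 0: stable)
```

The sign of df at each root anticipates the stability discussion that follows: only x* = 1 attracts nearby trajectories of the flow.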

Seeking the solution of (1.6) in the form

    Δx = er e^(λt)    (1.8)

and substituting it back into (1.6):

    Jk er = λ er    (1.9)

Its nontrivial solution for λ must satisfy the Nth-order polynomial equation

    det(Jk − λI) = 0    (1.10)

where I is the N × N identity matrix and λ is the eigenvalue (or characteristic exponent) of Jk. We confine our consideration to N distinct roots λ1, λ2, …, λN of (1.10) (multiple roots are excluded); correspondingly, we have N distinct eigenvectors as well.

Selecting the direction of vector Δx in the special way given by (1.8), it has two important consequences:

1. The direction of er is not changed. The product Jk er (= λ er) only results in the expansion or contraction of er by λ.
2. The direction of the perturbation of the velocity vector Δv will be the same as that of vector er:

    Δv = Jk er e^(λt) = λ er e^(λt)    (1.11)

The direction of er is called the characteristic direction, and er is the right-hand-side eigenvector of Jk, as Jk is multiplied from the right by er. Since the change in time along each characteristic direction depends only on the one constant λ, the general solution of (1.6) is

    Δx(t) = Σ(m=1…N) em e^(λm t) = Σ(m=1…N) δm(t)    (1.12)
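The Jacobian (1.7) and the eigenvalues from (1.10) are easy to obtain numerically. A sketch for a damped-pendulum-like system (our own example, not from the chapter; for a 2 × 2 Jacobian the characteristic polynomial (1.10) is quadratic, so the eigenvalues follow directly from the trace and determinant):

```python
import cmath
import math

def f(x):
    """Damped pendulum: x1' = x2, x2' = -sin(x1) - 0.5 x2."""
    return [x[1], -math.sin(x[0]) - 0.5 * x[1]]

def jacobian(f, x, h=1e-6):
    """Central finite-difference approximation of the Jacobian (1.7) at point x."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def eig2(J):
    """Roots of det(J - lam*I) = 0 for a 2x2 matrix, cf. (1.10)."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

l1, l2 = eig2(jacobian(f, [0.0, 0.0]))        # EP at the origin
m1, m2 = eig2(jacobian(f, [math.pi, 0.0]))    # EP at (pi, 0)
```

At the origin the pair is complex with negative real parts (a stable, spiraling EP); at (π, 0) one eigenvalue is positive and one negative, a saddle.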

Here we assumed that the initial condition was Δx(t = 0) = Σ(m=1…N) em. The roots of (1.10) are either real or complex conjugate ones, since the coefficients in (1.10) are real. When the eigenvalue is real, λm = σm, and the solution belonging to λm is

    sm(t) = δm(t) = em e^(σm t)    (1.13)

where σm and em are real. When we have complex conjugate pairs of eigenvalues λm = λ̂m+1 = σm − jωm, the corresponding eigenvectors are em = êm+1 = em,R + j em,I, where the ∧ denotes the complex conjugate and em,R, em,I are real-valued vectors. From the two complex solutions δm(t) = em exp(λm t) and δm+1(t) = em+1 exp(λm+1 t), two linearly independent real solutions sm(t) and sm+1(t) can be composed:

    sm(t) = (δm(t) + δm+1(t))/2 = e^(σm t) [em,R cos ωm t + em,I sin ωm t]    (1.14)

    sm+1(t) = (δm(t) − δm+1(t))/(2j) = e^(σm t) [em,I cos ωm t − em,R sin ωm t]    (1.15)

Note that the time function belonging to an eigenvalue or a pair of eigenvalues is the same for all state variables.

Figure 1.3a shows one EP and one of its eigenvectors em having a real eigenvalue λm = σm. The orbit is a straight line along em, and the operation point P is attracted (σm < 0) or repelled (σm > 0). Figure 1.3b shows the EP having complex conjugate eigenvalues and the real em,R and em,I components of the complex eigenvector em. The orbit of the solution sm(t) is a spiral in the plane spanned by em,R and em,I.

1.3.4  Stability

The EP is stable if and only if the real part σm of all eigenvalues belonging to the EP is negative. When σm < 0, all small perturbations around the EP die eventually, and the system settles back to the EP. Otherwise, one or more solutions sm(t) go to infinity. The case σm = 0 is considered as unstable.

FIGURE 1.3  Stable (σm < 0) and unstable (σm > 0) orbits around the EP. (a) Real eigenvalue: straight-line orbit along em. (b) A pair of complex eigenvalues belongs to the eigenvector em: spiral orbit in the plane spanned by em,R and em,I.

1.3.5  Classification of EPs, Three-Dimensional State Space (N = 3)

Depending on the location of the three eigenvalues in the complex plane, eight types of EPs are distinguished (Figure 1.4). Three eigenvectors are used for the reference frame: e1, e2, and e3 are used when all eigenvalues are real (Figure 1.4a, c, e, and g), and e1, e2,R, and e2,I are used when we have a pair of conjugate complex eigenvalues (Figure 1.4b, d, f, and h). The origin is the EP. The orbits (trajectories) are moving exponentially in time along the eigenvectors e1, e2, or e3 when the initial condition is on them. The orbits are spiraling in the plane spanned by e2,R and e2,I with the initial condition in the plane.

A trajectory associated with an eigenvector or a pair of eigenvectors can be stable, when its eigenvalue(s) is (are) on the left-hand side of the complex plane (Figure 1.4a and b), or unstable, when they are on the right-hand side (Figure 1.4c through f). All three eigenvalues are on the left-hand side of the complex plane for the node and the spiral node (Figure 1.4a and b); the spiral node is also called attracting focus. All three eigenvalues are on the right-hand side for the repeller and the spiral repeller (Figure 1.4c and d); the spiral repeller is also called repelling focus. For saddle points, either one (index 1) (Figure 1.4e and f) or two (index 2) (Figure 1.4g and h) eigenvalues are on the right-hand side. The operation points are attracted (repelled) by stable (unstable) EPs.

Saddle points play a very important role in organizing the trajectories in state space. Trajectories heading directly to and directly away from a saddle point are called the stable and unstable invariant manifolds, or shortly manifolds.

FIGURE 1.4  Classification of EPs (N = 3). (a) Node, (b) spiral node, (c) repeller, (d) spiral repeller, (e) saddle point—index 1, (f) spiral saddle point—index 1, (g) saddle point—index 2, and (h) spiral saddle point—index 2.
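The naming scheme of Figure 1.4 can be encoded directly from the eigenvalue locations. A simplified sketch (ours) restricted to two eigenvalues, i.e., one real pair or one complex conjugate pair; the three-eigenvalue cases of the figure combine the same left-half-plane/right-half-plane tests:

```python
def classify_ep(l1, l2, eps=1e-12):
    """Name a planar EP type from its two eigenvalues (given as complex numbers)."""
    if abs(l1.imag) > eps:                    # complex conjugate pair
        if l1.real < -eps:
            return "spiral node (attracting focus)"
        if l1.real > eps:
            return "spiral repeller (repelling focus)"
        return "center"
    if l1.real < 0 and l2.real < 0:
        return "node"
    if l1.real > 0 and l2.real > 0:
        return "repeller"
    return "saddle point"

kind_a = classify_ep(complex(-1.0, 0.0), complex(-2.0, 0.0))
kind_b = classify_ep(complex(-0.25, 1.0), complex(-0.25, -1.0))
kind_c = classify_ep(complex(0.78, 0.0), complex(-1.28, 0.0))
```

For example, `kind_c` names the mixed-sign real pair a saddle point, matching the index-1 saddle of Figure 1.4e.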

When the real part σm is zero in the pair of conjugate complex eigenvalues and the dimension is N = 2, the EP is called center. The trajectories in the reference frame eR–eI are circles (Figure 1.5); their radius is determined by the initial condition.

FIGURE 1.5  Center (N = 2). The closed trajectories encircle the EP; their radius is set by the initial conditions.

The manifolds are part of the separatrices separating the basins of attraction. The manifolds of a saddle point in its neighborhood in a two-dimensional state space divide it into four regions; a trajectory born in a region is confined to the region. The operation point either on the stable or on the unstable manifold cannot leave the manifold. Sometimes, the stable (unstable) manifolds are called insets (outsets). In this sense, the manifolds organize the state space.

1.3.6  No-Intersection Theorem

Trajectories in state space cannot intersect each other. The theorem is the direct consequence of the deterministic system. The state of the system is unambiguously determined by the location of its operation point in the state space. As the system is determined by (1.1), all of the derivatives are determined by the instantaneous values of the state variables. Consequently, there is only one possible direction for a trajectory to continue its journey. The theorem plays a crucial role in nonlinear dynamics.

1.4  Limit Cycle

1.4.1  Introduction

Two- or higher-dimensional nonlinear systems can exhibit periodic (cyclic) motion without external periodical excitation. This behavior is represented by a closed-loop trajectory called limit cycle (LC) in the state space. There are stable (attracting) and unstable (repelling) LCs. The basic difference between the stable LC and the center (see Figure 1.5), which also has closed trajectories, is that trajectories starting from nearby initial points are attracted by the stable LC and sooner or later they end up in the LC, while in the case of the center the trajectories starting from different initial conditions stay forever in different tracks determined by the initial conditions.

Figure 1.6 shows a stable LC together with the Poincaré plane. The LC intersects the Poincaré plane at the point Pk, called the fixed point (FP). Instead of investigating the behavior of the LC, the FP is studied.

1.4.2  Poincaré Map Function (PMF)

After moving the trajectory from the LC by a small deviation, the discrete Poincaré map function (PMF) relates the coordinates of the intersection point Pn to those of the previous point Pn−1. All points are the intersection points in the Poincaré plane generated by the trajectory.

The PMF in a three-dimensional state space is (Figure 1.6)

    un = P1(un−1, vn−1)
    vn = P2(un−1, vn−1)    (1.16)

where P1 and P2 are the PMFs, and un and vn are the coordinates of the intersection point Pn in the Poincaré plane. Introducing the vector zn by

    znT = [un, vn]    (1.17)

where zn points to the intersection Pn from the origin, the PMF is

    zn = P(zn−1)    (1.18)

At the fixed point Pk,

    zk = P(zk)    (1.19)

The first benefit of applying the PMF is the reduction of the dimension by one, as the stability of the FP Pk is studied now in the two-dimensional state space rather than studying the stability of the LC in the three-dimensional state space. The second one is the substitution of the differential equation by a difference equation.

FIGURE 1.6  Stable limit cycle and the Poincaré plane (N = 3). The LC intersects the Poincaré plane at the fixed point Pk.

1.4.3  Stability

The stability can be investigated on the basis of the PMF (1.18). First, the nonlinear function P(zn−1) has to be linearized by its Jacobian matrix Jk evaluated at its FP zk.
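The convergence of successive intersection points onto the FP can also be watched directly in simulation. A sketch (our own planar example, not from the chapter) of a system with an attracting LC on the unit circle; the Poincaré section is the half-line y = 0, x > 0, and crossings are located by linear interpolation:

```python
def f(x):
    """Planar system with an attracting limit cycle on the unit circle."""
    r2 = x[0] ** 2 + x[1] ** 2
    return [0.2 * x[0] * (1.0 - r2) - x[1],
            0.2 * x[1] * (1.0 - r2) + x[0]]

def rk4_step(f, x, dt):
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Record upward crossings of the section y = 0, x > 0.
crossings, x, dt = [], [0.2, 0.0], 0.005
for _ in range(32000):                 # integrate up to t = 160 (~25 returns)
    nxt = rk4_step(f, x, dt)
    if x[1] < 0.0 <= nxt[1] and nxt[0] > 0.0:
        s = -x[1] / (nxt[1] - x[1])    # linear interpolation to y = 0
        crossings.append(x[0] + s * (nxt[0] - x[0]))
    x = nxt
```

The recorded x-coordinates form exactly the sampled sequence z1, z2, … of the PMF: they march monotonically toward the FP at radius 1.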

Knowing Jk, (1.18) can be rewritten for a small perturbation around the FP zk as

    Δzn = Jk Δzn−1 = Jk^n Δz0    (1.20)

where Δz0 is the initial small deviation from the FP Pk. Substituting Jk by its eigenvalues λm and its right emr and left eml eigenvectors,

    Δzn = [Σ(m=1…N−1) λm^n emr emlT] Δz0    (1.21)

Due to (1.21), the LC is stable if and only if all eigenvalues are within the circle with unit radius in the complex plane.

In the stability analysis, the Jacobian matrix is operated on the small perturbation of the state vector, both in the continuous-time model (CTM) and in the discrete-time model (DTM) [see (1.6) and (1.20)]. The essential difference is that it determines the velocity vector for the CTM and the next iterate for the DTM.

Assume that the very first point at the beginning of the iteration is placed on eigenvector em of the Jacobian matrix. Figure 1.7 shows four different iteration processes corresponding to the particular value of the eigenvalue λm associated with em. In Figure 1.7a, λm is real and 0 < λm < 1: the consecutive iteration points along vector em are approaching the FP, and it is a node. In Figure 1.7b, λm is real and λm > 1: the iterates are repelled from the FP, and it is a repeller. In Figure 1.7c, both λm and λm+1 are real, but 0 < λm < 1 and λm+1 > 1: the subsequent iterates are converging onto the FP along eigenvector em while they are repelled along em+1; the FP is a saddle. In Figure 1.7d, the eigenvalues are complex conjugates: the consecutive iterates move along a spiral path around the FP [see (1.13) and (1.14)]. When |λm| < 1, the iterates are converging onto the FP, and it is a spiral node; when |λm| > 1, the iteration diverges, and the FP is a spiral repeller.

FIGURE 1.7  Movement of consecutive iteration points. (a) Node: λm real, 0 < λm < 1. (b) Repeller: λm real, λm > 1. (c) Saddle: λm and λm+1 real, 0 < λm < 1 and λm+1 > 1. (d) λm and λm+1 complex conjugate: spiral node if |λm| < 1 and spiral repeller if |λm| > 1.
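The linearized iteration (1.20) is easy to sketch directly; here J is a made-up 2 × 2 matrix whose eigenvalues lie inside the unit circle (for a complex pair, |λ|² equals the determinant), so the perturbation decays toward the FP:

```python
def step(J, z):
    """One discrete-time iteration z_{n} = J z_{n-1}, cf. (1.20)."""
    return [J[0][0] * z[0] + J[0][1] * z[1],
            J[1][0] * z[0] + J[1][1] * z[1]]

J = [[0.4, 0.3],
     [-0.2, 0.5]]        # trace 0.9, det 0.26 -> complex pair with |lambda| = sqrt(0.26) < 1
z = [1.0, 1.0]           # initial deviation Delta z0 (scaled for illustration)
norms = []
for _ in range(30):
    z = step(J, z)
    norms.append((z[0] ** 2 + z[1] ** 2) ** 0.5)
```

The norm of Δzn shrinks by roughly the factor |λ| ≈ 0.51 per iteration, spiraling into the FP as in Figure 1.7d.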

1.5  Quasi-Periodic and Frequency-Locked State

1.5.1  Introduction

Beside the EP and the LC, another possible state or motion is the quasi-periodic (Qu-P) motion in the CTM. Other terms used in the literature are conditionally periodic or almost periodic. The Qu-P motion is a mixture of periodic motions of several different angular frequencies ω1, ω2, …, ωm. In the Qu-P state, the motion—in theory—never exactly repeats itself. Depending on the value of the linear combination

    L = k1ω1 + k2ω2 + ⋯ + kmωm    (1.22)

the motion can be Qu-P, i.e., aperiodic, when the sum L ≠ 0, or it can be F-L, i.e., periodic, when the sum L = 0. Here k1, k2, …, km are any positive (or negative) integers (k1 = k2 = ⋯ = km = 0 is excluded).

The frequency-locked (F-L) state is a special case of the Qu-P state. When the frequency ratio is the ratio of two integers, i.e., it is rational in the mathematical sense, the two frequencies are commensurate and the behavior of the system is periodic: it is in the F-L state. On the other hand, when the frequency ratio is irrational, the two frequencies are incommensurate, and the behavior is Qu-P, aperiodic. Here we assume that any common factors in the frequency ratio have been removed; for example, if f1/f2 = 2/6, the common factor of 2 will be removed and f1/f2 = p/q = 1/3 can be written.

The EP, LC, Qu-P, and F-L states are regular attractors, while in the chaotic state the system has a strange attractor (see later in Section 1.9). Because of nonlinearity, the superposition of the independent frequencies is not valid. In contrast to the chaotic state, two points in state space that are initially arbitrarily close will remain close over time in the Qu-P state, while in the chaotic state they will diverge; the chaotic system is extremely sensitive to initial conditions and to changes in control parameters. Similarly to chaos, the Qu-P state is possible in inherently discrete systems as well. The Qu-P state is not possible in N = 1 or 2 dimensional systems; N must be N ≥ 3. The Qu-P motion plays a central role in Hamiltonian systems, which are non-dissipative ones, e.g., mechanical systems modeled without friction. They do not have attractors.

1.5.2  Nonlinear Systems with Two Frequencies

Qu-P and F-L motions occur frequently in practice in systems having a natural oscillation frequency and a different external forcing frequency, or two different natural oscillation frequencies. Starting from (1.22), assume that

    ω1/ω2 = T2/T1 = p/q    (1.23)

where T1 = 2π/ω1 and T2 = 2π/ω2 are the periods of the respective harmonic oscillations, and p and q are positive integers. The last two statements can easily be understood in the geometrical interpretation of the system trajectory.

1.5.3  Geometrical Interpretation

A two-frequency Qu-P trajectory on a toroidal surface in the three-dimensional state space is shown in Figure 1.8. The center of the torus is at the origin. R is the large radius of the torus, whose cross-sectional radius is r. Introducing two angles Θr = ωr t = ω1 t and ΘR = ωR t = ω2 t, they determine the point P of the trajectory on the surface of the torus, provided that the initial condition point P0 belonging to t = 0 is known. The three components of the state vector x are given by the equations as follows:

    x1 = (R + r cos Θr) cos ΘR
    x2 = r sin Θr    (1.24)
    x3 = −(R + r cos Θr) sin ΘR

FIGURE 1.8  Two-frequency trajectory on a toroidal surface in state space (N = 3).

The two angles ΘR and Θr are increasing as time evolves; therefore, point P is moving on the surface of the torus, tracking the trajectory of the state vector xT = [x1, x2, x3]. The trajectory is winding on the torus around the cross section with minor radius r while making ωR/2π rotations per unit time around circle R. As ωr/ωR = p/q [see (1.23)] and assuming that p and q are integers, the number of rotations around circle r and circle R in per unit is p and q, respectively.
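A quick numerical check of the parametrization (1.24): every generated state vector must satisfy the torus identity (√(x1² + x3²) − R)² + x2² = r², whatever the two angular frequencies are (the values below are our own illustrative choices):

```python
import math

def torus_point(R, r, w_R, w_r, t):
    """State vector (1.24) with Theta_R = w_R * t and Theta_r = w_r * t."""
    TR, Tr = w_R * t, w_r * t
    x1 = (R + r * math.cos(Tr)) * math.cos(TR)
    x2 = r * math.sin(Tr)
    x3 = -(R + r * math.cos(Tr)) * math.sin(TR)
    return x1, x2, x3

R, r = 2.0, 0.5
points = [torus_point(R, r, 1.0, math.sqrt(2.0), 0.1 * k) for k in range(1000)]
residual = max(abs((math.hypot(x1, x3) - R) ** 2 + x2 ** 2 - r ** 2)
               for x1, x2, x3 in points)
```

The residual stays at floating-point round-off level: the trajectory never leaves the toroidal surface, regardless of whether the frequency ratio is rational or irrational.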

As long as point P rotates once around circle R . As ωr/ωR = p/q [see (1.9a) after 1–2–3 rotations around circle R . Figure 1. Point 3 ωr/ωR = p/q = 1/3 1 3 2 Poincaré plane 0 ree intersection points (a) ωr/ωR = p/q = 2π (b) Poincaré plane After long time the number of intersection points approaches in nity FIGURE 1. For example.9 shows the torus and the Poincaré plane intersecting the torus together with the trajectory on the surface of the torus. it is rational. The Poincaré plane illustrates its intersection points with the trajectory.8  Two-frequency trajectory on a toroidal surface in state space (N = 3).Nonlinear Dynamics x2 Torus P R 2r x3 ΘR P0 x1 Θr 1-13 FIGURE 1. x2. making ωR /2π rotations per unit time.24) The trajectory is winding on the torus around the cross section with minor radius r. x3]. LLC .23)] and assuming that p and q are integers.9  Example for (a) frequency-locked and (b) quasi-periodic state. point P makes three rotations around circle R as long as it makes only one rotation around circle r. respectively. the number of rotations around circle r and circle R in per unit is p and q. © 2011 by Taylor and Francis Group. it rotates 120° around circle r. Starting from point 0 on the Poincaré plane (Figure 1. if p = 1 and q = 3. The three components of the state vector x are given by the equations as follows: x1 = (R + r cos Θr )cos Θ R    x2 = r sin Θr   x3 = −(R + r cos Θr )sin Θ R   (1. point P intersects the Ponicaré plane successively at point 1–2–3. the surface of the torus tracking the trajectory of state vector xT = [x1. When the frequency ratio p/q = 1/3.



coincides with the starting point 0, as three times 120° is 360°. The process is periodic, the trajectory closes on itself: it is the F-L state. On the other hand, when the frequency ratio is irrational, e.g., p/q = 2π, then as long as point P makes one rotation around circle R, it completes 2π rotations around circle r. The phase shift of the first intersection point on the Poincaré plane from the initial point P0 is given by the irrational angle 360° × 2π(mod 1), where (mod 1) is the modulus operator that takes the fractional part of a number (e.g., 6.28(mod 1) = 0.28). After any further full rotation around circle R, the phase shifts of the intersection points from P0 on the Poincaré plane remain irrational; therefore, they will never coincide with P0, and the trajectory will never close on itself. All intersection points will be different. As t → ∞, the number of intersection points will be infinite, and a circle of radius r consisting of an infinite number of distinct points will be visible on the Poincaré plane (Figure 1.9b). The system is in Qu-P state.
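The closing (frequency-locked) versus never-closing (quasi-periodic) behavior of the intersection points can be checked numerically. The sketch below is a minimal illustration in Python (not from the handbook); it records the angle around the minor circle r at each successive crossing of the Poincaré plane and counts how many distinct points appear.

```python
import math

def poincare_points(ratio, n_rotations):
    """Angle around circle r (radians) at each crossing of the Poincare
    plane; one crossing per full rotation around circle R."""
    return [(2 * math.pi * ratio * k) % (2 * math.pi) for k in range(n_rotations)]

def count_distinct(angles, tol=1e-6):
    # Count angles that differ by more than tol (circular distance)
    distinct = []
    for a in angles:
        if all(min(abs(a - b), 2 * math.pi - abs(a - b)) > tol for b in distinct):
            distinct.append(a)
    return len(distinct)

locked = poincare_points(1 / 3, 300)       # rational ratio p/q = 1/3: F-L state
quasi = poincare_points(2 * math.pi, 300)  # irrational ratio: Qu-P state

print(count_distinct(locked))  # 3 intersection points, trajectory closes
print(count_distinct(quasi))   # 300: every crossing is a new point
```

For the rational ratio the trajectory closes after q = 3 crossings, reproducing Figure 1.9a; for the irrational ratio the crossings gradually fill a circle of radius r, as in Figure 1.9b.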

1.5.4  N-Frequency Quasi-Periodicity
Up to now, we have treated the two-frequency quasi-periodicity in some detail. It has to be stressed that, in the mathematical sense, the N-frequency quasi-periodicity is essentially the same. The N frequencies define N angles Θ1 = ω1t, Θ2 = ω2t, …, ΘN = ωNt, determining uniquely the position and movement of the operation point P on the surface of the N-dimensional torus. Now again, the trajectory fills up the surface of the N-dimensional torus in the state space.

1.6  Dynamical Systems Described by Discrete-Time Variables: Maps
1.6.1  Introduction
Dynamical systems can also be described by systems of difference equations with discrete-time variables. The relation in vector form is

xn+1 = f(xn)    (1.25)

where xn is the K-dimensional state variable, xnT = [xn(1), xn(2), …, xn(K)], and f is a nonlinear vector function. State vector x1 is obtained at discrete time n = 1 by x1 = f(x0), where x0 is the initial condition. From x1, the value x2 = f(x1) can be calculated, etc. Knowing x0, the orbit of the discrete-time system x0, x1, x2, … is generated. We can consider that the vector function f maps xn into xn+1. In this sense, f is a map function. The number of state variables determines the dimension of the map.

Examples:

One-dimensional maps (K = 1):

Logistic map: xn+1 = a xn(1 − xn)

Tent map: xn+1 = a xn if xn ≤ 0.5, and xn+1 = a(1 − xn) if xn > 0.5

where a is a constant.

Two-dimensional map (K = 2):

Hénon map: xn+1(1) = 1 + a xn(2) − b (xn(1))², xn+1(2) = c xn(1)

where a, b, and c are constants.
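These example maps can be coded directly. The following sketch (parameter values are illustrative only, not from the handbook) iterates the logistic map and shows convergence to its fixed point; the tent and Hénon maps are included for completeness.

```python
def logistic(x, a):
    # Logistic map
    return a * x * (1 - x)

def tent(x, a):
    # Tent map
    return a * x if x <= 0.5 else a * (1 - x)

def henon(x1, x2, a, b, c):
    # Henon map in the book's parametrization
    return 1 + a * x2 - b * x1**2, c * x1

def orbit(f, x0, n):
    """Generate the orbit x0, x1, ..., xn of a one-dimensional map."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

xs = orbit(lambda x: logistic(x, 2.0), 0.1, 20)
print(round(xs[-1], 6))  # 0.5 = 1 - 1/a, the stable FP for a = 2
```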



K-dimensional map: the Poincaré map of an N = K + 1 dimensional state space.

Maps can give useful insight into the behavior of complex dynamic systems. The map function (1.25) can be invertible or non-invertible. It is invertible when the discrete function

xn = f−1(xn+1)    (1.26)

can be solved uniquely for xn; f−1 is the inverse of f. Two examples are given next: first the invertible Hénon map and then the non-invertible quadratic (logistic) map are discussed. The Hénon map is a frequently cited example in nonlinear systems. It has two dimensions and maps the point with coordinates xn(1) and xn(2) in the plane to a new point xn+1(1) and xn+1(2). It is invertible, because xn+1(1) and xn+1(2) uniquely determine the values xn(1) and xn(2), since

xn(1) = xn+1(2)/c
xn(2) = [xn+1(1) − 1 + b (xn+1(2)/c)²]/a

Here, a ≠ 0 and c ≠ 0 must hold. (For some values of a, b, and c, the Hénon map can exhibit chaotic behavior.) Turning now to the non-invertible maps, the quadratic or logistic map is taken as example. It was developed originally as a demography model. It is a very simple system, but its response can be surprisingly colorful. It is non-invertible, as Figure 1.10 shows: we cannot uniquely determine xn from xn+1. As we see later, an invertible map can be chaotic only if its dimension is two or more (Hénon map). On the other hand, a non-invertible map can be chaotic even in the one-dimensional case, e.g., the logistic map.
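A quick numerical round trip confirms the invertibility claim: mapping a point forward and then applying the inverse formulas recovers it. The parameter values below are arbitrary illustrative choices with a ≠ 0 and c ≠ 0.

```python
def henon_forward(x1, x2, a, b, c):
    # x1' = 1 + a*x2 - b*x1**2 ; x2' = c*x1
    return 1 + a * x2 - b * x1**2, c * x1

def henon_inverse(y1, y2, a, b, c):
    # Valid whenever a != 0 and c != 0 (b may even be zero)
    x1 = y2 / c
    x2 = (y1 - 1 + b * (y2 / c)**2) / a
    return x1, x2

a, b, c = 0.9, 1.4, 0.3          # arbitrary illustrative values
point = (0.4, -0.2)
image = henon_forward(*point, a, b, c)
recovered = henon_inverse(*image, a, b, c)
print(recovered)                 # (0.4, -0.2) up to rounding
```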

1.6.2  Fixed Points
The concept of FP was introduced earlier (see Figure 1.6). The system stays in steady state at an FP, where xn+1 = xn = x*. When the discrete function (1.25) is nonlinear, it can have more than one FP.

One-Dimensional Iterated Maps
For the sake of simplicity, the one-dimensional maps are discussed from now on. The one-dimensional iterated maps can describe the dynamics of a large number of systems of higher dimension. To throw light


FIGURE 1.10  The quadratic or logistic iterated map.


on the statement, consider a three-dimensional dynamical system with PMF zn+1T = [un+1, vn+1] = P(un, vn) [see (1.18)]. In certain cases, we have a relation between the two coordinates un and vn: vn = Fv(un). Substituting it into the map function un+1 = Pu(un, vn), we end up with

un+1 = Pu(un, Fv(un)) = f(un)    (1.27)

a one-dimensional map function. More arguments can be found in the literature (see Chapter 5.2 in Ref. [4] and page 66 of Ref. [5]) on the wide scope of application of one-dimensional discrete map functions. If the dissipation in the system is high enough, then even systems with dimension higher than three can be analyzed by a one-dimensional map.

Return Map or Cobweb
The iteration of the one-dimensional discrete map function or difference equation

xn+1 = f(xn)    (1.28)

can be done by a numerical or a graphical method. The return map or cobweb is a graphical method. To illustrate the return map method, we take as example the equation

xn+1 = a xn / [1 + (b xn)c]    (1.29)

where a, b, and c are constants. The iteration has to be performed as follows (Figure 1.11):
1. Plot the function xn+1(xn) in the plane xn+1 versus xn.
2. Select x0 as initial condition.
3. Draw a straight line from the origin with slope 1, called the mirror line or diagonal.
4. Read the value x1 from the graph and draw a line parallel with the horizontal axis from x1 to the mirror line (line 1′–1″).
5. Read the value x2 by the vertical line 1″–2′.
6. Repeat the graphical process by drawing the horizontal line 2′–2″ and finally determining x3.

In order to find the FP x *, we have to repeat the graphical process.
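The same iteration can, of course, be carried out numerically instead of graphically. A minimal sketch for the map (1.29), with illustrative constants a, b, and c:

```python
def f(x, a=2.0, b=1.0, c=2.0):
    # Map (1.29): x_{n+1} = a*x_n / (1 + (b*x_n)**c); constants are illustrative
    return a * x / (1 + (b * x)**c)

x = 0.1                  # initial condition x0
for _ in range(200):     # numerical counterpart of the cobweb construction
    x = f(x)

print(round(x, 6))       # 1.0: the FP satisfies x* = f(x*), here 2*1/(1+1) = 1
```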

FIGURE 1.11  Iteration in return map.


kth Return Map
Periodic steady state with period T is represented by a single FP x* in the mapping, i.e., x* = f(x*). kth-order subharmonic solutions with period kT correspond to k FPs {x1*, x2*, …, xk*}, where x2* = f(x1*), …, x1* = f(xk*). The kth iterate of f(x) is defined as the function that results from applying f k times; its notation is f(k)(x) = f(f(…f(x))), its mapping is called the kth return map, and the process is called period-k.

Stability of FP in One-Dimensional Map
The FP is locally stable if subsequent iterates starting from an initial point sufficiently near the FP eventually get closer and closer to the FP. The expressions attracting FP or asymptotically stable FP are also used. On the other hand, if the subsequent iterates move away from x*, the name unstable or repelling FP is used.

1.6.3  Mathematical Approach
By knowing the nonlinear function f(x) and one of its FPs x*, we can express the first iterate x1 by applying a Taylor series expansion near x*:

x1 = f(x0) = f(x*) + (df/dx)|x* Δx0 + … ≅ f(x*) + (df/dx)|x* Δx0    (1.30)

where Δx0 = x0 − x* and the initial condition x0 is sufficiently near x*. The derivative λ = df/dx has to be evaluated at x*; λ is the eigenvalue of f(x) at x*. The nth iterate is

Δxn = λn(x0 − x*) = λn Δx0 = [(df/dx)|x*]n Δx0    (1.31)


It is obvious that x* is a stable FP if |df/dx|x* < 1 and x* is an unstable FP if |df/dx|x* > 1. In general, when the dimension is more than one, λ must be within the unit circle drawn around the origin of the complex plane for a stable FP. A nonlinear f(x) can have multiple FPs. The initial conditions leading to a particular x* constitute the basin of attraction of x*. As there are several basins of attraction, none of the FPs can be globally stable; they can only be locally stable.
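The stability test on |df/dx| at x* is easy to automate. The sketch below estimates λ by a central difference and classifies the two FPs of a logistic map (anticipating Section 1.6.5; a = 2.8 is an illustrative value):

```python
def derivative(f, x, h=1e-6):
    # Central-difference estimate of the eigenvalue lambda = df/dx at x
    return (f(x + h) - f(x - h)) / (2 * h)

a = 2.8
f = lambda x: a * x * (1 - x)   # logistic map as test function
fixed_points = [0.0, 1 - 1 / a]

for x_star in fixed_points:
    lam = derivative(f, x_star)
    print(round(lam, 3), "stable" if abs(lam) < 1 else "unstable")
# 2.8 unstable, -0.8 stable
```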

1.6.4  Graphical Approach
The graphical approach is explained in Figure 1.12. Figure 1.12a,c,e, and g presents the return map with the FP x* and with the initial condition (IC). The thick straight line with slope λ at x* approximates the function f(x) at x*. Figure 1.12b,d,f, and h depicts the discrete-time evolution xn(n). In Figure 1.12a,b,c, and d, |λ| < 1: the subsequent iterates approach x*, and the FP is stable. In Figure 1.12e,f,g, and h, |λ| > 1: the subsequent iterates explode, and the FP is unstable. Note that the cobweb and the time evolution are oscillating when λ < 0.

1.6.5  Study of Logistic Map
The relation of the logistic map is

xn+1 = a xn(1 − xn);  a = const.    (1.32)

FIGURE 1.12  Iterates in return map and their time evolution.

In general, it has two FPs: x1* = 1 − 1/a and x2* = 0. The respective eigenvalues are λ1 = 2 − a and λ2 = a. Figure 1.13 shows the return map (left) and the time evolution (right) at different values of a changing in the range 0 < a < 4. When 0 < a < 1, λ1 is outside and λ2 is inside the unit circle, i.e., x1* is unstable and x2* is stable (Figure 1.13a and b). When 1 < a < 3, λ1 is inside and λ2 is outside the unit circle: x1* is stable and x2* is unstable (Figure 1.13c,d,e, and f). There is no oscillation in Figure 1.13c and d as λ1 > 0, and we have oscillation in Figure 1.13e and f as λ1 < 0. Increasing a over a = 3, neither of the two eigenvalues λ1 and λ2 is inside the unit circle; both x1* and x2* are unstable. However, a stable cycle of period-2 (Figure 1.13g and h) and of period-4 (Figure 1.13i and j) develops at a = 3.2 and a = 3.5, respectively (see later, the



stability of cycles). Increasing a over a ≈ 3.570, we enter the chaotic range (Figure 1.13k and l); the iteration is aperiodic, with narrow ranges of a producing periodic solutions. As a is increased, first we have period-1 in steady state, later period-2, then period-4 emerges, etc. This scenario is called the period-doubling cascade.
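The period-doubling cascade can be reproduced numerically: iterate past the transient and look for the smallest k with xn+k = xn. A rough sketch (transient length and tolerance are pragmatic choices, not from the handbook):

```python
def detect_period(a, x0=0.3, transient=2000, max_period=64, tol=1e-6):
    """Iterate the logistic map past the transient, then return the smallest
    k with x_{n+k} = x_n, or None in the aperiodic (chaotic) range."""
    x = x0
    for _ in range(transient):
        x = a * x * (1 - x)
    xs = [x]
    for _ in range(max_period):
        xs.append(a * xs[-1] * (1 - xs[-1]))
    for k in range(1, max_period + 1):
        if abs(xs[k] - xs[0]) < tol:
            return k
    return None

print(detect_period(2.8))  # 1: stable FP
print(detect_period(3.2))  # 2: stable 2-cycle
print(detect_period(3.5))  # 4: stable 4-cycle
```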

1.6.6  Stability of Cycles
We have already introduced the notation f(k)(x) = f(f(…f(x))) for the kth iterate that results from applying f k times. If we start at x1* and, after applying f k times, we end up at x1* again, then we say we have a period or cycle-k with k separate FPs: x1*, x2* = f(x1*), …, xk* = f(xk−1*).
In the simplest case of period-2, the two FPs are x2* = f(x1*) and x1* = f(x2*) = f(f(x1*)). Referring to (1.30), we know that the stability of FP x1* depends on the value of the derivative

λ(2) = d f(f(x))/dx |x1*    (1.33)




FIGURE 1.13  Return map (left) and time evolution (right) of the logistic map.


FIGURE 1.13 (continued)

Using the chain rule for derivatives,

d f(f(x))/dx |x1* = (df/dx)|f(x1*) · (df/dx)|x1* = (df/dx)|x2* · (df/dx)|x1*    (1.34)

hence

d f(2)(x)/dx |x1* = d f(2)(x)/dx |x2*    (1.35)
(1.35) states that the derivatives or eigenvalues of the second iterate of f (x) are the same at both FPs belonging to period-2. As an example, Figure 1.14 shows the return map for the first (Figure 1.14a) and for the second (Figure 1.14b) iterate when a = 3.2. Both FPs in the first iterate map are unstable as we have just



discussed. We have four FPs in the second iterate map. Two of them, x1* and x2*, are stable FPs (points S1 and S2) and the other two (zero and x*) are unstable (points U1 and U2). The FP of the first iterate must be an FP of the second iterate as well: x* = f(x*) and x* = f(f(x*)).
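The equality (1.35) of the second-iterate derivatives at the two FPs of a 2-cycle can be verified numerically for the logistic map. The closed-form 2-cycle points used below follow from solving f(f(x)) = x; a = 3.2 as in the text:

```python
import math

a = 3.2
f = lambda x: a * x * (1 - x)
df = lambda x: a * (1 - 2 * x)   # analytic derivative of the logistic map

# Period-2 FPs: the roots of f(f(x)) = x besides x = 0 and x* = 1 - 1/a
s = math.sqrt((a + 1) * (a - 3))
x1 = (a + 1 - s) / (2 * a)
x2 = (a + 1 + s) / (2 * a)
assert abs(f(x1) - x2) < 1e-12 and abs(f(x2) - x1) < 1e-12  # a genuine 2-cycle

# Chain rule: the slope of the second iterate is the product of the slopes,
# so it takes the same value at both FPs, as (1.35) states
lam2 = df(x1) * df(x2)
print(round(lam2, 4))  # 0.16; |lam2| < 1 -> the period-2 cycle is stable
```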

1.7  Invariant Manifolds: Homoclinic and Heteroclinic Orbits
1.7.1  Introduction
To obtain a complete understanding of the global dynamics of nonlinear systems, the knowledge of invariant manifolds is absolutely crucial. The invariant manifolds, or briefly the manifolds, are borders in state space separating regions. A trajectory born in one region must remain in the same region as time evolves. The manifolds organize the state space. There are stable and unstable manifolds. They originate from saddle points. If the initial condition is on the manifold or subspace, the trajectory stays on the manifold. A homoclinic orbit is established when the stable and unstable manifolds of a saddle point intersect. A heteroclinic orbit is established when stable and unstable manifolds from different saddle points intersect.

1.7.2  Invariant Manifolds, CTM

The CTM is applied for describing the system. Invariant manifold is a curve (trajectory) in plane (N = 2) (Figure 1.15), or curve or surface in space (N = 3) (Figure 1.16), or in general a subspace (hypersurface) of the state space (N > 3). The manifolds are always associated with saddle point denoted here by x *. Any initial condition in the manifold results in movement of the operation point in the manifold under the action of the relevant differential equations. There are two kinds

FIGURE 1.14  Cycle of period-2 in the logistic map for a = 3.3. (a) Return map for first iterate. (b) Return map for second iterate xn+2 versus xn.

FIGURE 1.15  Stable Ws and unstable Wu manifold (N = 2). The CTM is used. es and eu are eigenvectors at saddle point x * belonging to Ws and Wu, respectively.


FIGURE 1.16  Stable Ws and unstable Wu manifold (N = 3). The CTM is used. E s = span[es1, es2] stable subspace is tangent of Ws(x *) at x *. eu is an unstable eigenvector at x *. x * is saddle point.

FIGURE 1.17  Initial conditions (P1, P2, P3, P4) are not placed on any of the invariant manifolds. The trajectories are repelled from Ws(x *) and attracted by Wu(x *). Any of the trajectories (orbits) must remain in the region where it was born. Wu(x*) (or Ws(x *)) are boundaries.

of manifolds: stable manifold denoted by Ws and unstable manifold denoted by Wu . If the initial points are on Ws or on Wu, the operation points remain on Ws or Wu forever, but the points on Ws are attracted by x * and the points on Wu are repelled from x *. By considering t → –∞, every movement along the manifolds is reversed. If the initial conditions (points P1, P2, P3, P4 in Figure 1.17) are not on the manifolds, their trajectories will not cross any of the manifolds, they remain in the space bounded by the manifolds. The trajectories are repelled from Ws(x *) and attracted by Wu(x *). Any of these trajectories (orbits) must remain in the space where it was born. Wu(x *) (or Ws(x *)) are boundaries. Consequently, the invariant manifolds organize the state space.
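The organizing role of the stable and unstable directions can be illustrated on the simplest possible case: a linear map with a saddle at the origin, where the manifolds coincide with the eigenvector subspaces. The matrix below is an illustrative choice, not taken from the handbook:

```python
def apply_A(v):
    # Linear map with eigenvalues 2 (unstable) and 0.5 (stable),
    # eigenvectors eu = (1, 1) and es = (1, -1); illustrative example
    x, y = v
    return (1.25 * x + 0.75 * y, 0.75 * x + 1.25 * y)

p_stable = (0.8, -0.8)     # initial point on the stable subspace (Ws analogue)
p_unstable = (0.01, 0.01)  # initial point on the unstable subspace (Wu analogue)

for _ in range(10):
    p_stable = apply_A(p_stable)
    p_unstable = apply_A(p_unstable)

print(p_stable)    # contracted toward the saddle at the origin
print(p_unstable)  # expanded away from the saddle
```

A point on the stable subspace stays on it and is attracted to the saddle; a point on the unstable subspace stays on it and is repelled, mirroring the behavior of Ws and Wu described above.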

1.7.3  Invariant Manifolds, DTM
Applying DTM, i.e., when difference equations describe the system, mostly the PMF is used. The fixed point x* must be a saddle point of the PMF to have manifolds. Figure 1.18 presents the Poincaré surface or plane with the stable Ws(x*) and unstable Wu(x*) manifolds. s0 and u0 are the intersection points of the trajectory with the Poincaré surface. The next intersection point of the same

Points denoted by s (by u) approach (diverge from) x* as n → ∞.


FIGURE 1.18  Stable Ws and unstable Wu manifold. DTM is used. es and eu are eigenvectors at saddle point x * belonging to Ws and Wu, respectively.

trajectory with the Poincaré surface is s1 and u−1, the following ones s2 and u−2, etc. If Ws and Wu are manifolds and s0, u0 are on the manifolds, all subsequent intersection points will be on the respective manifold. Starting from an infinitely large number of initial points s0 (u0) on Ws (or on Wu), an infinitely large number of intersection points is obtained along Ws (or Wu). The curve Ws (Wu) is determined by using this infinitely large number of intersection points.



1.7.4  Homoclinic and Heteroclinic Orbits, CTM


In a homoclinic connection, the stable Ws and unstable Wu manifolds of the same saddle point x* intersect each other (Figure 1.19). The two manifolds, Ws and Wu, constitute a homoclinic orbit. The operation point on the homoclinic orbit approaches x* both in forward and in backward time under the action of the relevant differential equation. In a heteroclinic connection, the stable manifold Ws(x1*) of saddle point x1* is connected to the unstable manifold Wu(x2*) of saddle point x2*, and vice versa (Figure 1.20). The two manifolds Ws(x1*) and Wu(x2*) [similarly Ws(x2*) and Wu(x1*)] constitute a heteroclinic orbit.

FIGURE 1.19  Homoclinic connection and orbit (CTM).

FIGURE 1.20  Heteroclinic connection and orbit (CTM).


1.8 Transitions to Chaos
1.8.1  Introduction
One of the great achievements of the theory of chaos is the discovery of several typical routes from regular states to chaos. Systems quite different in their physical appearance exhibit the same route; the key point is universality. There are two broad classes of transitions to chaos: local and global bifurcations. In the first case, for example, one EP or one LC loses its stability as a system parameter is changed. The local bifurcation has three subclasses: period doubling, quasi-periodicity, and intermittency. The most frequent route is period doubling. In the second case, the global bifurcation involves larger-scale behavior in state space; more EPs and/or more LCs lose their stability. It has two subclasses, the chaotic transient and the crisis. In this section, only the local bifurcations and the period-doubling route are treated. Only one short comment is made on the quasi-periodic route and on the intermittency. In the quasi-periodic route, as a result of the alteration of a parameter, the system state changes first from EP to LC through a bifurcation. Later, in addition, another frequency develops by a new bifurcation and the system exhibits quasi-periodic state. In other words, there are two complex conjugate eigenvalues within the unit circle in this state. By changing the parameter further, eventually the chaotic state is reached from the quasi-periodic one. In the intermittency route to chaos, apparently periodic and chaotic states develop alternately. The system state seems to be periodic in certain intervals and suddenly it turns into a burst of chaotic state. The irregular motion calms down and everything starts again. Changing the system parameter further, the length of the chaotic states becomes longer and finally the "periodic" states are not restored.

1.8.2  Period-Doubling Bifurcation
Considering now the period-doubling route, let us assume a LC as starting state in a three-dimensional system (Figure 1.21a). The trajectory crosses the Poincaré plane at point P. As a result of changing one system parameter, the eigenvalue λ1 of the Jacobian matrix of PMF belonging to FP P is moving along the negative real axis within the unit circle toward point −1 and crosses it as λ1 becomes −1. The eigenvalue λ1 moves outside the unit circle. FP P belonging to the first iterate loses its stability. Simultaneously, two new stable FPs P1 and P2 are born in the second iterate process having two eigenvalues λ2 = λ 3 (Figure 1.21b). Their value is equal +1 at the bifurcation point and it is getting smaller than one as the system parameter is changing further in the same direction. λ2 = λ3 are moving along the positive real axis toward the origin. Continuing the parameter change, new bifurcation occurs following the same pattern just described and the trajectory will cross the Poincaré plane four times (2 × 2) in one period instead of twice from the new bifurcation point. The period-doubling process keeps going on but the difference between two consecutive parameter values belonging to bifurcations is getting smaller and smaller.

1.8.3  Period-Doubling Scenario in General
The period-doubling scenario is shown in Figure 1.22, where μ is the system parameter, the so-called bifurcation parameter, and x is one of the system state variables belonging to the FP or, more generally, to the intersection points of the trajectory with the Poincaré plane. The intersection points belonging to the transient process are excluded. For this reason, the diagram is sometimes called the final-state diagram. The name "bifurcation diagram" refers to the bifurcation points shown in the diagram and is more generally used. Feigenbaum [7] has shown that the ratio of the distances between successive bifurcation points measured along the parameter axis μ approaches a constant number as the order of bifurcation, labeled by k, approaches infinity:



FIGURE 1.21  Period-doubling bifurcation. (a) Period-1 state before, and (b) period-2 state after the bifurcation.
FIGURE 1.22  Final state or bifurcation diagram. S ∞ = Feigenbaum point.

δ = lim(k→∞) δk/δk+1 = 4.6693…    (1.36)


where δ is the so-called Feigenbaum constant. It is found that the ratio of the distances measured along the axis x at the bifurcation points approaches another constant, the so-called Feigenbaum α:

α = lim(k→∞) αk/αk+1 = 2.5029…    (1.37)


δ is a universal constant in the theory of chaos, like other fundamental numbers, for example, e = 2.718…, π, and the golden mean ratio (√5 − 1)/2.
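An order-of-magnitude check of δ can be made by locating the first few period-doubling bifurcation points of the logistic map with bisection on the observed steady-state period. This is only a rough sketch: near each bifurcation the convergence slows down, so the located points and the resulting ratio are approximate.

```python
def steady_period(a, x0=0.5, transient=3000, tol=1e-8):
    """Smallest k in {1, 2, 4, 8, 16} with x_{n+k} = x_n after the transient,
    or None if no such short period is resolved."""
    x = x0
    for _ in range(transient):
        x = a * x * (1 - x)
    xs = [x]
    for _ in range(16):
        xs.append(a * xs[-1] * (1 - xs[-1]))
    for k in (1, 2, 4, 8, 16):
        if abs(xs[k] - xs[0]) < tol:
            return k
    return None

def bifurcation_point(p, lo, hi, iters=40):
    # Bisect for the parameter where the steady period changes from p to 2p;
    # lo must show period p and hi period 2p
    for _ in range(iters):
        mid = (lo + hi) / 2
        if steady_period(mid) == p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

b1 = bifurcation_point(1, 2.8, 3.2)   # near a = 3.0
b2 = bifurcation_point(2, 3.2, 3.5)   # near a = 3.449
b3 = bifurcation_point(4, 3.5, 3.56)  # near a = 3.544
delta_est = (b2 - b1) / (b3 - b2)
print(round(delta_est, 2))  # a crude first-ratio estimate of delta = 4.6693...
```

Successive ratios formed from higher-order bifurcation points converge to δ; the first ratio already lands in the right neighborhood.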


1.9  Chaotic State
1.9.1  Introduction
The recently discovered new state is the chaotic state. It can evolve only in nonlinear systems. The conditions required for the chaotic state are as follows: at least three dimensions in autonomous systems and at least two dimensions in non-autonomous systems, provided that they are described by CTM. On the other hand, when DTM is used, two or more dimensions suffice for invertible iteration functions, and only one dimension (e.g., logistic map) or more is needed for non-invertible iteration functions to develop a chaotic state. In addition, some other universal qualitative features common to nonlinear chaotic systems are summarized as follows:
• The systems are deterministic; the equations describing them are completely known.
• They have extremely sensitive dependence on initial conditions. Exponential divergence of nearby trajectories is one of the signatures of chaos.
• Even though the systems are deterministic, their behavior is unpredictable in the long run.
• The trajectories in chaotic state are non-periodic, bounded, cannot be reproduced, and do not intersect each other.
• The motion of the trajectories is random-like with underlying order and structure.

As it was mentioned, the trajectories setting off from initial conditions approach after the transient process either FPs or LC or Qu-P curves in dissipative systems. All of them are called attractors or classical attractors since the system is attracted to one of the above three states. When a chaotic state evolves in a system, its trajectory approaches and sooner or later reaches an attractor too, the so-called strange attractor. In three dimensions, the classical attractors are associated with some geometric form, the stationary state with point, the LC with a closed curve, and the quasi-periodic state with surface. The strange attractor is associated with a new kind of geometric object. It is called a fractal structure.

1.9.2  Lyapunov Exponent
Conceptually, the Lyapunov exponent is a quantitative test of the sensitive dependence of the system on initial conditions. It was stated earlier that one of the properties of chaotic systems is the exponential divergence of nearby trajectories. The calculation methods usually apply this property to determine the Lyapunov exponent λ. After the transient process, the trajectories in dissipative systems always find the attractor belonging to the particular initial condition. In general, the attractor is used as reference trajectory to calculate λ. Starting two trajectories from two nearby initial points separated by a small distance d0, the distance d between the trajectories is given by

d(t) = d0 eλt    (1.38)

where λ is the Lyapunov exponent. One of the initial points is on the attractor. The equation can hold true only locally, because chaotic systems are bounded, i.e., d(t) cannot increase to infinity. The value λ may depend on the initial point on the attractor. To characterize the attractor by a Lyapunov exponent, the calculation described above has to be repeated for a large number n of initial points distributed along the attractor. Eventually, the average Lyapunov exponent λ̄ calculated from the individual λn values will characterize the state of the system. The criterion for chaos is λ̄ > 0. When λ̄ ≤ 0, the system is in a regular state (Figure 1.23).
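For a one-dimensional map, the average Lyapunov exponent reduces to the mean of ln|f′(xn)| along the orbit. A sketch for the logistic map (sample sizes are illustrative):

```python
import math

def lyapunov_logistic(a, x0=0.3, transient=1000, n=20000):
    """Average Lyapunov exponent of the logistic map: mean of ln|f'(x_n)|
    along the attractor, with f'(x) = a*(1 - 2*x)."""
    x = x0
    for _ in range(transient):
        x = a * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = a * x * (1 - x)
        total += math.log(abs(a * (1 - 2 * x)))
    return total / n

print(round(lyapunov_logistic(2.5), 3))  # -0.693 (< 0: regular state)
print(round(lyapunov_logistic(4.0), 3))  # about 0.693 = ln 2 (> 0: chaotic)
```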


The basic operation of the inverter can be understood by the time functions of Figure 1. the frequent nonlinearities can also be found in the systems. After firing a thyristor. an output current pulse i0 is flowing into the load.23  Positive and negative Lyapunov exponents.2  High-Frequency Time-Sharing Inverter 1. Figure 1. piece-wise linear. Various system states and bifurcations will be presented without the claim of completeness in this section. Furthermore.Nonlinear Dynamics x2 x2 1-27 d(t) = d0eλt d(t) = d0eλt x1 x3 d0 λ>0 Chaotic x3 d0 λ<0 Non-chaotic x1 FIGURE 1. Our main objective is to direct the attention of the readers. The thyristor turnoff time can be a little bit longer than two and half cycles of the output voltage (Figure 1. In order to show the applications of the theory of nonlinear dynamics summarized in the previous sections.25a. the inverter output current pulses i0. In Figure 1. Each sub-inverter pair works in the same way as it was previously described. the so-called positive and negative sub-inverters. © 2011 by Taylor and Francis Group. which changes the polarity of the series condenser voltage vc. the respective input and output terminals of the sub-inverters are paralleled. The primary source of nonlinearity in these systems is that the switching instants depend on the state variables [1]. 1. In order to explain the mode of action of the inverter.10. the “high” frequency approximately sinusoidal output voltage v0. The parallel oscillatory circuit Lp −Cp −Rp represents the load. The structure of these systems keeps changing due to consecutive turn-on and turn-off processes of electronic switches.10  Examples from Power Electronics 1.1 The System It is applied for induction heating where high-frequency power supply is required with frequency of several thousand hertz Figure 1. ideal components are assumed.10.24 shows the inverter configuration in its simplest form. 
The sensitive dependence on initial conditions.

1.10  Examples from Power Electronics

1.10.1  Introduction
Power electronic systems have wide applications both in industry and at home. They are variable structure, nonlinear dynamic controlled systems. The structure of these systems keeps changing due to the consecutive turn-on and turn-off processes of the electronic switches. The primary source of nonlinearity in these systems is that the switching instants depend on the state variables [1]. Furthermore, the frequent nonlinearities, for instance, piecewise-linear ones, can also be found in the systems. In order to show the applications of the theory of nonlinear dynamics summarized in the previous sections, five examples have been selected from the field of power electronics [15,16]. Various system states and bifurcations will be presented, without the claim of completeness, in this section. Our main objective is to direct the attention of the readers, engineers, to the possible strange phenomena that can be encountered in power electronics and explained by the theory of nonlinear dynamics.

1.10.2  High-Frequency Time-Sharing Inverter

1.10.2.1  The System
It is applied for induction heating, where a high-frequency power supply is required with a frequency of several thousand hertz. Figure 1.24 shows the inverter configuration in its simplest form. In order to explain the mode of action of the inverter, ideal components are assumed. The supply is provided by a center-tapped DC voltage source. The parallel oscillatory circuit Lp−Cp−Rp represents the load. Thyristors T1 and T2 are alternately fired at instants located at every sixth zero crossing on the positive and on the negative slope of the output voltage.

The basic operation of the inverter can be understood from the time functions of Figure 1.25. In Figure 1.25a, the inverter output current pulses io, the "high"-frequency approximately sinusoidal output voltage vo, and the condenser voltage vc can be seen. After firing a thyristor, an output current pulse io flows into the load, which changes the polarity of the series condenser voltage vc from −Vcm to Vcm. The thyristor turn-off time can be a little bit longer than two and a half cycles of the output voltage (Figure 1.25b).

Figure 1.26 presents a configuration with three positive and three negative sub-inverters. The so-called positive and negative sub-inverters, I+ and I−, are encircled by dotted lines. The respective input and output terminals of the sub-inverters are paralleled. The numbering of the sub-inverters corresponds to the firing order. Each sub-inverter pair works in the same way as was previously described.

FIGURE 1.24  The basic configuration of the inverter.

1.10.2.2  Results
One of the most interesting results is that, by using an approximate model assuming sinusoidal output voltage, no steady-state solution can be found for the current pulse and the other variables in a certain operation region; its theoretical explanation can be found in Ref. [6]. The laboratory tests verified this theoretical conclusion. To describe the phenomena in this region in quantitative form, a more accurate model was used, with 11 state variables. The independent energy storage elements are six Ls series inductances, three Cs series capacitances, one Lp inductance, and one Cp capacitance, altogether 11 elements. The accurate analysis was performed by simulation in the MATLAB environment for open- and for closed-loop control.

FIGURE 1.25  Time functions of (a) output voltage vo, output current io, condenser voltage vc, and (b) thyristor voltage vT1 in basic operation.

FIGURE 1.26  The power circuit of the time-sharing inverter. (Reprinted from Bajnok, M. et al., Surprises stemming from using linear models for nonlinear systems: Review, in Proceedings of the 29th Annual Conference on Industrial Electronics, Control and Instrumentation (IECON'03), Roanoke, VA, November 2–6, 2003, vol. I, pp. 961–971, CD Rom ISBN:0-7803-7907-1. © [2003] IEEE. With permission.)

1.10.2.3  Open-Loop Control
A bifurcation diagram was generated here. The peak values of the output voltage Vom were sampled, stored, and plotted as a function of the control parameter To, where To is the period between two consecutive firing pulses in the positive or in the negative sub-inverters (Figure 1.27). In basic operation, the output voltage vo repeats itself in each period To. This state is called period-1. At firing period To = 277 μs, for example, a period-5 state develops: there are five consecutive distinct Vom values instead of just one single value Vom for a given To, and vo is still periodic, but it repeats itself only after 5To has elapsed (Figure 1.28).

FIGURE 1.27  Bifurcation diagram of inverter with open-loop control. (Reprinted from Bajnok, M. et al., Surprises stemming from using linear models for nonlinear systems: Review, in Proceedings of the 29th Annual Conference on Industrial Electronics, Control and Instrumentation (IECON'03), Roanoke, VA, November 2–6, 2003, vol. I, pp. 961–971, CD Rom ISBN:0-7803-7907-1. © [2003] IEEE. With permission.)

1.10.2.4  Closed-Loop Control
A self-control structure is obtained by applying a feedback control loop. Now, the approximately sinusoidal output voltage vo is compared with a DC control voltage VDC and the thyristors are alternately fired at the crossing points of the two curves. The study is concerned with the effect of the variation of the DC control voltage level VDC on the behavior of the feedback-controlled inverter. Again, the bifurcation diagram is used for the presentation of the results (Figure 1.29). The peak values of the output

As in the previous study, the peak values of the output voltage Vom were sampled, stored, and plotted, now as a function of the control parameter VDC. The results are basically similar to those obtained for open-loop control: the feedback-controlled inverter generates subharmonics as the DC voltage level is varied.

FIGURE 1.28  vo(t) and io1(t). Period-5 state, To = 277 μs. (Reprinted from Bajnok, M. et al., Surprises stemming from using linear models for nonlinear systems: Review, in Proceedings of the 29th Annual Conference on Industrial Electronics, Control and Instrumentation (IECON'03), Roanoke, VA, November 2–6, 2003, vol. I, pp. 961–971, CD Rom ISBN: 0-7803-7907-1. © [2003] IEEE. With permission.)

FIGURE 1.29  Bifurcation diagram of inverter with feedback control loop. (Reprinted from Bajnok, M. et al., Surprises stemming from using linear models for nonlinear systems: Review, in Proceedings of the 29th Annual Conference on Industrial Electronics, Control and Instrumentation (IECON'03), Roanoke, VA, November 2–6, 2003, vol. I, pp. 961–971, CD Rom ISBN: 0-7803-7907-1. © [2003] IEEE. With permission.)

1.10.3  Dual Channel Resonant DC–DC Converter

1.10.3.1  The System
The dual channel resonant DC–DC converter family was introduced earlier [3]. The family has 12 members. A common feature of the different entities is that they transmit power from input to output through two channels, the so-called positive and negative ones, coupled by a resonating capacitor. The converter can operate both in symmetrical and in asymmetrical mode.

The respective variables in the two channels vary symmetrically or asymmetrically in the two modes, respectively. There is no energy exchange between the positive and the negative channels in the symmetrical case; in asymmetrical operation, the energy exchange between the two channels is accomplished by a capacitor. Our description is restricted to the buck configuration (Figure 1.30) in symmetrical operation. Suffixes i and o refer to input and output, while suffixes p and n refer to the positive and negative channel. To simplify the configuration, capacitance βC is replaced by a short circuit.

Two different converter versions can be derived from the configuration. The first version contains diodes in place of the clamping switches Scp and Scn. The second one applies controlled switches conducting current in the direction of the arrow. The controlled switches within one channel are always in complementary states (i.e., when Sp is on, Scp is off, and vice versa).

In discontinuous current-conduction mode (DCM), by turning on switch Sp, a sinusoidal current pulse iip is developed from ωt = 0 to αp (ω = 1/√(LC) in circuit Sp, L, C). The currents are iip = iop = icp in the interval 0 < ωt < αp. The capacitor voltage vc swings from Vcn to Vcp (Vcn < 0). By turning on switch Scp at αp, the choke current commutates from Sp to Scp, and the stored energy is entirely depleted in the interval αp < ωt < αep, where αep denotes the extinction angle of the inductor current. The same process takes place at the negative side, resulting in a negative condenser current pulse and voltage swing after turning on Sn.

FIGURE 1.30  Resonant buck converter with double load.

FIGURE 1.31  Time functions of (a) input and output currents and (b) condenser voltage.
The energy stored in the choke at ωt = αp is depleted in the interval αp < ωt < ωTs, where Ts = 1/fs is the switching period.
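The quantities fixed by the resonant loop, the angular frequency ω = 1/√(LC) and hence the duration π/ω of a half-sine current pulse, are a one-line computation. The component values below are hypothetical, chosen only to illustrate the orders of magnitude involved.

```python
import math

# omega = 1/sqrt(L*C) sets the shape of the resonant current pulse.
# L and C below are assumed values, not the chapter's.
L = 100e-6   # series inductance, henry (assumed)
C = 1e-6     # resonating capacitance, farad (assumed)

omega = 1.0 / math.sqrt(L * C)     # resonant angular frequency, rad/s
f_res = omega / (2.0 * math.pi)    # resonant frequency, Hz
t_half_sine = math.pi / omega      # duration of a half-sine pulse, s
```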

1.10.3.2  The PWM Control
For controlling the output voltage vo = vop + von by PWM switching, a feedback control loop is applied (Figure 1.32a). The control voltage signal vcon is obtained by amplifying the error, i.e., the difference between the actual output voltage vo and its desired value Vref. The control signal vcon is compared to a repetitive sawtooth waveform (Figure 1.32b). When the amplified error signal vcon is greater than the sawtooth waveform, the switch control signal (Figure 1.32c) becomes high and the selected switch turns on. Otherwise, the switch is off. The controlled switches are Sp and Sn (the switches within one channel, e.g., Sp and Scp, are in complementary states), and they are controlled alternately: the switch control signal is generated for the switch in one channel in one period of the sawtooth wave, and in the next period, the signal is generated for the switch in the other channel. The switching frequency of these switches is therefore half of the frequency of the sawtooth wave.

In the continuous current-conduction mode (CCM) of operation, the inductor current flows continuously (iop > 0). In DCM, the current is zero between αep and ωTs. After turning on Scp, the capacitor voltage vc stops changing and keeps its value Vcp. The inductor current iop decreases in both cases in a linear fashion.

FIGURE 1.32  PWM control: (a) Block diagram, (b) input and (c) output signals of the comparator.

FIGURE 1.33  Enlarged part of the bifurcation diagram.
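The comparator logic described above can be condensed into a short sketch (function and variable names are ours, not the chapter's): the selected switch is on while vcon exceeds the rising sawtooth, and successive sawtooth periods address the two channels alternately, which is why each switch runs at half the sawtooth frequency.

```python
def pwm_schedule(vcon, v_low, v_high, n_periods=4, steps=100):
    """Sketch of the PWM comparator: returns (channel, duty) per
    sawtooth period.  The sawtooth rises from v_low to v_high; the
    switch is on while vcon is above it.  Periods alternate between
    the positive ('p') and negative ('n') channel."""
    schedule = []
    for k in range(n_periods):
        channel = 'p' if k % 2 == 0 else 'n'
        on_steps = sum(
            1 for i in range(steps)
            if vcon > v_low + (v_high - v_low) * i / steps
        )
        schedule.append((channel, on_steps / steps))
    return schedule

# vcon halfway between the sawtooth limits gives roughly 50% duty,
# with channels p and n served in alternate sawtooth periods.
```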

1.10.3.3  Results
The objective is the calculation of the controlled variable vo in steady state in order to discover the various possible states of this nonlinear variable-structure dynamical system. The analysis was performed by simulation in the MATLAB environment. The effect of the variation of Kv was studied. To generate a bifurcation diagram, the output voltage vo was sampled and stored, in steady state, at the start of every switching cycle (vok = vo(kTs)). With a sufficient number of sets of steady-state data, the bifurcation diagram can be obtained by plotting the sampled output voltage vertically, with the voltage gain Kv drawn horizontally as the bifurcation parameter. A representative bifurcation diagram is shown in Figure 1.33. A beautiful period-doubling scenario can be seen, and the period-9 subharmonic state is clearly visible. Denoting the first bifurcation point by μ0, the next one by μ1, etc., the values of the first several bifurcation points are given in Table 1.1.

TABLE 1.1  Bifurcation Points in Figure 1.33

The time function of the condenser voltage vc in period-9 operation is plotted in Figure 1.34. The period of vc, together with the period of the other state variables, is 9 · 2T = 9 · Ts in the period-9 state.

FIGURE 1.34  Condenser voltage in period-9 operation.
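The period-doubling cascade has a universal signature: the ratios δk of successive bifurcation intervals approach the Feigenbaum constant 4.6692…. The check is a few lines of arithmetic; the μ values below are the classical logistic-map bifurcation points, used here only as stand-ins for the converter's Table 1.1 entries.

```python
# delta_k = (mu_k - mu_{k-1}) / (mu_{k+1} - mu_k) approaches the
# Feigenbaum constant.  The mu values below are well-known logistic-map
# period-doubling points (stand-ins, not the converter's values).
FEIGENBAUM = 4.6692016

mu = [3.0, 3.449490, 3.544090, 3.564407]

def delta(mu, k):
    return (mu[k] - mu[k - 1]) / (mu[k + 1] - mu[k])

d1, d2 = delta(mu, 1), delta(mu, 2)
# d1 and d2 already lie within a few percent of the Feigenbaum constant
```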
Table 1.1 contains δ1 and δ2, where δk = (μk − μk−1)/(μk+1 − μk). Even the first two δk values are not far from the Feigenbaum constant [see (1.36)].

Figure 1.35 shows the Poincaré map for chaotic behavior at Kv = 74.2 in the plane of output voltage vok = vo(kTs) versus inductor current iopk = iop(kTs). This map reveals a highly organized structure. Note also that it lies in a bounded region of state space.

1.10.4  Hysteresis Current-Controlled Three-Phase VSC

1.10.4.1  The System
The objective is to highlight the complex behavior of a three-phase DC–AC voltage source converter (VSC) controlled by a closed-loop hysteresis AC current controller (HCC) [10]. The block diagram of the system is shown in Figure 1.36. It consists of a three-phase full-bridge VSC with six bidirectional switches, a simple circuit modeling the AC side, a reference frame transformation, and the HCC. The three-phase AC terminals of the VSC are tied through series L–R components to voltage space vector es. This simple circuit can approximately model either the AC mains or an induction motor. All voltage and current vectors used are three-phase space vectors; a space vector is the complex representation of three-phase quantities. Space vectors with suffix s are written in the stationary reference frame (SRF), while space vectors without suffix s are in the rotating reference frame (RRF), revolving with angular speed ω and fixed to voltage space vector es. There are some benefits to using the RRF over the SRF.
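The space-vector notation used from here on can be made concrete in a few lines. The 2/3 amplitude-invariant scaling below is one common convention and our assumption; the chapter itself does not spell one out.

```python
import cmath, math

A = cmath.exp(2j * math.pi / 3)   # 120-degree rotation operator

def space_vector(xa, xb, xc):
    """Three-phase quantities -> complex space vector in the SRF
    (2/3 amplitude-invariant scaling assumed)."""
    return (2.0 / 3.0) * (xa + A * xb + A**2 * xc)

def to_rrf(x_srf, omega, t):
    """SRF -> RRF revolving with angular speed omega."""
    return x_srf * cmath.exp(-1j * omega * t)

# A balanced sinusoidal three-phase set is a unit vector rotating at
# omega in the SRF, hence a constant vector in the RRF.
omega, t = 2 * math.pi * 50, 0.003
ia = math.cos(omega * t)
ib = math.cos(omega * t - 2 * math.pi / 3)
ic = math.cos(omega * t + 2 * math.pi / 3)
i_rrf = to_rrf(space_vector(ia, ib, ic), omega, t)
```

This is why a constant reference current in the RRF corresponds to a sinusoidal three-phase reference in the SRF.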

FIGURE 1.35  Poincaré map for chaotic operation (Kv = 74.2). (Reprinted from Sütő, Z. and Nagy, I., Applied Electromagnetics and Computational Technology II, Eds. H. Tsuboi and I. Vajda, IOS Press, Amsterdam, the Netherlands, pp. 233–244, 2000. With permission.)

1.10.4.2  Hysteresis Control
The controller is a somewhat sophisticated hysteresis AC current control loop. At first, the AC current is is transformed into the RRF revolving with angular frequency ω. In the RRF, the reference current i* is a stationary space vector. T is the sampling time. The HCC selects the switching state of the VSC to keep the space vector of the error current ie within the tolerance band (TB). The shape of the TB, centered around the end point of i*, is assumed to be a circle. Whenever ie reaches the periphery of the TB, a new converter voltage vk is produced by the new switching state of the VSC. The new vk forces ie to rebound from the periphery of the TB to its interior. To avoid certain shortcomings, two concentric circles are applied as TBs, with radii ri and ro (ro > ri). Figure 1.37a shows a small part of the bifurcation diagram of the system; the sample points of the real component of the error current vector ie are on the vertical axis.
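A drastically simplified sketch of the circular tolerance-band rule is given below. All names, the six fixed correction directions mimicking the active VSC voltage vectors, and the pure-gain plant response are our assumptions; the essential behavior is only that, whenever the error current leaves the band, the direction pointing most nearly back toward the band center is applied.

```python
import cmath, math

# Six fixed correction directions, mimicking the six active VSC
# voltage vectors (a strong simplification of the real plant).
VECTORS = [cmath.exp(1j * k * math.pi / 3) for k in range(6)]

def step_error(i_err, r_o, gain=0.3):
    """One control decision of the simplified hysteresis rule."""
    if abs(i_err) < r_o:
        return i_err                       # inside the band: do nothing
    inward = -i_err / abs(i_err)           # unit vector toward the center
    v = max(VECTORS, key=lambda u: (u * inward.conjugate()).real)
    return i_err + gain * v                # rebound toward the interior

i_err = 1.5 + 0.0j                         # start outside the band
for _ in range(50):
    i_err = step_error(i_err, r_o=1.0)
# the error is driven back inside the tolerance band and stays there
```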

The bifurcation parameter is the ratio ro/ri. Proceeding from right to left in Figure 1.37a, a beautiful period-doubling cascade can be seen: the period is doubled at successively closer border values of ro/ri, and after the series of period doublings the system trajectory finally becomes chaotic. Figure 1.37b presents the Lyapunov exponent for the region of the period-doubling cascade, showing that at the bifurcation points the Lyapunov exponent becomes zero. After a short chaotic range, the system response suddenly becomes periodic again from ro/ri ≅ 1.1349, and the response remains periodic with second-order subharmonic oscillation down to ro/ri ≅ 1.1307. Period doubling occurs again from ro/ri ≅ 1.1307 to 1.1281, where a fourth-order subharmonic appears, and starting from the latter value the system trajectory finally becomes chaotic again.

The Poincaré section of the system is shown in Figure 1.38 for a chaotic state at ro/ri = 1.125. The Poincaré section is plotted using the real and imaginary components of the sampled error current space vector ie,m, m = 1, 2, …. The trajectory is restricted to a smaller region of state space at this ro/ri value. The structure and order of the diagram can be considered an outcome of the deterministic background of chaotic systems.

FIGURE 1.37  (a) Period-doubling cascade. (b) Lyapunov exponent. (Reprinted from Sütő, Z. and Nagy, I., Applied Electromagnetics and Computational Technology II, Eds. H. Tsuboi and I. Vajda, IOS Press, Amsterdam, the Netherlands, pp. 233–244, 2000. With permission.)

FIGURE 1.38  Poincaré section at ro/ri = 1.125. (Reprinted from Sütő, Z. and Nagy, I., Applied Electromagnetics and Computational Technology II, Eds. H. Tsuboi and I. Vajda, IOS Press, Amsterdam, the Netherlands, pp. 233–244, 2000. With permission.)
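The Lyapunov-exponent behavior described here (negative on a periodic orbit, zero at a bifurcation point, positive in chaos) is easy to reproduce for a one-dimensional map, where the largest exponent is the long-run average of ln|f′(x)| along a trajectory. The logistic map is again used as a stand-in for the sampled converter dynamics.

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
# the long-run average of ln|f'(x)| = ln|r*(1 - 2x)| along an orbit.
def lyapunov(r, n=20000, n_transient=1000):
    x = 0.3
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        # guard against log(0) if the orbit grazes the map's extremum
        acc += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
    return acc / n

l_periodic = lyapunov(2.5)   # stable fixed point: clearly negative
l_chaotic = lyapunov(4.0)    # fully developed chaos: clearly positive
```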

1.10.5  Space Vector Modulated VSC with Discrete-Time Current Control

1.10.5.1  The System [11]
In this example, the system studied is a VSC connected to a symmetrical three-phase configuration on the AC side consisting of series R–L circuits in each phase and a three-phase symmetrical voltage source es, where es is a space vector, the complex representation of three-phase quantities (Figure 1.39). This power part of the system is the same as the one in our previous example. This simple circuit can model either the AC mains or an induction or synchronous machine in steady-state operation. The VSC is a simple three-phase bridge connection with six controlled lossless bidirectional switches. On the DC side, a constant and ideal voltage source with voltage VDC is assumed. The three-phase AC variables are written in space vector form. The space vectors can be expressed in the stationary α–β reference frame (SRF), with suffix s, or in the rotating d–q reference frame (RRF) revolving with ω, the angular frequency of es, and denoted without suffix s. The real and imaginary components of space vectors in the RRF are distinguished by means of suffixes d and q, respectively.

1.10.5.2  The Control and the Modulation
To track the reference current i*, which is usually the output signal of another controller, by the actual AC current i, a digital current controller (DCC) is applied. The DCC consists of a Sample&Hold (S&H) unit, a reference frame transformation transforming the three-phase current is from SRF to RRF, a proportional-integral (PI) digital controller with complex parameters K0 and K1, including a saturation function s(y) to avoid overmodulation in the VSC, and an SVM algorithm. Suffix m denotes the discrete-time index, and the z−1 blocks delay their input signals by one sampling period Ts. The SVM has to generate a switching sequence with proper timing of the state changes in the VSC over a fixed modulation period equal to Ts.

1.10.5.3  Analysis and Results
Omitting here all the mathematics, only the final results are presented; interested readers can turn to paper [11]. One crucial point in dynamical systems is stability. It was studied by the behavior of the fixed points (FPs). Only one FP XI can exist in the linear region, when um = ym, but there are three FPs in the Poincaré plane: XI, and XII+ and XII− in the saturation region.
FIGURE 1.39  Block diagram of current-controlled voltage source converter with space vector modulation. (Reprinted from Sütő, Z. and Nagy, I., IEEE Trans. Cir. Syst. I: Fund. Theor. Appl., 50(8), 1064, 2003. © [2003] IEEE. With permission.)

In the analysis, the variable x = Z i + e was introduced in place of the current i, where Z = R + jωL and x = xd + jxq. A bifurcation diagram was calculated along the direction B1 of Figure 1.40: λ0 was kept constant at the value λ0 = 0.5e^(j45°), and λ1 was changed along line B1 by modifying λ1 = μe^(j40°), where μ is the bifurcation parameter. It can be shown that all three FPs can develop in the shaded area of Figure 1.40, even within the unit circle. Since λ1 crosses the existence region of the FPs XII±, the FPs XII± develop in the state space. The bifurcation diagram is shown in Figure 1.41a. Eigenvalues show that a saddle XII− and a node XII+ are born at the left side, at the critical value μA = 0.5301. This bifurcation can be classified as a fold bifurcation. By increasing μ very near to μA, a spiral node develops from the node, due to the creation of a pair of conjugate complex eigenvalues from the merge of two real eigenvalues at XII+.
It can be shown that the complex control parameters K0 and K1 can be calculated from the following equations:

K0 = (1 + F − λ0 − λ1)/G  (1.39)

K1 = (λ0 λ1 − F)/G  (1.40)

where λ0 and λ1 are the eigenvalues of the Jacobian matrix calculated at the FP in the linear region, and F and G are complex constants. The eigenvalues λ0 and λ1 must be inside the unit circle to ensure stable operation. Selecting λ0 and λ1 on the basis of the dynamic requirements, K0 and K1 are determined from the last two equations.

FIGURE 1.40  Bifurcation direction B1 selected to enter and cross the existence region of fixed points XII±. (Reprinted from Bajnok, M. et al., Surprises stemming from using linear models for nonlinear systems: Review, in Proceedings of the 29th Annual Conference on Industrial Electronics, Control and Instrumentation (IECON'03), Roanoke, VA, November 2–6, 2003, vol. I, pp. 961–971, CD Rom ISBN: 0-7803-7907-1. © [2003] IEEE. With permission.)

The magnitudes of the eigenvalues belonging to FPs XII± are also presented in Figure 1.41b. There are three more bifurcation values of μ between μA and μB. Looking at the eigenvalues, another critical value, μ3 at 1.292, can be seen: here the pair of conjugate complex eigenvalues leaves the unit circle, the spiral node XII+ becomes unstable, and the FP XII+ becomes a spiral saddle. Reaching the other critical value μB = 2.053, the FPs collide and disappear (Figure 1.41a).
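Equations 1.39 and 1.40, as reconstructed here, are a direct pole-placement recipe: pick the desired closed-loop eigenvalues λ0, λ1 inside the unit circle and solve for the complex gains. The sketch below uses made-up values for the plant constants F and G and merely round-trips the algebra: inverting the two relations must return the eigenvalue sum and product.

```python
import cmath, math

# K0 = (1 + F - l0 - l1)/G and K1 = (l0*l1 - F)/G, as reconstructed
# from (1.39)-(1.40).  F and G below are placeholder complex plant
# constants, not the chapter's values.
F = 0.8 - 0.1j
G = 0.5 + 0.2j
l0 = 0.5 * cmath.exp(1j * math.radians(45))   # desired eigenvalues,
l1 = 0.6 * cmath.exp(1j * math.radians(40))   # inside the unit circle

K0 = (1 + F - l0 - l1) / G
K1 = (l0 * l1 - F) / G

# Invert the two relations: they must return the eigenvalue sum/product.
eig_sum = 1 + F - G * K0       # equals l0 + l1
eig_prod = F + G * K1          # equals l0 * l1
```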

Just before μB, the curve of the complex eigenvalue pair splits into two branches, indicating that two real eigenvalues develop, and the spiral saddle becomes a saddle; at μB, the two saddles disappear. In non-smooth nonlinear systems, when a pair of FPs appears at some bifurcation value and one FP is unstable while the other one is either stable or unstable, as in our case, the bifurcation is also called a border collision pair bifurcation.

FIGURE 1.41  (a) Bifurcation diagram along B1. (b) Magnitudes of eigenvalues belonging to XII± (continuous curves to XII+, dashed curves to XII−). (c) Global phase portrait at the homoclinic connection, μ = μ2. (d) Global phase portrait before the homoclinic connection, μ = 1.17 < μ2. (e) Global phase portrait after the homoclinic connection, μ = 1.2 > μ2; limit cycle developed. (f) Global phase portrait after fixed point XII+ became unstable, μ = 1.4 > μ3. (Reprinted from Bajnok, M. et al., Surprises stemming from using linear models for nonlinear systems: Review, in Proceedings of the 29th Annual Conference on Industrial Electronics, Control and Instrumentation (IECON'03), Roanoke, VA, November 2–6, 2003, vol. I, pp. 961–971, CD Rom ISBN: 0-7803-7907-1. © [2003] IEEE. With permission.)

At the critical value μ2, the so-called homoclinic bifurcation of the saddle XII− takes place, a global bifurcation. In Figure 1.41c, the global structure of the manifolds belonging to XII− is presented at μ2. It can be seen that one branch of the unstable manifolds connects to the stable manifold of XII−, creating a closed, so-called "big" homoclinic connection denoted as Γμ2. It is a topologically unstable structure of the state space. The other unstable and stable branches of the manifolds of XII− are directed to the stable XII+ and come from the unstable fixed point XI. Either increasing or decreasing the bifurcation parameter with respect to μ2, the homoclinic connection disappears, and global phase portraits are calculated at both sides of the homoclinic bifurcation (Figure 1.41d and e). Decreasing μ from μ2, the unstable manifold comes to the inner side of the stable manifold and finally is directed to the stable FP XII+. However, when μ is increased over μ2, the unstable manifold runs to the other side of the stable manifold of XII− and connects to a stable "limit cycle" Lμ that surrounds all three FPs. So, in the region [μ2, μ3], two attractors, the stable "limit cycle" Lμ and the spiral node XII+, exist.

Indeed, a pair of conjugate complex eigenvalues of FP XII+ leaving the unit circle at the critical value μ3 can be detected. When a pair of conjugate complex eigenvalues of a FP leaves the unit circle, the so-called Neimark-Sacker bifurcation occurs, which is a kind of local bifurcation. In that case, a closed curve looking like a limit cycle develops in the xd–xq plane, corresponding to a quasi-periodic "orbit" in the state space. The bifurcation diagram supports this finding, but seemingly there is no difference in the bifurcation diagram before and after μ3. When μ3 is reached, the spiral node XII+ becomes unstable, and the only attractor that remains is the "limit cycle" Lμ (Figure 1.41f).

The various system states and bifurcations are due to the variations of the complex coefficients K0, K1 of the digital PI controller. Design conditions were derived to avoid the undesired operations of the system and to select the coefficients K0, K1 of the controller [11].

1.10.6  Direct Torque Control

1.10.6.1  The System
The control methods applied frequently for the induction machine are the field-oriented control (FOC) and the direct torque control (DTC) [2,13]. In DTC, the instantaneous value of the electric torque τ and that of the absolute value ψ of the stator flux linkage are estimated from the space vectors of the stator voltage v and stator current i and compared to their respective reference values τr and ψr (Figure 1.42).

FIGURE 1.42  Direct torque controlled induction machine. (Reprinted from Sütő, Z. and Nagy, I., Bifurcation phenomena of direct torque controlled induction machines due to discontinuities in the operation, in Proceedings of the IEEE International Conference on Industrial Technology (ICIT2005), Hong Kong, December 14–17, 2005, CD Rom ISBN: 0-7803-9484-4. © [2005] IEEE. With permission.)

The error signals τe = τr − τ and ψe = ψr − ψ determine the outputs sτ and sψ of the two hysteresis comparators, respectively. The four input signals, that is, the digital signal sω prescribing the direction of rotation together with sτ and sψ and the flux orientation ∠ψ,

determine the value of s, which selects the output voltage vector of the VSC out of the eight possible values [12].

1.10.6.2  Mathematical Model
The mathematical model of the system is based on the Poincaré map. A continuous-time model is developed for describing the system between two consecutive switchings, under the assumption that the speed of the rotor and the rotor flux are constant due to the large inertia of the rotating masses and the large rotor time constant. The model is a piece-wise linear one between each pair of switchings, but the switching instants depend on the state variables, making the whole model nonlinear. The Poincaré map and the PMF are generated, and the Jacobian matrix of the map is also calculated for stability analysis [12]. Based on the approximate PMF, the relations needed for investigating the stability of the FPs and the Lyapunov exponents as functions of the reference torque are derived.

1.10.6.3  Numerical Results
The numerical values used are given in Ref. [12]. In Refs. [12,14], the DTC system is studied in more detail. A bifurcation diagram can be seen in Figure 1.43a. The bifurcation parameter μ is the torque reference value τr, and the sample points of the torque error τe,m (m = 1, 2, …) are plotted along the vertical axis; they are taken from the trajectory when the flux space vector is at an appropriate position [12]. The bifurcation parameter spans only a quite narrow region here. Note that, proceeding from left to right, a sequence of sudden changes (bifurcations) between various orders of subharmonics can be seen. Non-smooth bifurcations, like border-collision bifurcations and non-standard bifurcations due to the discontinuity of the Poincaré map, are present; none of them reveals the regular smooth local types of bifurcations such as flip, fold, or Neimark ones. Mostly, the system settles down to a regular periodic state (one or more FPs appear in the bifurcation diagram), but it may be some higher order subharmonic motion. At the right side of the diagram, there is a wider region from μ ≈ 0.7626, where the points suddenly spread vertically and τe,m covers vertically the full width of the torque hysteresis band.

In order to identify the system states, the largest Lyapunov exponent is calculated and plotted in Figure 1.43b, where a positive value of λ implies the presence of chaotic operation. Mostly, the Lyapunov exponent is negative; it becomes zero (or positive) at the bifurcation points, indicating also bifurcations. Although the points in the bifurcation diagram suddenly spread at μ ≈ 0.7626, the Lyapunov exponent is still clearly negative there and starts to increase more or less linearly. Consequently, the periodic motion does not become suddenly chaotic at μ ≈ 0.7626; the motion becomes chaotic at μ ≈ 0.7645.

FIGURE 1.43  (a) Bifurcation diagram of the torque error τe,m. (b) The largest Lyapunov exponent. (Reprinted from Sütő, Z. and Nagy, I., Bifurcation phenomena of direct torque controlled induction machines due to discontinuities in the operation, in Proceedings of the IEEE International Conference on Industrial Technology (ICIT2005), Hong Kong, December 14–17, 2005, CD Rom ISBN: 0-7803-9484-4. © [2005] IEEE. With permission.)
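The hysteresis comparators at the heart of the DTC loop can be rendered minimally as follows (the band width, the two-level convention, and all names are our assumptions). The essential point is the memory: inside the band the previous output is held, which is exactly what makes the switching instants state dependent and the model non-smooth.

```python
# Minimal two-level hysteresis comparator for the flux error
# (band width and output convention are assumptions, not the
# chapter's values): 1 means "increase flux", 0 means "decrease".
class Hysteresis2:
    def __init__(self, band):
        self.band, self.out = band, 1
    def update(self, err):
        if err > self.band:
            self.out = 1
        elif err < -self.band:
            self.out = 0
        return self.out          # inside the band: hold previous output

flux_cmp = Hysteresis2(band=0.05)
seq = [flux_cmp.update(e) for e in (0.1, 0.0, -0.1, 0.0, 0.1)]
```

In DTC the torque comparator is usually a three-level variant of the same idea, and the pair (sτ, sψ) together with the flux sector indexes the voltage-vector selection table.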

Acknowledgments

This chapter was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences (HAS), by the Hungarian Research Fund (OTKA K72338), and by the Control Research Group of the HAS. This work is connected to the scientific program of the "Development of quality-oriented and cooperative R+D+I strategy and functional model at BME" project. This project is supported by the New Hungary Development Plan (Project ID: TÁMOP-4.2.1/B-09/1/KMR-2010-0002). The collaboration of P. Stumpf is gratefully acknowledged.

References

1. S. Banerjee and G. C. Verghese, eds. Nonlinear Phenomena in Power Electronics: Bifurcations, Chaos, Control and Applications. IEEE Press, New York, 2001.
2. I. Takahashi and T. Noguchi. A new quick-response and high-efficiency control strategy of an induction motor. IEEE Transactions on Industry Applications, A Publication of Industry Applications Society, IA-22(5):820–827, September/October 1986.
3. O. Dranga, B. Buti, I. Nagy, and E. Masada. Stability analysis of a feedback-controlled resonant DC–DC converter. IEEE Transactions on Industrial Electronics, 50(1):141–152, February 2003.
4. E. Ott. Chaos in Dynamical Systems. Cambridge University Press, New York, 1993.
5. F. C. Moon. Chaotic and Fractal Dynamics. John Wiley & Sons, New York, 1992.
6. B. Buti, I. Nagy, and E. Masada. Study of subharmonic generation in a high frequency time-sharing inverter. The Transactions of The Institute of Electrical Engineers of Japan, 120-D(4):574–580, April 2000.
7. H.-O. Peitgen, H. Jürgens, and D. Saupe. Chaos and Fractals: New Frontiers of Science. Springer-Verlag, New York, 1992.
8. R. C. Hilborn. Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers. Oxford University Press, New York, 1994.
9. T. S. Parker and L. O. Chua. Practical Numerical Algorithms for Chaotic Systems. Springer-Verlag, New York, 1989.
10. Z. Sütő and I. Nagy. Study of chaotic and periodic behaviours of a hysteresis current controlled induction motor drive. In H. Tsuboi and I. Vajda, eds., Applied Electromagnetics and Computational Technology II, vol. 16 of Studies of Applied Electromagnetics and Mechanics, pp. 233–244. IOS Press, Amsterdam, the Netherlands, 2000.
11. Z. Sütő and I. Nagy. Analysis of nonlinear phenomena and design aspects of three-phase space-vector modulated converters. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications—Special Issue on Switching and Systems, 50(8):1064–1071, August 2003.
12. Z. Sütő and I. Nagy. Bifurcation phenomena of direct torque controlled induction machines due to discontinuities in the operation. In Proceedings of the IEEE International Conference on Industrial Technology (ICIT2005), Hong Kong, December 14–17, 2005. CD Rom ISBN: 0-7803-9484-4.
13. M. Depenbrok. Direct self-control (DSC) of inverter fed induction machine. IEEE Transactions on Power Electronics, PE-3(4):420–429, October 1988.
14. Z. Sütő, I. Nagy, and E. Masada. Nonlinear dynamics in direct torque controlled induction machines analyzed by recurrence plots. In Proceedings of the 12th European Conference on Power Electronics and Applications (EPE2007), Aalborg, Denmark, September 2–5, 2007. CD Rom ISBN: 9789075815108.
15. M. Bajnok, B. Buti, and I. Nagy. Surprises stemming from using linear models for nonlinear systems: Review. In Proceedings of the 29th Annual Conference on Industrial Electronics, Control and Instrumentation (IECON'03), Roanoke, VA, November 2–6, 2003, vol. I, pp. 961–971. CD Rom ISBN: 0-7803-7907-1.
16. O. Dranga, B. Buti, and I. Nagy. Nonlinear phenomena in power electronics: Review. Journal for Control, Measurement, Electronics, Computing and Communications (AUTOMATIKA), 42(3–4):117–132, Zagreb, Croatia, 2001.

2
Basic Feedback Concept

Tong Heng Lee, National University of Singapore
Kok Zuea Tang, National University of Singapore
Kok Kiong Tan, National University of Singapore

2.1  Basic Feedback Concept..............................................................................................2-1
Effects of Feedback  •  Analysis and Design of Feedback Control Systems  •  Implementation of Feedback Control Systems  •  Application Examples
Bibliography.....................................................................................................................2-10

2.1  Basic Feedback Concept

Control systems can be broadly classified into two basic categories: open-loop control systems and closed-loop control systems. An open-loop control system, typically shown in Figure 2.1, utilizes a controller or control actuator in order to obtain the desired response of the controlled variable y. An input signal or command r is applied to the controller C(s), whose output acts as the actuating signal e, which then actuates the controlled plant G(s) to drive the controlled variable y toward the desired value. The controlled variable is often a physical variable such as speed, position, voltage, temperature, or pressure associated with a process or a servomechanism device. The external disturbance, or noise affecting the plant, appearing at the input of the plant, is represented in the figure by a signal n.

An example of an open-loop control system is the speed control of some types of variable-speed electric drills, as in Figure 2.2. The user will move the trigger that activates an electronic circuit to regulate the voltage Vi to the motor. In the figure, solid lines are drawn to indicate electrical connections, and dashed lines show the mechanical connections. The shaft speed w is strongly load dependent, so the operator must observe and counter any speed deviation through the trigger, which varies Vi.

In contrast to an open-loop control system, a closed-loop control system utilizes an additional measurement and feedback of the controlled variable in order to compare the actual output with the desired output response. While both classes of systems have the common objective of providing a desired system response, they differ in physical configurations as well as in performance. A simple closed-loop feedback control system is shown in Figure 2.3. The difference between the controlled variable y of the system G(s) under control and the reference input r is amplified by the controller C(s) and used to control the system so that this difference is continually reduced. A measurement system will, therefore, be a part of a closed-loop system to provide this feedback. Figure 2.3 represents the simplest type of feedback system, and it forms the backbone of many more complex feedback systems, such as those involving multiple feedback loops, as in cascade control, Smith predictor control, Internal Model Control, etc. It is interesting to note that many systems in nature also inherently employ a feedback system in them. For example, while driving the car, the driver gets visual feedback on the speed and position of the car by looking at the speedometer and checking through the mirrors and windows. A simple position control system for a dc motor is shown in Figure 2.4.

command θr is dialed into the system, and it is converted into an electrical signal Vr, representative of the command, via an input potentiometer. A gear connection on the output shaft transfers the output angle θo to a shaft, feeding back the sensed output θf as an electric signal Vf, which thus represents the output variable θo. The power amplifier amplifies the difference Ve between Vr and Vf and drives the dc motor. The motor turns until the output shaft is at a position such that Vf is equal to Vr. When this situation is reached, the error signal Ve is zero, the output of the amplifier is thus zero, and the motor stops turning.

FIGURE 2.1  Block diagram of an open-loop control system.
FIGURE 2.2  Example of an open-loop control system—variable-speed electric drill.
FIGURE 2.3  Block diagram of a closed-loop control system.
FIGURE 2.4  Example of a closed-loop control system—position control system for a dc motor.

2.1.1  Effects of Feedback

Compared to open-loop control systems, feedback systems or closed-loop systems are relatively complex and costly, as expensive measurement devices such as sensors are often necessary to yield the feedback.

Despite the cost and complexity, feedback systems are highly popular because of the many beneficial characteristics associated with feedback. The continual self-reduction of system error is but one of the many effects that feedback brings upon a system. We shall now show that feedback also brings about other desirable characteristics, such as a reduction of the system sensitivity, improvement of the transient response, reduction of the effects of external disturbance and noise, and improvement of the system stability.

2.1.1.1  Reduction of the System Sensitivity

To construct a suitable open-loop control system, all the components of the open-loop transfer function G(s)C(s) must be very carefully selected so that they respond accurately to the input signal. In practice, all physical elements have properties that change with environment and age; for instance, the winding resistance of an electric motor changes as the temperature of the motor rises during operation. The performance of an open-loop control system will thus be extremely sensitive to such changes in the system parameters. In contrast, feedback systems have the desirable advantage of being less sensitive to inevitable changes in the parameters of the components in the control system.

To illustrate this, consider the open-loop system and the closed-loop system shown in Figures 2.1 and 2.3, respectively. Suppose that, due to parameter variations, G(s) is changed to G(s) + ΔG(s), where |G(s)| ≫ |ΔG(s)|. In the open-loop system shown in Figure 2.1, the output is given by

  y(s) + Δy(s) = [G(s) + ΔG(s)]C(s)r(s)

Hence, the change in the output is given by

  Δy(s) = ΔG(s)C(s)r(s)

In the closed-loop system shown in Figure 2.3,

  y(s) + Δy(s) = [G(s) + ΔG(s)]C(s)r(s) / {1 + [G(s) + ΔG(s)]C(s)}

  Δy(s) ≈ ΔG(s)C(s)r(s) / [1 + C(s)G(s)]

The change in the output of the closed-loop system, due to the parameter variations in G(s), is reduced by a factor of 1 + G(s)C(s). Thus, the sensitivity to parameter variations is reduced, and the tolerance requirements of the components can be less stringent without affecting the performance too much.

To illustrate, consider the speed control problem of a simple dc motor as an example. Here, G(s) represents the transfer function between the speed of the motor and its input voltage; suppose, for the illustration, G(s) = 1/(10s + 1). Further, consider a simple proportional controller given by C(s) = 10 and a unit step reference signal, r(s) = 1/s. Now suppose there is a 10% drift in the static gain of the plant, leading to ΔG(s) = 0.1/(10s + 1). With open-loop control as in Figure 2.1, the resultant change in the controlled speed variable would be Δy(s) = 1/(s(10s + 1)). Under feedback control as in Figure 2.3, the corresponding variation in y is given by Δy(s) = 1/(s(10s + 11)), clearly reduced as compared to open-loop control.

2.1.1.2  Improvement of the Transient Response

One of the most important characteristics of control systems is their transient response. Since the purpose of control systems is to provide a desired response, the transient response of control systems often must be adjusted until it is satisfactory. Feedback provides such a means to adjust the transient response of a control system, and thus a flexibility to improve on the system performance.

Again consider the speed control problem of the dc motor, with the open-loop control system shown in Figure 2.1, G(s) = K/(Ts + 1) and C(s) = a. The time constant of the system is T. Now consider the closed-loop control system shown in Figure 2.3 with the same feedforward transfer function as that for Figure 2.1. It is straightforward to show that the time constant of this system has been reduced to T/(1 + Ka). Using the speed control problem of the dc motor as an example (i.e., K = 1, T = 10, a = 10), the time constant of the closed-loop system is computed as Tc = (1/11)T = 10/11. Clearly, the closed-loop system is about 10 times faster compared to the open-loop system. The reduction in the time constant implies a gain in the system bandwidth and a corresponding increase in the system response speed. The pole of the original plant is at s = −(1/10); under closed-loop control, the pole is shifted deeper into the left-hand half of the complex plane, to s = −(11/10). This phenomenon implies that an improved system stability is achieved (more details on stability analysis will be given in a later chapter).
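The time-constant reduction discussed in Section 2.1.1 can be checked numerically. The sketch below (an illustration written for this text, not from the original) forward-Euler-integrates the unity-feedback loop of Figure 2.3 with G(s) = K/(Ts + 1) and C(s) = a, then locates the 63.2% rise time, which should be close to Tc = T/(1 + Ka) = 10/11 s for the example values.

```python
def simulate(T=10.0, K=1.0, a=10.0, r=1.0, dt=1e-4, t_end=5.0):
    """Forward-Euler simulation of the closed loop of Figure 2.3 with
    G(s) = K/(Ts + 1) and C(s) = a:  T*dy/dt = -y + K*a*(r - y)."""
    y, t, trace = 0.0, 0.0, []
    while t < t_end:
        y += dt * (-y + K * a * (r - y)) / T
        t += dt
        trace.append((t, y))
    return trace

trace = simulate()
y_final = trace[-1][1]                      # steady state ~ Ka/(1 + Ka)
t63 = next(t for t, y in trace if y >= 0.632 * y_final)
print(round(t63, 2))                        # close to Tc = 10/11 ~ 0.9 s
```

Compared with the open-loop time constant T = 10 s, the measured 63.2% time confirms the roughly elevenfold speedup predicted by T/(1 + Ka).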

2.1.1.3  Reduction of the Effects of External Disturbance and Noise

All physical control systems are subjected to some types of extraneous signals and noise during operation. Examples of these signals are thermal noise voltage in electronic amplifiers and brush or commutator noise in electric motors. In process control, considering a room temperature control problem, for instance, external disturbance often occurs when the doors and windows in the room are opened or when the outside temperature changes. These signals may cause the system to provide an inaccurate output. A good control system should be reasonably resilient under these circumstances. In many cases, feedback can reduce the effect of disturbance/noise on the system performance.

To elaborate, consider again the open-loop and closed-loop systems of Figures 2.1 and 2.3, respectively, with a nonzero disturbance/noise component n at the input of the plant G(s). Assume that both systems have been designed to yield a desired response in the absence of the disturbance/noise component. In the presence of the disturbance/noise, the change in the output for the open-loop system is given by

  Δy(s) = G(s)n(s)

For the closed-loop control system, the change in the output is given by

  Δy(s) = G(s)n(s) / [1 + G(s)C(s)]

Clearly, the effects of the disturbance/noise have been reduced by a factor of 1 + G(s)C(s) with feedback. Consider the same speed control problem of a dc motor, but with the motor subject to a step type of load disturbance, n(s) = 0.1/s. With an open-loop system, the effect of the loading on the motor speed y is given by Δy(s) = 0.1/(s(10s + 1)). Under closed-loop control, the variation of the controlled variable is Δy(s) = 0.1/(s(10s + 11)), clearly reduced as compared to open-loop control.

2.1.1.4  Improvement of the System Stability

Stability is a notion that describes whether the system will be able to follow the input command as it changes. In a non-rigorous manner, a system is said to be unstable if its output is out of control or increases without bound. Here, it suffices to say that feedback could be a double-edged knife: if properly used, it can cause an originally unstable system to become stable; however, if improperly applied, it may possibly destabilize a stable system.

2.2  Analysis and Design of Feedback Control Systems

There are numerous analysis and design techniques for feedback control systems. Depending on the complexity of the plant and the control objectives, one might want to employ a linear design method, such as the classical root-locus or frequency-response method, for the controller, or resort to nonlinear design methods useful for adaptive control, optimal control, multivariable control, and expert systems. These techniques are fields in their own rights, and entire books can be dedicated to each one of them. In later sections, more will be discussed on these topics.
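The disturbance-rejection figures of Section 2.1.1.3 can be reproduced by simulation. A minimal sketch (an illustration, not from the original text) applies the 0.1 step load disturbance at the plant input of the dc-motor speed loop and compares the steady-state drift with and without feedback:

```python
def steady_output(closed, K=1.0, T=10.0, a=10.0, n=0.1, dt=1e-3, t_end=200.0):
    """Euler simulation of the dc-motor speed loop under a step load
    disturbance n entering at the plant input (regulation case, r = 0)."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        u = a * (0.0 - y) if closed else 0.0   # C(s) = a, or no controller
        y += dt * (-y + K * (u + n)) / T       # plant G(s) = K/(Ts + 1)
    return y

print(round(steady_output(False), 4))  # 0.1    : open-loop drift
print(round(steady_output(True), 4))   # 0.0091 : reduced by 1 + G(0)C(0) = 11
```

The steady-state values match the final-value-theorem results for Δy(s) = 0.1/(s(10s + 1)) and Δy(s) = 0.1/(s(10s + 11)).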

2.3  Implementation of Feedback Control Systems

Feedback control systems may be implemented via an analog circuit patch-up or with a digital computer. An electronic analog controller typically consists of operational amplifiers, resistors, and capacitors. The operational amplifier is often used as the function generator, and the resistors and capacitors are arranged to implement the transfer functions of the desired control combinations. An analog implementation of a control system is more suited to fairly simple systems, such as those using a three-term PID controller as C(s) in the feedback system of Figure 2.3. As the target control system design becomes more complex, the analog implementation will become increasingly bulky and tedious to handle. In these cases, a digital controller will be more useful.

Microprocessor-based digital controllers have now become very popular in industrial control systems, and there are many reasons for the popularity of such controllers. A digital controller measures the controlled variable at specific times, which are separated by a time interval called the sampling time, Δt. Each sample (or measurement) of the controlled variable is converted to a binary number for input to a digital computer. Based on the sampled information, the digital computer will execute an algorithm to calculate the controller output. The power of the microprocessor provides advanced features such as adaptive self-tuning, multivariable control, system identification, and optimal control. The ability of the microprocessor to communicate over a field bus or local area network is another reason for the wide acceptance of the digital controller. More will be dealt with on the subject of digital control in a later section.

Currently, there are many tools available for computer-based simulation and the design of feedback control systems, such as MATLAB®, LabVIEW, MATRIXx, VisSim, Workbench, etc. These simulation tools are mainly Windows-based applications which support either text-based or graphical programming. With these tools available, the analysis and design of feedback systems can be efficiently done. Some of the software tools allow rapid product prototyping and hardware-in-the-loop tests. These tools allow real-time signals and control commands to be viewed by the users, and interesting user interfaces can also be created using these design tools. These software design and analysis features are valuable for controller debugging and fine-tuning purposes. In many cases, hardware manufacturers of controllers provide library functions and drivers for users to port their controller designs to actual system implementation. They provide a seamless path from the design of controllers and selection of control parameters, to implementing on actual hardware and final testing. PC-based controllers provide an economical solution to implement feedback controllers, though they may not fulfill the time-critical demands of some systems. For systems with time-critical requirements, feedback controllers have been implemented on dedicated real-time processors.

2.4  Application Examples

Feedback control systems can be readily found in diverse applications and industries. The following are some sample examples of feedback control systems, highlighting the concepts of feedback control:

2.4.1  Process Control

Coupled tanks are commonly encountered in the process control industry. A pilot-scale setup is shown in Figure 2.5, and the schematic block diagram of this process is shown in Figure 2.6. The variables to be controlled are the liquid levels in the two tanks, which are interconnected via a small linking pipe. Level sensors installed in the tanks provide feedback signals of the liquid levels in the tanks. Control valves are used to manipulate the flow of liquid to the two tanks. These valves can be electrically controlled by control signals from the controller via the DAQ card.
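The sampled-data operation described above can be sketched in a few lines. In the illustration below (gains, sampling time, and plant are assumptions for this example, not from the text), the loop measures the controlled variable once per sampling period, runs a discrete three-term (PID) algorithm, and applies the result to a first-order plant:

```python
def pid_step(e, state, Kp=2.0, Ki=1.0, Kd=0.05, dt=0.01):
    """One sampling-period update of a discrete PID law:
    u[k] = Kp*e[k] + Ki*dt*sum(e) + Kd*(e[k] - e[k-1])/dt."""
    integral, e_prev = state
    integral += e * dt
    u = Kp * e + Ki * integral + Kd * (e - e_prev) / dt
    return u, (integral, e)

# Regulate the first-order plant dy/dt = -y + u toward the setpoint r = 1:
r, y, state, dt = 1.0, 0.0, (0.0, 0.0), 0.01
for _ in range(2000):                 # 20 s of closed-loop operation
    u, state = pid_step(r - y, state, dt=dt)
    y += dt * (-y + u)                # plant response over one sample period
print(round(y, 3))                    # settles at the setpoint
```

The integral term removes the steady-state error, which is the main reason a three-term law is the workhorse of the digital controllers described above.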

FIGURE 2.5  Application example—coupled tanks in process control.
FIGURE 2.6  Schematic of the coupled tanks.

2.4.2  Semiconductor Manufacturing

Lithography is a key process in silicon semiconductor manufacturing, which requires a high-precision control of processing parameters. The rapid transition to smaller microelectronic feature sizes involves the introduction of new lithography technologies, new photoresist materials, and tighter process specifications. This transition has become increasingly difficult and costly. The application of advanced computational and control methodologies has seen increasing utilization in recent years to improve yields and throughput and, in some cases, to enable the actual process to manufacture smaller devices.

There are specific applications in the lithography process where feedback control has made an impact. For example, with the introduction of chemically amplified photoresists, the temperature sensitivity of the postexposure thermal process is now significant. Poor nonuniformity directly impacts the linewidth distribution and the chip performance. The temperature control system for this process requires careful consideration, including the equipment design and temperature-sensing techniques. A multizone programmable thermal-processing system is developed to improve the uniformity of temperature during these thermal processing steps, as shown in Figure 2.7. The system has been demonstrated for uniform temperature control and the optimization of post-apply and postexposure thermal-processing

conditions for chemically amplified photoresists used in the fabrication of quartz photomasks. The schematic block diagram of this system is shown in Figure 2.8.

FIGURE 2.7  Application example—multizone thermal cycling.
FIGURE 2.8  Schematic diagram of the multizone temperature control system.

2.4.3  Biomedical Systems

Biological injection has been widely applied in transgenic tasks. In spite of the increasing interest in biomanipulation, it is still mainly a time-consuming and laborious work performed by skilled operators, relying only on the visual information through the microscope. The skilled operators require professional training, and the success rate has not been high. Besides, an improper operation may cause irreversible damage to the tissue of the cell due to the delicate membrane. All of these result in low efficiency and low productivity associated with the process, which can arise directly from errors and a lack of repeatability of human operators. Hence, in order to improve the biological injection process, an automated bio-manipulation system (Figure 2.9) using a piezoelectric motor can be used to efficiently control the whole injection process. In the schematic block diagram of this system (Figure 2.10), the control signal enables the piezoelectric actuator to move precisely, while the current position of the actuator is obtained from the encoder attached to the moving part.

2.4.4  Precision Manufacturing

Highly precise positioning systems operating at high speed are enablers behind diverse applications, including metrology systems, micromachining and machine tools, and MEMS/MST. An example of a long-travel position stage used for wafer features' inspection is given in Figure 2.11. In these systems, positioning accuracy to sub-micrometer or even nanometer regimes is often necessary, and to achieve

FIGURE 2.9  Application example—bio-manipulation system.
FIGURE 2.10  Schematic of the piezoelectric motor injection system.
FIGURE 2.11  Application example—precision manufacturing.

these objectives, the feedback of multiple variables is necessary. A schematic of a precision motion control system is given in Figure 2.12. Apart from the design of parameters for the main positioning controller, other variables, such as the velocity and acceleration of the moving part, will be necessary to feed back to the control system, so that other effects impeding fine control can be compensated, including friction, force ripples, and other disturbances. In the face of such challenges, concepts of feedback control can be used to design multiple feedback controllers to achieve high-precision and high-speed control.

FIGURE 2.12  Schematic of a precision motion control system.
FIGURE 2.13  Application example—feedback control in the university.

2.4.5  Management Systems

Feedback concepts are utilized far beyond engineering applications. Figure 2.13 shows a feedback system in operation at the university to ensure efficient operations at various entities of the university. The president is concerned with high-level university objectives and will use feedback on the performance of the university in the various aspects of education to set goals for the faculties. The engineering dean will translate the high-level objectives into faculty-level programs and curriculum, based on feedback on the position of that faculty measured with respect to the university goals. The control professor will deliver the lectures to the students in a way as to meet the faculty's directions, and he will use students' feedback to gauge how well he is functioning at that level. Such a multiple feedback and cascade system functions better than the single-loop case where the president deals directly with the students. It is able to respond to students' problems faster and provides more frequent and direct feedback.

Bibliography

1. R.C. Dorf, Modern Control Systems, 9th edn., Prentice-Hall Inc., Upper Saddle River, NJ, 2000.
2. A.F. D'Souza, Design of Control Systems, Prentice-Hall Inc., Englewood Cliffs, NJ, 1988.
3. D.B. Miron, Design of Feedback Control Systems, Harcourt Brace Jovanovich, San Diego, CA, 1989.
4. K. Ogata, Modern Control Engineering, Prentice-Hall Inc., Upper Saddle River, NJ, 2002.

3
Stability Analysis

Naresh K. Sinha
McMaster University

3.1  Introduction ....................................................................................... 3-1
3.2  States of Equilibrium ......................................................................... 3-2
3.3  Stability of Linear Time-Invariant Systems .................................... 3-3
     Routh–Hurwitz Criterion  •  Relative Stability  •  Stability under Parameter Uncertainty  •  Stability from Frequency Response
3.4  Stability of Linear Discrete-Time Systems .................................... 3-15
     Routh–Hurwitz Criterion  •  Jury Stability Test  •  Nyquist Criterion
3.5  Stability of Nonlinear Systems ....................................................... 3-21
     Linearization  •  Lyapunov's Method
References ................................................................................................ 3-27

3.1  Introduction

The basis of automatic control is negative feedback: the actual output is compared with the desired output, and the difference (or error) is utilized in such a way that the two (actual output and desired output) are brought closer. One advantage of this negative feedback is that the system is made less sensitive to small variations in the parameters of the forward path, an important consideration in making the system more economical to manufacture. As with most good things, this advantage is coupled with the danger that the system may become unstable. This is due to the fact that the inherent time lag (phase shift) in the system may change the negative feedback into positive feedback at certain higher frequencies, with the possibility that the system may become oscillatory. In view of this, stability analysis of systems is very important.

Since the systems are dynamic in nature, they are modeled mathematically through differential equations. For the general case, these will be nonlinear differential equations of order n, where n is a positive integer. Such a differential equation can be easily converted into a set of n first-order differential equations that can be expressed in the following compact form:

  ẋ = f(x, u)   (3.1)
  y = g(x, u)   (3.2)

where
  x(t) is the n-dimensional state vector
  u(t) is the m-dimensional input vector
  y(t) is the p-dimensional output vector
  the vectors f and g are nonlinear functions of x and u, respectively

The elements of the state vector x are said to be the state variables of the system.
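As a concrete instance of Equations 3.1 and 3.2, the sketch below (using a hypothetical second-order plant, not one from the text) writes ÿ + 3ẏ + 2y = u in the first-order form ẋ = f(x, u), y = g(x, u) and integrates the autonomous case:

```python
def f(x, u):
    """State equations for y'' + 3y' + 2y = u with x1 = y, x2 = y'."""
    x1, x2 = x
    return (x2, -2.0 * x1 - 3.0 * x2 + u)

def g(x, u):
    return x[0]            # the output y is the first state variable

# Euler-integrate the autonomous case (u = 0) from x(0) = (1, 0):
x, dt = (1.0, 0.0), 1e-3
for _ in range(10_000):                  # 10 s
    dx = f(x, 0.0)
    x = (x[0] + dt * dx[0], x[1] + dt * dx[1])
print(abs(g(x, 0.0)) < 1e-3)             # True: decays to the equilibrium x = 0
```

Both eigenvalues of this plant (−1 and −2) are in the left half-plane, so the state decays to the origin, foreshadowing the stability discussion of Section 3.3.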

The system is said to be autonomous if the input u(t) is identically zero. In this case, Equation 3.1 is reduced to

  ẋ = f(x)   (3.3)

For the special case of a linear time-invariant system, Equations 3.1 and 3.2 take the simplified form

  ẋ = Ax + bu   (3.4)
  y = cx   (3.5)

where
  A is an n × n matrix with constant real elements
  b is an n-dimensional column vector
  c is an n-dimensional row vector (for a single-output system)

These are the equations for a single-input single-output system. For a multivariable system, the vectors b and c are replaced with the matrices B and C of appropriate dimensions, respectively. Taking the Laplace transform of both sides of Equations 3.4 and 3.5, it is easily shown that the transfer function of the system is given by

  G(s) = Y(s)/U(s) = c(sI − A)⁻¹b = c adj(sI − A) b / det(sI − A)   (3.6)

From Equation 3.6, it follows immediately that the poles of the transfer function G(s) and the eigenvalues of the matrix A are identical. Since the case of nonlinear systems is rather complicated, we shall first study the stability of linear systems in Section 3.3. Later on, we shall return to nonlinear systems.

3.2  States of Equilibrium

An autonomous system, as defined in Equation 3.3, will reach a state of equilibrium when the derivative of the state vector is identically zero. It is equivalent to saying that an object is at a standstill when its velocity is zero. Hence, we can state formally that all values of x that satisfy the equation

  f(x) = 0   (3.7)

are states of equilibrium. It will be seen that Equation 3.7 may have many possible solutions. Therefore, a nonlinear system will normally have several states of equilibria, and stability analysis of nonlinear systems involves examining each state of equilibrium and determining if it is a stable state.

For a linear time-invariant autonomous system, Equation 3.7 leads to

  Ax = 0   (3.8)

This equation has a "trivial" solution given by

  x = 0   (3.9)

Nontrivial solutions exist if the matrix A is singular, that is, it has one or more zero eigenvalues.
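A quick numerical illustration of Equation 3.7 having many solutions (using a damped pendulum, an assumed example not taken from the text):

```python
import math

def f(x):
    """Autonomous damped pendulum: x1' = x2, x2' = -sin(x1) - x2."""
    return (x[1], -math.sin(x[0]) - x[1])

def is_equilibrium(x, tol=1e-9):
    return all(abs(v) < tol for v in f(x))

# Every point (k*pi, 0) satisfies f(x) = 0: the nonlinear system has
# several states of equilibrium, of which x = 0 is just one.
print([is_equilibrium((k * math.pi, 0.0)) for k in range(-2, 3)])  # all True
```

For the linear case of Equation 3.8, by contrast, either x = 0 is the unique equilibrium or (when A is singular) a whole subspace of equilibria exists.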

In Section 3.3, we shall give a definition of the stability of linear systems that will rule out zero eigenvalues. Therefore, the existence of nontrivial solutions to Equation 3.8 will also be ruled out for stable linear systems.

3.3  Stability of Linear Time-Invariant Systems

Stability of a system can be defined in many possible ways. For a linear system, the following is a rather simple way to define stability:

Definition: A linear system is stable if and only if its output is bounded* for all possible bounded inputs.

This is also known as bounded-input bounded-output stability. Some examples of bounded functions are (1) a step function, (2) a sinusoid, and (3) a decaying exponential, for example, 4e⁻³ᵗ cos 4t. Some examples of unbounded functions are (1) a ramp function, t, (2) an increasing exponential, for example, e²ᵗ or e³ᵗ sin 4t, and (3) tⁿ, where n > 0.

From this definition, it follows that a linear time-invariant system will be stable if and only if all the eigenvalues of the matrix A (or, correspondingly, all the poles of the transfer function of the system) have negative real parts; in other words, all poles must be on the left of the jω-axis of the s-plane. To understand this further, consider a transfer function with a pole at the origin (equivalent to the A-matrix having an eigenvalue at zero, i.e., a singular A). In this case, application of a step input to the system will cause the steady-state output to be an unbounded ramp function. Similarly, consider a transfer function with a pair of poles ±jb on the imaginary axis. In this case, if we apply the input C sin bt, where C, D, and φ are constants, the steady-state output will be an unbounded function of the form Dt sin(bt + φ).

Some alternative definitions of stability, which are equivalent to the above, are given in Equations 3.10 and 3.11 in terms of the impulse response of the system:

  lim(t→∞) w(t) = 0   (3.10)

  ∫₀∞ w²(t) dt < ∞   (3.11)

These can be easily proved due to the fact that the impulse response, w(t), is the inverse Laplace transform of the transfer function, G(s), and will be bounded if and only if all the poles of the transfer function have negative real parts.

It is evident from the above that the most direct approach for investigating the stability of a linear system is to determine the location of the poles of its transfer function, which are also the eigenvalues of the matrix A. Often this is not very convenient, as it requires evaluating the roots of a polynomial, the degree of which may be high. An analytical method for evaluating the roots of a polynomial of degree greater than four is not known, so in the general case, one must use numerical search techniques to find the roots. There are good computer programs to find the roots of a polynomial using these techniques. Also, there are robust programs for finding the eigenvalues of a square matrix. These can be used whenever a computer and the required software are available.

However, it is not necessary to determine the actual location of the poles of the transfer function for investigating the stability of a linear system. We only need to find out whether the number of poles in the right half of the s-plane is zero or not. A procedure for determining the number of roots of a polynomial P(s) in the right half of the s-plane without evaluating the roots is discussed in Section 3.3.1. It may be noted that even this will be unnecessary if any coefficient of the polynomial is zero or negative, as that indicates a factor of the form (s − a) or (s − α ± jβ), where a, α, and β are positive numbers.

* A function of time is said to be bounded if it is finite for all values of time.
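The integral condition of Equation 3.11 can be checked numerically for concrete impulse responses (the two responses below are illustrative choices, not from the text):

```python
import math

def w_stable(t):
    return math.exp(-t) * math.cos(2.0 * t)   # poles at -1 +/- j2

def w_unstable(t):
    return math.exp(0.2 * t)                  # pole at +0.2

def energy(w, t_end=50.0, dt=1e-3):
    """Trapezoidal approximation of the integral of w(t)^2 over [0, t_end]."""
    n = int(t_end / dt)
    s = 0.5 * (w(0.0) ** 2 + w(t_end) ** 2)
    s += sum(w(k * dt) ** 2 for k in range(1, n))
    return s * dt

print(round(energy(w_stable), 3))            # converges; the exact value is 0.3
print(energy(w_unstable, 25.0) < energy(w_unstable, 50.0))  # True: diverges
```

The stable response has finite "energy," while the unstable one grows without bound as the integration horizon increases, exactly as Equation 3.11 predicts.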

3.3.1  Routh–Hurwitz Criterion

The Routh–Hurwitz criterion was developed independently by A. Hurwitz (1895) in Germany and E.J. Routh (1892) in England. Let the characteristic polynomial (the denominator of the transfer function after cancellation of common factors with the numerator) be given by

  Δ(s) = a₀sⁿ + a₁sⁿ⁻¹ + ⋯ + aₙ₋₁s + aₙ   (3.12)

Then the following Routh table is obtained:

  sⁿ     a₀   a₂   a₄   …
  sⁿ⁻¹   a₁   a₃   a₅   …
  sⁿ⁻²   b₁   b₃   b₅   …
  sⁿ⁻³   c₁   c₃   c₅   …
  ⋮
  s⁰     h₁

where the first two rows are obtained from the coefficients of Δ(s). The elements of the following rows are obtained as shown below:

  b₁ = (a₁a₂ − a₀a₃)/a₁   (3.13)

  b₃ = (a₁a₄ − a₀a₅)/a₁   (3.14)

  c₁ = (b₁a₃ − a₁b₃)/b₁   (3.15)

and so on. In preparing the Routh table for a given polynomial as suggested above, some of the elements may not exist. In calculating the entries in the line that follows, these elements are considered to be zero. The Routh–Hurwitz criterion states that the number of roots with positive real parts is equal to the number of changes in sign in the first column of the Routh table. Consequently, the system is stable if and only if there are no sign changes in the first column of the table. The procedure will be clearer from the examples that follow.

Example 3.1

  Δ(s) = s⁴ + 5s³ + 20s² + 30s + 40   (3.16)

The Routh table is as follows:

  s⁴   1        20   40
  s³   5        30
  s²   14       40
  s¹   220/14
  s⁰   40

No sign changes in the first column indicate no root in the right half of the s-plane, and hence a stable system.

Example 3.2

  Δ(s) = s³ + 2s² + 4s + 30   (3.17)

The following Routh table is obtained:

  s³   1     4
  s²   2     30
  s¹   −11
  s⁰   30

Two sign changes in the first column indicate that there are two roots in the right half of the s-plane. Therefore, this is the characteristic polynomial of an unstable system.

Two special difficulties may arise while obtaining the Routh table for a given characteristic polynomial.

Case 3.1: If an element in the first column turns out to be zero, it should be replaced by a small positive number, ε, in order to complete the table. The sign of the elements of the first column is then examined as ε approaches zero. The following example will illustrate this.

Example 3.3

  Δ(s) = s⁵ + 2s⁴ + 3s³ + 6s² + 12s + 18   (3.18)

The Routh table is shown below, where the zero in the third row is replaced by ε:

  s⁵   1                            3    12
  s⁴   2                            6    18
  s³   ε                            3
  s²   (6ε − 6)/ε                   18
  s¹   (18ε − 18 − 18ε²)/(6ε − 6)
  s⁰   18
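The tabulation of Equations 3.13 through 3.15, including the ε substitution of Case 3.1, mechanizes directly. A minimal sketch (zero rows, Case 3.2 below, are not handled here):

```python
def routh_sign_changes(coeffs, eps=1e-9):
    """Count sign changes in the first column of the Routh table of
    a0*s^n + a1*s^(n-1) + ... + an (coeffs listed from a0 to an)."""
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    while len(rows[1]) < len(rows[0]):
        rows[1].append(0.0)                       # missing entries are zero
    for _ in range(len(coeffs) - 2):
        above, last = rows[-2], rows[-1]
        pivot = last[0] if last[0] != 0 else eps  # Case 3.1: replace 0 by eps
        new = [(pivot * above[j] - above[0] * last[j]) / pivot
               for j in range(1, len(last))]
        new.append(0.0)
        rows.append(new)
    col = [r[0] if r[0] != 0 else eps for r in rows]
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

print(routh_sign_changes([1, 5, 20, 30, 40]))    # 0 -> stable   (Example 3.1)
print(routh_sign_changes([1, 2, 4, 30]))         # 2 -> unstable (Example 3.2)
print(routh_sign_changes([1, 2, 3, 6, 12, 18]))  # 2             (Example 3.3)
```

The eps substitution reproduces the limiting signs of the ε analysis, so the three worked examples above are confirmed without evaluating any roots.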

As ε approaches zero, the first column of the table may be simplified to obtain

s^5  1
s^4  2
s^3  ε
s^2  −6/ε
s^1  3
s^0  18

Two sign changes in this column indicate two roots in the right half of the s-plane.

Case 3.2: Sometimes we may find that an entire row of the Routh table is zero. This indicates the presence of some roots that are negatives of each other. In such cases, we should form an auxiliary polynomial from the row preceding the zero row, starting with the power of s indicated in the leftmost column of that row. This auxiliary polynomial has only alternate powers of s and is a factor of the characteristic polynomial. The number of roots of the characteristic polynomial in the right half of the s-plane will be the sum of the number of right-half plane roots of the auxiliary polynomial and the number determined from the Routh table of the lower order polynomial obtained by dividing the characteristic polynomial by the auxiliary polynomial. The following example will elucidate the procedure.

Example 3.4

Δ(s) = s^5 + 6s^4 + 10s^3 + 35s^2 + 24s + 44  (3.19)

The Routh table is as follows:

s^5  1     10    24
s^4  6     35    44
s^3  25/6  50/3
s^2  11    44
s^1  0            ← zero row

The auxiliary row gives the equation

11s^2 + 44 = 0  (3.20)

Hence, s^2 + 4 is a factor of the characteristic polynomial, and we get

q(s) = Δ(s)/(s^2 + 4) = s^3 + 6s^2 + 6s + 11  (3.21)

The Routh table for q(s) is as follows:

s^3  1     6
s^2  6     11
s^1  25/6
s^0  11
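The division step in Equation 3.21 can be confirmed numerically: dividing Δ(s) by the auxiliary factor s^2 + 4 must leave no remainder. This check is not part of the original text; numpy is assumed available.

```python
# Verifying the auxiliary-polynomial step of Example 3.4.
import numpy as np

delta = [1, 6, 10, 35, 24, 44]   # Delta(s) of Equation 3.19
aux = [1, 0, 4]                  # s^2 + 4, from 11s^2 + 44 = 0
q, rem = np.polydiv(delta, aux)
print(q)                         # coefficients of the lower-order q(s)
print(rem)                       # remainder: identically zero here
```

The quotient gives the coefficients of q(s), whose Routh table is then tested as above.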

Since this shows no sign changes in the first column, it follows that all remaining roots of Δ(s) are in the left half of the s-plane. According to our definition, Δ(s) is the characteristic polynomial of an oscillatory system, with one pair of roots s = ±j2 on the imaginary axis.

Example 3.5

Δ(s) = s^6 + 2s^5 + 9s^4 + 12s^3 + 43s^2 + 50s + 75  (3.22)

The Routh table is as follows:

s^6  1  9   43  75
s^5  2  12  50
s^4  3  18  75
s^3  0  0         ← zero row

The auxiliary polynomial is

p(s) = 3s^4 + 18s^2 + 75 = 3(s^2 + 2s + 5)(s^2 − 2s + 5)  (3.23)

which has two roots in the right half of the s-plane. Dividing Δ(s) by the auxiliary polynomial (within its constant multiplier), we get

q(s) = 3Δ(s)/p(s) = s^2 + 2s + 3  (3.24)

The Routh table for q(s) is as follows:

s^2  1  3
s^1  2
s^0  3

Since this shows no sign changes in the first column, the remaining roots of Δ(s) are in the left half of the s-plane. Therefore, Δ(s) has two roots in the right half of the s-plane and is the characteristic polynomial of an unstable system.

3.3.2 Relative Stability

Application of the Routh criterion, as discussed above, only tells us whether the system is stable or not. In many cases, we need more information. For example, if the Routh test shows that a system is stable, we often like to know how close it is to instability, that is, how far from the jω-axis is the pole closest to it. This information can be obtained from the Routh criterion by shifting the vertical axis in the s-plane to obtain the p-plane, as shown in Figure 3.1. Consequently, if we replace s by p − α in the polynomial Δ(s), we get a new polynomial Δ(p). Applying the Routh test to this polynomial will tell us how many roots Δ(p) has in the right half of the p-plane. This is also the number of roots of Δ(s) located toward the right of the line s = −α in the s-plane.

Example 3.6

Consider the polynomial

Δ(s) = s^3 + 10.2s^2 + 21s + 2  (3.25)

FIGURE 3.1 Shift of the axis to the left by α.

The Routh table shown below tells us that the system is stable.

s^3  1       21
s^2  10.2    2
s^1  20.804
s^0  2

Now we shall shift the axis to the left by 0.2, that is,

p = s + 0.2  (3.26)

or

s = p − 0.2  (3.27)

Substituting Equation 3.27 into Equation 3.25, we get

Δ(p) = (p − 0.2)^3 + 10.2(p − 0.2)^2 + 21(p − 0.2) + 2 = p^3 + 9.6p^2 + 17.04p − 1.80  (3.28)

The Routh table for Δ(p) is shown below:

p^3  1      17.04
p^2  9.6    −1.80
p^1  17.23
p^0  −1.80

The Routh table shows that Δ(p) has one root in the right half of the p-plane. This implies one root between 0 and −0.2 in the s-plane; it follows that shifting by a smaller amount would enable us to get a better idea of the location of this root. We see from this example that although Δ(s) is the characteristic polynomial of a stable system, it has a root between 0 and −0.2 in the s-plane. This indicates a large time constant, of the order of 5 s.
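The axis shift of Example 3.6 is a polynomial composition and can be checked numerically. This sketch is not from the text; numpy is assumed available.

```python
# Numerical version of the axis shift in Example 3.6: substitute
# s = p - 0.2 into Delta(s) and inspect the new coefficients.
import numpy as np

delta = np.poly1d([1, 10.2, 21, 2])   # Delta(s) of Equation 3.25
p = np.poly1d([1, -0.2])              # s = p - 0.2
shifted = delta(p)                    # Delta(p - 0.2), a polynomial in p
print(shifted.coeffs)                 # ~[1, 9.6, 17.04, -1.8]
```

The negative constant term signals the sign change in the first column of the p-plane Routh table, that is, one root to the right of s = −0.2.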

3.3.3 Stability under Parameter Uncertainty

In our discussion so far, we have assumed that the characteristic polynomial Δ(s) for the system is known precisely. In practice, for a product to be economical, some tolerance must be allowed in the values of the components. As a result, for most systems, the coefficients ai of the characteristic polynomial

Δ(s) = a0 s^n + a1 s^(n−1) + ⋯ + a(n−1) s + an  (3.29)

are unknown except for the bounds

αi ≤ ai ≤ βi,  i = 0, 1, …, n  (3.30)

Consequently, the system will be stable if the roots of Δ(s) are in the left half of the s-plane for each set of values of the coefficients within the ranges defined by Equation 3.30. This is called robust parametric stability. According to an interesting result proved by Kharitonov (1978), it is necessary to investigate the stability of only four polynomials formed from Equations 3.29 and 3.30 [K78]. Now, we discuss how one can obtain these four Kharitonov polynomials. These are defined in the form of the polynomial

Δ(s) = c0 s^n + c1 s^(n−1) + ⋯ + c(n−1) s + cn  (3.31)

where the coefficients ci are defined in a pairwise fashion, as c2k and c2k+1 for k = 0, 1, …, m (the rule for c2k+1 applying only when 2k + 1 ≤ n), where

m = n/2 if n is even, (n − 1)/2 if n is odd  (3.32)

Based on the pairwise assignment of the coefficients, each of the four polynomials is given a mnemonic name listing, in order, whether the minimum (αi) or the maximum (βi) value is taken for c0, c1, c2, c3, the pattern then repeating with period four:

Δ1(s) [min, max, max, min]:
c2k = α2k if k is even, β2k if k is odd
c2k+1 = β2k+1 if k is even, α2k+1 if k is odd

Δ2(s) [max, min, min, max]:
c2k = β2k if k is even, α2k if k is odd
c2k+1 = α2k+1 if k is even, β2k+1 if k is odd

Δ3(s) [max, max, min, min]:
c2k = β2k if k is even, α2k if k is odd
c2k+1 = β2k+1 if k is even, α2k+1 if k is odd

Δ4(s) [min, min, max, max]:
c2k = α2k if k is even, β2k if k is odd
c2k+1 = α2k+1 if k is even, β2k+1 if k is odd

Proofs of this remarkable result can be found in references [A82,B85,G63]. We illustrate its usefulness by a simple example.

Example 3.7

Consider the feedback system shown in Figure 3.2, with the forward path transfer function

G(s) = K/(s(s + a)(s + b))  (3.33)

and

H(s) = 1  (3.34)

where the uncertainty ranges of the parameters K, a, and b are as given below:

K = 10 ± 2,  a = 3 ± 0.5,  b = 4 ± 0.2  (3.35)

FIGURE 3.2 A closed-loop system.

We determine the stability of the closed-loop system using Kharitonov's four polynomials. First we determine the characteristic polynomial

Δ(s) = a0 s^3 + a1 s^2 + a2 s + a3 = s^3 + (a + b)s^2 + ab s + K  (3.36)

From the given ranges on the parameters, the coefficients of the characteristic polynomial are

a0 = 1,  6.3 ≤ a1 ≤ 7.7,  9.5 ≤ a2 ≤ 14.7,  8 ≤ a3 ≤ 12  (3.37)

We now obtain the four polynomials as

Δ1(s) = s^3 + 7.7s^2 + 14.7s + 8
Δ2(s) = s^3 + 6.3s^2 + 9.5s + 12
Δ3(s) = s^3 + 7.7s^2 + 9.5s + 8
Δ4(s) = s^3 + 6.3s^2 + 14.7s + 12  (3.38)

Application of the Routh–Hurwitz criterion to each of these four polynomials shows that the first column of the Routh table is positive for all of them. This shows that the system in this example is stable for all possible sets of parameters within the ranges given by Equation 3.37.
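The four vertex polynomials of Equation 3.38 can be formed programmatically from the bounds of Equation 3.37 and each tested by direct root finding. The selection patterns below encode the min/max mnemonics given above; numpy is assumed available.

```python
# Sketch of the Kharitonov check for Example 3.7.
import numpy as np

alpha = [1.0, 6.3, 9.5, 8.0]     # lower bounds on a0..a3
beta = [1.0, 7.7, 14.7, 12.0]    # upper bounds on a0..a3

# 1 selects the upper bound; the patterns repeat with period four.
patterns = [(0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 0, 0), (0, 0, 1, 1)]
for pat in patterns:
    coeffs = [beta[i] if pat[i % 4] else alpha[i] for i in range(4)]
    roots = np.roots(coeffs)
    assert max(r.real for r in roots) < 0  # this vertex polynomial is stable
print("all four Kharitonov polynomials are stable")
```

Since all four vertex polynomials are Hurwitz, the whole interval family is stable, in agreement with the Routh computation.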
3.3.4 Stability from Frequency Response

The method described above assumes that the transfer function of the closed-loop system is known and is in the form of a rational function of the complex frequency variable s, that is, the ratio of two polynomials of finite degree. In many practical situations, the transfer function may not be known, or it may not be a rational function, as will be the case when the forward path of a closed-loop system contains an ideal delay. In such cases, one can use the Nyquist criterion to investigate the stability of the closed-loop system from the frequency response of the open-loop transfer function. The main advantage here is that the frequency response can be obtained experimentally if the transfer function is not known.

Consider the closed-loop system shown in Figure 3.2. The main reason for possible instability of this system is that although it is designed as negative feedback, it may turn out to be positive feedback if at some frequency a phase lag of 180° occurs in the loop. If the feedback at this frequency is sufficient to sustain oscillations, the system will act as an oscillator. From superficial considerations, it would appear that the closed-loop system of Figure 3.2 will be stable provided that the open-loop frequency response GH(jω) does not have a gain of 1 or more at the frequency where the phase shift is 180°. However, this is not always true. For example, consider the polar plot of the frequency response shown in Figure 3.3. It is seen that for two values of ω, the phase shift is 180° and the gain is more than one (at points A and B), and yet this system will be stable when the loop is closed.

FIGURE 3.3 Polar plot of the frequency response of the forward path of a conditionally stable system.

A thorough understanding of the problem is possible by applying the criterion of stability developed by H. Nyquist in 1932. It is based on a theorem in complex variable theory, due to Cauchy, called the principle of the argument, which is related to the mapping of a closed path (or contour) in the s-plane for a function F(s).

2 is given by T (s ) = G(s) 1 + G(s)H (s) (3. we must determine whether T(s) has any pole in the right half of the s-plane (including the jω -axis).4b. It is based on a theorem in complex variable theory. If F(s) has a pole or zero at the origin of the s-plane or at some points on the jω -axis.3. A feedback system will be stable if and only if the number of counterclockwise encirclements of the point −1 + j0 by the map of the Nyquist contour in the GH-plane is equal to the number of poles of GH(s) inside the Nyquist contour in the s-plane. the system is stable if and only if N = −P.40) Thus.3 -12 Control and Mechatronics in complex variable theory. Hence. as shown in Figure 3. so that Z will be zero. we get the following criterion in terms of the loop transfer function GH(s). and not known. called the principle of the argument. For two values of ω. due to Cauchy. called the principle of the argument. which is related to mapping of a closed path (or contour) in the s-plane for a function F(s). but the zeros of F(s) are different from those of GH(s). For example. where N =Z −P (3. and our object is to determine the number of zeros of F(s) inside this contour. Let F(s) have P poles and Z zeros within the Nyquist contour. Further. For this purpose. © 2011 by Taylor and Francis Group. jω R=∞ jω R=∞ R=ε σ σ (a) (b) FIGURE 3. due to Cauchy. the phase shift is 180° and the gain is more than one (at points A and B. This is called the Nyquist contour. Note that the poles of F(s) are also the poles of GH(s). that is. whether F(s) = 1 + G(s)H(s) has any root in this part of the s-plane. Nyquist in 1932. The overall transfer function of the system shown in Figure 3. the characteristic polynomial must not have any root within the Nyquist contour. we must have Z = 0. LLC . that is. yet this system will be stable when the loop is closed). note that the origin of the F-plane is the point −1 + j0 in the GH-plane. consider the polar plot of the frequency response shown in Figure 3. 
For this purpose, we must take a contour in the s-plane that encloses the entire right half plane, as shown in Figure 3.4a. This is called the Nyquist contour. If F(s) has a pole or zero at the origin of the s-plane or at some points on the jω-axis, we must make a detour along an infinitesimal semicircle, as shown in Figure 3.4b.

Let F(s) have P poles and Z zeros within the Nyquist contour. From the principle of the argument, a map of the Nyquist contour in the F-plane will encircle the origin of the F-plane N times in the clockwise direction, where

N = Z − P  (3.40)

Note that the poles of F(s) are also the poles of GH(s), but the zeros of F(s) are different from those of GH(s). Further, note that the origin of the F-plane is the point −1 + j0 in the GH-plane. For the system to be stable, the characteristic polynomial must not have any root within the Nyquist contour, so that Z must be zero. Hence, we get the following criterion in terms of the loop transfer function GH(s): a feedback system will be stable if and only if the number of counterclockwise encirclements of the point −1 + j0 by the map of the Nyquist contour in the GH-plane is equal to the number of poles of GH(s) inside the Nyquist contour in the s-plane, that is, the system is stable if and only if N = −P.

FIGURE 3.4 The Nyquist contour: (a) without and (b) with a detour around a pole or zero on the jω-axis.
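The encirclement count N of Equation 3.40 can be evaluated numerically by accumulating the angle of the vector from −1 + j0 to the plot along the jω-axis (the infinite arc maps near the origin and contributes nothing). The loop used here is the one from Example 3.8 below, and numpy is assumed available.

```python
# Counting clockwise encirclements of -1 + j0 by the Nyquist plot.
import numpy as np

def gh(s):
    return 60.0 / ((s + 1) * (s + 2) * (s + 5))

w = np.linspace(-1e3, 1e3, 200001)            # the jw-axis portion of the contour
path = gh(1j * w) - (-1.0)                    # vector from -1 + j0 to the plot
angles = np.unwrap(np.angle(path))
N = (angles[0] - angles[-1]) / (2 * np.pi)    # net clockwise encirclements
print(round(N))                               # 0 here: with P = 0, Z = 0 (stable)
```

Scaling the gain above the stability margin makes the same computation return N = 2, matching the two unstable closed-loop poles predicted by N = Z − P.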

The map of the Nyquist contour in the GH-plane is called the Nyquist plot of GH(s). The polar plot of the frequency response of GH(s), which is the map of the positive part of the jω-axis, is an important part of the Nyquist plot and can be obtained experimentally, or by computation if the transfer function is known. The procedure of completing the rest of the plot will be illustrated by a number of examples.

Example 3.8

Consider the transfer function

GH(s) = 60/((s + 1)(s + 2)(s + 5))  (3.41)

The frequency response is readily calculated and is given in Table 3.1. Sketches of the Nyquist contour and the Nyquist plot are shown in Figure 3.5. The various steps of the procedure for completing the Nyquist plot are given below:

1. Part AB in the s-plane is the jω-axis from ω = 0 to ω = ∞, and maps into the polar plot A′B′ in the GH-plane.
2. The infinite semicircle BCD maps into the origin of the GH-plane.
3. The part DA, which is the negative part of the jω-axis, maps into the curve D′A′. It may be noted that D′A′ is the mirror image of A′B′ about the real axis. This follows from the fact that GH(−jω) is the complex conjugate of GH(jω). Consequently, it is easily sketched from the polar plot of the frequency response.

TABLE 3.1 Frequency Response of the Transfer Function Given by Equation 3.41

ω            0      0.25     0.5      1.0      2.0
Gain         6.0    5.77     5.18     3.72     1.76
Phase Shift  0°     −24.0°   −46.3°   −82.9°   −130.2°

ω            3.0      4.12    5.0      10.0     20.0
Gain         0.90     0.476   0.309    0.052    0.007
Phase Shift  −158.8°  −180°   −191.9°  −226.4°  −247.4°

FIGURE 3.5 Nyquist contour (a) and plot (b) for the transfer function given by Equation 3.41.
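The entries of Table 3.1 can be regenerated directly from Equation 3.41. This check is not part of the original text; numpy is assumed available.

```python
# Gain and phase of GH(jw) for Example 3.8.
import numpy as np

def freq_response(w):
    s = 1j * w
    g = 60.0 / ((s + 1) * (s + 2) * (s + 5))
    return abs(g), np.degrees(np.angle(g))

gain, phase = freq_response(0.25)
print(round(gain, 2), round(phase, 1))   # ~5.77 and ~-24.0 degrees
```

At the phase crossover ω = √17 ≈ 4.12, the gain is 60/126 ≈ 0.476, which is the critical value used in the gain-margin discussion that follows.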

The Nyquist plot does not encircle the point −1 + j0 in the GH-plane, so that N = 0. Also, the function GH(s) does not have any pole inside the Nyquist contour, so that P = 0. Consequently, the closed-loop system is stable, since Z = N + P = 0.

If we increase the open-loop gain of this system by a factor of more than 2.1, the resulting Nyquist plot will encircle the point −1 + j0 twice in the GH-plane, as shown in Figure 3.6. For the plot shown, N = 2 and P = 0, so that Z = 2. Hence, the closed-loop system is unstable, with two poles in the right half of the s-plane.

FIGURE 3.6 Nyquist plot for the transfer function given by Equation 3.41 if the open-loop gain is increased by a factor of more than 2.1.

Example 3.9

Consider the open-loop transfer function

G(s) = K/(s(s + 2)(s + 6))  (3.42)

Since this time we have a pole at the origin of the s-plane, the Nyquist contour must take a small detour around this pole while still attempting to enclose the entire right half of the s-plane. The resulting Nyquist contour and plot are shown in Figure 3.7. The various steps in obtaining the Nyquist plot are described below:

1. Part AB in the s-plane is the positive part of the jω-axis from jε to j∞, and maps into the polar plot of the frequency response A′B′ in the GH-plane (shown as a solid curve).
2. The infinite semicircle BCD in the s-plane maps into the origin of the GH-plane.
3. The part DE (negative jω-axis) maps into the image D′E′ of the frequency response A′B′.
4. The infinitesimal semicircle EFA is the plot of the equation s = εe^jθ, with θ increasing from −90° to 90°. It maps into the infinite semicircle E′F′A′, since GH(s) can be approximated as K/(12εe^jθ) for small s. Note that εe^jθ in the denominator causes the infinitesimal semicircle to map into an infinite semicircle traversed in the opposite direction, due to the change in the sign of θ as it is brought into the numerator from the denominator.

For the plot shown, N = 2 and P = 0; consequently, Z = 2, and the system will be unstable. If the gain is sufficiently reduced, the point −1 + j0 will not be encircled, so that N = 0 and Z = 0, with the result that the system will be stable.
including the j ω -axis.3 -14 Im Control and Mechatronics –1 B΄. Hence we have N = 0. the point −1 + j 0 will not be encircled. 2. we get the semicircle EFA of infinitesimal radius. the resulting Nyquist plot will encircle the point −1 + j0 twice in the GH-plane.15. If the gain is sufficiently reduced. N = 2 and P = 0. the system will be unstable. Example 3. E′F′A′.

C΄. The essential difference with Figure 3. D΄ Re D (a) P=1 (b) A΄ N = 1 (–1 for small K ) FIGURE 3. Both of these results agree with those obtained in the previous example. as shown in Figure 3. E΄ jω B R=ε F R=∞ Im A E C –1 σ F΄ B΄.42 with the Nyquist contour enclosing the pole at the origin: (a) s-plane and (b) GH-plane. Also.7  Nyquist contour and plot for the transfer function given by Equation 3. C΄. Here. © 2011 by Taylor and Francis Group.8b. and the system will be unstable. the system will be stable for small values of K. where k is an integer and T is the sampling interval. Thus.10 We shall reconsider the transfer function of the preceding example.7b is in the map of the infinitesimal semicircle in the Nyquist contour. with the result that now we shall have Z = N + P = 0. the Nyquist plot encircles the point −1 + j0 once in the clockwise direction. 3. reducing the gain sufficiently will cause the Nyquist plot to encircle the point −1 + j0 once in the counterclockwise direction. but this time we shall take a different Nyquist contour. The detour around the pole at the origin will be taken from its left.42: (a) s-plane and (b) GH-plane. we have P = 1. the input and the output are sampled at t = kT. Example 3.Stability Analysis Im 3 -15 jω B R=∞ E΄ A E R=ε F C –1 σ B΄.8  Nyquist plot for the transfer function give by Equation 3. giving us N = 1. The resulting Nyquist plot is shown in Figure 3. so that Z = N + P = 2. as expected. making N = −1. which is an infinite semicircle in the counterclockwise direction in this case.8a. This has the effect of converting the continuous-time input and output functions into discrete-time functions defined only at the sampling instants. LLC . D΄ F΄ Re D (a) A΄ (b) N = 2 (0 for small K ) FIGURE 3. Consequently.4  Stability of Linear Discrete-Time Systems In order to use a digital computer as part of control systems.

10.44) (also called the Möbius transformation) that maps the unit circle of the z-plane into the imaginary axis of the w-plane and the interior of the unit circle into the left half of the w-plane. This follows from the transformation given by Equation 3. Using the state-variable formulation.4. Since it enables us to determine the number of roots with positive real parts. These are called sequences and are related by difference equations instead of differential equations as in the continuous-time case.9  Mapping from s-plane to z-plane. the two basic techniques studied for continuous-time systems—(a) the Routh– Hurwitz criterion and (b) the Nyquist criterion—can be used with slight modifications for discrete-time systems.43) The necessary and sufficient condition for the stability of a discrete-time system is that all the poles of its z-transfer function lie inside the unit circle of the z-plane.9. Alternatively.3 -16 Imaginary j Control and Mechatronics Unit circle Real s-plane z-plane FIGURE 3. is an attractive method for investigating system stability without requiring evaluation of the roots of the characteristic polynomial. LLC .1 Routh–Hurwitz Criterion The Routh–Hurwitz criterion. where we need to find the number of roots outside the unit circle.1. 3. it cannot be used directly for discrete-time systems. We can now apply the Routh criterion to Q(w) as in the s-plane. the system will be stable if and only if the magnitude of all eigenvalues of F is less than one. This is shown in Figure 3.43. Just as continuous-time systems are represented either by differential equations or by transfer functions obtained by taking the Laplace transform of the differential equation. Keeping this in mind. a difference equation of order n can be written as x(kT + T ) = Fx(T ) + Gu(T ) (3. © 2011 by Taylor and Francis Group. 
3.4.1 Routh–Hurwitz Criterion

The Routh–Hurwitz criterion, discussed in Section 3.3.1, is an attractive method for investigating system stability without requiring evaluation of the roots of the characteristic polynomial. Since it determines the number of roots with positive real parts, it cannot be used directly for discrete-time systems, where we need to find the number of roots outside the unit circle.

It is possible to use the Routh–Hurwitz criterion to determine whether a polynomial Q(z) has roots outside the unit circle by using the bilinear transformation

w = u + jv = (z + 1)/(z − 1)  or  z = (w + 1)/(w − 1) = (u + 1 + jv)/(u − 1 + jv)  (3.44)

(also called the Möbius transformation), which maps the unit circle of the z-plane into the imaginary axis of the w-plane and the interior of the unit circle into the left half of the w-plane. This is shown in Figure 3.10. We can now apply the Routh criterion to Q(w) as in the s-plane: all roots of Q(z) are inside the unit circle if and only if all roots of Q(w) are in the left half of the w-plane.

FIGURE 3.10 Mapping in the bilinear transformation.

Example 3.11

The characteristic polynomial of a discrete-time system is given by

Q(z) = z^3 − 2z^2 + 1.5z − 0.4 = 0  (3.45)

Substituting z = (w + 1)/(w − 1) and clearing the denominator, we get

(w + 1)^3 − 2(w − 1)(w + 1)^2 + 1.5(w + 1)(w − 1)^2 − 0.4(w − 1)^3 = 0

This can be simplified to obtain the polynomial

0.1w^3 + 0.7w^2 + 2.3w + 4.9 = 0  (3.46)

The Routh table is shown below:

w^3  0.1  2.3
w^2  0.7  4.9
w^1  1.6
w^0  4.9

No change of sign in the first column of the Routh table indicates that all roots of Q(w) are in the left half of the w-plane. Hence, all roots of Q(z) are inside the unit circle, and Q(z) is the characteristic polynomial of a stable system.

It will be seen that this method requires a good deal of algebra. In Section 3.4.2, we discuss another test for stability that can be performed directly in the z-plane, in a manner reminiscent of the conventional Routh test in the s-plane, although requiring more terms and more computation.
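The algebra of Example 3.11 can be carried out numerically by clearing the (w − 1)^3 denominator term by term. This sketch is not from the text; numpy is assumed available.

```python
# Mapping Q(z) to Q(w) under z = (w + 1)/(w - 1), as in Example 3.11.
import numpy as np

def bilinear_to_w(coeffs):
    """Map Q(z), in descending powers, to Q(w) with the denominator cleared."""
    n = len(coeffs) - 1
    plus, minus = np.poly1d([1, 1]), np.poly1d([1, -1])
    total = np.poly1d([0.0])
    for i, a in enumerate(coeffs):               # a multiplies z^(n-i)
        total = total + a * plus ** (n - i) * minus ** i
    return total.coeffs

print(bilinear_to_w([1, -2, 1.5, -0.4]))         # ~[0.1, 0.7, 2.3, 4.9]
```

The result reproduces Equation 3.46, which is then tested with the ordinary Routh table.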

3.4.2 Jury Stability Test

This is similar to the Routh test and can be used directly in the z-plane. Let the characteristic polynomial be given by

Q(z) = a0 z^n + a1 z^(n−1) + ⋯ + a(n−1) z + an,  a0 > 0  (3.47)

Then we form the array shown in the following table:

a0      a1      a2      …  ak      …  a(n−1)  an
an      a(n−1)  a(n−2)  …  a(n−k)  …  a1      a0      αn = an/a0
b0      b1      b2      …          …  b(n−1)
b(n−1)  b(n−2)  b(n−3)  …          …  b0              α(n−1) = b(n−1)/b0
c0      c1      c2      …  c(n−2)
c(n−2)  c(n−3)  c(n−4)  …  c0                         α(n−2) = c(n−2)/c0
⋮

The elements of the first and second rows are the coefficients of Q(z), in the forward and the reverse order, respectively. The third row is obtained by multiplying the second row by αn = an/a0 and subtracting this from the first row; the last element of the result is zero and is dropped. The fourth row is the third row written in the reverse order. The fifth row is obtained in a similar manner, that is, by multiplying the fourth row by α(n−1) = b(n−1)/b0 and subtracting this from the third row. The process is continued until there are 2n + 1 rows; the last row has only one element.

Jury's stability test states that if a0 > 0, then all roots of Q(z) are inside the unit circle if and only if b0, c0, d0, … (i.e., the first element in each odd-numbered row) are all positive. Furthermore, the number of negative elements in this set is equal to the number of roots of Q(z) outside the unit circle, and a zero element implies a root on the unit circle.

Example 3.12

Consider the characteristic polynomial Q(z) given in Example 3.11. The Jury table is shown below:

1        −2       1.5      −0.4
−0.4     1.5      −2       1        α3 = −0.4
0.84     −1.4     0.7
0.7      −1.4     0.84              α2 = 5/6
0.2567   −0.2333
−0.2333  0.2567                     α1 = −0.909
0.0445

Since a0, b0, c0, and d0 are all positive, it follows that all roots of Q(z) are inside the unit circle, and it is the characteristic polynomial of a stable system.
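The Jury array reduces to a short loop, since each even-numbered row is just the previous row reversed. The function name and return convention below are illustrative, not from the text.

```python
# A direct implementation of the Jury array described above.

def jury_first_elements(coeffs):
    """Return [b0, c0, d0, ...] for Q(z) with coefficients a0..an, a0 > 0."""
    row = list(coeffs)
    firsts = []
    while len(row) > 1:
        alpha = row[-1] / row[0]
        # Subtract alpha times the reversed row; the last element becomes zero.
        row = [x - alpha * y for x, y in zip(row, reversed(row))][:-1]
        firsts.append(row[0])
    return firsts

firsts = jury_first_elements([1, -2, 1.5, -0.4])
print(firsts)   # all positive -> every root of Q(z) inside the unit circle
```

For the polynomial of Example 3.12 this returns approximately [0.84, 0.2567, 0.0445], matching the table.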
Jury’s stability test states that if a 0 > 0. The fifth row is obtained in a similar manner.4 −0.4 0.4.2333 0. c0.84 0.0445 −2 1. in the forward and the reverse order. and d0 are all positive. that is. The third row is obtained by multiplying the second row by αn = an/a0 and subtracting this from the first row.3  Nyquist Criterion It was seen in Section 3.

4 − jθ = e εe jθ (0.4) (e − 1)(e jwT − 0. Let N be the number of counterclockwise encirclements of the Nyquist plot of the point −1 + j0 in the GH(z)-plane..13 The Nyquist criterion will be illustrated using the unity-feedback system. LLC . For z on the unit circle.49) This leads to an arc of infinite radius on the Nyquist plot. where the transfer function of the forward path is G( z ) = 0.4) jwT (3.4) ( z − 1)( z − 0.Stability Analysis Imaginary z=1 Detour × Real 3 -19 FIGURE 3.6(1.50) z = 1+ εe jθ .6(e jwT + 0. and we enclose this region if it is to our right while we traverse along the Nyquist contour.51) © 2011 by Taylor and Francis Group. Example 3.e.11. ε << 1 (3.11. The Nyquist criterion states that for a stable system.6( z + 0. we must have N = −P. with a detour around that point. in the region to our right while we traverse the contour in the z-plane). traversed in the counterclockwise direction. with detours around poles on the path. To apply the Nyquist criterion to determine stability of discrete-time systems. the Nyquist contour was taken in the clockwise direction.48) since the transfer function has a pole at z = 1. On this detour and G( z ) z =1+ ze jθ = 0. we need to determine the number of roots of 1 + GH(z) in the region outside the unit circle of the z-plane. we have G( z ) z =e jwT = 0. This can be understood by appreciating that the right half of the s-plane maps outside the unit circle in the z-plane.4) (3.6) ε (3. as shown in Figure 3.4) 1.11  Nyquist contour in the z-plane. The Nyquist contour in this case is the unit circle of the z-plane. where P is the number of poles of GH(z) inside the Nyquist contour (i. the Nyquist contour will be as shown in Figure 3. Note that in the s-plane.

4 −114. It can.91 6. In Equation 3.16 0.12  Nyquist contour and plot for Example 3.49 ωT 0.6 −187. it is evident that the system is stable.2 −190.1 0.2  Frequency Response of the Transfer Function Given by Equation 3.4 −184.94 1.18 0.6 2.52 0. so that the point −1 + j0 is enclosed within the Nyquist plot.4 1.4 R= 0. Since G(ejωT ) is the complex conjugate of G(e−jωT ).30 1.3 −121. 0 < ωT < 2π.13 φ Control and Mechatronics −166.41 0.4 −188.20 2.26 0.6 −191.0 M 13. w= 2 z −1 T z +1 (2/T ) + w Z= (2/T ) − w (3.0 Imaginary Imaginary z=1 Detour × Real 0.5 −190. be forced into instability if the gain is increased by a factor of 2.6 −147. however.0 2.9 1.14 0. © 2011 by Taylor and Francis Group.2 0. LLC .83 4. The plot is completed from the map of the detour.8 2. it is necessary to calculate the frequency response only for 0 < ω < π.69 0.57 1. A minor modification of the bilinear transformation due to Tustin is often used and is described in Equation 3.7 0.52) where T is the sampling interval.3 0.44.3 − (a) Nyquist contour and (b) Nyquist plot. Tustin had originally used this transformation for approximate design of digital filters. which is a semicircle of infinite radius. From the Nyquist plot.4 0. obtained from Equation 3.3 -20 TABLE 3.6 1.12 shows the Nyquist plot for this system.5.5 −141.50.5 0. As an alternative to using the z-plane. The lower half maps as the mirror image of this response about the real axis.49. Figure 3.09 0.93 φ −98.9 −135.45 1.2 1.22 0.33 0.3 −152.43 3.4 2.4 ωT 1.8 −128.8 π M 0.5 −157. one may apply the bilinear transformation to the w-plane making it similar to the plot for continuous-time systems.3 −106. with the Nyquist contour as in Figure 3. Some values are shown in Table 3.13 ∞ Real (a) (b) FIGURE 3.8 0.0 −173.9 −180. The polar plot of the frequency response is the map of the upper half of the unit circle of the z-plane.6 0.2 2.

5  Stability of Nonlinear Systems Unlike linear systems. Some of these may be points of stable equilibrium.Stability Analysis 3 -21 To understand the implications of this transformation. v. while others may be points of unstable equilibrium. which is the unit circle in the z-plane.56) Thus. is related to the s-plane frequency ω through the relationship v= 2 ωt tan T 2 (3. if the sampling frequency is chosen so that Equation 3. For any nonlinear system with many states of equilibrium. A good example is the bistable multivibrator. visualized as a particular point in an n-dimensional space. nonlinear systems may have more than one state of equilibrium. the study of stability of nonlinear systems is more complicated than for linear systems. we get Z = ejwT. each of these states must be examined for stability. must be investigated for stability. then it is possible to use the w-plane frequency response of a discrete-time system exactly like the s-plane frequency of a continuous-time system. and w = 0 + jv = j 2 ωt tan T 2 (3. it eventually returns to that state. We shall consider here mainly the case of an autonomous or unforced system. in genera. 3.53) For s = jω. for any perturbation (small or large). after a small perturbation.54) Thus. two of which are stable and one unstable. the w-plane frequency. Several definitions of stability have been used in the literature for nonlinear systems. A system is said to have global stability at an equilibrium state if. Hence. the stability of nonlinear systems depends not only on the physical properties of the system but also on the magnitude and nature of the input as well as the initial conditions.2. it eventually returns to that state.57) are states of equilibrium. LLC . Each of these states. As stated in Section 3. an electronic circuit with three states of equilibrium. The error in this approximation is less than 4% for ωt π 2π ω s ≤ or ω ≤ = 2 10 10T 10 (3. it is necessary to investigate stability at each of these points. 
To understand the implications of this transformation, let us consider the stability boundary in the w-plane by writing

w = u + jv = (2/T)(e^(sT) − 1)/(e^(sT) + 1) = (2/T) tanh(sT/2)  (3.53)

For s = jω, we get

w = 0 + jv = j(2/T) tan(ωT/2)  (3.54)

Thus, the w-plane frequency, v, is related to the s-plane frequency, ω, through the relationship

v = (2/T) tan(ωT/2)  (3.55)

Note that v is approximately equal to ω for small values of ωT. The approximation is valid as long as tan(ωT/2) ≈ ωT/2. The error in this approximation is less than 4% for

ωT/2 ≤ π/10,  or  ω ≤ 2π/(10T) = ωs/10  (3.56)

Thus, if the sampling frequency is chosen so that Equation 3.56 is satisfied over the bandwidth of the system, then it is possible to use the w-plane frequency response of a discrete-time system exactly like the s-plane frequency response of a continuous-time system, for the study of system stability as well as for the design of compensators based on frequency response.

3.5 Stability of Nonlinear Systems

Unlike linear systems, nonlinear systems may have more than one state of equilibrium. A good example is the bistable multivibrator, an electronic circuit with three states of equilibrium, two of which are stable and one unstable. For any nonlinear system with many states of equilibrium, each of these states must be examined for stability: some may be points of stable equilibrium, while others may be points of unstable equilibrium. Moreover, in general, the stability of nonlinear systems depends not only on the physical properties of the system but also on the magnitude and nature of the input as well as the initial conditions. Hence, the study of stability of nonlinear systems is more complicated than for linear systems, and several definitions of stability have been used in the literature.

We shall consider here mainly the case of an autonomous or unforced system. All values of the n-dimensional state vector that satisfy the equation

f(x) = 0  (3.57)

are states of equilibrium. Each of these states, visualized as a particular point in the n-dimensional state space, must be investigated for stability. A system is said to have local stability at an equilibrium state if, after a small perturbation, it eventually returns to that state. A system is said to have global stability at an equilibrium state if, for any perturbation (small or large), it eventually returns to that state.

We can investigate local stability of nonlinear systems by examining the effect of small perturbations at each point of equilibrium. This can be done by obtaining an approximate linear model at each of these points and testing it for stability. This is described below.

3.5.1 Linearization

Linearization is based on the Taylor series expansion of a nonlinear function about an operating point. For example, consider a nonlinear function f(x). It can be written as

f(x) = f(x0) + (df/dx)| x = x0 (x − x0) + (d²f/dx²)| x = x0 (x − x0)²/2! + ⋯  (3.58)

We get a linear approximation of Equation 3.58 if we ignore all terms except the first two. Clearly, this will be a good approximation if either (x − x0) is very small or the higher order derivatives of f are very small. This is the main idea behind the incremental linear models used for the analysis of electronic circuits.

We shall now generalize this to the case of the state equations for nonlinear systems. Assuming that the dimension of x is n and that of u is m, we write the state equations as

ẋ = f(x, u),  f(x, u) = [f1(x, u)  f2(x, u)  ⋯  fn(x, u)]ᵀ  (3.59)

Ignoring the higher order terms in the Taylor series expansion of this vector differential equation leads to the linearized model (assuming x0 = 0 and u0 = 0)

ẋ = Ax + Bu  (3.60)

where

A = [∂f1/∂x1  ∂f1/∂x2  …  ∂f1/∂xn]
    [∂f2/∂x1  ∂f2/∂x2  …  ∂f2/∂xn]
    [   ⋮        ⋮              ⋮ ]
    [∂fn/∂x1  ∂fn/∂x2  …  ∂fn/∂xn]  (3.61)

and

B = [∂f1/∂u1  …  ∂f1/∂um]
    [∂f2/∂u1  …  ∂f2/∂um]
    [   ⋮               ⋮]
    [∂fn/∂u1  …  ∂fn/∂um]  (3.62)

A and B are said to be Jacobian matrices. Note that in Equation 3.60, we have used x instead of the deviation x − x0. This is a common practice, even if x0 is not the origin of the state space. It is to be understood that x represents the variation of the state from the point of equilibrium, or "set point" in the terminology of process control. Again, this linear model will be valid only for small deviations around the point of equilibrium. Nevertheless, it can be used for investigating local stability around the point of equilibrium (for the autonomous case) by simply applying the Routh criterion to the characteristic polynomial of A. The following example illustrates the main idea behind this approach.

Example 3.14

Consider the following second-order nonlinear differential equation:

d²x/dt² + 2x (dx/dt) + 2x² − 4x = 0

a. Determine the points of equilibrium.
b. Investigate the stability of the system near each point of equilibrium.

Solution

We shall first derive the state equations for the given differential equation. Let

x1 = x,  x2 = ẋ   (3.63)

Then we obtain the following state equations:

ẋ1 = x2   (3.64)

ẋ2 = −2x1x2 − 2x1² + 4x1   (3.65)

The points of equilibrium are obtained by setting the two derivatives in Equations 3.64 and 3.65 to zero and are readily found as (0, 0) and (2, 0). The resulting Jacobian matrix is

A = [[0, 1], [−2x2 − 4x1 + 4, −2x1]]   (3.66)

For the equilibrium point given by x1 = x2 = 0,

A = [[0, 1], [4, 0]]   (3.67)

The characteristic polynomial for this case is

det(sI − A) = det [[s, −1], [−4, s]] = s² − 4   (3.68)

indicating an unstable system with a pole in the right half of the s-plane.
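A minimal numeric check of Example 3.14 for both equilibrium points, using eigenvalues in place of the Routh table (the NumPy sketch is our own, not the text's):

```python
import numpy as np

def jacobian(x1, x2):
    """Jacobian (3.66) of x1' = x2, x2' = -2*x1*x2 - 2*x1**2 + 4*x1."""
    return np.array([[0.0, 1.0],
                     [-2.0 * x2 - 4.0 * x1 + 4.0, -2.0 * x1]])

eig_origin = np.linalg.eigvals(jacobian(0.0, 0.0))  # roots of s**2 - 4
eig_second = np.linalg.eigvals(jacobian(2.0, 0.0))  # roots of s**2 + 4*s + 4
print(sorted(e.real for e in eig_origin))  # one root in the right half-plane
print(sorted(e.real for e in eig_second))  # all roots in the left half-plane
```

The eigenvalues at (0, 0) are ±2 (unstable saddle), and at (2, 0) they are −2, −2 (locally stable), matching the characteristic polynomials derived in the example.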

For the equilibrium point given by x1 = 2, x2 = 0, we have

A = [[0, 1], [−4, −4]]   (3.69)

The characteristic polynomial for this case is

det(sI − A) = det [[s, −1], [4, s + 4]] = s² + 4s + 4   (3.70)

which has roots only in the left half of the s-plane. This indicates that the system will be stable in the neighborhood of this point.

3.5.2 Lyapunov's Method

The simple stability criteria developed for linear systems are not applicable to nonlinear systems, since the concept of the roots of a characteristic polynomial is no longer valid. As stated earlier, many different classes of stability have been defined for nonlinear systems. In this section, we discuss stability in the sense of Lyapunov.* As before, it is common practice to transform coordinates in the state space so that the origin becomes the point of equilibrium. This is convenient for examining local stability and can be done for each point of equilibrium.

Consider a region ε in the state space enclosing an equilibrium point x0. For any given state of equilibrium, this is a point of stable equilibrium provided that there is a region δ(ε) contained within ε such that any trajectory starting in the region δ does not leave the region ε.

Let us now consider a hypersphere of finite radius R surrounding the origin of the state space (the point of equilibrium), that is, the set of points described by the equation

x1² + x2² + ⋯ + xn² = R²   (3.71)

in the n-dimensional state space. Let the region enclosed by this hypersphere be denoted by S(R). The system is said to be stable in the sense of Lyapunov if there exists a region S(δ) such that a trajectory starting from any point x(0) in this region does not go outside the region S(R). Note that with this definition, it is not necessary for the trajectory to approach the point of equilibrium; it is only required that the trajectory remain within the region S(R). This permits the existence of oscillations of limited amplitude, like limit cycles. This is illustrated in Figure 3.13 for the two-dimensional case, n = 2.

The system is said to be asymptotically stable if there exists a δ > 0 such that the trajectory starting from any point x(0) within S(δ) does not leave S(R) at any time and finally returns to the origin. The system is said to be monotonically stable if it is asymptotically stable and the distance of the state from the origin decreases monotonically with time. A system is said to be globally stable if the regions S(δ) and S(R) extend to infinity. The trajectory in Figure 3.14a shows asymptotic stability; the trajectory in Figure 3.14b shows monotonic stability.

* After A.M. Lyapunov, a Russian mathematician who did pioneering work in this area during the late nineteenth century.
A system is said to be locally stable if the region S(δ) is small, so that, when subjected to small perturbations, the state remains within the small specified region S(R).

FIGURE 3.13 Stability in the sense of Lyapunov.

FIGURE 3.14 (a) Asymptotic stability and (b) monotonic stability.

Lyapunov's direct method provides a means for determining the stability of a system without actually solving for the trajectories in the state space. It is based on the simple concept that the energy stored in a stable system cannot increase with time. Given a set of nonlinear state equations, one first defines a scalar function V(x) that has properties similar to energy and then examines its derivative with respect to time.

Theorem 3.1

A system described by ẋ = f(x) is asymptotically stable in the vicinity of the point of equilibrium at the origin of the state space if there exists a scalar function V such that

1. V(x) is continuous and has continuous first partial derivatives at the origin
2. V(x) > 0 for x ≠ 0 and V(0) = 0
3. V̇(x) < 0 for all x ≠ 0

Note that these conditions are sufficient but not necessary for stability. V(x) is often called a Lyapunov function.

Theorem 3.2

A system described by ẋ = f(x) is unstable in a region Ω about the equilibrium at the origin of the state space if there exists a scalar function V such that

1. V(x) is continuous and has continuous first partial derivatives in Ω
2. V(x) ≥ 0 for x ≠ 0 and V(0) = 0
3. V̇(x) > 0 for all x ≠ 0

Again, it should be noted that these conditions are sufficient but not necessary.

Example 3.15

Consider the system described by the equations

ẋ1 = x2,  ẋ2 = −x1 − x2³   (3.72)

If we make

V = x1² + x2²   (3.73)

which satisfies conditions 1 and 2, then we get

V̇ = 2x1ẋ1 + 2x2ẋ2 = 2x1x2 + 2x2(−x1 − x2³) = −2x2⁴   (3.74)

Evidently, V̇ < 0 for all nonzero values of x2, and hence the system is asymptotically stable.

Example 3.16

Consider the system described by

ẋ1 = −x1(1 − 2x1x2),  ẋ2 = −x2   (3.75)

Let

V = (1/2)x1² + x2²   (3.76)

which satisfies conditions 1 and 2. Then

V̇ = x1ẋ1 + 2x2ẋ2 = −x1²(1 − 2x1x2) − 2x2²   (3.77)

It will be seen that V̇ is negative if 1 − 2x1x2 > 0, that is, at all points for which x1x2 < 1/2. This defines a region of stability in the state space. Although it is not possible to make a general statement regarding global stability for this case, the system is asymptotically stable within this region.

Unfortunately, the main problem with this approach is the selection of a suitable function V(x) such that its derivative is either positive definite or negative definite; there is no general method that will work for every nonlinear system. Gibson (1963) has described the variable gradient method for generating Lyapunov functions [G63]. It is not described here, but the interested reader may refer to Section 8.1 of the Gibson text. Atherton (1982) provides further information on Lyapunov functions [A82].
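The algebra of Example 3.15 is easy to spot-check numerically; the sketch below (our own, using random sampling) confirms that V̇ along the trajectories of (3.72) reduces to −2x2⁴ and never becomes positive:

```python
import random

def vdot(x1, x2):
    """Time derivative of V = x1**2 + x2**2 along x1' = x2, x2' = -x1 - x2**3."""
    dx1 = x2
    dx2 = -x1 - x2 ** 3
    return 2.0 * x1 * dx1 + 2.0 * x2 * dx2  # algebraically equal to -2*x2**4

random.seed(1)
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
print(max(vdot(x1, x2) for x1, x2 in samples))  # -2*x2**4 is never positive
```

The same sampling idea applied to Example 3.16 would show V̇ changing sign once x1x2 exceeds 1/2, marking the boundary of the estimated stability region.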

References

[A82] Atherton, D. P., Nonlinear Control Engineering, Van Nostrand-Reinhold, London, U.K., 1982.
[B85] Bose, N. K., A systematic approach to stability of sets of polynomials, Contemporary Mathematics, 47:25–34, 1985.
[G63] Gibson, J. E., Nonlinear Automatic Control, McGraw-Hill, New York, 1963.
[K78] Kharitonov, V. L., Asymptotic stability of an equilibrium position of a family of systems of linear differential equations, Differentsial'nye Uravneniya, 14:1483–1485, 1978.
[MAD89] Minnichelli, R. J., Anagnost, J. J., and Desoer, C. A., An elementary proof of Kharitonov's stability theorem with extensions, IEEE Transactions on Automatic Control, AC-34:995–998, 1989.
[S94] Sinha, N. K., Control Systems, 2nd edn., Wiley Eastern, New Delhi, India, 1994.

4
Frequency-Domain Analysis of Relay Feedback Systems

Igor M. Boiko
University of Calgary

4.1 Relay Feedback Systems ........................................................... 4-1
  Introduction • Relay Feedback Systems • From Describing Function Analysis to LPRS Analysis
4.2 Locus of a Perturbed Relay System Theory ................................. 4-6
  Introduction to the LPRS • Computation of the LPRS from Differential Equations • Computation of the LPRS from Plant Transfer Function • LPRS of Low-Order Dynamics • Some Properties of the LPRS
4.3 Design of Compensating Filters in Relay Feedback Systems ......... 4-21
  Analysis of Performance of Relay Feedback Systems and LPRS Shaping • Compensator Design in the Relay Feedback Systems • Conclusions
References ............................................................................... 4-24

4.1 Relay Feedback Systems

4.1.1 Introduction

Relay feedback systems are one of the most important types of nonlinear systems. The term "relay" came from electrical applications, where on–off control has been used for a long time. To describe the nonlinear phenomena typical of electrical relays, a family of discontinuous nonlinearities also received the name "relay." Thus, when applied to this type of control system, the term "relay" is now associated not with applications but with the kind of nonlinearities that are found in the system models.

The applications of the relay feedback principle have evolved from the vibrational voltage regulators and missile thruster servomechanisms of the 1940s [3] to numerous on–off process parameter closed-loop control systems, hydraulic and pneumatic servo systems, DC motors, sigma–delta modulators, process identification and automatic tuning of PID controllers [9,10], etc. It is enough to mention the enormous number of residential temperature control systems in use throughout the world to understand how popular relay control systems are. A number of industrial examples of relay systems are given in the classical book on relay systems [7]. Furthermore, many existing sliding mode algorithms [8] can be considered and analyzed via the relay control principle.

The use of relay control provides a number of advantages over the use of linear control: simplicity of design, cheaper components, and, as a rule, a higher open-loop gain
and the ability to adapt the open-loop gain in the relay feedback system as parameters of the system change [1,4,6], as well as a

higher performance [5] than linear systems. In some applications, smoothing of the Coulomb friction and of other plant nonlinearities can also be achieved. For these reasons, relay control systems theory has been consistently receiving a lot of attention from the worldwide research community since the 1940s.

4.1.2 Relay Feedback Systems

It is a well-known fact that relay feedback systems exhibit self-excited oscillations as their inherent mode of operation. Analysis of the frequency and the amplitude of these oscillations is one objective of the analysis of relay systems. Traditionally, the scope of research comprises the following problems: existence and parameters of periodic motions, stability of limit cycles, and the input–output problem (set-point tracking or external signal propagation through the system), which also includes the disturbance attenuation problem.

An external signal always exists either in the form of an exogenous disturbance or of a set point, so the problem of analysis of the effect of external signals on the system characteristics is an important part of system performance analysis. In the first case, the system is supposed to respond to the disturbance in such a way as to provide its compensation. In the second case, the system is supposed to respond to the input so that the output is brought into alignment with this external input. In both cases, from the point of view of the methodology of analysis, the two signals are not different: if we consider a model of the system, the difference between them is only in the point of application. Therefore, we can consider only one signal applied to the system and treat it as a disturbance or a reference input depending on the system task.

The autonomous mode, when no external signals are applied to the system, does not normally occur in applications of relay feedback systems. However, the problem of propagation of external signals cannot be solved without the autonomous mode analysis having been carried out first. Therefore, a complex analysis, which includes the analysis of the autonomous mode and the analysis of external signal propagation, needs to be carried out in practical applications.

The method of analysis presented in this chapter is similar, from the methodological point of view, to the describing function method, and it is presented as a logical extension of that method. Some concepts (like the notion of the equivalent gain) are the same. However, the presented method is exact, and the notions that are traditionally used within the describing function method are redefined, so that they now describe the system properties in the exact sense. Because of this close connection between the two methods, some introductory material of the chapter is devoted to the describing function method. Section 4.2 presents the locus of a perturbed relay system (LPRS) method, and Section 4.3 gives the methodology of compensator design in relay feedback systems, based on the LPRS method.

The theory of relay feedback systems is presented in a number of classical and recent publications,
some of which are given in the list of references. In a general case (which includes a time-delay linear plant), relay feedback systems can be described by the following equations:

ẋ(t) = Ax(t) + Bu(t − τ)
y(t) = Cx(t)   (4.1)

where
A ∈ R^{n×n}, B ∈ R^{n×1}, C ∈ R^{1×n} are matrices, with A nonsingular
x ∈ R^{n×1} is the state vector
y ∈ R^{1} is the system output

τ is the time delay (which can be set to zero if no time delay is present)
u ∈ R^{1} is the control, defined as follows:

u(t) = +c if σ(t) = f(t) − y(t) ≥ b, or if σ(t) > −b and u(t−) = c
u(t) = −c if σ(t) = f(t) − y(t) ≤ −b, or if σ(t) < b and u(t−) = −c   (4.2)

where
f(t) is an external, relatively slow input signal to the system
σ is the error signal
c is the relay amplitude
2b is the hysteresis value of the relay
u(t−) = lim_{ε→0, ε>0} u(t − ε) is the control at the time instant immediately preceding time t

We shall consider that time t = 0 corresponds to the time of the error signal becoming equal to the positive half-hysteresis value (subject to σ̇ > 0): σ(0) = b, and call this time the time of switch initiation.

We represent the relay feedback system as a block diagram (Figure 4.1), where Wl(s) is the transfer function of the linear part (of the plant in the simplest case), which can be obtained from the matrix-vector description (4.1) as

Wl(s) = e^{−τs} C(Is − A)^{−1} B   (4.3)

We shall assume that the linear part is strictly proper, i.e., the relative degree of Wl(s) is one or higher.

4.1.3 From Describing Function Analysis to LPRS Analysis

4.1.3.1 Symmetric Oscillations in Relay Feedback Systems

A mode that may occur in a relay feedback system is a self-excited (nonvanishing) oscillation, which is also referred to as a self-excited periodic motion or a limit cycle. Consider the autonomous mode of system operation (f(t) ≡ 0). If the system does not have asymmetric nonlinearities, this periodic motion is symmetric in the autonomous mode (no external input applied). However, if a constant external input (disturbance) is applied to a system that has a periodic motion, the self-excited oscillations become biased, or asymmetric. The key approach to the analysis of relay feedback systems is, therefore, a method of analysis of self-excited oscillations, both symmetric and asymmetric; the following sections give a general methodology of such an analysis based on frequency-domain concepts. Due to the character of the nonlinearity, which results in the control having only two possible values, the system in Figure 4.1 cannot have an equilibrium point.
FIGURE 4.1 Relay feedback system.
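The switching law (4.2) and the loop of Figure 4.1 can be sketched in a few lines of code; the first-order plant y′ = −y + u (i.e., Wl(s) = 1/(s + 1)) and all numerical values below are illustrative assumptions, not taken from the text:

```python
def relay(sigma, u_prev, b=0.2, c=1.0):
    """Hysteresis relay implementing the switching law (4.2)."""
    if sigma >= b:
        return c
    if sigma <= -b:
        return -c
    return u_prev  # inside the hysteresis band: hold the previous control value

def simulate(t_end=20.0, dt=1e-3):
    """Autonomous loop (f = 0) of Figure 4.1 with an illustrative plant y' = -y + u."""
    y, u, switches = 0.0, 1.0, 0
    for _ in range(int(t_end / dt)):
        u_new = relay(-y, u)  # error signal sigma = f - y = -y
        if u_new != u:
            switches += 1
        u = u_new
        y += dt * (-y + u)    # forward-Euler integration of the plant
    return y, switches

y_end, n_switch = simulate()
```

With these values the loop settles into a sustained self-excited oscillation whose switching instants occur near y = ±b, in line with the symmetric periodic motion discussed next.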

Finding the values of the frequency and the amplitude of the self-excited oscillations is one of the main objectives of the analysis of relay feedback systems. Very often, an approximate method known as the describing function (DF) method is used in engineering practice. The main concepts of this method were developed in the 1940s and 1950s [6]; a comprehensive coverage of the DF method is given in Refs. [1,4].

In accordance with the DF method concepts, an assumption is made that the input to the nonlinearity is a harmonic signal, and the so-called DF of the nonlinearity is found as a function of the amplitude and frequency as per the following definition:

N(a, ω) = (ω/πa) ∫₀^{2π/ω} u(t) sin ωt dt + j (ω/πa) ∫₀^{2π/ω} u(t) cos ωt dt   (4.4)

The DF given by formula (4.4) is essentially a complex gain of the transformation by the nonlinearity of the harmonic input into the control signal, with respect to the first harmonic of the control signal. For this method to provide acceptable accuracy, the linear part of the system must be a low-pass filter, able to filter out the higher harmonics, so that the input to the relay can approximately be considered a sinusoid; if the linear part attenuates the higher harmonics of the control signal well enough, the DF method may give a relatively precise result.

For the hysteretic relay, the DF is a function of the amplitude only and does not depend on the frequency, and the formula of the DF can be obtained analytically. It is given as follows [1]:

N(a) = (4c/πa) √(1 − (b/a)²) − j (4cb/πa²),  a ≥ b   (4.5)

We can assume that a symmetric periodic process of unknown frequency Ω and amplitude a of the input to the relay occurs in the system. The periodic solution can then be found from the equation of harmonic balance [1]:

Wl(jΩ) = −1/N(a)   (4.6)

which is a complex equation with two unknown values: the frequency Ω and the amplitude a. The expression in the left-hand side of Equation 4.6 is the frequency response (Nyquist plot) of the linear part of the system; the value in the right-hand side is the negative reciprocal of the DF of the hysteretic relay. Let us obtain it from formula (4.5):

−N^{−1}(a) = −(πa/4c) √(1 − (b/a)²) − j (πb/4c),  a ≥ b   (4.7)

One can see from Equation 4.7 that the imaginary part does not depend on the amplitude a, which results in a simple solution of Equation 4.6. Graphically, the periodic solution corresponds, in the complex plane, to the point of intersection of the Nyquist plot of the linear part (a function of the frequency) and the negative reciprocal of the DF of the hysteretic relay (a function of the amplitude) given by formula (4.7). The DF method thus provides a simple and efficient, but not exact, solution of the periodic problem: the solution is approximate as a result of the underlying assumption about the harmonic shape of the input signal to the relay.
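As a worked illustration of solving (4.6) with (4.7), the sketch below finds Ω and a for an assumed linear part Wl(s) = 4/(s + 1)³ (our example, not from the text), exploiting the fact that the imaginary-part condition does not involve a:

```python
import cmath
import math

def Wl(s):
    """Illustrative linear part: Wl(s) = 4/(s + 1)**3 (an assumption)."""
    return 4.0 / (s + 1.0) ** 3

b, c = 0.1, 1.0  # hysteresis half-width and relay amplitude (illustrative)
target = -math.pi * b / (4.0 * c)

# Im Wl(j*w) rises monotonically from -1 (w = 1) to 0 (w = sqrt(3)); bisect for the root
lo, hi = 1.0, math.sqrt(3.0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if Wl(1j * mid).imag < target:
        lo = mid
    else:
        hi = mid
Omega = 0.5 * (lo + hi)

# Amplitude from the real part: Re Wl(jOmega) = -(pi/(4c)) * sqrt(a**2 - b**2)
R = -Wl(1j * Omega).real * 4.0 * c / math.pi
a = math.sqrt(R * R + b * b)
```

The resulting pair (Ω, a) satisfies the harmonic balance (4.6) to numerical precision; for this plant, Ω falls a little below √3, the frequency at which the Nyquist plot crosses the real axis.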
(a ≥ b) πa πa a 2 (4.7).6 is the frequency response (Nyquist plot) of the linear part of the system. an assumption is made that the input to the nonlinearity is a harmonic signal. LLC . The expression in the left-hand side of the Equation 4. if the linear part of the system has the property of the low-pass filter. in the complex plane. The periodic solution in the relay feedback system can be found from the equation of the harmonic balance [1]. the DF is a function of the amplitude only and does not depend on the frequency. For the hysteretic relay nonlinearity. Very often. Let us obtain it from formula (4.4) DF given by formula (4. the DF method may give a relatively precise result.4) is essentially a complex gain of the transformation of the harmonic input by the nonlinearity into the control signal with respect to the first harmonic in the control signal. the linear part of the system must be a low-pass filter to be able to filter out higher harmonics. Graphically. We can assume that a symmetric periodic process of unknown frequency Ω and amplitude a of the input to the relay occurs in the system. [1. The value in the right-hand side is the negative reciprocal of the DF of the hysteretic relay.7) One can see from Equation 4.

the system would exhibit a stable oscillation and measure the values of the constant term of the control (mean control.2.2 Asymmetric Oscillations in Relay Feedback Systems and Propagation of External Constant Inputs Now we turn to the analysis of asymmetric oscillations in the relay feedback system.2  Asymmetric oscillations at unequally spaced switches. © 2011 by Taylor and Francis Group.1. A typical bias function is depicted in Figure 4. so that each signal now has a periodic and a constant term: u(t) = u0 + up(t). This function will be further referred to as the bias function. and subscript p refers to the periodic term of the function (the sum of periodic terms of the Fourier series). y(t) = y0 + yp(t). so that at each value of the input. The constant term is the mean or averaged value of the signal on the period. where subscript “0” refers to the constant term in the Fourier series.Frequency-Domain Analysis of Relay Feedback Systems 4 -5 4.3  Bias function and equivalent gain. Then. u0 c tan α = kn α σ0 –c FIGURE 4. Now let us imagine that we quasi-statically (slowly) slew the input from a certain negative value to a positive value.3. which would be not a discontinuous but a smooth function u0 = u0(σ0). The described u c θ1 T θ2 t –c σ b –b t FIGURE 4. which is the key step to the analysis of the system response to constant and slow varying disturbances and reference input signals. which can quantitatively be represented by the shaded area in Figure 4. we can determine the constant term of the control signal as a function of the constant term of the error signal. Assume that the input to the system is a constant signal f0: f (t) ≡ f0. By doing this. with account of the signs) versus the constant term of the error signal (mean error).3.2). σ(t) = σ0 + σp(t). LLC . an asymmetric periodic motion occurs in the system (Figure 4.

The “slow” subsystem deals with the forced motions caused by an input signal or by a disturbance and the component of the motion due to nonzero initial conditions.8) where a is the amplitude of the oscillations. we can obtain the DF of the relay and the derivative of the mean control with respect to the mean error for the symmetric sine input kn( DF ) = ∂u0 ∂σ 0 = σ0 = 0 2c πa 1 1− b a () 2 (4.9) From (4. the motions in relay feedback systems are normally analyzed as motions in two separate dynamic subsystems: the “slow” subsystem and the “fast” subsystem. This decomposition of the dynamics is possible if the external input is much slower than the self-excited oscillations. finding the equivalent gain value is the main point of the input– output analysis.8) and (4. The “fast” subsystem pertains to the self-excited oscillations or periodic motions. 4. the relay feedback system behaves similarly to a linear system with respect to the response to those input signals. which is described in [5]. The two dynamic subsystems interact with each other via a set of parameters: the results of the solution of the “fast” subsystem are used by the “slow” subsystem. Exactly like within the DF method. The equivalent gain of the relay is = lim f0 → 0 (u0 / σ0 ).1 caused by a nonzero constant input f (t) ≡ f0 ≠ 0.4 -6 Control and Mechatronics smoothing effect is known as the chatter smoothing phenomenon. σ0 ) = + 1−   1−      a   πa  a    2 2       −j 4cb . πa2 (a ≥ b + σ ) 0 (4.1  Introduction to the LPRS As we considered above. LLC . which conceptually is similar to the so-called the incremental gain [1. Since at the slow inputs. σ0 ) = c b + σ0 b − σ0 arcsin − arcsin a a π     (4. The DF of the hysteresis relay with a biased sine input is represented by the following well-known formula [1]: 2c   b + σ0   b − σ0   N (a.9). 
Exactly as within the DF method, we shall proceed from the assumption that the external signals applied to the system are slow in comparison with the oscillations.
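The consistency of (4.9) and (4.10) is easy to spot-check: differentiating the mean-control formula numerically at σ0 = 0 should reproduce the equivalent gain. The parameter values below are arbitrary illustrative choices:

```python
import math

def u0(a, s0, b=0.2, c=1.0):
    """Mean control as a function of amplitude and mean error, formula (4.9)."""
    return (c / math.pi) * (math.asin((b + s0) / a) - math.asin((b - s0) / a))

def kn_df(a, b=0.2, c=1.0):
    """Equivalent gain of the relay, formula (4.10)."""
    return (2.0 * c / (math.pi * a)) / math.sqrt(1.0 - (b / a) ** 2)

a, h = 1.0, 1e-6
numeric = (u0(a, h) - u0(a, -h)) / (2.0 * h)  # central difference d(u0)/d(sigma0) at 0
print(numeric, kn_df(a))
```

The two numbers agree to finite-difference accuracy, confirming that (4.10) is indeed the slope of the bias function at the origin.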

Consider again the harmonic balance equation (4.6). With the use of the formulas of the negative reciprocal of the DF (4.7) and of the equivalent gain of the relay (4.10), we can rewrite formula (4.6) as follows:

Wl(jΩ) = −(1/2)(1/kn(DF)) + j (π/4c) y(DF)(0)   (4.11)

In the imaginary part of Equation 4.11, we consider that the condition of the switch of the relay from minus to plus (defined as zero time) is the equality of the system output to the negative half-hysteresis: y(DF)(t = 0) = −b. It follows from (4.11) that the frequency of the oscillations and the equivalent gain in the system can be varied by changing the hysteresis value 2b of the relay. Therefore, the following two mappings can be considered: M1: b → Ω and M2: b → kn. Assume that M1 has an inverse mapping M1^{−1}: Ω → b (this follows from (4.11) for the DF analysis and is proved below via deriving an analytical formula of that mapping), and consider the composite mapping M2 ∘ M1^{−1}: Ω → b → kn, in which we treat the frequency ω as an independent parameter.

Now let us define a certain function J exactly as the expression in the right-hand side of formula (4.11), but require from this function that the values of the equivalent gain and of the output at zero time be exact values. If we derive the function that satisfies these requirements, we will be able to obtain the exact values of the frequency of the oscillations and of the equivalent gain. We write the following definition of this function:

J(ω) = −(1/2)(1/kn) + j (π/4c) y(t)|_{t=0}   (4.12)

where kn = M2 ∘ M1^{−1}(ω), t = 0 is the time of the switch of the relay from "−c" to "+c", and ω ∈ (0, ∞). Thus, J(ω) comprises the two mappings and is defined as a characteristic of the response of the linear part to the unequally spaced pulse input u(t), subject to f0 → 0, as the frequency ω is varied. We will refer to the function J(ω) defined above, and to its plot in the complex plane with the frequency ω varied, as the LPRS. The real part of J(ω) contains information about the gain kn, and the imaginary part of J(ω) comprises the condition of the switching of the relay and, consequently, contains information about the frequency of the oscillations.

With the LPRS of a given system computed, we are able to determine the frequency of the oscillations (as well as the amplitude) and the equivalent gain kn (Figure 4.4). The periodic solution is given by the point of intersection of the LPRS and of the straight line that lies at the distance πb/(4c) below (if b > 0) or above (if b < 0) the horizontal axis and parallel to it (the line "−πb/4c"). Thus, the frequency Ω of the oscillations can be computed via solving the equation

Im J(Ω) = −(πb/4c)   (4.13)

(i.e., y(0) = −b is the condition of the relay switch), and the gain kn can be computed as

kn = −1/(2 Re J(Ω))   (4.14)
Now let us define a certain function J exactly as the expression in the right-hand     side of formula (4.∞ ).11) In the imaginary part of Equation 4. If we derive the function that satisfies the above requirements.11) but require from this function that the values of the equivalent gain and the output M1−1  : ω → b → kn. we write the following definition of this function: π 1 1 + j y(t ) 2 kn 4c t =0 J (ω) = − where kn = M2 M1−1(ω) (4.13) −1 1 (i. (4.10).12) ( ) y(t ) t = 0 = M (ω) t = 0 is the time of the switch of the relay from “−c” to “+c” Thus. Therefore. consequently. (4. we consider that the condition of the switch of the relay from minus to plus (defined as zero time) is the equality of the system output to the negative half ­ hysteresis (−b): y(DF)(t = 0) = −b. According to (4.e.11) for the DF analysis and is proved below via deriving an analytical formula of that mapping). (4. J(ω) comprises the two mappings and is defined as a characteristic of the response of the linear part to the unequally spaced pulse input u(t) subject to f0 → 0 as the frequency ω is varied. Assume that M1 has an inverse mapping (it follows from (4. The real part of J(ω) contains information about the gain kn.

We can define the hyperplane for our analysis by the following equation: f0 − Cx = b. and it can be considered c = 1 when deriving the LPRS formula) is determined as follows: η p = e Aθ1 ρ p + A −1 e Aθ1 − I B ( ) (4. Formula (4.2. Its solution can be traced back to the works of Poincaré who introduced a now widely used geometric interpretation of this problem as finding a closed orbit in the state space.1) and (4.1  Matrix State-Space Description Approach We now derive the LPRS formula for a time-delay plant in accordance with its definition (4. which corresponds to the initiation of the relay switch from −c to +c (subject to the time derivative of the error signal being positive). it can be easily derived from the parameters of the linear part without employing the variables of formula (4.2. Derivation of the LPRS formula necessitates finding a periodic solution of (4.2. t > τ ( ) and also an expression for the state vector at time t = τ: x(τ) = eAτx(0) − A−1(eAτ − I)B. It is shown below that although J(ω) is defined via the parameters of the oscillations in a closed-loop system. therefore.4  LPRS and analysis of relay feedback system.2  Computation of the LPRS from Differential Equations 4. The approach of Poincaré involves finding a certain map that relates subsequent intersection by the state trajectory of certain hyperplane. 4.* Formula (4. including orbital stability of the obtained periodic solution and initial conditions.12) is only a definition and not intended for the purpose of computing of the LPRS J(ω).4 -8 Control and Mechatronics Im J πb 4c 0 Re J ω=Ω ω 1 2kn FIGURE 4.15) * The actual existence of a periodic motion depends on a number of other factors too. Consider the solution for the constant control u = ±1: x(t ) = e A(t − τ ) x(τ) ± A −1 e A(t − τ ) − I B.12).2).1) and (4. A fixed point of this map would give a periodic solution in the dynamic system.12). 
which for an arbitrary nonlinearity is a fundamental problem in both mathematics and control theory. Its solution can be traced back to the works of Poincaré, who introduced a now widely used geometric interpretation of this problem as finding a closed orbit in the state space. The approach of Poincaré involves finding a certain map that relates subsequent intersections of a certain hyperplane by the state trajectory; a fixed point of this map gives a periodic solution of the dynamic system. We can define the hyperplane for our analysis by the equation f0 − Cx = b, which corresponds to the initiation of the relay switch from −c to +c (subject to the time derivative of the error signal being positive).

Consider the solution of (4.1) for the constant control u = ±1:

x(t) = e^{A(t−τ)} x(τ) ± A^{−1}(e^{A(t−τ)} − I)B,  t > τ

and also the expression for the state vector at time t = τ: x(τ) = e^{Aτ} x(0) − A^{−1}(e^{Aτ} − I)B. A fixed point of the Poincaré return map for an asymmetric periodic motion with positive pulse length θ1, negative pulse length θ2, and unity amplitude (the LPRS does not depend on the control amplitude, so c = 1 can be assumed when deriving the LPRS formula) is determined as follows:*

ηp = e^{Aθ1} ρp + A^{−1}(e^{Aθ1} − I)B   (4.15)

* The actual existence of a periodic motion depends on a number of other factors too, including orbital stability of the obtained periodic solution and initial conditions. Formula (4.13) provides a periodic solution and is, therefore, a necessary condition of the existence of a periodic motion in the system.

where ρp = x(τ) = x(T + τ), ηp = x(θ1 + τ), and T = θ1 + θ2 is the period of the oscillations. Similarly,

ρp = e^{Aθ2} ηp − A^{−1}(e^{Aθ2} − I)B   (4.16)

We suppose that θ1 and θ2 are known. Then (4.15) and (4.16) can be solved for ρp and ηp as follows. Substitute (4.16) into (4.15):

ηp = e^{Aθ1} [e^{Aθ2} ηp − A^{−1}(e^{Aθ2} − I)B] + A^{−1}(e^{Aθ1} − I)B

which leads to

ηp = e^{A(θ1+θ2)} ηp − e^{Aθ1} A^{−1}(e^{Aθ2} − I)B + A^{−1}(e^{Aθ1} − I)B

Regroup the above equation as follows:

(I − e^{A(θ1+θ2)}) ηp = [−e^{Aθ1} A^{−1}(e^{Aθ2} − I) + A^{−1}(e^{Aθ1} − I)] B

Express ηp from the above formula:

ηp = (I − e^{A(θ1+θ2)})^{−1} [−e^{Aθ1} A^{−1}(e^{Aθ2} − I) + A^{−1}(e^{Aθ1} − I)] B

Considering that A e^{At} A^{−1} = e^{At}, we obtain the following formula:

ηp = (I − e^{A(θ1+θ2)})^{−1} A^{−1} [2e^{Aθ1} − e^{A(θ1+θ2)} − I] B

Similarly, substituting (4.15) into (4.16) gives

ρp = e^{A(θ1+θ2)} ρp + e^{Aθ2} A^{−1}(e^{Aθ1} − I)B − A^{−1}(e^{Aθ2} − I)B

or

(I − e^{A(θ1+θ2)}) ρp = e^{Aθ2} A^{−1}(e^{Aθ1} − I)B − A^{−1}(e^{Aθ2} − I)B

which gives

ρp = (I − e^{A(θ1+θ2)})^{−1} A^{−1} [e^{A(θ1+θ2)} − 2e^{Aθ2} + I] B

Considering θ1 + θ2 = T, these become

ηp = (I − e^{AT})^{−1} A^{−1} [2e^{Aθ1} − e^{AT} − I] B   (4.17)

ρp = (I − e^{AT})^{−1} A^{−1} [e^{AT} − 2e^{Aθ2} + I] B   (4.18)
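For a scalar plant (an illustrative assumption: A = −1, B = 1, c = 1, τ = 0), the fixed-point formula (4.18) can be checked against direct iteration of the Poincaré return map built from (4.15) and (4.16):

```python
import math

A, B = -1.0, 1.0        # scalar "matrices" of an illustrative first-order plant
th1, th2 = 0.7, 1.3     # positive and negative pulse lengths
T = th1 + th2           # period of the asymmetric motion

def step(x, theta, u):
    """State after applying constant control u for time theta (scalar case)."""
    e = math.exp(A * theta)
    return e * x + (1.0 / A) * (e - 1.0) * B * u

# Iterate the return map: +1 for th1 seconds, then -1 for th2 seconds
x = 0.0
for _ in range(200):
    x = step(step(x, th1, +1.0), th2, -1.0)

# Closed-form fixed point rho_p from (4.18), specialized to the scalar case
rho_p = (1.0 / (1.0 - math.exp(A * T))) * (1.0 / A) \
        * (math.exp(A * T) - 2.0 * math.exp(A * th2) + 1.0) * B
```

Because |e^{AT}| < 1, the iteration contracts geometrically onto the unique fixed point, which coincides with the closed-form value from (4.18).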

Now, considering that

ρ_p = x(τ) = e^{Aτ} x(0) − A^{−1}(e^{Aτ} − I)B,
η_p = x(θ1 + τ) = e^{Aτ} x(θ1) + A^{−1}(e^{Aτ} − I)B,

we find x(0) as follows:

x(0) = e^{−Aτ}[(I − e^{AT})^{−1} A^{−1}(e^{AT} − 2e^{Aθ2} + I)B + A^{−1}(e^{Aτ} − I)B]   (4.19)

Reasoning along similar lines, find the formula for x(θ1):

x(θ1) = e^{−Aτ}[(I − e^{AT})^{−1} A^{−1}(2e^{Aθ1} − e^{AT} − I)B − A^{−1}(e^{Aτ} − I)B]   (4.20)

Consider now the symmetric motion as a limit of (4.19) at θ1, θ2 → θ = T/2:

lim_{θ1,θ2→T/2} x(0) = (I + e^{AT/2})^{−1}[I + e^{AT/2} − 2e^{A(T/2−τ)}]A^{−1}B   (4.21)

The imaginary part of the LPRS can be obtained from (4.21) in accordance with its definition as follows:

Im J(ω) = (π/4) C lim_{θ1,θ2→θ=T/2} x(0) = (π/4) C (I + e^{Aπ/ω})^{−1}[I + e^{Aπ/ω} − 2e^{A(π/ω−τ)}]A^{−1}B   (4.22)

For deriving the expression of the real part of the LPRS, consider the periodic solution provided by the conditions of the relay switches as a result of the feedback action:

f0 − y(0) = b,
f0 − y(θ1) = −b   (4.23)

Having solved the set of equations (4.23) for f0, we obtain f0 = (y(0) + y(θ1))/2. Hence, the constant term of the error signal σ(t) is σ0 = f0 − y0 = (y(0) + y(θ1))/2 − y0. Let γ = θ1/(θ1 + θ2) = θ1/T; then θ1 = γT, θ2 = (1 − γ)T, and u0 = 2γ − 1. The real part of the LPRS definition formula can be transformed into

Re J(ω) = −lim_{γ→1/2} (1/u0)[(y(0) + y(θ1))/2 − y0]   (4.24)

Since y = Cx, (4.24) can be rewritten as

Re J(ω) = −lim_{γ→1/2} [0.5C(x(0) + x(θ1)) − y0]/(2γ − 1)   (4.25)

where x(0) and x(θ1) are given by (4.19) and (4.20), respectively. To find the limit lim_{u0→0}(y0/u0), consider the equations for the constant terms of the variables (averaged variables), which are obtained from the original equations of the plant via equating the derivatives to zero:

0 = Ax0 + Bu0,
y0 = Cx0   (4.26)

From those equations, obtain x0 = −A^{−1}Bu0 and y0 = −CA^{−1}Bu0. Therefore,

lim_{u0→0} y0/u0 = −CA^{−1}B   (4.28)

which is the gain of the plant. Now, at first, find the following limit:

lim_{u0→0 (θ1+θ2=T=const)} (x(0) + x(θ1))/u0 = lim_{γ→1/2} (x(0) + x(θ1))/(2γ − 1) = 2e^{−Aτ} T (I − e^{AT})^{−1} e^{AT/2} B   (4.27)

which is obtained by considering the limit

lim_{γ→1/2} (e^{AγT} − e^{−AγT}e^{AT})/(2γ − 1) = ATe^{AT/2}.

The real part of the LPRS was derived under the assumption that the limits at u0 → 0 and at γ → 1/2 are equal. This can only be true if the derivative dT/du0 at the point u0 = 0 is zero, which follows from the symmetry of the function T = T(u0); a rigorous proof of this is given in Ref. [2]. The real part of the LPRS is now obtained by substituting (4.27) and (4.28) for the respective limits in (4.25):

Re J(ω) = −(1/2) TC (I − e^{AT})^{−1} e^{A(T/2−τ)} B − (1/2) CA^{−1}B   (4.29)

Finally, the state-space description based formula of the LPRS is obtained by combining formulas (4.22) and (4.29):

J(ω) = −0.5C[A^{−1} + (2π/ω)(I − e^{A(2π/ω)})^{−1} e^{A(π/ω−τ)}]B + j(π/4) C (I + e^{Aπ/ω})^{−1}[I + e^{Aπ/ω} − 2e^{A(π/ω−τ)}]A^{−1}B   (4.30)
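Formula (4.30) can be exercised in the scalar case, where A, B, and C are numbers and every matrix exponential reduces to an ordinary exponential; the result can then be compared with the closed-form first-order plus dead-time LPRS (4.51) given later in the chapter. Parameter values below are assumed for illustration.

```python
import math

# Scalar sketch of formula (4.30): for the plant K*exp(-tau*s)/(T*s+1),
# A = -1/T, B = K/T, C = 1 (values assumed for illustration).
K, T, tau = 1.0, 1.0, 0.2
A, B, C = -1.0 / T, K / T, 1.0

def lprs_state_space(w):
    Tp = 2.0 * math.pi / w                       # period of the oscillation
    re = -0.5 * C * (1.0 / A + Tp * math.exp(A * (Tp / 2 - tau))
                     / (1.0 - math.exp(A * Tp))) * B
    im = (math.pi / 4.0) * C * (1.0 + math.exp(A * Tp / 2)
          - 2.0 * math.exp(A * (Tp / 2 - tau))) / (1.0 + math.exp(A * Tp / 2)) \
          * (1.0 / A) * B
    return complex(re, im)

def lprs_fopdt(w):
    # Closed form (4.51) for the same plant: alpha = pi/(T*w), gamma = tau/T.
    a, g = math.pi / (T * w), tau / T
    re = 0.5 * K * (1.0 - a * math.exp(g) / math.sinh(a))
    im = 0.25 * math.pi * K * (2.0 * math.exp(g - a) / (1.0 + math.exp(-a)) - 1.0)
    return complex(re, im)

print(lprs_state_space(2.0), lprs_fopdt(2.0))
```

The two evaluations coincide to machine precision, since the scalar reduction of (4.30) is algebraically identical to (4.51).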

4.2.2  Orbital Asymptotic Stability

The stability of periodic orbits (limit cycles) is usually referred to as orbital stability. The notion of orbital stability is different from the notion of the stability of an equilibrium point: for an orbitally stable motion, the difference between the perturbed and unperturbed motions does not necessarily vanish. What is important is that the motion in the perturbed system converges to the orbit of the unperturbed system. In relay feedback systems, analysis of local orbital stability can be reduced to the analysis of a certain equivalent discrete-time system with time instants corresponding to the switches of the relay, which can be obtained from the original system by considering the Poincaré map of the motion having an initial perturbation.

Consider the process with time t ∈ (−∞, ∞) and assume that by the time t = 0, a periodic motion has been established, with the fixed point of the Poincaré map given by formulas (4.17) and (4.18); time t = 0 is the switch initiation time. At the switch time t = τ, the state vector receives a perturbation (a deviation from its value in the periodic motion): x(τ) = ρ = ρ_p + δρ, where δρ is the initial perturbation. The task is to find the mapping of this perturbation into the perturbation of the state vector at the time of the next switch and to analyze whether the initial perturbation vanishes. The local orbital stability criterion can be formulated as the following statement.

The periodic solution (4.13) is locally orbitally asymptotically stable if all eigenvalues of the matrix

Φ0 = [I − v(τ + T/2 −)C / (C v(τ + T/2 −))] e^{AT/2}   (4.31)

where the velocity vector is given by

v(τ + T/2 −) = ẋ(τ + T/2 −) = 2(I + e^{AT/2})^{−1} e^{A(T/2−τ)} B,

have magnitudes less than 1.

Proof. The main task of our analysis is to find the mapping δρ → δη; from the Jacobian matrix of this mapping we will be able to make a conclusion about the orbital stability of the system. For the time interval t ∈ [τ, t*], where t* is the time of the switch of the relay from "+1" to "−1", the state vector (while the control is u = 1) is given as follows:

x(t) = e^{A(t−τ)}(ρ_p + δρ) + A^{−1}(e^{A(t−τ)} − I)B = e^{A(t−τ)}ρ_p + A^{−1}(e^{A(t−τ)} − I)B + e^{A(t−τ)}δρ   (4.32)

where the first two addends give the unperturbed motion and the third addend gives the motion due to the initial perturbation δρ. Also, assume that there are no switches of the relay within the interval (0, τ). Let us denote

x(t*) = η = η_p + δη   (4.33)

It should be noted that η_p is determined not at time t = t* but at time t = τ + θ1. It follows from (4.33) that

δη = x(t*) − η_p   (4.34)

Considering that all perturbations are small and that the times t* and τ + θ1 are close, evaluate x(t*) via a linear approximation of x(t) at the point t = τ + θ1:

x(t*) − x(τ + θ1) = ẋ(τ + θ1 −)(t* − τ − θ1)

where ẋ(τ + θ1 −) is the value of the derivative at the time immediately preceding t = τ + θ1. Express x(t*) from the last equation as follows:

x(t*) = x(τ + θ1) + ẋ(τ + θ1 −)(t* − τ − θ1)   (4.35)

Now, evaluate x(τ + θ1) using (4.32):

x(τ + θ1) = η_p + e^{Aθ1} δρ   (4.36)

Substitute (4.35) and (4.36) into (4.34):

δη = x(τ + θ1) + ẋ(τ + θ1 −)(t* − τ − θ1) − η_p = v(τ + θ1 −)(t* − τ − θ1) + e^{Aθ1} δρ   (4.37)

where v(t) = ẋ(t). Now, find (t* − τ − θ1) from the switch condition

(t* − τ − θ1) ẏ(τ + θ1 −) = −δy(τ + θ1)   (4.38)

From (4.38), with δy(τ + θ1) = Cδx(τ + θ1) = Ce^{Aθ1}δρ, we obtain

t* − τ − θ1 = −δy(τ + θ1)/ẏ(τ + θ1 −) = −Ce^{Aθ1}δρ / ẏ(τ + θ1 −)

Now, substitute the expression for t* − τ − θ1 into (4.37):

δη = −v(τ + θ1 −)Ce^{Aθ1}δρ / ẏ(τ + θ1 −) + e^{Aθ1}δρ = [I − v(τ + θ1 −)C/(Cv(τ + θ1 −))] e^{Aθ1} δρ   (4.39)

Denote the Jacobian matrix of the mapping δρ → δη as follows:

Φ1 = [I − v(τ + θ1 −)C/(Cv(τ + θ1 −))] e^{Aθ1}   (4.40)

To be able to assess the local orbital stability of the periodic solution, we need to find the Jacobian matrix of the full Poincaré return map, which is the chain-rule composition of two mappings of this type, taken over the positive and the negative pulses of the period.
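A small numeric sketch of the criterion (4.31): for the double-pole plant W(s) = 1/(s + 1)² in a state-space realization chosen here for illustration, e^{At} has a simple closed form, so the Jacobian and its eigenvalues can be computed without any linear-algebra library. The half-period value below is assumed rather than computed from a hysteresis value.

```python
import math

# Orbital stability check sketched for W(s) = 1/(s+1)^2 in controllable form:
# A = [[0,1],[-1,-2]], B = [0,1]^T, C = [1,0], tau = 0.
# For this A, exp(A*t) = exp(-t) * [[1+t, t], [-t, 1-t]].
theta = 1.0                        # assumed half-period of a symmetric solution

def expA(t):
    e = math.exp(-t)
    return [[e * (1 + t), e * t], [-e * t, e * (1 - t)]]

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def inv2(M):
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

E = expA(theta)
B = [0.0, 1.0]
# Velocity at the switch (tau = 0): v = 2*(I + e^{A theta})^{-1} e^{A theta} B.
IpE = [[1 + E[0][0], E[0][1]], [E[1][0], 1 + E[1][1]]]
v = mat_vec(inv2(IpE), mat_vec(E, B))
v = [2 * v[0], 2 * v[1]]
Cv = v[0]                          # C v with C = [1, 0]; must be positive

# Phi = (I - v*C/(C*v)) * e^{A theta}; with C = [1,0] the projector zeroes row 1.
P = [[1 - v[0]/Cv, 0.0], [-v[1]/Cv, 1.0]]
Phi = [[P[0][0]*E[0][0] + P[0][1]*E[1][0], P[0][0]*E[0][1] + P[0][1]*E[1][1]],
       [P[1][0]*E[0][0] + P[1][1]*E[1][0], P[1][0]*E[0][1] + P[1][1]*E[1][1]]]
tr = Phi[0][0] + Phi[1][1]
det = Phi[0][0]*Phi[1][1] - Phi[0][1]*Phi[1][0]
# det(Phi) = 0, so the eigenvalues are 0 and tr; |tr| < 1 means orbital stability.
print(Cv, tr, det)
```

The projector I − vC/(Cv) is rank-deficient, so one eigenvalue is always zero and only the trace needs checking against the unit circle.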

For the symmetric motion, checking only half a period of the motion is sufficient, which gives formula (4.31). In addition to the stability analysis, it can be recommended that the direction of the relay switch be verified too. This condition is formulated as the satisfaction of the following inequality:

ẏ(τ + T/2 −) = Cv(τ + T/2 −) > 0

where v(τ + T/2 −) is given above.

4.2.3  Computation of the LPRS from Plant Transfer Function

4.2.3.1  Infinite Series Approach

Another formula for J(ω) can now be derived for the case of the linear part given by a transfer function. Suppose the linear part does not have integrators. We write the Fourier series expansion of the signal u(t) (Figure 4.2):

u(t) = u0 + (4c/π) Σ_{k=1}^{∞} [sin(πkθ1/(θ1+θ2))/k] [cos(kωθ1/2) cos(kωt) + sin(kωθ1/2) sin(kωt)]

where u0 = c(θ1 − θ2)/(θ1 + θ2) and ω = 2π/(θ1 + θ2). Therefore, y(t), as the response of the linear part with the transfer function W_l(s), can be written as

y(t) = y0 + (4c/π) Σ_{k=1}^{∞} [sin(πkθ1/(θ1+θ2))/k] A_l(kω) [cos(kωθ1/2) cos(kωt + φ_l(kω)) + sin(kωθ1/2) sin(kωt + φ_l(kω))]   (4.41)

where φ_l(kω) = arg W_l(jkω), A_l(kω) = |W_l(jkω)|, and y0 = u0|W_l(j0)|. The conditions of the switches of the relay have the form of Equations (4.23), where y(0) and y(θ1) can be obtained from (4.41) if we set t = 0 and t = θ1, respectively:

y(0) = y0 + (4c/π) Σ_{k=1}^{∞} (1/k)[0.5 sin(2πkθ1/(θ1+θ2)) Re W_l(jkω) + sin²(πkθ1/(θ1+θ2)) Im W_l(jkω)]   (4.42)

y(θ1) = y0 + (4c/π) Σ_{k=1}^{∞} (1/k)[0.5 sin(2πkθ1/(θ1+θ2)) Re W_l(jkω) − sin²(πkθ1/(θ1+θ2)) Im W_l(jkω)]   (4.43)

Differentiating (4.23) with respect to f0 (and taking into account (4.42) and (4.43)), we obtain formulas containing the derivatives in the point θ1 = θ2 = θ = π/ω. Having solved those equations for d(θ1 − θ2)/df0 and d(θ1 + θ2)/df0, we obtain d(θ1 + θ2)/df0|_{f0=0} = 0, which corresponds to a zero derivative of the frequency of the oscillations, and

d(θ1 − θ2)/df0|_{f0=0} = 2θ / {c[W_l(0) + 2 Σ_{k=1}^{∞} cos(πk) Re W_l(jkω)]}   (4.44)

Considering the formula of the closed-loop system transfer function, we can write

d(θ1 − θ2)/df0|_{f0=0} = 2θk_n / [c(1 + k_n A_l(0))]   (4.45)

Having solved together Equations 4.44 and 4.45 for k_n, we obtain the following expression:

k_n = −0.5 / [Σ_{k=1}^{∞} (−1)^k Re W_l(jkπ/θ)]   (4.46)

Taking into account formula (4.46), the identity ω = π/θ, and the definition of the LPRS (4.12), we obtain the final form of the expression for Re J(ω). Similarly, having solved the set of equations (4.23), where θ1 = θ2 = θ and y(0) and y(θ1) have the form (4.42) and (4.43), we obtain the final formula for Im J(ω). Having put the real and the imaginary parts together, we obtain the final formula of the LPRS J(ω) for relay feedback systems:

J(ω) = Σ_{k=1}^{∞} (−1)^{k+1} Re W_l(jkω) + j Σ_{k=1}^{∞} [1/(2k−1)] Im W_l(j(2k−1)ω)   (4.47)

4.2.3.2  Partial Fraction Expansion Technique

It follows from (4.47) that the LPRS possesses the property of additivity, which can be formulated as follows.

Additivity property. If the transfer function W_l(s) of the linear part is a sum of n transfer functions, W_l(s) = W1(s) + W2(s) + … + Wn(s), then the LPRS J(ω) can be calculated as a sum of n LPRS: J(ω) = J1(ω) + J2(ω) + … + Jn(ω), where Ji(ω) (i = 1, …, n) is the LPRS of the relay system with the transfer function of the linear part being Wi(s).

The considered property offers a technique of LPRS calculation based on the expansion of the plant transfer function into partial fractions. Indeed, if W_l(s) is expanded into a sum of first- and second-order dynamics, J(ω) can be calculated via summation of the component LPRS Ji(ω) corresponding to each of the addends in the transfer function expansion, subject to available analytical formulas for the LPRS of first- and second-order dynamics. Formulas for J(ω) of first- and second-order dynamics are presented in Table 4.1. The LPRS of some of these dynamics are considered below.
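The series (4.47) is straightforward to evaluate numerically; here it is cross-checked against the closed-form first-order LPRS (4.48) given below, with assumed plant parameters.

```python
import math

# Series formula (4.47) for W(s) = K/(Ts+1), compared with the closed-form
# first-order LPRS (4.48); K, T, and the test frequency are assumed.
K, T = 1.0, 1.0

def W(s):
    return K / (T * s + 1.0)

def lprs_series(w, n=50000):
    re = sum((-1) ** (k + 1) * W(1j * k * w).real for k in range(1, n + 1))
    im = sum(W(1j * (2 * k - 1) * w).imag / (2 * k - 1) for k in range(1, n + 1))
    return complex(re, im)

def lprs_first_order(w):
    a = math.pi / (T * w)
    return complex(0.5 * K * (1 - a / math.sinh(a)),
                   -0.25 * math.pi * K * math.tanh(0.5 * a))

print(lprs_series(1.0), lprs_first_order(1.0))
```

The real series is alternating and converges quickly; the imaginary series converges like 1/N, so a large truncation order is used.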

TABLE 4.1  Formulas of the LPRS J(ω)

W(s) = K/s:
  J(ω) = −j π²K/(8ω)

W(s) = K/(Ts + 1):
  J(ω) = (K/2)(1 − α csch α) − j(πK/4) tanh(α/2),  α = π/(Tω)

W(s) = Ke^{−τs}/(Ts + 1):
  J(ω) = (K/2)(1 − α e^{γ} csch α) + j(πK/4)[2e^{γ−α}/(1 + e^{−α}) − 1],  α = π/(Tω), γ = τ/T

W(s) = K/[(T1s + 1)(T2s + 1)]:
  J(ω) = (K/2)[1 − T1/(T1 − T2) α1 csch α1 − T2/(T2 − T1) α2 csch α2]
         − j(πK/4)[T1 tanh(α1/2) − T2 tanh(α2/2)]/(T1 − T2),  α1 = π/(T1ω), α2 = π/(T2ω)

W(s) = K/(s² + 2ξs + 1):
  J(ω) = (K/2)[1 − (B + γC)/(sin²β + sinh²α)] − j(πK/4)(sinh α − γ sin β)/(cosh α + cos β)

W(s) = Ks/(s + 1)²:
  J(ω) = (K/2) α(α cosh α − sinh α)/sinh²α − j(πK/4) α/(1 + cosh α),  α = π/ω

W(s) = Ks/[(T1s + 1)(T2s + 1)]:
  J(ω) = K(α1 csch α1 − α2 csch α2)/[2(T1 − T2)]
         − j(πK/4)[tanh(α2/2) − tanh(α1/2)]/(T1 − T2),  α1 = π/(T1ω), α2 = π/(T2ω)

W(s) = Ks/(s² + 2ξs + 1):
  J(ω) = (K/2)[ξ(B + γC) − (π/ω) cos β sinh α]/(sin²β + sinh²α) − j(πK/4)(1 − ξ²)^{−1/2} sin β/(cosh α + cos β)

For the rows with s² + 2ξs + 1: α = πξ/ω, β = π(1 − ξ²)^{1/2}/ω, γ = α/β, B = α cos β sinh α + β sin β cosh α, C = α sin β cosh α − β cos β sinh α.

4.2.4  LPRS of Low-Order Dynamics

4.2.4.1  LPRS of First-Order Dynamics

As it was mentioned above, one possible technique of LPRS computing is to represent the transfer function as partial fractions, compute the LPRS of the component transfer functions (partial fractions), and add those partial LPRS together in accordance with the additivity property. To be able to realize this methodology, we have to know the formulas of the LPRS for first- and second-order dynamics. The knowledge of the LPRS of the low-order dynamics is important for other reasons too, because some features of the LPRS of low-order dynamics can be extended to higher-order systems; it is of similar meaning and importance as the knowledge of the characteristics of first- and second-order dynamics in linear system analysis. Those features are considered in the following sections.

The LPRS formula for the first-order dynamics given by the transfer function W(s) = K/(Ts + 1) is determined by the following formula [2]:

J(ω) = (K/2)[1 − (π/Tω) csch(π/Tω)] − j(πK/4) tanh(π/2Tω)   (4.48)

where csch(x) and tanh(x) are the hyperbolic cosecant and tangent, respectively. The plot of the LPRS for K = 1, T = 1 is given in Figure 4.5. The point (0.5K, −jπK/4) corresponds to the frequency ω = 0, and the point (0, j0) corresponds to the frequency ω = ∞; the high-frequency segment of the LPRS has an asymptote being the imaginary axis.

FIGURE 4.5  LPRS of first-order dynamics.

With the formula for the LPRS available, we can easily find the frequency of periodic motions in the relay feedback system with the linear part being the first-order dynamics. From (4.48), we can also see that the imaginary part of the LPRS is a monotone function of the frequency. The LPRS is a continuous function of the frequency, and for every hysteresis value from the range b ∈ [0, cK] there exists a periodic solution of the frequency that can be determined from (4.48), which is

Ω = π / [2T tanh^{−1}(b/cK)]

It is easy to show that when the hysteresis value b tends to zero, the frequency of the periodic solution tends to infinity: lim_{b→0} Ω = ∞; and when the hysteresis value b tends to cK, the frequency of the periodic solution tends to zero: lim_{b→cK} Ω = 0. Therefore, the condition of the existence of a finite-frequency periodic solution holds for any nonzero hysteresis value from the specified range, and the limit for b → 0 exists and corresponds to infinite frequency.

The stability of a periodic solution can be verified via finding eigenvalues of the Jacobian of the corresponding Poincaré map. For the first-order system, as there is only one system variable, which also determines the condition of the switch of the relay, the only eigenvalue of this Jacobian will always be zero. It is easy to show, therefore, that the oscillations are always orbitally stable.

4.2.4.2  LPRS of Second-Order Dynamics

Now we shall carry out a similar analysis for the second-order dynamics. Let the matrix A of (4.1) be

A = [0 1; −a1 −a2],

with a1 > 0, a2 > 0, and let the delay be τ = 0. Consider a few cases.
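The zero-eigenvalue claim for the first-order case can be seen directly from (4.40): with scalar C and v, the factor (1 − vC/(Cv)) vanishes identically. The numbers below are arbitrary.

```python
import math

# For a first-order plant the Jacobian (4.40) is scalar:
# Phi1 = (1 - v*C/(C*v)) * exp(A*theta1) = 0, so the only eigenvalue is zero
# and the periodic solution is orbitally stable for any pulse length.
# T, theta1, and the switch velocity v are assumed for illustration.
T, theta1 = 0.5, 0.7
A = -1.0 / T
v = 1.23                     # any nonzero switch velocity gives the same result
C = 1.0
phi1 = (1.0 - v * C / (C * v)) * math.exp(A * theta1)
print(phi1)
```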

A. Let a2² − 4a1 < 0 (complex poles). Then the plant transfer function can be written as W(s) = K/(T²s² + 2ξTs + 1). The formula of the LPRS for the second-order dynamics can be written as follows:

J(ω) = (K/2)[1 − (g + γh)/(sin²β + sinh²α)] − j(πK/4)(sinh α − γ sin β)/(cosh α + cos β)   (4.49)

where α = πξ/(ωT), β = π(1 − ξ²)^{1/2}/(ωT), γ = α/β, g = α cos β sinh α + β sin β cosh α, and h = α sin β cosh α + β cos β sinh α. The plots of the LPRS for K = 1, T = 1 and different values of the damping ξ are given in Figure 4.6 (curves #1 through #5). It can be easily shown that lim_{ω→∞} J(ω) = (0, j0) and lim_{ω→0} J(ω) = (0.5K, −jπK/4); these limits give the two boundary points of the LPRS corresponding to infinite frequency and zero frequency, respectively. Since J(ω) is a continuous function of the frequency ω (which follows from formula (4.49)), a solution of equation (4.3) exists for any b ∈ (0, cK), and there is a periodic solution of infinite frequency for b = 0. Therefore, a periodic solution of finite frequency exists for the considered second-order system for every value of b within the specified range. Analysis of function (4.49) shows that it does not have intersections with the real axis except at the origin; the high-frequency segment of the LPRS of the second-order plant approaches the real axis. The LPRS formula can also be found via expanding the above transfer function into partial fractions and applying to it formula (4.48) obtained for the first-order dynamics; however, in this case the coefficients of those partial fractions will be complex numbers, and this circumstance must be considered.

FIGURE 4.6  LPRS of second-order dynamics for several values of the damping ξ (curves #1 through #5).

B. Consider the case when a2² − 4a1 = 0 (a double real pole). To obtain the LPRS formula, we can find the limit of (4.49) for ξ → 1. All subsequent analysis and conclusions are the same as in case A.
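These claims can be examined numerically with the series (4.47) for an assumed second-order plant 1/(s² + 2ξs + 1), ξ = 0.5: every term of the imaginary series is negative, so the locus stays below the real axis at any finite frequency, while at high frequency Im J becomes small relative to Re J.

```python
import math

# Series (4.47) for W(s) = 1/(s^2 + 2*xi*s + 1); xi and the frequency grid
# are assumed for illustration.
xi = 0.5

def W(s):
    return 1.0 / (s * s + 2.0 * xi * s + 1.0)

def lprs_series(w, n=20000):
    re = sum((-1) ** (k + 1) * W(1j * k * w).real for k in range(1, n + 1))
    im = sum(W(1j * (2 * k - 1) * w).imag / (2 * k - 1) for k in range(1, n + 1))
    return complex(re, im)

vals = [lprs_series(w) for w in (0.3, 0.7, 1.0, 2.0, 5.0)]
J_hi = lprs_series(50.0)           # high-frequency sample: |Im| << |Re|
print([v.imag for v in vals], J_hi)
```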

C. Assume that a2² − 4a1 > 0 (two distinct real poles). The transfer function can be expanded into two partial fractions, and then, according to the additivity property, the LPRS can be computed as a sum of the two components given by (4.48). The subsequent analysis is similar to the previous one.

D. For a1 = 0, the transfer function is W(s) = K/[s(Ts + 1)]. For this plant, the LPRS is given by the following formula, which can be obtained via the partial fraction expansion technique:

J(ω) = (KT/2)[(π/Tω) csch(π/Tω) − 1] + j(πK/4)[T tanh(π/2ωT) − π/2ω]   (4.50)

The plot of the LPRS for K = 1, T = 1 is given in Figure 4.7. The whole plot is totally located in the third quadrant. The point (0, −j∞) corresponds to the frequency ω = 0, and the point (0, j0) corresponds to the frequency ω = ∞; the high-frequency segment of the LPRS has an asymptote being the real axis.

FIGURE 4.7  LPRS of integrating second-order dynamics.

4.2.4.3  LPRS of First-Order Plus Dead-Time Dynamics

Many industrial processes can be adequately approximated by the first-order plus dead-time transfer function W(s) = Ke^{−τs}/(Ts + 1), where K is a gain, T is a time constant, and τ is a time delay (dead time). This factor results in the particular importance of analysis of these dynamics. The formula of the LPRS for the first-order plus dead-time transfer function can be derived as follows [2]:

J(ω) = (K/2)(1 − α e^{γ} csch α) + j(πK/4)[2e^{γ−α}/(1 + e^{−α}) − 1]   (4.51)

where α = π/(Tω) and γ = τ/T. The plots of the LPRS for γ = 0 (#1), γ = 0.2 (#2), γ = 0.5 (#3), γ = 1.0 (#4), and γ = 1.5 (#5) are depicted in Figure 4.8. All the plots begin in the point (0.5K, −jπK/4), which corresponds to the frequency ω = 0. Plot number 1 (corresponding to zero dead time) comes to the origin, which corresponds to infinite frequency; the other plots are defined only for the frequencies that are less than the frequency corresponding to half of the period.
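The closed form (4.51) can be cross-checked against the series (4.47) applied to W(s) = Ke^{−τs}/(Ts + 1); the parameter values are assumed.

```python
import math, cmath

# Cross-check of the first-order plus dead-time LPRS (4.51) against the
# series (4.47); K, T, tau, and the test frequency are assumed.
K, T, tau = 1.0, 1.0, 0.2

def W(s):
    return K * cmath.exp(-tau * s) / (T * s + 1.0)

def lprs_series(w, n=50000):
    re = sum((-1) ** (k + 1) * W(1j * k * w).real for k in range(1, n + 1))
    im = sum(W(1j * (2 * k - 1) * w).imag / (2 * k - 1) for k in range(1, n + 1))
    return complex(re, im)

def lprs_fopdt(w):
    a, g = math.pi / (T * w), tau / T    # alpha = pi/(T w), gamma = tau/T
    return complex(0.5 * K * (1.0 - a * math.exp(g) / math.sinh(a)),
                   0.25 * math.pi * K
                   * (2.0 * math.exp(g - a) / (1.0 + math.exp(-a)) - 1.0))

print(lprs_series(2.0), lprs_fopdt(2.0))
```

With the delay present, the real series converges only conditionally (the delay makes the terms oscillatory), so the agreement tolerance is kept loose.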

FIGURE 4.8  LPRS of first-order plus dead-time dynamics.

4.2.5  Some Properties of the LPRS

The knowledge of some properties of the LPRS is helpful for its computing, for checking the results of computing, and especially for the design of linear compensators with the use of the LPRS method. One of the most important properties is the additivity property considered above. A few other properties, which relate to the boundary points corresponding to zero frequency and infinite frequency, are considered below.

At first, we find the limit of the function J(ω) for ω tending to zero, i.e., the coordinates of the initial point of the LPRS, considering the LPRS formula (4.30). For that purpose, evaluate the following two limits:

lim_{ω→0} (2π/ω)(I − e^{A(2π/ω)})^{−1} e^{A(π/ω−τ)} = 0,
lim_{ω→0} (I + e^{Aπ/ω})^{−1}[I + e^{Aπ/ω} − 2e^{A(π/ω−τ)}] = I.

With those two limits available, we can write the limit for the LPRS as follows:

lim_{ω→0} J(ω) = (−0.5 + jπ/4) CA^{−1}B   (4.52)

The product of matrices CA^{−1}B in (4.52) is the negative value of the static gain of the plant transfer function. Therefore, the following statement has been proven: for a nonintegrating linear part of the relay feedback system, the initial point of the corresponding LPRS is (0.5K, −jπK/4), where K is the static gain of the linear part. This totally agrees with the above analysis of the LPRS of the first- and second-order dynamics; see, for example, Figures 4.5 and 4.6.

To find the limit of J(ω) for ω tending to infinity, consider the following two limits of the expansion into power series of the exponential function:

lim_{ω→∞} e^{(π/ω)A} = lim_{ω→∞} Σ_{n=0}^{∞} (π/ω)^n A^n / n! = I

and

lim_{ω→∞} (2π/ω)[I − e^{(2π/ω)A}]^{−1} = lim_{λ→0} λ[I − e^{λA}]^{−1} = −A^{−1},  λ = 2π/ω.

Finally, taking account of the above two limits, we prove that the end point of the LPRS for a nonintegrating linear part without time delay is the origin:

lim_{ω→∞, τ=0} J(ω) = 0 + j0   (4.53)

For time-delay plants, the end point of the LPRS is also the origin, which can be shown using the infinite-series formula (4.47). In that case, however, the type of approach of the LPRS to the origin at ω → ∞ is not asymptotic, which must be accounted for.

4.3  Design of Compensating Filters in Relay Feedback Systems

4.3.1  Analysis of Performance of Relay Feedback Systems and LPRS Shaping

With the presented methodology of analysis of the effect of a constant input (via the use of the equivalent gain and the LPRS), we can now consider analysis of the propagation of slow signals through a relay feedback system. By comparatively slow, we shall understand the signals that can be considered constant over the period of the self-excited oscillation without significant loss of accuracy of the oscillation estimation; we shall call them the slow components of the motion. Although this is not a rigorous definition, it outlines a framework for the following analysis. We assume that the signals f0, σ0, y0 that previously were considered constant are now slow-changing signals (slow in comparison with the periodic motions). The only difference from the analysis of the response to constant inputs would be the effect of the dynamics of the linear part. The dynamics of the averaged motions in the relay feedback system are governed by linear equations (due to the chatter smoothing of the relay nonlinearity) and can be represented by the block diagram in Figure 4.9.

FIGURE 4.9  Dynamics of the averaged motions (f0 enters the comparator, producing σ0; the relay is replaced by the equivalent gain k_n and the delay e^{−τs}, yielding u0; the plant is ẋ0 = Ax0 + Bu0, y0 = Cx0, with y0 fed back).
taking account of the above two limits. the type of approach of the LPRS to the origin at ω → ∞ is not asymptotic. In that case. and the dynamics of the averaged motions in the relay feedback system are governed by linear equations—due to the chatter smoothing of the relay nonlinearity. LLC . which can be shown using the infiniteseries formula (4. the LPRS shape in the frequency range of the system bandwidth should be preserved. By comparatively slow. y0 that previously were considered constant are now slow-changing signals (slow in comparison with the periodic motions). the system always tries to decrease the value of the error signal σ. we can now consider analysis of the slow signals propagation through a relay feedback system. As a result. we shall understand the signals that can be considered constant over the period of the self-excited oscillation without significant loss of accuracy of the oscillations estimation. We shall call them the slow components of the motion. f0 + – σ0 u0 .9.53) For time-delay plants. The dynamics of the averaged motions can be represented by the block diagram in Figure 4. which must be accounted for.Frequency-Domain Analysis of Relay Feedback Systems 4 -21 Finally.3. The only difference from the analysis of the response to the constant inputs would be the effect of the dynamics of the linear part. x0 = Ax0 + Bu0 y0 = Cx0 y0 kn e–τs FIGURE 4. the end point of the LPRS is also the origin. Relay control involves modes with the frequency of self-excited oscillations (switching frequency) typically much higher than the closed-loop system bandwidth (that corresponds to the frequency range of the external input signals). At the same time.10). Due to the feedback action. We assume that signals f0. it outlines a framework for the following analysis.9 is linear. 
The system in Figure 4.9 is linear. Due to the feedback action, the system always tries to decrease the value of the error signal σ; this is also true with respect to the averaged value (or slowly varying component) σ0 of the error signal. As a result, the averaged value of the error signal normally stays within the "linear zone" of the bias function, and the equivalent gain value found for the infinitesimally small constant input will be a good estimation of the ratio between the averaged control and the averaged error signal even for large amplitudes of the input.

Relay control involves modes with the frequency of self-excited oscillations (switching frequency) typically much higher than the closed-loop system bandwidth (which corresponds to the frequency range of the external input signals). Therefore, the equivalent gain of the relay in the closed-loop system and the input-output properties of the plant are defined by different frequency ranges of the LPRS: the input-output properties of the relay (the equivalent gain) are defined by the shape of the LPRS at the frequencies near the switching frequency, while the input-output properties of the plant depend on the characteristics of the plant at the frequencies within the system bandwidth. Therefore, the design of compensating filters can be based on changing the shape of the LPRS at the frequencies near the frequency of the self-excited oscillations without affecting the LPRS values at the frequencies of the input signals; the LPRS shape in the frequency range of the system bandwidth should be preserved. In order to have a larger equivalent gain, one would want the point of intersection with the real axis to be closer to the origin, which would result in an increase of the equivalent gain of the relay and enhancement of the closed-loop performance of the system (Figure 4.10).

FIGURE 4.10  Desired LPRS shape (the LPRS of the uncompensated system and the desired LPRS in the range around the switching frequency; the intersection with the line Im J = −πb/(4c) at ω = Ω moves closer to the origin, at Re J = −0.5/k_n, while the shape within the system bandwidth is preserved).

We consider compensation of relay feedback systems assuming that the frequency of the self-excited oscillations (switching frequency) of the uncompensated system meets the specifications and does not need to be significantly modified by the introduction of compensating filters (insignificant changes are still possible), so that the purpose of the compensation is not the design of the switching frequency* but the enhancement of the input-output dynamic characteristics of the system. The compensation objective in terms of the LPRS can be defined as the selection of such linear filters, and of their connection within the system, as enforce the LPRS of the system to have a desired location at the frequencies near the switching frequency and do not change the open-loop characteristics of the system at the frequencies within the specified bandwidth.

The methodology of the design of the system can be described as follows. At first, an uncompensated system consisting of the plant and the relay is analyzed. At this step, the frequency characteristics and the LPRS of the plant, the frequency of self-excited oscillations, and the frequency response of the closed-loop system are computed; this can be done with the use of formulas (4.13) and (4.14). At the second step, the requirements to the desired LPRS are calculated on the basis of the results of the uncompensated-system analysis. Depending on the application of the system, its performance can be specified in a few different ways; there can be various ways of generating the desired LPRS, and it is specific for a particular type of system. Assume that the compensator is not supposed to change the frequency response of the open-loop system in the bandwidth. The desired shape of the LPRS can then be evaluated from the specification of the system closed-loop performance in the specified bandwidth (provided that the performance enhancement is going to be achieved only via the increase of the equivalent gain k_n, so that the frequency properties of the open-loop system in the specified bandwidth will not change). Namely, a desired location of the point of intersection of the LPRS with the straight line "−πb/(4c)" and the corresponding frequency of self-excited oscillation can be determined:

Re J_d(Ω_d) > −0.5/k_nd,  Ω1 < Ω_d < Ω2,
Im J_d(Ω_d) = −πb/(4c)   (4.54)

where the subscript d refers to desired, k_nd is the desired value of the gain k_n of the relay, [Ω1, Ω2] is the specified range for the switching frequency, b is the specified value of the hysteresis (usually a small value), and c is the specified output level of the relay (control).

* Design of the switching frequency can also be done with the use of the LPRS.
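The role of the equivalent gain can be illustrated with the first-order example, using the relation Re J(Ω) = −1/(2k_n) consistent with the constraint in (4.54): the value of k_n obtained from the LPRS is compared with the ratio of the averaged control to the averaged error computed directly from the asymmetric oscillation for a small constant input f0. All parameter values are assumed; note that for this particular plant Re J(Ω) > 0, so k_n comes out negative (the averaged output slightly overshoots the input).

```python
import math

# Equivalent gain from the LPRS, k_n = -1/(2*Re J(Omega)), checked against a
# direct averaged control/error ratio (first-order plant; values assumed).
K, T, c, b = 1.0, 1.0, 1.0, 0.5

Omega = math.pi / (2.0 * T * math.atanh(b / (c * K)))
a = math.pi / (T * Omega)
ReJ = 0.5 * K * (1.0 - a / math.sinh(a))     # real part of (4.48) at Omega
kn_lprs = -1.0 / (2.0 * ReJ)

# Direct computation: asymmetric oscillation for a small input f0.  With
# u = +c the output rises from f0-b to f0+b in theta1; with u = -c it falls
# back in theta2 (closed-form switching times of the first-order lag).
f0 = 0.001
theta1 = T * math.log((c * K - f0 + b) / (c * K - f0 - b))
theta2 = T * math.log((c * K + f0 + b) / (c * K + f0 - b))
u0 = c * (theta1 - theta2) / (theta1 + theta2)   # averaged control
y0 = K * u0                        # mean output = static gain times mean control
sigma0 = f0 - y0                   # averaged error
kn_direct = u0 / sigma0
print(kn_lprs, kn_direct)
```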

Frequency-Domain Analysis of Relay Feedback Systems

where the subscript d refers to the desired value:
knd is the desired value of the gain kn of the relay
[Ω1, Ω2] is the specified range for the switching frequency
b is the specified value of the hysteresis (usually a small value)
c is the specified output level of the relay (control)

Compensator Design in the Relay Feedback Systems

Considering the implementation of the idea of changing the LPRS shape, we should assess what kinds of filters and signals are available. We might consider the use of low-pass, high-pass, band-pass, band-rejecting, phase-lag, phase-lead, lag–lead, and lead–lag filters, and two possible points of connection: the output of the system (error signal) and the output of the relay. The output of the system is always available, which would provide the cascade connection of the filter (in series with the plant). The output of the relay is also always available and can be used for the parallel compensation, which would be a parallel connection of the filter (parallel with the plant), so that the output of the filter is summed with the system input. Therefore, consider two types of compensators performing the described function of the LPRS transformation: (1) the cascade connection with the use of the phase-lag filter and (2) the parallel connection with the use of the band-pass filter.

The Cascade Compensation. The circuit connected in series with the plant must enforce the LPRS to have a desired location. This can be done with a filter having the transfer function W(s) = (T1s + 1)/(T2s + 1), where ωmax < 1/T2 < 1/T1 < Ω1, and ωmax is the upper boundary of the specified bandwidth. This compensator does not strongly affect the frequency response of the open-loop system at the frequencies within the bandwidth. It changes the shape of the LPRS at higher frequencies only (beginning from a certain frequency, the LPRS of the compensated system can be approximately calculated as the product of the plant LPRS and the coefficient T1/T2 < 1). One of the features of such cascade compensation is that this compensation causes the frequency of the self-excited oscillations to decrease. Therefore, the cascade compensation can be implemented in most cases.

The Parallel Compensation. The circuit connected in parallel with the plant must enforce the LPRS to have the desired location (Figure 4.10). In the case of the parallel compensation, the transfer function of the open-loop system is calculated as the sum of the transfer functions of the plant and of the compensator, and the LPRS of the linear part is calculated as the sum of the plant LPRS and the compensator LPRS (see the additivity property). Yet, addition of the compensator must not change the frequency response of the open-loop system at the frequencies within the bandwidth. Such properties are typical of the band-pass filters with the bandwidth encompassing the frequency of the self-excited oscillations of the system, W(s) = (KTs)/(T²s² + 2ξTs + 1) (including the case of the general form with ξ → 1). Other dynamic filters of higher order with similar amplitude-frequency response can be used for such compensation too. One of the features of the parallel compensation is that it can cause the frequency of the self-excited oscillations to increase.

Formulas of the LPRS of some band-pass filters are presented in Table 4.1, and the LPRS of the second-order band-pass filters are depicted in Figure 4.11. Note that Table 4.1 and Figure 4.11 show the normalized LPRS Jn(ω) (computed for unity gain and time constants). To obtain the LPRS for the transfer function, one has to recalculate the LPRS according to

J(ω) = K Jn(Tω)      (4.55)

where Jn(ω) is the normalized LPRS of the band-pass compensator.

The main idea of the design technique is to match the point corresponding to the maximal amplitude of the LPRS of Figure 4.11 with the desired point of intersection of the LPRS and of the line "−πb/(4c)," and to find the gain of the normalized LPRS that would ensure this transformation. After that, the filter gain and time constant can be found from (4.55).
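The self-excited oscillation that these compensators reshape is easy to reproduce in simulation. The sketch below is illustrative only: the plant 1/(s+1)³, the relay levels, and the hysteresis value are made-up textbook numbers, not taken from this chapter. It closes a hysteresis relay around the plant and estimates the oscillation period from the relay switching instants:

```python
import numpy as np

def simulate_relay_feedback(c=1.0, b=0.05, dt=1e-3, t_end=60.0):
    """Hysteresis relay (output +/-c, hysteresis band b) around G(s) = 1/(s+1)^3."""
    # Controllable canonical form of 1/(s^3 + 3s^2 + 3s + 1)
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-1.0, -3.0, -3.0]])
    Bv = np.array([0.0, 0.0, 1.0])
    Cv = np.array([1.0, 0.0, 0.0])
    x = np.zeros(3)
    u = c
    switches = []
    for k in range(int(t_end / dt)):
        y = Cv @ x
        e = -y                                # zero reference, error drives the relay
        if u > 0 and e < -b:                  # switch only when |e| exceeds b
            u = -c; switches.append(k * dt)
        elif u < 0 and e > b:
            u = c; switches.append(k * dt)
        x = x + dt * (A @ x + Bv * u)         # explicit Euler step of the plant
    gaps = np.diff(switches[-11:])            # steady-state half-periods
    return 2.0 * float(np.mean(gaps))         # full period = two switching intervals

period = simulate_relay_feedback()
print(period)  # period of the self-excited oscillation, in seconds
```

For an ideal relay (b → 0), describing-function analysis predicts oscillation near the phase crossover of 1/(s+1)³, i.e., a period around 2π/√3 ≈ 3.6 s; the hysteresis lowers the frequency somewhat, which is the kind of shift the cascade and parallel compensators above are designed to control.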

FIGURE 4.11  LPRS Jn(ω) of band-pass filters W(s) = s/(s² + 2ξs + 1) for several values of ξ. (Plot of Im J versus Re J.)

4.3  Conclusions

The LPRS method is presented in the chapter beginning with basic ideas and progressing to computing formulas and algorithms of system analysis and compensator design. It is shown that the use of the LPRS method not only allows for a precise analysis of the complex dynamics of the relay feedback system but also provides a convenient tool for compensator selection and design. Design of compensating filters is specific for the particular type of relay feedback system; an example of design is given in Ref. [2]. Because of such LPRS properties as exactness, simplicity, and convenience of use, the problems of analysis and design of many relay feedback systems can be efficiently solved.

References

1. Tsypkin Ya.Z., Relay Control Systems, Cambridge University Press, Cambridge, U.K., 1984.
2. Boiko I., Discontinuous Control Systems: Frequency-Domain Analysis and Design, Birkhauser, Boston, MA, 2009.
3. Utkin V., Sliding Modes in Control and Optimization, Springer-Verlag, Berlin, Germany, 1992.
4. Flugge-Lotz I., Discontinuous Automatic Control, Princeton University Press, Princeton, NJ, 1953.
5. MacColl L.A., Fundamental Theory of Servomechanisms, Van Nostrand Co., New York, 1945.
6. Gelb A. and Vander Velde W.E., Multiple-Input Describing Functions and Nonlinear System Design, McGraw-Hill, New York, 1968.
7. Hsu J.C. and Meyer A.U., Modern Control Principles and Applications, McGraw-Hill, New York, 1968.
8. Atherton D.P., Nonlinear Control Engineering—Describing Function Analysis and Design, Van Nostrand Company Limited, Workingham, Berks, U.K., 1975.
9. Yu C.-C., Automatic Tuning of PID Controllers: Relay Feedback Approach, Springer-Verlag, London, U.K., 1999.
10. Wang Q.-G., Lee T.H., and Lin C., Relay Feedback: Analysis, Identification and Control, Springer-Verlag, London, U.K., 2003.

5
Linear Matrix Inequalities in Automatic Control

Miguel Bernal
University of Guadalajara

Thierry Marie Guerra
University of Valenciennes and Hainaut-Cambresis

5.1 What Are LMIs?
  Preliminaries  •  Some Properties
5.2 What Are LMIs Good For?
  Model Analysis  •  Controller Design
References

5.1  What Are LMIs?

5.1.1  Preliminaries

This chapter is a short introduction to a broad and active field: that of linear matrix inequalities (LMIs) in automatic control. Therefore, many subjects are left out or just outlined; details are available in [1,12].

In a broad sense, an LMI is a set of mathematical expressions whose variables are linearly related matrices. The following formal definition is normally used for an LMI [1]:

F(x) = F0 + Σi=1..n xi Fi > 0      (5.1)

where
x ∈ Rm is the decision variable vector
Fi = FiT ∈ Rn×n, i = 0, …, n, are given constant symmetric matrices

In this context, (>) stands for positive-definiteness (strict LMI); a non-strict LMI has (≥) instead of (>) in (5.1). This ample definition highlights the fact that an LMI can adopt nonobviously linear or matrix expressions. Note also that variables in this definition are not matrices. Should an LMI be expressed with matrices as variables, it can be transformed to (5.1) by decomposing any matrix variable in a base of symmetric matrices.

The solution set of LMI (5.1), denoted by S = {x ∈ Rm, F(x) > 0}, is called its feasibility set, which is a convex subset of Rm. If no solution exists, the corresponding problem is called infeasible. Finding a solution x to (5.1), if any exists, is a convex optimization problem, which implies that it is amenable for computer solution in polynomial time. More generally, given a set S = {x ∈ Rm, g(x) ≤ 0, h(x) = 0} with g: Rm → Rn and h: Rm → Rl, a convex optimization problem consists in minimizing a convex function f(x) provided that g(x) is also convex and h(x) is affine. As expected, convexity guarantees no local minima to be found and finite feasibility tests.
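Definition (5.1) can be checked pointwise: for a candidate x, F(x) is a symmetric matrix, and strict feasibility means its smallest eigenvalue is positive. A minimal numeric sketch (the particular matrices F0, F1 are invented for illustration):

```python
import numpy as np

def lmi_value(x, F0, Fs):
    """Evaluate F(x) = F0 + sum_i x_i * F_i from (5.1)."""
    F = F0.copy()
    for xi, Fi in zip(x, Fs):
        F = F + xi * Fi
    return F

def is_strictly_feasible(x, F0, Fs, tol=1e-9):
    """x is strictly feasible iff the smallest eigenvalue of F(x) is > 0."""
    return bool(np.linalg.eigvalsh(lmi_value(x, F0, Fs)).min() > tol)

# Illustrative data: F(x) = F0 + x1*F1 with constant symmetric matrices
F0 = np.array([[2.0, 0.0], [0.0, 1.0]])
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])

print(is_strictly_feasible([0.5], F0, [F1]))   # True:  [[2, .5], [.5, 1]] is PD
print(is_strictly_feasible([3.0], F0, [F1]))   # False: [[2, 3], [3, 1]] is indefinite
```

Finding a feasible x (rather than checking one) is the feasibility problem discussed next, which in practice is delegated to a semidefinite-programming solver.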

The following well-known convex or quasi-convex optimization problems are relevant for analysis and synthesis of control systems [1,2]:

1. Finding a solution x to the LMI system (5.1), or determining that there is no solution, is called the feasibility problem (FP). This problem is equivalent to maximizing the concave function f: x → λmin(F(x)) and then deciding whether the solution is positive (strictly feasible solution), zero (feasible solution), or negative (unfeasible case).

2. Minimizing a linear combination of the decision variables cTx subject to (5.1), also known as an LMI optimization, is called an eigenvalue problem (EVP). This problem is a generalization of linear programming for the cone of positive semidefinite matrices; it belongs, thus, to semidefinite programming.

3. Minimizing eigenvalues of a pair of matrices, subject to a set of LMI constraints, or determining that the problem is infeasible, is called a generalized eigenvalue problem (GEVP):

   minimize λ subject to λB(x) − A(x) > 0, B(x) > 0, C(x) > 0

   where A(x), B(x), and C(x) are symmetric and affine with respect to x. The problem can also be rewritten as

   minimize λmax(A(x), B(x)) subject to B(x) > 0, C(x) > 0

   where λmax(X, Y) denotes the largest generalized eigenvalue of λY − X with Y > 0.

Several methods have been developed for solving the aforementioned problems. Historically, the first attempts to solve LMIs were analytical [3,4], followed by some graphical techniques [5] and algebraic Riccati equations [6]. Since the early 1980s, LMIs have been solved numerically, first through convex programming [7], then through interior-point algorithms [8]. The reader interested in the latter methods is referred to Sections 5.3 and 5.4 in [1], where a comprehensive description is provided.

A word about the actual LMI solvers proceeds, since the algorithms above have been implemented under several toolboxes and software programs, such as LMI Toolbox (LMILAB) [9], SeDuMi, SDPT3, VSDP, CSDP, MAXDET [1], and LMIRank. Most of them are MATLAB related; MAXDET [1] and CSDP do not run under MATLAB. LMILAB exploits projective methods and linear algebra, while SeDuMi is a semidefinite programming solver.

5.1.2  Some Properties

As stated in the previous section, an LMI can appear in seemingly nonlinear or non-matrix expressions. Moreover, some solutions of nonlinear inequalities can be subsumed as the solution of an LMI problem. To recast matrix inequalities as LMI expressions, some properties and equivalences follow:

Property 1: A set of LMIs F1(x) > 0, …, Fk(x) > 0 is equivalent to the single LMI F(x) = diag[F1(x), …, Fk(x)] > 0, where diag[F1(x), …, Fk(x)] denotes the block-diagonal matrix with F1(x), …, Fk(x) on its main diagonal.

Property 2 (Schur complement): Given a matrix Q(x) ∈ Rm×m, Q(x) > 0, a full rank-by-row matrix S(x) ∈ Rn×m, and a matrix R(x) ∈ Rn×n, all of them depending affinely on x, the following inequalities are equivalent:

  [ R(x)     S(x) ]
  [ S(x)T    Q(x) ]  > 0      (5.2)

  Q(x) > 0,  R(x) − S(x)Q(x)⁻¹S(x)T > 0      (5.3)
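The equivalence in Property 2 is easy to sanity-check numerically: assemble the block matrix of (5.2) and compare its definiteness with condition (5.3). A small sketch over random test blocks (the data is arbitrary, generated only to exercise both sides of the equivalence):

```python
import numpy as np

def is_pd(M, tol=1e-9):
    # Positive definiteness of a symmetric matrix via its eigenvalues
    return bool(np.linalg.eigvalsh(M).min() > tol)

def schur_equivalent(R, S, Q):
    """Return (condition (5.2), condition (5.3)) for given blocks R, S, Q."""
    block = np.block([[R, S], [S.T, Q]])
    cond_5_2 = is_pd(block)
    cond_5_3 = is_pd(Q) and is_pd(R - S @ np.linalg.inv(Q) @ S.T)
    return cond_5_2, cond_5_3

rng = np.random.default_rng(0)
for _ in range(100):
    n, m = 3, 2
    R = rng.normal(size=(n, n)); R = R + R.T              # symmetric, maybe indefinite
    Q = rng.normal(size=(m, m)); Q = Q @ Q.T + 0.1 * np.eye(m)  # keep Q > 0
    S = rng.normal(size=(n, m))
    a, b = schur_equivalent(R, S, Q)
    assert a == b  # (5.2) holds exactly when (5.3) holds
print("Schur complement equivalence confirmed on 100 random instances")
```

This is the workhorse behind many of the derivations below: it turns the nonlinear term S(x)Q(x)⁻¹S(x)ᵀ into a linear block condition.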

9) Property 5: (S-procedure) Given matrices Fi = FiT .5) (5. G.4) (5.…. ∀x ≠ 0 : Rx = 0.11]: AT PA − Q < 0. G. and Q with the proper sizes. P > 0 −G − GT + P    −Q + AT L + LA ∃G.p and quadratic functions with inequalities conditions: x T F0 x > 0.τ p ≥ 0 : F0 − ∑τ F > 0 i i i =1 p (5.10) A sufficient condition for these conditions to hold is: ∃τ1 . S⊥ QS⊥ < 0 (5.7) Property 4: (Slack variables II) Given matrices A.Linear Matrix Inequalities in Automatic Control 5 -3 Property 3: (Slack variables I) Given matrices A. ∀x ≠ 0 : x T Fi x ≥ 0. Sx = 0 T T R⊥ QR⊥ < 0. P > 0  −Q    PA  −Q ∃G  T  G A AT P  <0 −P   AT G T (5.15) ∃σ ∈ R : Q − σRT R < 0. the following inequalities are equivalent: x T Qx < 0. the following inequalities are equivalent [11]: AT P + PA + Q < 0  AT L + LA + Q ∃G.12) (5. L  T T  P −L +G A P − L + AT G  <0 −G − GT + P   (5. L .11) Property 6: (Finsler’s Lemma) Given a vector x ∈ Rn and matrices Q = QT ∈ Rn×n.…. LLC . rank(S) < n. ∀i ∈{1. the following inequalities are equivalent [10.6)   < 0. … . P > 0 −G − G + P   − L + AT G   < 0. R ∈ Rm×n. L .13) (5.8) (5. P = PT and Q with the proper sizes. p} (5.14) (5. i = 0.L  T T   −L + G A (5. Q − σST S < 0 ∃X ∈ Rn×m : Q + ST XR + RT X T S < 0 © 2011 by Taylor and Francis Group. and S ∈ Rm×n such that rank(R) < n. P.

5.2  What Are LMIs Good For?

5.2.1  Model Analysis

A number of classical stability issues can be recast as LMI problems and therefore solved, for linear time-invariant (LTI) models as well as for linear parameter-varying (LPV) ones. In the sequel, some of them are briefly presented and exemplified.

LTI model stability: Consider a continuous-time LTI model ẋ(t) = Ax(t) and a Lyapunov function candidate V = xT(t)Px(t), P > 0. The time derivative of the Lyapunov function is given by V̇ = ẋT(t)Px(t) + xT(t)Pẋ(t) = xT(t)(ATP + PA)x(t), which guarantees the origin to be globally asymptotically stable if ∃P > 0: ATP + PA < 0. The latter is indeed an LMI FP with P as a matrix decision variable. Applying Property 1, this problem is summarized as solving the following LMI:

  [ P    0            ]
  [ 0    −ATP − PA    ]  > 0      (5.16)

Following a similar procedure and using the Schur complement (Property 2), for a discrete-time LTI model x(t + 1) = Ax(t), the following LMI problem arises:

  [ P     ATP ]
  [ PA    P   ]  > 0      (5.17)

LPV model stability: Consider a continuous-time LPV model ẋ(t) = A(δ)x(t) with A(δ) ∈ Γ, Γ being a closed convex set. Quadratic stability of this model implies robust stability:

  ∃P > 0: A(δ)TP + PA(δ) < 0, ∀δ ∈ Δ  ⇒  LPV model stable ∀A ∈ Γ

To translate the previous statement into LMI tests, two cases should be distinguished according to the type of uncertainties involved: structured and unstructured. As an example of structured uncertainties, consider a polytopic set

  Γ = co[A1, …, Ar] = { A(δ): A(δ) = Σi=1..r δiAi, Σi=1..r δi = 1, δi ≥ 0, ∀δ ∈ Δ }

LPV models correspond to the former case. Quadratic stability of the continuous-time LPV model ẋ(t) = A(δ)x(t) with A(δ) ∈ Γ = co[A1, …, Ar] is then guaranteed if

  ∃P > 0: AiTP + PAi < 0, ∀i ∈ {1, …, r}      (5.18)

since, ∀A(δ) ∈ Γ, AiTP + PAi < 0 with δi ≥ 0 implies (Σi=1..r δiAi)TP + P(Σi=1..r δiAi) = AT(δ)P + PA(δ) < 0.
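For a single stable matrix A, the feasibility problem behind (5.16) can be solved without an SDP solver: fix the right-hand side and solve the Lyapunov equation ATP + PA = −Q for P, then check P > 0. A sketch using SciPy (the plant matrix is an arbitrary stable example, not from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def quadratic_stability_certificate(A, Q=None):
    """Solve A^T P + P A = -Q; return P if it certifies stability, else None."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    # solve_continuous_lyapunov solves M X + X M^H = Y; set M = A^T, Y = -Q
    P = solve_continuous_lyapunov(A.T, -Q)
    P = (P + P.T) / 2  # symmetrize against round-off
    if np.linalg.eigvalsh(P).min() > 0:
        return P
    return None

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1, -2 (Hurwitz)
P = quadratic_stability_certificate(A)
assert P is not None
# P satisfies the Lyapunov LMI: A^T P + P A < 0
assert np.linalg.eigvalsh(A.T @ P + P @ A).max() < 0
```

For the polytopic condition (5.18), by contrast, a common P must hold at all vertices simultaneously, which is exactly where a genuine LMI solver becomes necessary.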

1a  0 −0. −0. feasibility of LMI conditions in (5. In this case.….1 − 0.b).1.3  Stability of the previous model has been investigated for parameters a and b in −10 ≤ a ≤ 10.…. r } P   (5. The results are shown in Figure 5.19) has been tested via MATLAB LMI toolbox. −10 ≤  b ≤ 10.2  with A1 =   .1b − 0.19) To illustrate the results above. Δ ∈ Θ} where Θ = ∆ ∈ R m × q .20) 0 0 . ʈ ∆ ʈ = λ max (∆T ∆) ≤ γ for the uncertain LTI model x(t ) = Ax(t ) + Bw(t ) 10 8 6 4 2 0 −2 −4 −6 −8 −10 −10 −5 0 a 5 10 b { } z(t ) = Cx(t ) (5. Γ = {A(Δ) = A + BΔC. A2 =  .20).e.21) Figure 5. Uncertain LTI models: A typical example of unstructured uncertainties is given by the norm-bounded ones. model (5.2  0.1  Feasibility domain for model (5.. … .20) is guaranteed to be quadratically stable for those parameter pairs. Ar] is guaranteed if  P    PAi AiT P   > 0.1 0. ∀i ∈ {1.1a − 0. consider the following discrete-time LPV model with structured uncertainties:  x(t + 1) = A(δ)x(t ). where a cross mark indicates that the stability problem is feasible for the corresponding parameter pair (a .Linear Matrix Inequalities in Automatic Control 5 -5 Similarly. © 2011 by Taylor and Francis Group. A2    (5. A(δ) ∈ co   A1 .5  0. For each pair of integers in this grid. i.1b − 0. LLC . quadratic stability of a discrete-time LPV model x(t + 1) = A(δ)x(t) with A(δ) ∈ Γ = co[A1.

leading to the well-known positive real lemma.26) provides a graphical condition for the feasibility of LMI (5.23)  AT P + PA  T   B P PB   x    < 0 w 0   (5. which states that model (5.5 -6 Control and Mechatronics with w(t) = Δz(t).22) admits an EVP formulation to find the minimum γ > 0 that renders it feasible. Another example of unstructured uncertainties is given by the sector-bounded ones. LLC . f : Rq → : Rq : f (0) = 0. the set of complex numbers RLMI *   I  Q  =  z ∈C .26) Following the same outline of the previous results. The interested reader is referred to [1] for details.22) arises.26) is quadratically stable if  AT P + PA  T   B P −C PB − CT  <0 − DT − D   (5.    T  zI    S  S   I      R   zI    (5.22) which is obviously an LMI FP with P as a single decision variable. Given matrices Q = QT. since γ 2 can be considered a single decision variable. Moreover. This LMI arises naturally from two developments: the first one concerning the time derivative of the Lyapunov function candidate V = xT(t)Px(t). The well-known bounded real lemma states that the model (5. the previous results can be straightforwardly extended to discrete-time uncertain LTI models.e.24) and the second one concerning the fact that wTw = zTΔTΔz ≤ γ 2zTz because ΔTΔ ≤ γ 2I: x  w w−γ z z≤0 ⇔   w  T 2 T T  −CT C    0 0  x    ≤ 0 w γ 2I    (5. (5. f ∈ {zTf (z) ≥ 0} in a square feedback interconnection as x(t ) = Ax(t ) + Bw(t ) z(t ) = Cx(t ) + Dw(t ) w(t ) = f (z(t )) (5.25) Combining both expressions via property 5 (S-procedure). As expected.27) Notice that the Nyquist plot of (5. i. and S. such that  AT P + PA + CT C  BT P   PB  <0 −γ 2I   (5. V (x ) = ( Ax + Bw)T Px + x T P ( Ax + Bw) = x T ( AT P + PA)x + 2x T PBw < 0 x  ⇔V =   w  T (5. the result in (5.21) is quadratically stable if and only if ∃P > 0. this problem can also be cast as an LMI problem..28) © 2011 by Taylor and Francis Group.27). 
R = RT. LMI regions: It is well known that exponential stability of an LTI model can be influenced by placing the model modes in different regions.
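Without an SDP solver, feasibility of the bounded real lemma LMI (5.22) — i.e., ‖F(s)‖∞ < γ for the case D = 0 — can be checked through the standard equivalent Hamiltonian eigenvalue test: the norm bound holds exactly when the Hamiltonian matrix below has no eigenvalues on the imaginary axis. A sketch (the example system is arbitrary):

```python
import numpy as np

def hinf_less_than(A, B, C, gamma, tol=1e-8):
    """True iff ||C (sI - A)^-1 B||_inf < gamma, with A Hurwitz and D = 0.

    Equivalent to feasibility of the bounded real lemma LMI: the
    associated Hamiltonian has no purely imaginary eigenvalues.
    """
    H = np.block([
        [A,                   (B @ B.T) / gamma],
        [-(C.T @ C) / gamma,  -A.T],
    ])
    eigs = np.linalg.eigvals(H)
    return bool(np.all(np.abs(eigs.real) > tol))

# F(s) = 1/(s + 1): its H-infinity norm is exactly 1 (peak at omega = 0)
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(hinf_less_than(A, B, C, gamma=2.0))   # True:  1 < 2
print(hinf_less_than(A, B, C, gamma=0.5))   # False: 1 > 0.5
```

Bisecting on γ with this test recovers the norm itself, which mirrors how the EVP formulation of (5.22) is solved in practice.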

all the eigenvalues of a matrix A lie in the LMI region (5. corresponding to a damping factor α . transfer function F(s) = C(sI − A)−1B and (A. searching its minimum value is also possible via an LMI optimization problem. In terms of definition (5.28). different regions can be defined. Tr(Z ) < γ Z   Note that several properties have been employed in the previous development. It is important to keep in mind that LMI applications extend well beyond this survey to LPV and quasi-LPV models (Takagi-Sugeno. such as congruence and Schur complement. Similarly. 5.31) Note that since γ is not multiplied by any decision variable. LLC . and S.and H∞-norm upper bounds: Consider a globally asymptotically stable continuous-time LTI model (5. R. In the sequel. it can be described as Q = 2α . Moreover.29) H2. z + z < −2α . consider the region R(z) < −α .28) if and only if there exists a matrix P > 0 such that the following LMI is feasible:  I    A ⊗ I  *  P ⊗Q  T  P ⊗ S P ⊗ S  I   <0 P ⊗R   A ⊗ I  (5. α > 0. Z :  T   B P PB   < 0. Z : AT P + PA + PBBT P < 0. AX + XAT + BBT < 0. B. guaranteeing an H2-norm upper bound of F(s) turns out to be an LMI problem. respectively. S = 1. the controllability and observability grammians of F(s) will be denoted as Pc and Po. since F 2 ≤ γ ⇔ Tr CPcCT < γ 2 ⇔ ∃X > 0. Depending on the structure of matrices Q.21) with x(0) = 0. both results above have their equivalent LMI expressions for discrete-time LTI models. Tr (CP −1CT ) < γ 2 ( ) ( ) (5.2  Controller Design Some controller design issues for LTI models follow. for instance) [13].30) ⇔ ∃P > 0. guaranteeing ∙F(s)∙∞ < γ is equivalent to the feasibility of the following LMI problem with P as a decision variable:  AT P + PA  T  B P  C  PB − γI D CT   DT  < 0 − γI   (5. Tr CXCT < γ 2 ⇔ ∃P = X −1 > 0. AT P + PA + PBBT P < 0. Circles. CP −1CT < Z. Moreover. Tr(Z ) ≤ γ 2  AT P + PA ⇔ ∃P > 0. and R = 0.C) controllable and observable. which imply z ∈ ℂ.2. 
and cones can be likewise described and added to any LTI stability criteria since LMI constraints can be summed up to be simultaneously held.Linear Matrix Inequalities in Automatic Control 5 -7 is called an LMI region. − γI   P   C CT   > 0. Then. © 2011 by Taylor and Francis Group. strips. which correspond to the unique positive solutions to APc + Pc AT + BBT = 0 and ATPo + PoA + CTC = 0. for instance.
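The H2 characterization above is directly computable: the controllability gramian Pc solves the Lyapunov equation APc + PcAT + BBT = 0, and ‖F‖₂ = √Tr(C Pc CT). A sketch with SciPy (the first-order example system is assumed, not from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """||F||_2 via the controllability gramian: A Pc + Pc A^T + B B^T = 0."""
    Pc = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ Pc @ C.T)))

# F(s) = 1/(s + 1): the gramian is Pc = 1/2, so ||F||_2 = sqrt(1/2)
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(round(h2_norm(A, B, C), 4))  # 0.7071
```

The LMI version (5.30) relaxes the equality APc + PcAT + BBT = 0 into an inequality, which is what makes it usable as a synthesis constraint rather than only an analysis formula.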

S. −1 2 0 −3  −3  −2 LMIs (5.34) 0  −1 2 1   −1  −1 with A1 =   . all the poles of the closed-loop nominal models lie in the desired region D. S = 1. Consider. though the result is not an LMI since it asks for P = PT > 0. Re(z) < −|Im(z)|. it can be found that this model is quadratically stabilizable if ∃X > 0 and M such that XAiT + Ai X − Bi M − M T BiT < 0.29) under definitions (5. B1 =   .16). Using the result in (5. © 2011 by Taylor and Francis Group. i = 1.(Ar. A2   .2070 .32) yields the LMI expression XAT − M T BT + AX − BM < 0 (5. Nonetheless. the elementary results in the sequel as well as the proofs behind them give an outline to the aforementioned extensions.1689  (5.36) Q = −100.2070 0.5 -8 Control and Mechatronics with state-feedback control law u(t) = −Lx(t). (A.2 since these are the nominal models whose poles are to be placed.18) and the aforementioned transformations.36).….2 with cross marks. B2 =   .…. LLC . R = 1 (5. and R [2]: Q = 2. To exemplify the LMI-based controller design described above as well as the pole-placement properties of LMI regions in (5. L =   −3.6303 0. uncertain [16].37) and A = Ai − BiL.33) allow a feedback controller u = −Lx to be designed if a feasible solution is found.29) with three different sets of matrices Q.38) As shown in Figure 5.37) for i = 1.2166 −1. S =    π  − cos   4   (5. This region is characterized by (5. A(δ) ∈ co   A1 . B1). and delayed [17].33) and (5. · (t) = A(δ)x(t) + B(δ)u(t). illustrated in Figure 5. that the poles of the nominal models are desired to lie in the convex D-region D = {z ∈ ℂ: Re(z) < −1. Quadratic stability of the closed-loop LTI model x(t) = (A − BL)x(t) can be studied by replacing A by A–BL in (5.8502 P= 0. L such that both continuous [15] and discrete-time [14].32) Defining X = P −1 > 0 and multiplying left and right by it (property 8).35)  π cos     4 .2 were found feasible with 0. R = 02 × 2  π  sin     4  (5.35). 
and (5. B(δ) ∈ co   B1 .33) with M = LP−1 = LX. As before. |z| < 10}. B) ∈ Consider a feedback control law u = −Lx(t) for a polytopic LPV model x co[(A1. (5. S = 0. in addition. A2 =   .28). LMIs in (5. · (t) = Ax(t) + Bu(t) State-feedback controller design: Consider an open-loop continuous-time LTI model x ( A − BL )T P + P ( A − BL ) < 0 (5. the control gain is given by L = MX−1. Br)]. expression (5.2. B2    (5. consider the continuous-time LPV model    x = A(δ)x(t ) + B(δ)u(t ). R = 0   π sin   4  Q = 02 × 2 .

39) to be bounded by γ > 0.and H∞-norm bounded controller design: Consider the following model x(t ) = Ax(t ) + Bw w(t ) + Buu(t ) z(t ) = Cx(t ) + Dw w(t ) + Duu(t ) (5.30) can be used to guarantee the H2 norm of the closed-loop transfer function Fcl(s) of model (5. LLC .Linear Matrix Inequalities in Automatic Control 5 -9 10 5 D Im (z) 0 −5 −10 −10 −5 0 Re (z) 5 10 Figure 5. Nonetheless. Tr(W ) < γ. the H∞ © 2011 by Taylor and Francis Group.41) W   T T T   XC − M Du CX − Du M  >0 X   Consider now the problem of guaranteeing the H∞ norm of the closed-loop transfer function Fcl(s) of model (5. and w(t) ∈ Rl the generalized disturbances. M. Fcl (s) 2 2≤ γ T T (5.2  An LMI region for stabilization of (5.40) T . As with the procedure above for H2-norm bounded design. H2. additional modifications are needed to render them LMI.39) to be bounded by γ > 0. z(t) ∈ Rq the controlled outputs. we have that the previous conditions can be rewritten as LMIs on decision variables X. u(t) ∈ Rm the control input vector.34). Under a state-feedback control law u(t) = –Lx(t). Tr(W ) < γ (5. and W: T T X > 0. XAT + AX − M T Bu − Bu M + Bw Bw < 0. Dw = 0 Defining M = LX and introducing an auxiliary matrix W : (C − DuL)X(C − DUL)T < W. X = X > 0 : X ( A − Bu L) + ( A − Bu L) X + Bw Bw <0 Tr((C − Du L) X (C − Du L)T ) < γ .39) with x(t) ∈ Rn the state-space vector. conditions in (5. as shown in the following development: ⇔ ∃L ∈ R m ×n ∃L ∈ R m ×n .

stability conditions in (5.31) will be invoked and adapted for the closed-loop case: defining X = P⁻¹, substituting any A by (A − BuL), then applying congruence with diag([X I I]), and finally replacing any LX by M to get the following LMI expression:

  [ XAT − MTBuT + AX − BuM    Bw     XCT − MTDuT ]
  [ BwT                       −γI    DwT          ]  < 0      (5.42)
  [ CX − DuM                  Dw     −γI          ]

A number of results on delayed systems, descriptor forms, observer design, and output feedback can be found in the literature, though they exceed the scope of this introductory chapter. The interested reader is referred to the references below.

References

1. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory (SIAM Studies in Applied Mathematics). Society for Industrial Mathematics, Philadelphia, PA, 1994.
2. C. Scherer and S. Weiland. Linear matrix inequalities in control (Lecture notes). Delft University, Delft, the Netherlands, 2004.
3. A.M. Lyapunov. Problème général de la stabilité du mouvement. Annals of Mathematics Studies, Princeton University Press, Princeton, NJ, 1947.
4. A.I. Lur'e. Some nonlinear problems in the theory of automatic control. H.M. Stationery Off., London, U.K., 1957.
5. V.A. Yakubovich. The method of matrix inequalities in the stability theory of nonlinear control systems I, II, III. Automation and Remote Control, 25–26(4):905–917, 577–592, 753–763, 1967.
6. J.C. Willems. Least squares stationary optimal control and the algebraic Riccati equation. IEEE Transactions on Automatic Control, AC-16(6):621–634, 1971.
7. Y. Nesterov and A. Nemirovsky. A general approach to polynomial-time algorithms design for convex programming. Technical report, Centr. Econ. and Math. Inst., USSR Acad. Sci., Moscow, USSR, 1988.
8. N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4):373–395, 1984.
9. P. Gahinet, A. Nemirovski, A. Laub, and M. Chilali. LMI Control Toolbox. The MathWorks Inc., Natick, MA, 1995.
10. M.C. Oliveira and R.E. Skelton. Stability tests for constrained linear systems. In Perspectives in Robust Control, Lecture Notes in Control and Information Sciences 268, Springer, New York, 2001.
11. D. Peaucelle, D. Arzelier, O. Bachelier, and J. Bernussou. A new robust D-stability condition for real convex polytopic uncertainty. Systems and Control Letters, 40:21–30, 2000.
12. K. Tanaka and H. Wang. Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach. John Wiley & Sons, New York, 2001.
13. M. Johansson, A. Rantzer, and K.E. Arzen. Piecewise quadratic stability of fuzzy systems. IEEE Transactions on Fuzzy Systems, 7:713–722, 1999.
14. T.M. Guerra, A. Kruszewski, and M. Bernal. Control law proposition for the stabilization of discrete Takagi-Sugeno models. IEEE Transactions on Fuzzy Systems, 17(3):724–731, 2009.
15. D. Henrion. Course on LMI optimization with applications in control (Lecture notes). 2003.
16. S. Xu and T. Chen. Robust H∞ control for uncertain discrete-time systems with time-varying delays via exponential output feedback controllers. Systems and Control Letters, 51(3–4):171–183, 2004.
17. E. Fridman, A. Seuret, and J.-P. Richard. Robust sampled-data stabilization of linear systems: An input delay approach. Automatica, 40(8):1441–1446, 2004.

6
Motion Control Issues

Roberto Oboe
University of Padova

Makoto Iwasaki
Nagoya Institute of Technology

Toshiyuki Murakami
Keio University

Seta Bogosyan
University of Alaska, Fairbanks

6.1 Introduction
6.2 High-Accuracy Motion Control
  Feedback Control System  •  Feedforward Control System and Command Shaping  •  System Modeling and Identification  •  Optimization and Auto-Tuning of Motion Control System
6.3 Motion Control and Interaction with the Environment
6.4 Remote Motion Control
6.5 Conclusions
References

6.1  Introduction

Motion control (MC) is concerned with all the issues arising in the control of the movement of a physical device (Ohnishi et al., 1996). Traditionally, MC has been dealing with the control of the movement in macro-world scenarios, where the performances are usually stated in terms of speed and accuracy of the movement. In this scenario, powerful tools have been developed through MC research, to be blended into the design of control algorithms for different applications, such as point-to-point motion, servo-positioning, control of moving flexible structures, etc. Those tools can only be partially used when the MC problem scales down to micro and nano level, where the laws governing the motion and the interactions (e.g., tribological issues) are rather different from those used in macro-scaled problems, and their knowledge less consolidated when compared with that of the macro-world.

This means that MC studies not only the use of proper devices for sensing and actuation, but also all possible interactions of the controlled device with the environment. In each of the possible environments, rules and specifications are rather different, as the control of the motion rarely occurs in a closed, well-defined environment. Instead, such environments can be populated by humans or unknown objects, remotely located, or characterized by an uncertain or approximate knowledge of the operating conditions. This is the most significant challenge of MC. To achieve the desired level of performance, in addition to suitable control algorithms, robust solutions must be utilized to cope with the high degree of uncertainty in the description of the device to be controlled and its environment.

The above scenario drastically changes when humans come into play, i.e., when the operators may share the operating space with a moving device (Katsura et al., 2007). In this case, the target of the MC system is to provide a motion that will not be harmful to humans. Additionally, humans and devices may cooperate by touching each other (Katsura et al., 2006), so the motion of the devices should be controlled in order to provide a useful cooperation with the human's task. As an example, haptic devices can exert a force on users and exploit the proprioceptive system of humans to convey information to

the operators and, in turn, modify their behavior. In all the possible interactions with humans, on the levels of force, torque, velocity, and/or position, an effective sensing process must be implemented, using all the possible technologies available. In addition, the availability of a fast and reliable communication infrastructure now opens devices located in remote and/or harsh environments to new applications of MC.

Summarizing all the above, the design of an MC system is a multifold task, in which modeling/identification, control strategies, accuracy improvement, and performance optimization are all concurring to the realization of a modern MC system, as shown in Figure 6.1. For this reason, this chapter discusses the most relevant issues in the different areas of interest for this discipline. Section 6.2 describes issues relevant to high-accuracy MC systems in detail, including micro-/nanoscale motion. Section 6.3 addresses the problem of the MC in human–machine interaction, with the analysis and control of the interaction with the environment. Finally, Section 6.4 presents the most recent results in the field of remote MCs, with all the additional concerns arising from variable communication delays and data losses.

FIGURE 6.1  Block diagram of MC system. (Blocks: modeling/identification with linear/nonlinear modeling, model, observer, and precise simulator for controller design; feedforward control/command shaping for fast response and vibration suppression; feedback control for stabilization, disturbance suppression, and adaptive robust capability; optimization/auto-tuning for control specifications and adaptive capabilities — arranged around a 2DOF control structure with command-shaping (CS), feedforward (FF), and feedback (FB) blocks acting on the plant.)

6.2  High-Accuracy Motion Control

Fast-response and high-precision MC is one of the indispensable requirements in a wide variety of high-performance mechatronic systems, e.g., data-storage devices, manufacturing tools for electronics components, machine tools, and industrial robots, from the standpoints of high productivity, high quality of products, and total cost reduction (Ohishi 2008). In those applications, the required specifications on the motion performance, e.g., fast response/settling time and trajectory/settling accuracy, robustness against disturbances and/or uncertainties, mechanical vibration suppression, and adaptation capability against variations in mechanisms, should be sufficiently achieved.

One of the most effective and practical approaches to improve the motion performance is the control design in a 2-degrees-of-freedom (2DOF) control framework (Umeno and Hori 1991), with feedback and feedforward compensators designed in order to achieve the desired performance, as shown in Figure 6.1. As is well known, a simple approach to achieve the fast and precise motion performance is to expand the control bandwidth of the feedback control system: a higher bandwidth allows the system to be more robust against disturbances and uncertainties. However, dead-time/delay components, mechanical vibration modes, and the presence of nonlinearities generally do not allow widening the control bandwidth without impairing the overall system stability.
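The benefit of the 2DOF structure is easy to see in a toy simulation: a feedforward term built from the (assumed perfectly known) nominal model removes most of the tracking burden from the feedback loop. A minimal sketch — the first-order plant, gains, and reference are invented for illustration only:

```python
import numpy as np

def track_sine(use_ff, tau=0.1, k_fb=5.0, dt=1e-3, t_end=5.0):
    """Track r(t) = sin(t) on the plant tau*y' = -y + u; return max |r - y|."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    r = np.sin(t)
    r_dot = np.cos(t)
    y = 0.0
    worst = 0.0
    for i in range(n):
        e = r[i] - y
        u = k_fb * e                       # feedback part
        if use_ff:
            u += tau * r_dot[i] + r[i]     # feedforward = exact model inversion
        y += dt * (-y + u) / tau           # explicit Euler step of the plant
        if t[i] > 1.0:                     # skip the initial transient
            worst = max(worst, abs(e))
    return worst

err_fb = track_sine(use_ff=False)
err_2dof = track_sine(use_ff=True)
print(err_fb, err_2dof)   # the 2DOF loop tracks far more accurately
assert err_2dof < 0.1 * err_fb
```

This is the division of labor the chapter describes: the feedforward/command-shaping path handles fast, accurate tracking of the reference, while the feedback path is left to fight disturbances, model mismatch, and stability.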

Such performance is typically obtained with the two-degree-of-freedom (2DOF) structure of Figure 6.1, with feedback and feedforward compensators designed in order to achieve the desired performance. Indeed, the design of both feedforward and feedback controllers calls for a precise modeling of the plant to be controlled and (possibly) of the disturbances acting on it. However, such accurate knowledge is sometimes hard or very expensive to achieve, especially when the plant operates in an off-laboratory environment, where operating conditions may drastically change with time. For this reason, it is vital to develop model identification procedures that may accurately track the plant and environment characteristics during operations and, in turn, drive a change in the control strategy, if needed. All of the above explains why the design of a modern high-accuracy MC system can be roughly divided into four tightly related parts: (1) feedback control system, (2) feedforward control system and command shaping, (3) system modeling and identification, and (4) optimization and auto-tuning of the MC system. Each component will be analyzed in the following.

6.2.1  Feedback Control System

In precision MC applications, inherent requirements for the feedback system are a robust performance on system stability and/or controllability, disturbance suppression, and possibly an adaptation capability against system variations, in order to allow for a larger closed-loop bandwidth with the prescribed stability margins. Robust design can be achieved by using standard tools, but it is often desirable to use a set of simple components, in order to achieve the desired loop transfer function while limiting the control bandwidth demand. In this sense, novel approaches to achieve robust stability have been proposed by applying simple filters, such as a resonant filter (Atsumi et al. 2006) and complex lead/lag filters (Messner 2008). With such techniques, the gain and phase of the plant to be controlled are shaped by repeatedly using an element with a customizable response, in order to modify the original plant response (e.g., by depleting the resonant peak or by locally modifying the phase response) while keeping a large stability margin. To clarify with an example, let us consider a simple plant, characterized by two low-frequency real poles and two high-frequency complex poles (Figure 6.2).

FIGURE 6.2  Frequency response of a plant with resonant modes.
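As an illustrative numerical sketch (the pole locations and damping below are assumed values, not taken from the chapter), the magnitude response of such a plant can be evaluated directly; the lightly damped complex pole pair produces the resonant peak visible in Figure 6.2:

```python
import numpy as np

def plant_mag(w):
    """|P(jw)| for an illustrative plant: two low-frequency real poles
    plus a lightly damped high-frequency complex pole pair.
    All numeric values are assumed for this sketch."""
    s = 1j * np.asarray(w, dtype=float)
    p1, p2 = 50.0, 100.0          # real poles, rad/s
    wn, zeta = 1.0e4, 0.02        # resonant mode, rad/s, and its damping
    P = 1.0 / ((s / p1 + 1.0) * (s / p2 + 1.0))
    P *= wn**2 / (s**2 + 2.0 * zeta * wn * s + wn**2)
    return float(np.abs(P))

wn = 1.0e4
# The lightly damped pair lifts the response locally: a resonant peak
# on top of the roll-off imposed by the two real poles.
print(plant_mag(0.8 * wn), plant_mag(wn), plant_mag(1.2 * wn))
```

The local maximum around the complex-pole frequency is the resonant mode that the filters discussed above must deplete or phase-stabilize.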

A simple approach for the feedback controller design is to use a standard proportional–integral–derivative (PID) controller, tuned to achieve a certain open-loop crossing frequency and a desired phase margin at that frequency. Enlarging the closed-loop bandwidth by increasing the 0 dB crossing frequency while maintaining the phase margin can lead to multiple crossings of the loop transfer function, due to the presence of lightly damped resonant modes, as shown in Figure 6.3. Cascading a notch filter with the PID controller can eliminate the multiple 0 dB crossings, but this notch reduces disturbance rejection and causes phase warping that adversely affects the overall servo performance. An alternative approach to dealing with the effects of mechanical resonances employs the so-called secondary phase margin, which is defined when the open loop has multiple 0 dB crossings. Kobayashi et al. (2003) formally define the secondary phase margin, and the concept is easily understood by examining the Nyquist plot of Figure 6.4. The damping of the resonance in closed loop depends on the secondary phase margin.

FIGURE 6.3  Resonant mode spillover.

FIGURE 6.4  Primary and secondary phase margin definition.
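The multiple-crossing phenomenon can be reproduced numerically. In this sketch the loop gain, the resonance, and the notch parameters are all assumed for illustration: an integrator-like loop with a lightly damped resonance exhibits three 0 dB crossings, and cascading a notch tuned at the resonance restores a single crossing:

```python
import numpy as np

w = np.logspace(1, 3, 4000)          # frequency grid, rad/s
s = 1j * w

# Assumed loop: 50/s integrator times a lightly damped resonance at 300 rad/s
wr, zr = 300.0, 0.01
L = (50.0 / s) * wr**2 / (s**2 + 2 * zr * wr * s + wr**2)

# Notch at the resonance: lightly damped zeros, well-damped poles
zn, zd = 0.01, 0.7
N = (s**2 + 2 * zn * wr * s + wr**2) / (s**2 + 2 * zd * wr * s + wr**2)

def zero_db_crossings(loop):
    """Count how many times |loop| crosses the 0 dB line on the grid."""
    above = np.abs(loop) > 1.0
    return int(np.count_nonzero(above[1:] != above[:-1]))

n_plain = zero_db_crossings(L)       # resonance "spills over" 0 dB
n_notched = zero_db_crossings(L * N) # notch removes the extra crossings
print(n_plain, n_notched)
```

The notch restores a single crossing, but it also subtracts loop gain (and warps the phase) around the resonance, which is the disturbance-rejection penalty mentioned above.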

Phase-stabilized design explicitly addresses this limitation, by using a phase-shaping network to increase the secondary phase margin, while maintaining a high primary crossing frequency and, in turn, a large closed-loop bandwidth. Typical phase-shaping networks are properly tuned lag compensators, which move the secondary crossing point clockwise in the Nyquist plot by reducing both the phase and the gain of the loop transfer function after the first crossing. A limitation of this approach is that the gain and phase characteristics of standard lag–lead compensators also alter the primary crossing frequency and the primary phase margin. If the lag compensator is realized with real pole–zero pairs, the servo designer has little freedom in the overall design, since the maximum phase lag is a function of the ratio of the pole to the zero (Franklin et al. 1994). In order to overcome this limitation, Messner and Oboe (2004) propose the use of complex phase lag networks, with poles and zeros as in Figure 6.5. In this type of network, both the center frequency and the phase variations can be tuned independently and, in turn, the desired phase profile for the loop transfer function can be easily achieved by locally modifying the original plant transfer function; the achieved modifications are more localized when compared with standard lag networks with real poles and zeros. By exploiting this feature, using a set of properly placed complex lead/lag networks, the same primary and secondary phase margins can be achieved with real and complex networks, as shown in the typical result of Figure 6.6, but only with the complex ones is the gain margin kept large, as shown in Figure 6.7.

FIGURE 6.5  Pole–zero map of a complex lag network.

As for the design of the feedback control for the suppression of disturbance effects, it is well known that a disturbance observer is one of the most powerful tools to enhance the disturbance suppression performance, as well as the overall robustness. Some concerns on its use in applications where the plant may drastically change its characteristics have been explicitly addressed and solved in Kobayashi et al. (2007), where it has been shown how the proper design of the observer leads to a user-definable phase lead or lag introduced by a mismatched disturbance observer.
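A minimal discrete-time sketch of the disturbance-observer idea (the plant, gains, and filter constant are all assumed for illustration): the observer reconstructs the lumped disturbance from the nominal inverse model through a low-pass filter, and the estimate is subtracted from the commanded input:

```python
# Disturbance-observer (DOB) sketch on a velocity loop (all values assumed):
# dv/dt = u + d, with an unknown constant disturbance d. The observer
# recovers d from the nominal inverse model plus a low-pass filter, and
# the estimate is subtracted from the commanded input.
Ts = 1e-3          # sample time, s
d_true = 0.5       # unknown lumped disturbance
alpha = 0.1        # low-pass filter gain of the observer

v, d_hat = 0.0, 0.0
for _ in range(2000):
    u = -2.0 * v - d_hat               # regulator toward v = 0, plus compensation
    v_next = v + Ts * (u + d_true)     # plant step
    d_raw = (v_next - v) / Ts - u      # inverse nominal model: measured accel minus input
    d_hat += alpha * (d_raw - d_hat)   # filtered disturbance estimate
    v = v_next

print(d_hat, v)
```

The low-pass filter limits noise amplification; its bandwidth is exactly where the robustness/performance trade-off (and the mismatch concerns discussed above) appears.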

FIGURE 6.6  Comparison of frequency responses: real lag vs. complex lag.

FIGURE 6.7  Phase stabilized design: real lag vs. complex lag.

In applications characterized by variations in plant parameters, a single controller, even when robust, often fails in achieving the desired performance, especially in the presence of nonlinear phenomena. A relevant example of this problem can be found in high-precision MC for data storage, where the mechanical system in charge of moving the read/write equipment must be controlled in order to make rapid and wide movements from one data area to another and then, once close to a target position, provide ultrahigh accuracy and excellent disturbance rejection. During fast motion, the accuracy is traded with the speed, and the friction of the mechanical system is mainly viscous; on the other hand, when the speed goes close to zero, the control must cope with nonlinear friction and possible stick-slip motion around the target position. In this case, it is a common practice to use the so-called mode switching control strategy, which selects the controller that best fits the current operating conditions among a set of predefined controllers. With this type of strategy, the controller design is greatly simplified, but a side effect that arises is the handling of possible unwanted transients induced by switching. A solution to this problem is the initial value compensation (IVC), which initializes the states of the controllers in order to guarantee a high degree of smoothness during the transitions.
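The switching transient can be sketched as follows (the controllers, gains, plant, and switching rule are assumed for illustration, not the chapter's design): when handing over from a fast-motion velocity controller to a precise PID position controller, initializing the PID integrator so that the control input is continuous at the switching instant, in the spirit of IVC, removes the bump:

```python
# Mode-switching with IVC-style state initialization (gains, thresholds,
# and the unit-mass plant are all assumed for this sketch).

def run(use_ivc):
    x, v = 0.0, 0.0                    # plant: dx = v, dv = u
    target, Ts = 1.0, 1e-3
    Kv, v_ref = 50.0, 2.0              # mode 1: velocity loop for fast motion
    Kp, Kd, Ki = 200.0, 30.0, 500.0    # mode 2: precise PID near the target
    integ, mode = 0.0, 1
    u_prev, jump = 0.0, None
    for _ in range(4000):
        e = target - x
        if mode == 1 and abs(e) < 0.05:
            mode = 2
            if use_ivc:
                # Initialize the integrator so that the incoming controller
                # reproduces the last control input (no transient bump).
                integ = (u_prev - Kp * e + Kd * v) / Ki
        if mode == 1:
            u = Kv * (v_ref - v)
        else:
            integ += Ts * e
            u = Kp * e - Kd * v + Ki * integ
            if jump is None:
                jump = abs(u - u_prev)  # control discontinuity at the switch
        v += Ts * u
        x += Ts * v
        u_prev = u
    return jump

jump_ivc = run(True)      # smooth hand-over
jump_plain = run(False)   # large control jump at the switch
print(jump_ivc, jump_plain)
```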

Varieties of approaches on the IVC have been reported, according to the application characteristics (Oboe and Marcassa 2002, Yang and Tomizuka 2006, Hirose et al. 2008). Other approaches, including computational intelligence, have been introduced to improve the feedback control performance: repetitive learning control provides fast response and precise table positioning (Ding and Wu 2007), fuzzy control gives the maximum sensitivity for servo systems (Precup et al. 2008), and a neuron-based PI fuzzy controller suppresses the mechanical vibration (Li and Hori 2007). The adaptive robust control framework has also been applied in order to cope with plant changes in MC systems: in this way, an adaptive transient performance against parametric uncertainties (Taghirad and Jamei 2008) and a multirate perfect tracking control for a plant with parametric uncertainty (Fujimoto and Yao 2005) could be achieved.

6.2.2  Feedforward Control System and Command Shaping

Feedforward compensation can improve the motion performance in the 2DOF control framework, independent of the characteristics of the feedback system (system stability, disturbance suppression, fast response, etc.), with consideration of mechanical vibration suppression, high accuracy, and robustness. The feedforward compensator can be systematically designed by, e.g., a coprime factorization (Hioki et al. 2008); perfect tracking performance can also be achieved by a multirate feedforward control approach (Saiki et al. 2008). Command shaping, on the other hand, should be regarded as an additional control freedom with respect to the feedforward compensation: it can be seen as a filtering (i.e., a shaping of spectral content) on the command of the servo-controller, achieving the desired degree of smoothness while satisfying several constraints (like the maximum available actuation power). This is particularly effective in some MC applications, especially in point-to-point motion applications, where the system to be controlled exhibits some flexibility, a characteristic that may lead to unwanted oscillations.

One well-known solution used to address this issue is the "input shaping," i.e., the offline design of an input (usually a sequence of delayed impulses) that brings the system from the initial position to the final one, with zero residual energy at the end of the motion. This approach, initially proposed by Singer and Seering (1990), has been improved by adding robustness features (Singhose et al. 1994) and applied in different areas of MC, like robotics and crane control. All classical techniques, however, rely on the assumption that the system to be moved can be considered as a second-order system. This limitation has been partially solved in Hyde (1991), where higher-order systems have been addressed, in order to numerically solve the problem of the cancellation of residual vibrations on higher-order flexible systems, at the price of an increased complexity in the derivation of the solution. On the other hand, none of these approaches is conceived to directly address the variation of the process characteristics during the motion. Variations of system characteristics have been recently addressed for linear parameter varying (LPV) systems (De Caigny 2008a,b), with a solution in which the LPV system is embedded into a spline optimizer, which computes the optimal trajectories for the LPV system while satisfying possible constraints on inputs and outputs; in this area, the most innovative results (e.g., Van den Broeck et al. 2008) have been obtained by exploiting high-performance CPUs.

Command shaping is not limited to the reference trajectory of an MC system: this technique can also be used to change the characteristics of a system subject to oscillatory disturbances (e.g., data-storage servo-positioners). The most common technique to address this problem is the repetitive controller, aimed at increasing the gain of the loop at the frequencies where the disturbances can be found, which relies on the strong assumption that the sampling frequency is an integer multiple of the frequency of the disturbance. The success of such techniques in data storage comes from the relative simplicity of the design. Similarly, other techniques (like adaptive feedforward compensation [AFC]) add repetitive compensating signals to the controller output, with the main drawbacks arising during long operation.
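For the second-order case, the classical zero-vibration (ZV) shaper of Singer and Seering replaces the step command with two scaled, delayed steps whose amplitudes and timing follow from the mode parameters. A minimal sketch, with assumed natural frequency and damping:

```python
import math

# Zero-vibration (ZV) input shaper sketch for a lightly damped 2nd-order
# system (natural frequency and damping are assumed example values).
wn, zeta = 10.0, 0.05
wd = wn * math.sqrt(1.0 - zeta**2)
K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
A1, A2 = 1.0 / (1.0 + K), K / (1.0 + K)    # impulse amplitudes (sum to 1)
t2 = math.pi / wd                          # second impulse: half damped period

def settle_residual(shaped):
    """Simulate x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u and return the
    residual oscillation amplitude well after the command is over."""
    dt, T = 1e-4, 4.0
    x, v, res = 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        u = (A1 + (A2 if t >= t2 else 0.0)) if shaped else 1.0
        v += dt * (wn**2 * (u - x) - 2.0 * zeta * wn * v)
        x += dt * v
        if t > 2.0:
            res = max(res, abs(x - 1.0))
    return res

res_plain = settle_residual(False)   # large residual vibration
res_zv = settle_residual(True)       # near-zero residual vibration
print(res_plain, res_zv)
```

The second impulse cancels the vibration excited by the first, which is exactly the "zero residual energy" property; the robust variants (ZVD, EI) cited above trade a longer shaper for insensitivity to errors in wn and zeta.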
When this happens, information on the spectrum of the periodic disturbance cannot be
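The period-based learning idea behind repetitive control can be sketched as follows (the plant and gains are illustrative assumptions): the controller stores one period of the control signal and updates it every period, and the error vanishes only when the disturbance period is an integer number of samples, which is exactly the strong assumption noted above:

```python
import math

# Repetitive-control sketch (illustrative numbers): one stored period of
# control action, updated once per period.
def last_period_error(dist_period_samples, N=50, periods=40, gain=0.5):
    u = [0.0] * N                      # stored control period (the "memory loop")
    worst = 0.0
    for k in range(N * periods):
        d = math.sin(2.0 * math.pi * k / dist_period_samples)
        e = d + u[k % N]               # plant sketch: y = u + d, error = y
        u[k % N] -= gain * e           # period-wise learning update
        if k >= N * (periods - 1):
            worst = max(worst, abs(e)) # error over the last period
    return worst

e_matched = last_period_error(50)      # period = N samples: error vanishes
e_mismatched = last_period_error(47.3) # non-integer period: error floor remains
print(e_matched, e_mismatched)
```

The residual floor in the mismatched case is the sensitivity to period variations that the robust repetitive designs cited below are meant to reduce.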

used in the design (all harmonics are treated in the same way). Recent results (Steinbuch 2002, Steinbuch et al. 2007) have proved that it is possible to design repetitive controllers that are robust against variations in the period of the disturbances. Moreover, in Pipeleers et al. (2008), it has been shown that it is possible to realize repetitive controllers that require less memory than traditional ones, while showing some robustness against variations in the disturbance period and accounting for spectral information on the disturbances.

6.2.3  System Modeling and Identification

When designing an MC system, one of the most important issues is to have an accurate model of the process to be controlled at hand, in order to achieve the most effective model-based feedforward compensation and/or a more progressive design of the feedback controllers. Such a model can be nonparametric (e.g., a frequency response) or parametric (e.g., an ARMA model). In this context, new results have been obtained in the so-called simultaneous identification, i.e., the use of experimental data obtained from the system (operating in closed-loop fashion) to get a discrete-time model of both the system and the spectral description of the disturbances acting on it (Zeng and de Callafon 2004), starting from the seminal work of many researchers (Van Den Hof and Sharma 1993, Karimi and Landau 1998). A relevant application of this approach is in MC systems where it is impossible to operate the identification in open loop, e.g., in data storage, as shown in Antonello et al. (2007) and Kawafuku et al. (2007); a block diagram of closed-loop identification is reported in Figure 6.8. It is worth noticing that technological limitations usually set an upper limit on the sampling rate of the measured data and, in turn, on the maximum frequency of the plant modes that can be identified. This, however, is not always enough to properly design the controller for the system: in order to overcome this limitation, linear identification and control of plant dynamics above the system Nyquist frequency has been introduced in Atsumi et al. (2008). Finally, this and many other identification techniques found in the literature rely on the assumption that the dynamic system to be identified is linear or weakly nonlinear; in almost all MC systems, however, there are always some nonlinear phenomena, which should be carefully identified by analyzing their complicated behaviors and their effects on the MC system, without incurring failures or major errors in the identified models.

FIGURE 6.8  Block diagram of closed-loop identification.
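A minimal sketch of closed-loop identification by least squares (plant, feedback gain, and excitation are assumed; the simulation is noise-free for clarity, whereas with measurement noise the direct approach can be biased, which is what motivates the dedicated closed-loop methods cited above):

```python
import numpy as np

# Closed-loop identification sketch (all numbers assumed):
# first-order plant y[k+1] = a*y[k] + b*u[k], operated in closed loop as
# u[k] = r[k] - kc*y[k]; the external excitation r keeps the regressor
# informative, and least squares recovers (a, b) from logged (u, y).
rng = np.random.default_rng(0)
a_true, b_true, kc = 0.9, 0.2, 0.5
n = 500
r = rng.choice([-1.0, 1.0], size=n)    # PRBS-like external excitation
y = np.zeros(n + 1)
u = np.zeros(n)
for k in range(n):
    u[k] = r[k] - kc * y[k]            # plant driven in closed loop
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Regression model: y[k+1] = [y[k], u[k]] @ [a, b]
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(a_hat, b_hat)
```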

In cases of nanoscale positioning, such as for ultraprecision stage control in semiconductor and/or LCD manufacturing, and of micro- and nanosized systems, nonlinear components directly deteriorate the motion performance. Nonlinear friction in mechanisms, in particular, interferes with motion performance as a disturbance: friction may cause a tracking error or a deviation from the trajectory reference generated by the feedforward compensator in the 2DOF control system. In these systems, the friction between the elements in relative motion can be properly described by a dynamic system. In Jamaludin et al. (2008), the nonlinear friction affecting a linear motion system is described by using the generalized Maxwell-slip model, which is experimentally tuned and used for the quadrant glitch compensation. According to the span of the motion, i.e., if there is a "pre-sliding" or a "sliding" friction between the parts (Kawafuku et al. 2005), the friction model can be tailored in order to better capture the motion-relevant characteristics, e.g., by implementing a state machine that determines whether there is an actual motion or not. Another interesting application of nonlinear modeling in MC can be found in Ruderman et al. (2008), where the nonlinear behavior of a robotic transmission is used in order to compensate for the joint deflection due to the payload and, hence, to improve the overall accuracy.

6.2.4  Optimization and Auto-Tuning of Motion Control System

The optimization of an MC system can be performed in order to achieve various targets, like the minimization of the energy consumption or the maximization of some performance index (e.g., motion speed and accuracy), while satisfying several constraints (collision avoidance, power limits, torque limitation, safety and comfort constraints, etc.). For systems characterized by a rather simplified dynamic behavior, the optimal solution (i.e., the motion that satisfies some performance index) can usually be computed offline, e.g., for minimum time or for energy minimization. More refined optimization schemes have then been carried out by exploiting the availability of powerful hardware and tailored numerical routines: in Verscheure et al. (2008), recent results have been reached in the area of time–energy optimal path tracking. It is worth noticing that, in robotics, an adaptation capability has been introduced to address specific modifications of the plant or the environment, e.g., changes in dynamics, which can implicitly drive the adaptation process by modifying one or more blocks of the control system of Figure 6.1.
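The elasto-slip behavior at the heart of Maxwell-slip friction models can be sketched with a single element (stiffness and break-away force are assumed example values; the full generalized model cited above uses several such elements in parallel):

```python
# Single Maxwell-slip (elasto-slip) friction element sketch
# (stiffness and break-away force are assumed example values).
k_stiff = 1.0e4    # N/m, pre-sliding contact stiffness
F_c = 2.0          # N, break-away (Coulomb) force level

def friction_trace(displacements):
    """Friction force along a displacement history: spring-like in
    pre-sliding, saturated at F_c once the element slips."""
    z, x_prev, forces = 0.0, 0.0, []
    for x in displacements:
        z += x - x_prev              # elastic state follows the motion...
        x_prev = x
        if abs(k_stiff * z) > F_c:   # ...until the element slips
            z = F_c / k_stiff if z > 0 else -F_c / k_stiff
        forces.append(k_stiff * z)
    return forces

# Tiny motion: pre-sliding regime, force stays below break-away
f_small = friction_trace([1e-5 * i for i in range(10)])
# Large stroke: the element slips and the force saturates at F_c
f_large = friction_trace([1e-3 * i for i in range(10)])
print(max(f_small), f_large[-1])
```

The spring-like pre-sliding branch is what makes friction at near-zero speed so different from the viscous regime of fast motion, and it is why the model is selected according to the span of the motion.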
Some early approaches for optimal trajectory planning with moving obstacles (Fiorini and Shiller 1997) were aimed at collision avoidance in the presence of moving objects. On the other hand, any possible change in the operating conditions, e.g., in the road conditions, in the comfort or safety requirements in a vehicle, or the appearance of unexpected obstacles, calls for online optimization schemes, capable of replanning the reference trajectory in order to accommodate such changes. In this sense, a relevant result can be found in Bertolazzi et al. (2007), where the optimal reference maneuver for a vehicle is computed in real time, in a receding-horizon framework, in order to achieve an "almost" real-time computation of the optimal trajectories.
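For the simplest case of a rest-to-rest move of a rigid (double-integrator) load under a symmetric acceleration limit, the offline minimum-time solution is the classical bang-bang profile, switching at half the stroke. A sketch with assumed values:

```python
import math

# Minimum-time rest-to-rest move of a unit mass under |u| <= a_max
# (values assumed): bang-bang input, switching at half the distance.
a_max, dist = 2.0, 1.0
t_switch = math.sqrt(dist / a_max)     # accelerate over half the stroke
t_total = 2.0 * t_switch

dt = 1e-5
x, v, t = 0.0, 0.0, 0.0
while t < t_total:
    u = a_max if t < t_switch else -a_max
    v += dt * u
    x += dt * v
    t += dt
print(t_total, x, v)
```

Adding further constraints (jerk, torque, path curvature) or a cost mixing time and energy turns this closed-form case into the numerical optimal-control problems addressed by the works cited above.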
From a practical viewpoint, it is quite important to optimize the whole MC system (from the system modeling to the compensator design and tuning) and to flexibly adapt it to variations in the various motion environments. With the advent of more efficient CPUs, it has been possible to realize more refined optimization schemes, capable of multi-objective optimization, where multiple objectives for the target system can be handled. Soft computing techniques and/or computational intelligence can be an attractive alternative to conventional approaches to possess autonomous functions in the controller design: genetic algorithms have been reported as effective tools in the design of optimized controllers (Itoh et al. 2004, Low and Wong 2007, Lombai and Szederkenyi 2008), an adaptive neural network compensation for nonlinearity in HDDs has been proposed in Hong and Du (2008), and fault-tolerant schemes against system failures have been reported in Delgado et al. (2005) and Izumikawa et al. (2008).

6.3  Motion Control and Interaction with the Environment

Recent advances in computer technology make it possible to realize environment recognition based on visual information, signal processing, and many kinds of sensors. To achieve sophisticated and advanced MC, unifying the sensor information–based environment recognition and the MC technologies is of paramount importance. In the conventional sensor feedback system, the controller is constructed so that the whole system is insensitive against environmental variations (Kim and Hori 1995, Ohnishi et al. 1996); however, the system construction is not optimal, and the adaptive control for the environment is not achieved. Hence, it is necessary to achieve the unification of sensor information–based environment recognition with MC technologies while considering the environmental change, and the recognition of the environment, including humans, becomes a key issue for a reliable interaction between the environment, the human, and the machine, from a control viewpoint.

For instance, the stabilization of a vehicle is realized by information-intensive control technology with the use of several kinds of sensors; the related technologies of intelligent transport systems (ITS) are a typical example (Daily and Bevly 2004). To improve the stability of yaw motion in an electric vehicle, Ohara and Murakami (2008) implement an active angle control of the front wheel based on the estimation of road cornering stiffness, and Mutoh et al. (2007) propose safe and stable driving for an electric vehicle by the estimation of braking force on various road surfaces.

Various welfare devices have also been developed under the influence of the society's aging population; they are required to support caregivers and cared-for people, because the burden of nursing care is increasing. In human–machine interaction, a typical example is a wheelchair system (Cooper et al. 2002), which can be considered to represent a human–machine interactive system. Tashiro and Murakami (2008) propose a power-assist control of step passage motion for an electric wheelchair by the estimation of the reaction force from the environment, and Shimizu and Oda (2007) introduce a virtual power-assist control of an electric wheelchair based on visual information, by utilizing the wheelchair system. The adaptation to the environment is also considered in the development of the biped robot, which is another type of mobile system: Minakata et al. (2008) propose a flexible shoe system for biped robots to optimize energy consumption in lateral gait motion as an environment adaptation method, in which an optimal combination of hardware (shoes) and software (controller) is developed. This combination strategy of software and hardware design is an important aspect of advanced MC.

To achieve sophisticated and advanced MC for the systems mentioned in the above examples, it is necessary to construct an adaptive recognition system for the environmental change and an MC system that acts on the environment adaptively through human manipulation. Some key issues to this aim can be summarized as

1. Information-intensive control technology using heterogeneous and redundant sensor systems
2. Recognition technology of environmental change in the system, including environmental model
3. Development of a multifunctional MC system based on information-intensive control technology
4. Adaptive MC for the interaction among human, environment, and machines

Considering these key issues, a schematic illustration of the key components of advanced mechatronics for an MC system is summarized in Figure 6.9, and a schematic illustration of system integration in advanced MC is shown in Figure 6.10. As described in this figure, MC systems are connected through the information network, and the physical sensation of a human–machine interactive system is transmitted by this information network. Due to the importance of information processing between the regulated system and the MC system, it is necessary to develop the new and emergent research area of motion and information control technology, i.e., the fusion of flexible control, based on sensor information and MC; this area is currently also known as cyber-physical systems (CPS), one of the emergent technologies arising from advanced and networked applications of MC systems. A typical example of such systems is a remote surgery system by bilateral control, described

FIGURE 6.9  A schematic illustration of the key components of advanced mechatronics for MC.

FIGURE 6.10  A schematic illustration of system integration in advanced MC.

in Matsumoto et al. (2007) and Shimono et al. (2007), where a novel structure of acceleration-based bilateral system is implemented to achieve high transparency of force sensation (haptics) through a LAN-based motion network system, as shown in Figure 6.11. To increase the feasibility of a practical remote surgery system, the delay compensation of data transmission will be a key technology (Natori and Ohnishi 2008), and the development of synchronous transmission of not only force sensation but also visual information is required as one of the future works. The next section will introduce the most relevant results in this field.

FIGURE 6.11  An example of bilateral surgery system.

6.4  Remote Motion Control

An emerging area of MC concentrates on the MC of remote systems, i.e., systems allocated remotely or at some distance with respect to one another, and yet contributing in some way to the accomplishment of a certain "general" task. In fact, in addition to the traditional applications like telerobotics (Hokayem and Spong 2006) and field bus–based applications (Thomesse 2005), there are new ones, aimed at the MC of multiple objects. A typical case is the control of a fleet of robots (or, more generally, vehicles), which must coordinate their motion in order to maintain a relative position or to safely transport a payload that is too big for a single robot (Encarnacao and Pascoal 2001, Fax and Murray 2002). Control strategies may vary according to the task and the communication resources available, and they may range from master–slave (or leader–follower) schemes to self-arranging fleets, with the global coordination arising from the interaction of each element with its immediate neighbors. In all scenarios, there is an underlying communication infrastructure that allows for the communication among two or more elements of the system and, in this framework, one of the open issues regards the handling of the limitations of the communication systems: packet losses, variable delays,

and unpredictable available bandwidth. There are two possible ways to address this limitation: the first one consists of the realization of communication systems with reliable and predictable performance, while the second is the implementation of a control scheme that is inherently robust against the failures or limitations of the communication system.

Considering the first approach, an effective solution seems to be achieved with the introduction of new real-time communication protocols, like real-time Ethernet (RTE) (Thomesse 2005). These are high-speed networks (100/1000 Mbit/s) based on the well-known, original Ethernet specification and tailored for industrial applications, in order to guarantee very short transmission times, as well as tight determinism for periodic traffic and hard real-time operation for acyclic data delivery.

As for the explicit addressing of the uncertainty on data delivery and delays, many of the proposed solutions, based on passivity and applied to telerobotics, can be found in Hokayem and Spong (2006). Most of them, however, make some simplifying assumptions to make the problem treatable. The first one is that the time is considered as a continuous variable, while the use of a digital communication network leads naturally to a discrete-time scenario, so the discrete-time issue of the remotely controlled systems must be directly addressed: new approaches to prove the stability of a telerobotic equipment in a discrete-time framework, based on passivity, have been proposed in Secchi et al. (2007). Moreover, many of the approaches consider the delay as a variable with a time derivative strictly less than one, which is not applicable to an MC application. The approach proposed in Seuret et al. (2003), applied to telerobotics, is based on the use of GPS-based synchronization of the nodes, by means of which the random delay due to the communication can be compensated in a deterministic manner; this approach, however, cannot be applied to generic digital communication networks.

A more recent approach in bilateral control is the consideration of the communication delay in the control input and feedback loop as a lumped disturbance, which alters the reference system dynamics in the controller and is estimated via a communication delay observer (CDOB), as in Natori and Ohnishi (2008); in this way, the variable delay is, in fact, measured and then compensated. There are also variations of this approach treating the communication delay as a disturbance, using a sliding mode observer, which estimates the load on the slave side, combined with an extended Kalman filter (Bogosyan et al. 2009).

Another promising approach, which overcomes (at least in principle) the aforementioned limitations, is the so-called bilateral MPC (BMPC), based on an extension of model predictive control (MPC); the bilateral term is employed to specify the use of the signal feedback. MPC is an advanced method for process control that has been used in several process industries, such as chemical plants and oil refineries. The main idea of MPC is to rely on dynamic models of the process in order to predict the future process behavior on a receding horizon and, accordingly, to select the command input with respect to the future reference behavior, while satisfying possible constraints on inputs and outputs. The major advantages of MPC are the possibility to handle constraints and the intrinsic ability to compensate large or poorly known time delays. Motivated by all the advantages of this method, the MPC was applied to teleoperation systems (Sheng and Spong 2004) and in the robotics area. A related result is the bilateral generalized predictive control (BGPC) proposed in Slama et al. (2007), whose originality lies in an extension of the general MPC, allowing to take into account the case where the reference trajectory is not known a priori, due to the slave force feedback.
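The receding-horizon mechanism, and the way a prediction model absorbs a known transmission delay, can be sketched on a toy first-order plant (all numbers are assumed; this is a generic MPC illustration, not the BMPC/BGPC formulations cited above):

```python
# Receding-horizon (MPC-style) sketch on a first-order plant with a known
# input delay (all numbers assumed): at each step the controller predicts
# the effect of the inputs already "in flight" and picks the best
# constrained input by enumeration.
a, b = 0.9, 0.1          # plant: y[k+1] = a*y[k] + b*u[k - delay]
delay, horizon = 3, 10
u_max, ref = 2.0, 1.0    # input constraint handled by construction

buf = [0.0] * delay      # inputs sent but not yet applied (network delay)
y = 0.0
for _ in range(200):
    best_u, best_cost = 0.0, float("inf")
    for u in [u_max * i / 20.0 for i in range(-20, 21)]:  # candidate inputs
        yp, cost = y, 0.0
        pending = buf + [u] * horizon   # flush in-flight inputs, then hold u
        for j in range(delay + horizon):
            yp = a * yp + b * pending[j]
            cost += (yp - ref) ** 2 + 0.001 * pending[j] ** 2
        if cost < best_cost:
            best_cost, best_u = cost, u
    buf.append(best_u)
    y = a * y + b * buf.pop(0)          # plant applies the delayed input
print(y)
```

Because the delayed inputs are part of the prediction, the controller never "fights" its own in-flight commands; this is the intuition behind MPC's tolerance of large, known delays, while the BMPC/BGPC extensions deal with the delays being variable and the reference being shaped by the slave force feedback.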

6.5  Conclusions

MC is a "broad" term to define all the strategies, tools, and technologies that concur in realizing a system that moves in its environment. Such systems may largely differ in their characteristics, in terms of span, accuracy, execution time, cost, safety of the humans who may interact with the motion system, etc., so a general approach to MC is not possible. Indeed, the controlled motion must satisfy one or more requirements, which can be conceived in different ways, according to the degree of interaction with the surrounding environment or with the humans populating the workspace. A simple example can be given to support this point with micro-/nanoscaled motion systems, where phenomena like surface adhesion, usually neglected in macroscaled systems, have a large influence on the overall behavior. Another example comes from the MC of robotic systems, where many competences concur at the realization of innovative systems. This chapter presented some relevant results achieved in the different areas pertaining to MC, which indeed represents a multidisciplinary area. Some of these systems have already proven capable, while some demonstrate great potential to improve the overall quality of life.

References

Antonello, R. and R. Oboe. 2007. An identification experiment for simultaneous estimation of low-order actuator and windage models in a hard disk drive. IEEE International Symposium on Industrial Electronics (ISIE 2007), Vigo, Spain, June 4–7, 2007.
Atsumi, T., A. Okuyama, and S. Nakagawa. 2006. Track-following control using resonant filter in hard disk drives. Proceedings of the 9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, 2006.
Atsumi, T., A. Okuyama, and S. Nakagawa. 2008. Vibration control above the Nyquist frequency in hard disk drives. IEEE Transactions on Industrial Electronics.
Bertolazzi, E., F. Biral, and M. Da Lio. 2007. Real-time motion planning for multibody systems. Multibody System Dynamics 17(2–3):119–139.
Bogosyan, S., B. Gadamsetty, M. Gokasan, and A. Sabanovic. 2009. Novel observers for compensation of communication delay in bilateral control systems. Proceedings of the 35th Annual Conference of the IEEE Industrial Electronics Society (IECON 2009), Porto, Portugal, November 3–6, 2009.
Cooper, R.A., T.A. Corfman, S.G. Fitzgerald, M.L. Boninger, D.M. Spaeth, W. Ammer, and J. Arva. 2002. Performance assessment of a pushrim-activated power-assisted wheelchair control system. IEEE Transactions on Control Systems Technology 10(1):121–126.
Daily, R. and D.M. Bevly. 2004. The use of GPS for vehicle stability control systems. IEEE Transactions on Industrial Electronics 51(2):270–277.
De Caigny, J., B. Demeulenaere, J. Swevers, and J. De Schutter. 2008a. Polynomial spline input design for LPV motion systems. Proceedings of the 10th IEEE International Workshop on Advanced Motion Control (AMC '08), Trento, Italy, March 26–28, 2008.
De Caigny, J., B. Demeulenaere, J. Swevers, and J. De Schutter. 2008b. Dynamically optimal polynomial splines for flexible servo-systems: Experimental results. Proceedings of the 10th IEEE International Workshop on Advanced Motion Control (AMC '08), Trento, Italy, March 26–28, 2008.
Delgado, S. and M. Martínez. 2005. Integrated fault-tolerant scheme for a DC speed drive. IEEE/ASME Transactions on Mechatronics 10(4):419–427.
Ding, H. and J. Wu. 2007. Point-to-point motion control for a high-acceleration positioning table via cascaded learning schemes. IEEE Transactions on Industrial Electronics 54(5):2735–2744.
Encarnacao, P. and A. Pascoal. 2001. Combined trajectory tracking and path following: An application to the coordinated control of marine craft. Proceedings of the IEEE Conference on Decision and Control (CDC 2001), Orlando, FL, 2001.
Fax, J.A. and R.M. Murray. 2002. Information flow and cooperative control of vehicle formations. Proceedings of the 2002 IFAC World Congress, Barcelona, Spain.
Fiorini, P. and Z. Shiller. 1997. Time optimal trajectory planning in dynamic environments. Journal of Applied Mathematics and Computer Science 7(2):101–126.
Franklin, G.F., J.D. Powell, and A. Emami-Naeini. 1994. Feedback Control of Dynamic Systems. Addison-Wesley, Reading, MA.

and J. Optimal design of robust vibration suppression controller using genetic algorithms. and T. Bilateral teleoperation: An historical survey. Kawafuku. H. 1998. pp. Japan. October 2006. 2005. and Y. 2005. IEEE/ASME Transactions on Mechatronics. Hirai. and G. Proceedings of the 10th International Workshop on Advanced Motion Control. 2008 (AMC ‘08). 42(6):653–662. Proceedings of the 10th International Workshop on Advanced Motion Control.M. Hyde. and J. Comparison of the closed-loop identification methods in terms of the bias distribution. and H. Izumikawa. 362–367. W. Italy. H. 10:371–377. and S. S. LLC . M. Hokayem. Experimental evaluation of adaptive and robust schemes for robot manipulator control. 2004. Automatica. Fault-tolerant control system of flexible arm for sensor fault by using reaction force observer. Yubai.. and C. M. 10th IEEE International Workshop on Advanced Motion Control. Taipei.. M. and J. Du. Katsura. pp. 2006. 523–527. Hori. K. Matsui. IEEE Transactions on Industrial Electronics. Swevers. 54:3413–3421. Jamaludin. 2004. An analysis of parameter variations of disturbance observer for motion control. Iwasaki. Proceedings of the 33rd Annual Conference of the IEEE Industrial Electronics Society. and H.. IEEE Transactions on Industrial Electronics. Trento. 39:844–850. and B. Comparison of rolling friction behavior in HDDs.. Identification method for plant dynamics over Nyquist frequency. © 2011 by Taylor and Francis Group. 2008. Kawafuku. Li. Hong F. pp. Z. 2008.V. M. 2007. H. December 2007. K.. 2008. 1991.D. Italy. Pushing operation by flexible manipulator taking environmental information into account industrial electronics. pp.S. 440–443. Spong. and K. A phase-stabilized servo controller for dual-stage actuators in hard disk drives. IEEE Transactions on Magnetics.. pp. Italy. 2008. M. An improved adaptive neural network compensation of pivot nonlinearity in hard disk drives. Szederkenyi. IEEE Transactions on Industrial Electronics. 212–217. pp. 
A. March 26–28. and M. Multiple mode vibration suppression in controlled flexible systems. Kobayashi. 2008. Brussel. 2008 (AMC ‘08). Lombai. Iwasaki.H. S. Proceedings of the 10th IEEE International Workshop on Advanced Motion Control. A. A multiobjective genetic algorithm for optimizing the performance of hard disk drive motion control system. Italy. F. Iwasaki. 2003. IEEE/ASME Transactions on Mechatronics. Nakagawa. IEEE Transactions on Industrial Electronics.W. Yubai. 2008. Nakamura. T. IEEE Transactions on Industrial Electronics. 2008.. Hirose. 53(5):1688–1697. 54(6):3298–3306. December 1995. M.. Hirai.F. and K. 42:2035–2057. Hirai. 2007. Proceedings of the 8th IEEE International Workshop on Advanced Motion Control. K. Multirate adaptive robust control for discrete-time non-minimum phase systems and application to linear motors. Initial value compensation using additional input for semi-closed control systems. Kawafuku. Trento. Hioki. Karimi. and Y. M... Ohta. T. Trento. Kawasaki. IEEE Transactions on Industrial Electronics. J. Landau. Taiwan. Trajectory tracking control of a 6-degree-of-freedom robot arm using nonlinear optimization. Eom. Hirai. 10th IEEE International Workshop on Advanced Motion Control. N.. IEEE Transaction on Industrial Electronics. Quadrant glitch compensation using friction modelbased feedforward and an inverse-model-based disturbance observer. M. 218–223. Yao. 2008. and K. Katsura. Itoh K. S. P. Suzuki. 272–277. Hirai. Iwasaki.S. 2008. Systems & Control Letters. Trento. Kim. Wong. 2008. Joint design method based on coprime factorization of 2DOF control system. 2007. Low. Ohnishi. Trento. Vibration suppression using single neuron-based PI fuzzy controller and fractional-order disturbance observer. 54(1):117–126. 655–660. Ohnishi. A realization of multilateral force feedback control for cooperative motion. Suzuyama. J. 2007. K.Motion Control Issues 6 -15 Fujimoto. 54(3):1716–1725. 32:159–167. 10(4):391–396. Y. Italy. and I. pp. J. 
March 26–28. 2004. 2007. Kobayashi. Katsura. and N. MIT Space Engineering Research Center Report (SERC). 51(5):947–953. Ohishi. and H. S. Hori.

C. 2003. Hayano. A. Oboe. J. pp. 2007. and T. 2004. Proceedings of the 7th International Workshop on Advanced Motion Control. Shibata. Nanotechnology. 2008. 1(1):56–57. H. Realization of fine motion control based on disturbance observer. W. Oboe. 2008. 4(31)(2):81–90. 2007. MA. 1–8. Trento. Minakata. taking the dynamic property of network disturbance into account. Proceedings of 10th IEEE International Workshop on Advanced Motion Control. Precup. March 26–28. Italy. 2008. Stramigioli. and S. 1165–1170. 54(3):1298–1310. H. Hara. © 2011 by Taylor and Francis Group. Taiwan. 2008. Italy. K. Electric braking control methods for electric vehicles with independently driven front and rear wheels. 721–726. Trento. and K. Novel step climbing control for power assisted wheelchair based on driving mode switching. Oboe. H. Motion control for advanced mechatronics. 2007. Bertram. pp. Ohnishi. Sakata. 2004. Saiki. Marcassa. K. Italy. N. Takita. Ohnishi. Otsuka. 104–109. and T. Paris. 2008. Ohara. De Schutter. pp. Masuda. 34–39.. 1996. 32nd Annual Conference on IEEE Industrial Electronics. Proceedings of the IEEE American Control Conference. Italy. IEEE Transactions on Industrial Electronics. J. Slama. pp. 2002. pp. 2006. 9:85–92... Annals of the University of Craiova–Series: Automation. approaches and applications. K. Demeulenaere. Seki. Trento. S. A study on high-speed and high-precision tracking control of large-scale stage using perfect tracking control method based on multirate feedforward control. Katsura. Telerobotics through Internet: Problems. Hoffmann. and P. 10th IEEE International Workshop on Advanced Motion Control. Secchi. Generalized repetitive control: Better performance with less memory. R. and S. Preisach model of nonlinear transmission at low velocities in robot joints. Messner. 2008. France.. and A. 54(2):1168–1176. and K. Minakata. A design method of communication disturbance observer for time-delay compensation. 3. J. Swevers. 55(5):2152–2168. 
IEEE Transactions on Industrial Electronics. The influence of nonlinear spring behavior of rolling Elements on ultraprecision positioning. A stability control by active angle control of front-wheel in vehicle system.E. Ruderman. 2008. C. Fujimoto. 2008. Initial value compensation applied to disturbance observer-based servo control in HDD. Murakami. H. Tadakuma. and T. F. and K. Transactions on Industrial Electronics. Taipei. Iijima.. Yahagi. Y. K. Preitl. K. and R. 55(3):1271–1276. and T. 2008 (AMC ‘08). Vol. Fuzzy controllers with maximum sensitivity for servosystems.. Digital passive geometric telemanipulation. Boston.. 3827–3832. Korondi. Trento. G. Maribor.... Natori.6 -16 Control and Mechatronics Matsumoto. IEEE/ASME Transactions on Mechatronics. Italy. H. 2002. Messner. and H. B. 2007. Phase stabilized design of a hard disk drive servo using the ­ complex lag compensator. Proceedings of the 10th International Workshop on Advanced Motion Control. 2008. Murakami. Computers. LLC . R. and C. Ohishi. Proceedings of 10th IEEE International Workshop on Advanced Control. 2003. Fantuzzi. pp. Transactions on Industrial Electronics. W. M. pp. M. Mutoh. 2008 (AMC ‘08). 2008. Seki. Proceedings of the IEEE International Conference on Robotics and Automation. R. 1998. T. Ohnishi. Slovenia. 2008. Trento. Y. Dexterous manipulation in constrained bilateral teleoperation using controlled supporting point. and F. pp. Classical control revisited: Variations on a theme. A study of energy-saving shoes for robot considering lateral plane motion. 2008. 2006. K. Pipeleers. Electronics and Mechatronics. Tadakuma. March 26–28. S. 206–211. pp. T. 3290–3295. Transactions on Industrial Electronics. and J. IEEE Transactions on Industrial Electronics. Trevisani. 55(3):1277–1285..C. 15–20. 54(2):1113–1121. 10th IEEE International Workshop on Advanced Motion Control. 2008.

S. Trento. Khattabi. 31(12):1751–1770. 2004. 2007.D. 2007. Italy. T.. Shimono. and W P.. IEEE/ASME Transactions Mechatronics. L. Van Den Hof. and M.. Implementation of an internet-controlled system under variable delays. 4. and M. 116:654–659. 38(12):2103–2109. Pipeleers. T.Motion Control Issues 6 -17 Seuret. 2008. Vision-based force following control for mobile robots. Servo experiments for modeling of actuator and windage dynamics in a hard disk drive. Canadian Conference on Electrical and Computer Engineering. B. 55:448–456. de Callafon. and N.. Termens-Ballester. Van Den Hof. A. Proceedings of the 15th IEEE Mediterranean Conference on Control and Automotion’07. Oda. pp. Proceedings of the IEEE. Katsura. Automatica. Robust bilateral generalized predictive control for teleoperation systems. De Schutter. pp. Oboe. M. 2004. Tampa. P. 2004. June 2005. Aubry. Jamei. D. 2008 (AMC ‘08). T. Measurement. W. Toguyeni.C. Residual vibration reduction using vector diagrams to generate shaped inputs.. 112:76–82. SICE Annual Conference 2007. A ­ linear programming approach to design robust input shaping. and M.D. Fieldbus technologies in industrial automation. Schutter. M. 1995. 1990. Singer. June 3–5.D. and Control. FL. J. 218–223. Transactions of the ASME. and R. 2007. and T. Van den Broeck. Istanbul. J. © 2011 by Taylor and Francis Group. Thomesse. Steinbuch. Seering. and K. Proceedings of ETFA ‘06. Tashiro. Richard. Vol. Umeno. 727–732. Turkey. and Control.C. Transactions of the ASME. 80–85. Singhose. Greece. Automatica. Robust speed control of DC servomotors using modern two degrees-offreedom controller design. Athens.. Verscheure. 2007. Slama. Ohnishi. pp. 2008. S. Short seeking by multirate digital controllers for computation saving with initial value adjustment. Step passage control of a power-assisted wheelchair for a caregiver. Repetitive control for systems with uncertain period-time. Sheng. An indirect method for transfer function estimation from closed loop data. 
B. Trento. Demeulenaere. J. S. IEEE Transactions on Industrial Electronics. and J. Seering. 2006. Demeulenaere. LLC . Singer. March 26–28. 1994. 1993. 38:363–368. Design of noise and period-time robust high order repetitive control. Kratz. Proceedings of the 10th International Workshop on Advanced Motion Control. 1877–1880. 2006. G. Schrama. 54(2):907–918. Weiland.W. March 26–28. Prague. Spong. and N. Taghirad.. J.E. and R. and T. J. Proceedings of the IEEE International Conference on Mechatronics (ICM). Vol. 2008. Swevers. Preshaping command inputs to reduce system vibration. 2507–2512. 2008. pp. 2002. and F. D. 2008. and Y. 55(4):1715–1721. J. Tomizuka. Steinbuch. Transactions on Industrial Electronics. Shimizu.P. with application to optical storage. Time-energy optimal path tracking for robots: A numerically efficient optimization approach. Journal of Dynamic Systems. and R. P. Schrama. Measurement. pp. M. 2006. Swevers. 11(1):9–16. 29:1523–1527.E. R. J. 43: 2086–2095.P. 2008. 2004. A. Automatica. H. IEEE Transactions on Industrial Electronics. IEEE Transactions on Industrial Electronics. Caigny. Robust performance verification of adaptive robust controller for hard disk drive. 1991. Model predictive control for bilateral teleoperation systems with time delays. Murakami.-P. W. H. Italy. Diehl. Reprint accepted for publication in Automatica. 2007. 93(6):1073–1101. L. 10th IEEE International Workshop on Advanced Motion Control. Identification and control-closed loop issues. S. Journal of Dynamic Systems. and E. Yang.A. Singh. Czech Republic. Hori. 2007. and J. 1. pp. Zeng. 675–680. N. Abstraction and reproduction of force sensation from real environment by bilateral control.

7
New Methodology for Chatter Stability Analysis in Simultaneous Machining

Nejat Olgac
University of Connecticut

Rifat Sipahi
Northeastern University

7.1 Introduction and a Review of Single Tool Chatter...............................7-1
    Regenerative Chatter and Its Impacts on Machining • Basics of Single-Tool Machining • Single-Tool Machining and Stability
7.2 Regenerative Chatter in Simultaneous Machining.................................7-5
    Simultaneous Machining and Multiple Delays • Stability of Simultaneous Machining
7.3 CTCR Methodology...............................................................7-7
7.4 Example Case Studies..........................................................7-10
    Case Study 1: Application of CTCR for Single-Tool Machining • Case Study 2: Deployment of CTCR on Variable-Pitch Milling Cutters • Case Study 3: Six-Flute Milling Cutter Design Optimization Problem
7.5 Optimization of the Process...................................................7-18
    Characteristics of Multiple-Dimensional Stability Maps
7.6 Conclusion....................................................................7-20
Acknowledgments...................................................................7-20
References........................................................................7-21

7.1  Introduction and a Review of Single Tool Chatter

We present a new methodology on the stability of simultaneous (or parallel) machining (SM), where multiple tools operate at different cutting conditions (spindle speeds, depths-of-cut) on the same workpiece. It is obvious that such a parallel processing operation, as opposed to conventional single-tool machining (STM, also known as a serial process), is more time efficient [1–5]. It is also known that SM dynamics is considerably more complex, especially when metal removal rates are maximized and the dynamic coupling among the cutting tools, the workpiece, and the machine tool(s) becomes very critical. The primary difficulty arises due to the regenerative cutting forces that may ultimately violate the stability boundaries. The occurring dynamics is known as the "unstable regenerative chatter" at the interface between the workpiece and the cutting tool. The aim is to create working conditions that facilitate chatter rejection (i.e., the "stable regenerative chatter"). The stability repercussions of such dynamics are poorly understood at the present, partly due to the modeling difficulty and the large number of parameters involved in the emanating dynamics. Furthermore, both in the metal-processing field as well as in the mathematics community, there is no evidence of a broadly agreed-upon analytical methodology to assess the stability/instability boundaries for a perfectly known dynamic process model.

The machine tool instability primarily relates to "chatter." It is well known from numerous investigations that the conventional STM introduces some important stability concerns. Existing studies on machine tool stability address conventional STM processes; they are not expandable to SM because of the substantial difference in the underlying mathematics. For SM, the multiple spindle speeds, which cross-influence each other, create governing differential equations with multiple time-delay terms, especially when the delays are rationally independent. Their characteristic equations are known in mathematics as "quasi-polynomials with multiple time delays." The multiplicity of the delays (which arises from the multiplicity of spindle speeds) presents an enormously more complicated problem compared with the conventional STM chatter governed by a single delay. There is no practical methodology known at this point to resolve the complete stability mapping for such constructs, even for the simplest cases of simultaneous machining. In the lack of a solid mathematical methodology to study the SM chatter, the existing practice is likely to be, and it really is, suboptimal and guided by trial-and-error or ad hoc procedures. There is certainly room for much-needed improvement in the field, and this chapter presents some key steps in this direction. Leaving the details to the later sections, we can say that SM requires analyzing a more complex mathematical problem stemming from the general class of parametric quasi-polynomials with multiple-delay terms. This text presents a novel procedure in that sense.

We compose the document with the support of several earlier publications. The segments on STM and SM follow primarily the directions of [34,35]. Further discussions on the selection of the variable-pitch milling process and the optimal design of multi-flute milling cutters, as well as the operating conditions with the primary intent of avoiding the unstable chatter, are attributed to a large extent to [36,37].

7.1.1  Regenerative Chatter and Its Impacts on Machining

Optimum machining aims to maximize the material removal rate while maintaining a sufficient disturbance rejection (i.e., the "stability margin in cutting") to assure the surface quality. Machine tool chatter is an undesired engineering phenomenon; its negative effects on the surface quality, tool life, etc., are well known. As accepted in the manufacturing community, there are two groups of machine tool chatter: regenerative and nonregenerative [6]. Regenerative chatter occurs due to the periodic tool passing over the undulations on the previously cut surface, and nonregenerative chatter has to do with mode coupling among the existing modal oscillations. The former brings a delay argument into the inherent dynamics, since part of the cutting force emanates from the tooth position one revolution earlier (thus, delayed by a cutter-passing period). The latter, however, does not have such a term. This text evolves mainly on the regenerative type, thus we refer to it simply as "chatter" in the rest of the document.

The underlying mechanism for regenerative chatter is quite simple to state. Starting with the early works [7–10], many researchers very cleverly addressed the issues of physical modeling, structural reasoning, the dynamic progression, and stability limit aspects of this seemingly straightforward and very common behavior. Most commonly, chatter research has focused on the conventional STM; some interesting readings on this are found in [6,10–12]. In order to prevent the onset of chatter, one has to select the operational parameters appropriately, namely, the chip loads and spindle speeds. This process can be further optimized by determining the best combination of the chip loads and spindle speeds under the constraint of chatter instability. The aim, again, is to increase the metal removal rate while avoiding the onset of chatter [13–15]. A natural progressive trend is to increase the productivity through SM; similar problems result in a much more complex form for SM, as treated in the later sections.

7.1.2  Basics of Single-Tool Machining

Let us review the basics of STM chatter dynamics for the clarity of the argument, borrowing some parts from [6]. Consider an orthogonal turning process for the sake of example (Figure 7.1). The desired (and nominal) chip thickness, ho, is considered constant.

FIGURE 7.1  Orthogonal turning process. (From Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.)

The workpiece and its rotational axis are considered to be rigidly fixed, and the only tool flexibility is taken in the radial direction, y. The process-generated cutting force, F, is realistically assumed to be proportional to the dynamic chip thickness, h(t), which is disturbed by the undulating offset chip thickness. It carries the signature of y(t) − y(t − τ), where y(t) is the fluctuating part of the chip thickness at time t (the so-called offset chip thickness from the nominal value ho), and τ [s] is the period of successive passages of the tool, which is equal to 60/N, where N is the spindle speed (rpm). The tool actually cuts the chip from the surface, which is created during the previous pass; what is uncut τ s earlier returns as additional chip thickness that the cutting tool has to remove. These undulations create driving forces for the y dynamics τ s later, during the next passage. Therefore, the tool–workpiece interface experiences different amplitudes of excitation forces, and thus the attribute "regenerative."

Figure 7.2 gives a classical causality representation of the dynamics for this orthogonal turning operation. For the sake of streamlining the presentation, a single-degree-of-freedom cutting dynamics is taken into account instead of higher degree-of-freedom and more complex models. In Figure 7.2, the following causal relations are incorporated:

  the cutting force: F(t) = C b h(t)                        (7.1)

  actual chip thickness: h(t) = ho − y(t) + y(t − τ)        (7.2)

FIGURE 7.2  Functional block diagram of chatter regeneration. (From Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.)

Taking G(s) as the transfer function between the cutting force F and y, the governing dynamics becomes a linear delayed differential equation, for which the characteristic equation is a quasi-polynomial.
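The regenerative loop of Figure 7.2 with the causal relations (7.1) and (7.2) can be simulated directly as a delay differential equation. The sketch below assumes a single-mode tool structure G(s) = 1/(ms² + cs + k); all numerical values (m, c, k, C, ho, τ) are illustrative assumptions, not taken from this chapter. A small chip width b settles to a steady cut, while an aggressive b exhibits the growing oscillation characteristic of regenerative chatter.

```python
import numpy as np

def simulate_turning(b, tau, T=1.0, dt=1e-5,
                     m=1.0, c=60.0, k=4.0e4, C=1.0e6, h0=1e-4):
    """Forward-Euler simulation of the regenerative loop of Figure 7.2:
    m*y'' + c*y' + k*y = F(t),  F = C*b*h,  h = h0 - y(t) + y(t - tau)."""
    n = int(round(T / dt))
    d = int(round(tau / dt))            # delay expressed in time steps
    y = np.zeros(n)                     # tool deflection y(t)
    v = 0.0                             # tool velocity
    for i in range(1, n):
        y_delayed = y[i - 1 - d] if i - 1 >= d else 0.0
        h = h0 - y[i - 1] + y_delayed   # actual chip thickness, Eq. (7.2)
        F = C * b * h                   # cutting force, Eq. (7.1)
        a = (F - c * v - k * y[i - 1]) / m
        v += a * dt
        y[i] = y[i - 1] + v * dt
    return y

# chip width far below the lobe minimum -> stable cut; large chip width -> chatter
stable  = simulate_turning(b=0.001, tau=0.05)
chatter = simulate_turning(b=0.07,  tau=0.05)
```

Here τ = 0.05 s corresponds to N = 1200 rpm. The stable run converges to the static deflection y = bCho/k, whereas the aggressive chip width grows without bound in this linear model, illustrating why the chip width is the natural vertical axis of the stability lobes.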

7.1.3  Single-Tool Machining and Stability

Assuming that the force–displacement relationship is linear, the entire cutting mechanism described by Figure 7.1 becomes linear as well. Cutting is at equilibrium when y = 0, which means an ideal cut with no waviness; that is, the tool support structure (i.e., stiffness and damping characteristics) yields a static deflection throughout the cutting operation, and the cutting force F remains constant. This equilibrium is called "stable" or "asymptotically stable" if the loop characteristic equation of the block diagram in Figure 7.2,

  1 + (1 − e^(−τs)) b C G(s) = 0        (7.3)

has all its roots on the stable left half-plane [16,17]. Here, b is the chip width, which is user specified and assumed constant; C is the cutting force constant; and τ [s] corresponds to the period of one spindle revolution, which is the inverse of the spindle speed (60/N) and which is also user selected. This equation is transcendental, and it possesses infinitely many finite characteristic roots, all of which have to be taken into account for stability. The open-loop transfer function G(s) typically manifests high-impedance, damped, and stable dynamic behavior.

There are two cutting conditions under the control of the machinist: the chip width, b, and the spindle speed, N (rpm), that is, τ = 60/N. The other parameters, C and G(s), represent the existing cutting characteristics, which are considered to remain unchanged. It is clear that the selection of b and τ influences the stability of the system, and the complete stability map of this system in the b and τ domain is given by the well-known "chatter stability lobes" of Figure 7.3 (using the parameters from Ref. [34]). Certain selections of b and τ (= 60/N) can introduce marginal stability to the system on the boundaries separating stability from instability in Figure 7.3. At the operating points on these boundaries, the characteristic equation (7.3) possesses a pair of imaginary roots ±ωi as dominant roots. For instance, if we increase the metal removal rate (by increasing the chip width b while keeping the spindle speed and the related delay τ constant), aggressive cutting ultimately invites chatter instability, which is obviously not desirable. This transition corresponds to marching along a vertical line in Figure 7.3.

There are several such lobes marked on this figure. They represent the (b, τ) plane mapping of the dynamics with dominant characteristic root at Re(s) = −a; curves are shown for a = 0, 5, 10, and 15. When a = 0 (corresponding to the thick line), these curves show the well-known stability lobes that form the stability constraints for any process optimization. As such, at these marginal operating points, the

FIGURE 7.3  Typical stability lobes and constant stability margin lobes. (From Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.)
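The a = 0 boundary of (7.3) can be traced in closed form. At marginal stability s = jω, writing G(jω) = G_R + jG_I, the magnitude condition on e^(−jωτ) = 1 + 1/(bCG(jω)) yields b = −1/(2C·G_R) — so lobes exist only where Re G(jω) < 0 — and the phase condition returns one τ per lobe number. The sketch below uses an assumed single-mode G(s) = 1/(ms² + cs + k), not the Ref. [34] parameters of Figure 7.3:

```python
import numpy as np

m, c, k, C = 1.0, 60.0, 4.0e4, 1.0e6    # assumed structural/cutting parameters

def stability_lobes(num_lobes=5, n_pts=2000):
    """Marginal (b, tau) pairs of 1 + (1 - e^{-tau*s}) * b*C*G(s) = 0 on s = j*w."""
    wn = np.sqrt(k / m)
    w = np.linspace(1.001 * wn, 5.0 * wn, n_pts)   # Re G < 0 only above resonance
    G = 1.0 / (k - m * w**2 + 1j * c * w)
    b = -1.0 / (2.0 * C * G.real)                  # marginal chip width (b > 0 here)
    psi = np.angle(1.0 + 1.0 / (b * C * G))        # phase of e^{-j*w*tau}
    lobes = []
    for j in range(1, num_lobes + 1):              # one branch per lobe number
        tau = (2.0 * np.pi * j - psi) / w          # spindle period on lobe j
        lobes.append((w, b, tau, 60.0 / tau))      # (chatter freq, b, delay, rpm)
    return lobes

lobes = stability_lobes()
```

Plotting b versus 60/τ for each branch reproduces the lobe pattern of Figure 7.3; the minima of all branches sit at b_min = −1/(2C·min Re G(jω)), the worst-case chip width.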

complete system is resonant at ω, also known as the "chatter frequency"; the entire structure mimics a spring–mass resonator (a conservative system) at the respective frequency of marginal stability. One should notice that in (7.3) the transcendental term appears with only a single delay τ and no integer multiples of it (i.e., 2τ, 3τ, etc.). In mathematical terms, such systems are known as "single delay with no commensurate conditions." Even for much more complicated dynamics than what is shown in Figure 7.1 (e.g., including higher order modes of the tool holder), the characteristic equation still has no commensurate terms. For this class of systems, the determination of the stability boundaries (lobes) is relatively simple, and the complete stability maps of single-delay problems represented by (7.3) are obtainable (e.g., a simple procedure is given in [34]).

It is logical to select the cutting parameters (b, N) sufficiently away from the chatter stability bounds. Conventional terminology alluding to this feature is the "stability margin." It refers to a = −Re (dominant characteristic root). The bigger the value of a, the better the surface quality, the higher the "chatter rejection speed," or both. In this sense, Figure 7.3 represents a unique declaration of what is known as "equal stability margin" lobes. A set of operating points is shown where a = 5, 10, and 15, just to give an idea of the distribution of the loci where the chatter rejection speed is constant. The determination of these curves also offers a mathematically challenging problem, which can be found in Ref. [38]. The desirable operating point should lie in the shaded region, marked as "stable." Optimum working conditions are reached by increasing b and N up to the physical limitations, provided that a desirable stability margin (i.e., a) is guaranteed. In this text, we will treat a set of more challenging problems as the following sections are formed.

7.2  Regenerative Chatter in Simultaneous Machining

The functional block diagram in Figure 7.2 expands in dimension by the number of tools involved in the case of SM. The crucial difference between the STM and SM is the coupling among the individual tool–workpiece interfaces, either through the flexible workpiece (as in Figure 7.4a) or the machine tool compliance characteristics (as in Figure 7.4b). We assume these tool-to-tool interfaces are governed under linear relations. Consequently, the chip load at tool i (i = 1,…, n), which carries the signature of yi(t) − yi(t − τi), will influence the dynamics at tool j, which implies that the state of the ith tool one revolution earlier (i.e., τi s) affects the present dynamics of the jth tool. This is the cross-regenerative effect. The dynamics of tool j will thus reflect the combined regenerative effects from all the tools, including itself, as depicted in Figure 7.4a and b for two example operations. The flexures are shown at one point and in a 1D sense to symbolize the most complex form of the restoring forces in 3D. Spindle speeds and their directions are also selected symbolically, just to describe the types of operations we consider here.

FIGURE 7.4  Conceptual depiction of simultaneous face milling: (a) workpiece coupled and (b) machine tool coupled. (From Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.)
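The cross-regenerative coupling of Figure 7.4 can be mimicked with a minimal two-tool model in which an off-diagonal stiffness term stands in for the workpiece (or machine-tool) compliance path, and each tool contributes its own delay τi. All numbers below are illustrative assumptions; the point is only the structure — two independent delays, with each deflection driven by both interfaces through the compliance matrix:

```python
import numpy as np

def simulate_two_tools(b=(1e-3, 1e-3), tau=(4e-3, 5e-3), T=0.5, dt=1e-5,
                       C=1.0e6, h0=1e-4, kc=5.0e3):
    """Euler simulation of M*Y'' + Cd*Y' + K*Y = F with regenerative chip loads
    F_i = C*b_i*(h0 - y_i(t) + y_i(t - tau_i)); kc is the coupling stiffness."""
    M  = np.diag([1.0, 1.0])
    Cd = np.diag([60.0, 60.0])
    K  = np.array([[4.0e4, -kc], [-kc, 4.0e4]])    # off-diagonal: cross coupling
    n = int(round(T / dt))
    d = [int(round(t_ / dt)) for t_ in tau]        # the two delays, in steps
    Y = np.zeros((n, 2))
    V = np.zeros(2)
    Minv = np.linalg.inv(M)
    for i in range(1, n):
        F = np.empty(2)
        for j in range(2):
            y_del = Y[i - 1 - d[j], j] if i - 1 >= d[j] else 0.0
            F[j] = C * b[j] * (h0 - Y[i - 1, j] + y_del)   # chip load at tool j
        A = Minv @ (F - Cd @ V - K @ Y[i - 1])
        V = V + A * dt
        Y[i] = Y[i - 1] + V * dt
    return Y

Y = simulate_two_tools()
# equilibrium deflections: K * Y_ss = [C*b_i*h0]
Y_ss = np.linalg.solve(np.array([[4.0e4, -5.0e3], [-5.0e3, 4.0e4]]),
                       np.array([1e-3 * 1.0e6 * 1e-4] * 2))
```

With the small chip widths chosen here the coupled cut settles to the static deflections Y_ss; raising b1 or b2 (or the coupling kc) reproduces numerically the cross-regenerative chatter that the multiple-delay analysis of the next sections treats analytically.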

We present next the multi-spindle cutting tool dynamics in a generic form (see Figure 7.5). H0 (n × 1) and H (n × 1) represent the commanded and actual depth-of-cut (d-o-c) vectors, the components of which denote the d-o-c at the n individual tools. Y (n × 1) is the vector of tool displacement fluctuations along the d-o-c directions, and F (n × 1) is the cutting force vector. A notational selection here is that bold capital fonts represent the vector or matrix forms of the lowercase elements. BCdiag = diag(bi Ci) = diag(b1 C1, b2 C2, …, bn Cn) represents the influence of bi (chip width) and Ci (the cutting force constants) vis-à-vis tool i, i = 1, …, n; these are the same terms as in the single-tool case, except in their multiple-tool versions. G(s) (n × n) is the dynamic influence transfer function, which entails all the auto- and cross-coupling effects between the cutting force vector F and the tool displacements Y. We are making the assumption that these relations are all linear, as is common in most fundamental chatter studies. The exponential diagonal matrix

e^(−τs)_diag = diag(e^(−τ1 s), e^(−τ2 s), …, e^(−τn s))

represents the delay effects (i.e., the regenerative elements), where the delays are τi (= 60/Ni) and the Ni are the spindle speeds. Notice that the relation between τj and Nj should go through the number of flutes in the case of milling; this nuance introduces only a scale factor along the delays, and it is overlooked at this stage.

FIGURE 7.5  Block diagram representing the simultaneous machining. (From Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.)

It is known that this dynamics also carries the time-varying and periodic nature of the cutting force directions; in milling, for instance, G(s) becomes a periodically time-variant matrix, as was pointed out in the literature [33]. The past investigations suggest the use of the fundamental term of the Fourier expansion in such cases. In this text, to avoid the mathematical complexity and to extract the underlying regenerative characteristics, we take the fundamental elements of these forces in their Fourier expansions, and the dynamics reduces to linear time-invariant form. It is clear that G(s) may appear in much more complicated forms; this point is revisited in Case Study II.

7.2.1  Simultaneous Machining and Multiple Delays

The stability of linear time-invariant multiple-delay systems is poorly known in the mathematics community, mainly because the stability problem is NP-hard with respect to the increased number of delays. Consequently, there is no simple extension of the conventional STM stability treatment to the multiple-spindle and, therefore, multiple-delay cases of SM. The specific objective of this chapter is to develop a new mathematical framework to address the SM stability problem.

7.2.2  Stability of Simultaneous Machining

The characteristic equation of the loop in Figure 7.5 is

CE(s, B) = det[I + G(s) BCdiag (I − e^(−τs)_diag)] = 0,  (7.4)

which is representative of a dynamics with multiple time delays (τ) and multiple parameters (B), as opposed to STM, where there is a single delay τ and a single parameter, b. The overall dynamics becomes a truly cross-coupled linear multiple time-delay system.
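The bookkeeping of (7.4) can be probed numerically. The sketch below evaluates CE(s) = det[I + G(s) BCdiag (I − e^(−τs)_diag)] for n = 2 at a trial point; the oscillator entries of G(s), the delay values, and the bi Ci products are hypothetical placeholders for illustration only, not values taken from this chapter.

```python
import numpy as np

def ce(s, tau, bc, G):
    """Evaluate CE(s) = det[I + G(s) BC_diag (I - e_diag^(-tau*s))], Eq. (7.4)."""
    n = len(tau)
    E = np.diag(np.exp(-np.asarray(tau) * s))   # delay matrix e_diag^(-tau*s)
    BC = np.diag(bc)                            # BC_diag = diag(b_i * C_i)
    return np.linalg.det(np.eye(n) + G(s) @ BC @ (np.eye(n) - E))

def G(s):
    # Hypothetical 2x2 dynamic influence matrix: two lightly damped
    # oscillators with a weak cross-coupling term (illustrative values only).
    g1 = 1.0 / (s**2 + 40.0 * s + 4.0e5)
    g2 = 1.0 / (s**2 + 60.0 * s + 6.0e5)
    g12 = 0.1 * g1 * g2
    return np.array([[g1, g12], [g12, g2]])

# With zero chip widths the regenerative loop vanishes and CE = det(I) = 1:
print(ce(500j, [0.010, 0.012], [0.0, 0.0], G))      # (1+0j)
# With nonzero b_i*C_i the determinant is generally nonzero; a zero at some
# s = i*omega would flag an imaginary characteristic root (stability switch):
print(abs(ce(500j, [0.010, 0.012], [3.4e6, 3.4e6], G)))
```

In a stability study, one would sweep s = iω and the delays, looking for the zero crossings of this determinant.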

There exists no commonly accepted methodology at present to study the stability of such systems. For the exceptional case where G(s) is also diagonal, the system decouples all the delay effects, so that the problem reduces to n independent STM chatter problems. Otherwise, the cross-talk terms appear in Equation 7.4: the most general form of CE(s, B) contains terms like e^(−(α1 τ1 + … + αn τn) s), with αj = 0 or 1, representing the cross talk among the delay terms (i.e., among the tools). For instance, an e^(−(τ1+τ3) s) term would indicate the cross-coupling between the regenerative effects of spindles 1 and 3. A critical point to note is that there is no possible commensurate delay formation here; that is, no terms appear with e^(−k τi s), k ≥ 2. Physically, this implies that the regenerative effect of tool j acts on itself only once. The delay terms τi (= 60/Ni) are rationally independent of each other.

One can find substantial literature on the stability of systems with only one single time delay [18–22,28–30]. A recent paradigm by the authors, called the "cluster treatment of characteristic roots" (CTCR), enables the determination of all the stable regions of τ completely (including the first stability interval of 0 ≤ τ < τmax) and yields the complete stability picture for single-delay systems [23–27]. The published studies, in contrast, address only the question of the "stability margin" in time delay: they consider all the parameters (B) to be fixed and claim that delay values higher than a certain τmax would invite instability, and these methods typically stop once τmax is computed. This imposes a very serious restriction. Multiple time-delay systems (MTDS) are, however, significantly more complex, as is known in the mathematics literature [18,38], even for commensurate cases. Very few mathematical investigations reported on the subject also declare strict limitations and restrictions to their pursuits [18,28–32]. Reference [28] handles the problem using simultaneous nonlinear equation solvers but is only able to treat systems in the form a + b e^(−τ1 s) + c e^(−τ2 s) = 0, with a, b, c being constants and without a damping term, and that approach is not expandable to damped cases. References [29,30] tackle a limited subclass of (7.4) with det[G(s)] of degree 2; furthermore, their procedures are performed point-by-point in the (τ1, τ2) space and are numerical, as opposed to the analytical nature of CTCR, so the application is computationally overwhelming. We wish to clarify that many investigations performing time-domain simulations or iterative numerical determinations of stability boundaries are not included in this comparison simply because they are not considered comparable approaches.

STM dynamics is thus a simpler subclass of what is considered here. We study in this text the stability of (7.4) under the conceptual framework of the CTCR [23–26]. This is a new approach in mathematics, and it is uniquely suitable for SM; it is the focal highlight of this chapter. We wish to recover the stability portrait (referred to as the "lobes") in the four-dimensional (τ1, τ2, b1, b2) space. The delays and their parameterized form (i.e., the varying B matrix) bring another order of magnitude to the difficulty. We present the description of the CTCR methodology in the next section.

7.3  CTCR Methodology

In order to avoid notational complexity, and without loss of generality, we use a reduced system with n = 2. The most general form of (7.4) representing the two-tool SM regenerative chatter dynamics becomes

CE(s, τ1, τ2, b1, b2) = a0 + a1 e^(−τ1 s) + a2 e^(−τ2 s) + a3 e^(−(τ1+τ2) s) = 0,  (7.5)

where aj(s, b1, b2), j = 0, 1, 2, 3, are polynomials in s with coefficients parameterized in b1 and b2. Equation 7.5 is a quasi-polynomial in s with multiple time delays. The highest degree of s in (7.5) resides within a0(s), and it has no time delay accompanying it; the dynamics at hand is therefore a Retarded Multiple Time-Delay System. For a stable operation, its infinitely many characteristic roots should all be on the left half of the complex plane: Equation 7.5 is asymptotically stable if and only if all the roots of the transcendental characteristic equation are on the left half of the complex s plane. Tracking infinitely many such roots for the stability condition is obviously a prohibitively difficult task.

7.3.1  Characteristics of Multiple-Dimensional Stability Maps

The counterpart of the stability lobes given in Figure 7.3 lies in the four-dimensional parameter space of (τ1, τ2, b1, b2). The transition from single to multiple time delays is not trivial, even when the parameters (b1, b2) are fixed. A nontrivial transition of the CTCR from single delay to multiple delays, however, leads to an exhaustive stability analysis tool in the space of the time delays (τ1, τ2). We start with the simpler stability problem in which B = (b1, b2)^T is fixed.

Although the dynamics represented by (7.5) possesses infinitely many characteristic roots, the most critical ones are those that are purely imaginary: any stability switching (from stable to unstable or vice versa) takes place when the parameters cause such purely imaginary roots. These imaginary roots display some very interesting constructs. Based on our preliminary work, we claim two novel propositions to summarize these peculiar features. The two propositions form the foundations of the CTCR paradigm. In the interest of streamlining the discussion, we provide them without proof and recommend [32] to the interested reader for the theoretical development.

Proposition 7.1 (kernel and offspring): If there is an imaginary root at s = ∓ωc i (subscript "c" is for crossing) for a given set of time delays {τ0} = (τ10, τ20), the same imaginary root will appear at all the countably infinite grid points

{τ} = (τ10 + j 2π/ωc, τ20 + k 2π/ωc),  j = 1, 2, …,  k = 1, 2, …, ωc ∈ ℜ+, τ1, τ2 ∈ ℜ+.  (7.6)

Notice that for a fixed ωc, the distinct points in (7.6) generate a grid in {τ} ∈ ℜ2+ space with equidistant grid size in both dimensions. When ωc is varied continuously, the respective grid points also display a continuous variation, and they ultimately form the hypersurfaces ℘(τ1, τ2)ωc, which are simply "curves" in two dimensions. Related to this proposition, we present two explanatory remarks, which are also critical to the CTCR framework:

Remark 7.1. Equation 7.5 can have an imaginary root only along a countably infinite number of hypersurfaces ℘(τ1, τ2), along which all the imaginary roots s = ωc i of (7.5) are found. Therefore, if there is any stability switching (i.e., from stability to instability or vice versa), it takes place at a point on ℘(τ1, τ2), and we need all possible ℘(τ1, τ2) curves, exhaustively.

Remark 7.2. All the hypersurfaces in ℘(τ1, τ2) are indeed offspring of a manageably small number of hypersurfaces, which we call the "kernel hypersurfaces," ℘0(τ1, τ2), obtained for j = k = 0 and for all possible ωc's. Instead of generating all the grid points and studying their variational properties, one can search only for this critical building block, "the kernel." The kernel can alternatively be defined by the points with minimal (τ1, τ2) ∈ ℜ2+ over all possible ωc's. The description of the kernel has to be exhaustive: to declare stability, we have to determine the kernel ℘0(τ1, τ2) and the representative ωc's completely. We present later some example cases to demonstrate the strengths and the uniqueness of CTCR, and we revisit a mathematical tool for this operation from [24–26,32]. The determination of the kernel and its offspring
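Equation 7.6 is simple to implement; the sketch below builds the offspring grid carried by one imaginary root and confirms the equidistant 2π/ωc spacing. The kernel point (0.004, 0.007) and crossing frequency ωc = 700 rad/s are hypothetical, chosen only for illustration.

```python
import numpy as np

def offspring_grid(tau10, tau20, wc, jmax, kmax):
    """Grid points of Eq. (7.6) that all carry the same imaginary root i*wc:
    (tau10 + j*2*pi/wc, tau20 + k*2*pi/wc), j = 0..jmax, k = 0..kmax."""
    period = 2.0 * np.pi / wc
    return [(tau10 + j * period, tau20 + k * period)
            for j in range(jmax + 1) for k in range(kmax + 1)]

pts = offspring_grid(0.004, 0.007, wc=700.0, jmax=2, kmax=2)
print(len(pts))                  # 9 grid points
print(pts[1][1] - pts[0][1])     # spacing 2*pi/700 ≈ 0.008976 s in tau2
```

Sweeping ωc continuously sweeps these grid points continuously, tracing out the curves ℘(τ1, τ2) described above.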



is a mathematically very challenging problem. For this, we deploy a unique transformation called the Rekasius substitution [22,32]:

e^(−τi s) = (1 − Ti s)/(1 + Ti s),  Ti ∈ ℜ, i = 1, 2,  (7.7)

which holds identically for s = ωc i, ωc ∈ ℜ. This is an exact substitution for the exponential term, not an approximation, for s = ωc i, with the obvious mapping condition

τi = (2/ωc) [tan^(−1)(ωc Ti) + jπ],  j = 0, 1, ….  (7.8)
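That the substitution (7.7) is exact on the imaginary axis, and that every branch j of the mapping (7.8) returns the same exponential, can be confirmed in a few lines (the trial values of ω and T below are arbitrary):

```python
import numpy as np

w = 600.0     # a trial crossing frequency [rad/s]
T = -0.004    # a trial Rekasius parameter [s]
rhs = (1 - 1j * w * T) / (1 + 1j * w * T)    # right-hand side of (7.7)
for j in range(4):
    # mapping condition (7.8); every branch j gives the same value
    tau = (2.0 / w) * (np.arctan(w * T) + j * np.pi)
    print(j, abs(np.exp(-1j * w * tau) - rhs))   # ~1e-16 for every j
```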

Equation 7.8 describes an asymmetric mapping in which each Ti (distinct in general) is mapped into a countably infinite set of τi values, which are periodically distributed for a given ωc with periodicity 2π/ωc. Substitution of (7.7) into Equation 7.5 converts CE(s, τ1, τ2) into CE(s, T1, T2); notice the slight breach of notation, which drops the b1, b2 parameters from the arguments. We next create another equation by clearing the denominators:

CE(s, T1, T2) (1 + T1 s)(1 + T2 s) = Σ_{k=0..q} λk(T1, T2) s^k = 0,  (7.9)

where q is the degree of the resulting polynomial. Since the transcendental terms have all disappeared, this equation can now be studied much more efficiently. Notice that all the imaginary roots of CE(s, τ1, τ2) and CE(s, T1, T2) are identical, i.e., they coincide, while there is no enforced correspondence between the remaining roots of these equations. That is, considering the root topologies

Ω1 = { s : CE(s, τ1, τ2) = 0, (τ1, τ2) ∈ ℜ2+ },
Ω2 = { s : CE(s, T1, T2) = 0, (T1, T2) ∈ ℜ2 },

the imaginary elements of these two topologies are identical. In another notation, one can write Ω1 ∩ C0 ≡ Ω2 ∩ C0, where C0 represents the imaginary axis. It is clear that the exhaustive determination of the (T1, T2) ∈ ℜ2 loci (and the corresponding ωc's) from Equation 7.9 is a much easier task than the exhaustive evaluation of the same loci in (τ1, τ2) ∈ ℜ2+ from Equation 7.5. Once these loci in (T1, T2) are found, the corresponding kernel and offspring in (τ1, τ2) can be determined as per (7.8). We give a definition next, before stating the second proposition. The root sensitivity of each purely imaginary characteristic root crossing, ωc i, with respect to one of the time delays is defined as

S^s_τj |_(s = ωc i) = (ds/dτj)|_(s = ωc i) = −[(∂CE/∂τj) / (∂CE/∂s)]|_(s = ωc i),  i = √−1, j = 1, 2,

and the corresponding root tendency with respect to one of the delays is given as

Root tendency = RT^τj |_(s = ωc i) = sgn[Re(S^s_τj |_(s = ωc i))].

This property represents the direction of the characteristic root’s crossing when only one of the delays varies.


Proposition 7.2 [following 32]: Assume that there exists an imaginary root at iωc, which is caused by the delay composition (τ10, τ20) on the kernel, and consider its two-dimensional offspring

(τ1, τ2)ωc = { τ1j = τ10 + j 2π/ωc, τ2k = τ20 + k 2π/ωc;  j = 0, 1, 2, …,  k = 0, 1, 2, … }.

The root tendency at this point is invariant with respect to j (or k) when k (or j) is fixed. That is, regardless of which offspring (τ1j, τ2k) of the kernel set (τ10, τ20) causes the crossing, the root tendencies RT|_(s = ωc i) are the same for all j = 1, 2, … (k = 1, 2, …), respectively. In other words, the imaginary root always crosses either to the unstable right half-plane (RT = +1) or to the stable left half-plane (RT = −1) when one of the delays is kept fixed and the other one skips from one grid point to the next, regardless of the actual values of the time delays, as long as they are derived from the same kernel point (τ10, τ20).





Note: There are several other computational tools that can be used to determine the kernel hypercurves. The interested reader is directed to the following:

1. The Kronecker summation technique [39], which operates on an augmented characteristic equation that is independent of the delays.
2. The building block method [40], which considers a finite-dimensional space, the so-called spectral delay space, to simplify the process.
3. The frequency sweeping algorithm [41], which offers a practical numerical procedure to obtain the kernel hypersurfaces, using geometry.

The two propositions of the CTCR and the steps associated with the two remarks stated above ultimately generate a complete stability mapping in (τ1, τ2) space. We next present three case studies to show how CTCR applies to STM and SM chatter problems. The first demonstrates how one can deploy CTCR on the conventional STM chatter problem. The second analyzes an experimental study, which is detailed in [35]. The third presents how CTCR can be deployed for designing six-flute milling cutters.

7.4  Example Case Studies
7.4.1  Case Study 1: Application of CTCR for Single-Tool Machining
In order to gain a better understanding of the CTCR procedure, we first take the conventional chatter stability problem with a single cutter (n = 1). For this case, a regenerative dynamics with one single time delay appears, and the characteristic equation (7.5) reduces to

CE(s, τ, b) = a0(b, s) + a1(b, s) e^(−τs) = 0,  (7.14)

where the only delay is τ [s] = 60/N [RPM], N is the only spindle speed, and b is the chip width. There are numerous case studies in the literature on this problem. Just for demonstration purposes, we take the orthogonal turning example given in [34, Equation 7], which starts from the characteristic equation

1 + [bC cos β / (m s^2 + c s + k)] (1 − e^(−τs)) = 0,  (7.15)

where C, β, m, c, and k are the constants related to the cutting dynamics. This equation can be rewritten as

CE(s, τ, b) = m s^2 + c s + k + bC cos β (1 − e^(−τs)) = 0,  (7.16)





which is in the same form as (7.14). The parametric values are taken from practice as follows:

C = 2 × 10^9 N/m^2,  β = 70°,  m = 50 kg,  c = 2 × 10^3 kg/s,  k = 2 × 10^7 N/m.

Equation 7.16 with these numerical values (after dividing through by m) becomes

s^2 + 40 s + 400,000 + 13,680,805.73 b − 13,680,805.73 b e^(−τs) = 0.  (7.17)

We next search for the stability pockets in τ ∈ ℜ+ space for varying chip widths b ∈ ℜ+. This picture is conventionally known as the "stability lobes" for regenerative chatter. Unlike the multiple-delay cases, this problem with a single delay is solvable using a number of different procedures given in the literature [6,21,34,43]. Our aim here is to show step-by-step how the CTCR paradigm applies. Take b as a fixed parameter and use the Rekasius substitution (7.7) in (7.17) to obtain

CE(s, b, T) = (1 + Ts)(s^2 + 40 s + 400,000 + 13,680,805.73 b) − 13,680,805.73 b (1 − Ts)
 = T s^3 + (40 T + 1) s^2 + [40 + (400,000 + 27,361,611.46 b) T] s + 400,000
 = λ3(T, b) s^3 + λ2(T, b) s^2 + λ1(T, b) s + λ0 = 0,  (7.18)

where λj(T, b), j = 0, …, 3, are self-evident expressions. We search for the values of T that create s = ωi as a root of (7.18). This is best handled using Routh's array, Table 7.1. The standard rules of Routh's array dictate that the only term in the s^1 row must vanish for Equation 7.18 to possess a pair of imaginary roots, i.e.,

λ2 λ1 − λ3 λ0 = 0,  λ2 ≠ 0.  (7.19)

Equation 7.19 yields a quadratic equation in T for a given chip width b, which results in at most two real roots for T. If these roots are real (call them T1 and T2), we proceed further. For those real T values, the imaginary characteristic roots of (7.18) will be

s_j = ω_j i,  where ω_j = √(λ0/λ2) = √(λ1/λ3),  with T = T_j, j = 1, 2,  (7.20)

if and only if λ2 λ0 > 0.

Table 7.1  The Routh's Array (b and T Arguments Are Suppressed)

s^3 | λ3   λ1
s^2 | λ2   λ0
s^1 | (λ2 λ1 − λ0 λ3)/λ2
s^0 | λ0

These are the only two imaginary roots (representing the two chatter frequencies) that can exist for a given b value. Obviously, if T1 and T2 are complex conjugate numbers for a value of b, there is no possible imaginary root of (7.18) regardless of τ ∈ ℜ+; in other words, this d-o-c causes no stability switching for any spindle speed N > 0. Using Equation 7.8, we determine the τ values corresponding to the (T1, ω1) and (T2, ω2) pairs. There are infinitely many such τ1j and τ2j, j = 0, 1, 2, …, respectively. As per the definition of the kernel following (7.6), the smallest values of these τ1 and τ2 form the two-point kernel for this case. For instance, when b = 0.005 m, these values are found as

T1 = −0.006142,  ω1 = 728.21 rad/s,  τ1 = 0.0049 s;
T2 = −0.0003032,  ω2 = 636.33 rad/s,  τ2 = 0.0093 s.  (7.21)
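The numbers in (7.21) can be reproduced with a short NumPy sketch that follows exactly the steps (7.18) through (7.20), using the parameter values listed above:

```python
import numpy as np

b = 0.005                                  # chip width [m]
K = 2e9 * np.cos(np.deg2rad(70)) / 50.0    # C*cos(beta)/m = 13,680,805.73

# Routh condition (7.19), lambda2*lambda1 - lambda3*lambda0 = 0, written out
# as a quadratic in T: (40T + 1)(40 + (400000 + 2Kb)T) - 400000*T = 0
c2 = 40.0 * (400_000 + 2 * K * b)
c1 = 1600.0 + (400_000 + 2 * K * b) - 400_000
c0 = 40.0
T1, T2 = np.sort(np.roots([c2, c1, c0]).real)   # both roots are real here

lam2 = lambda T: 40.0 * T + 1.0
omega = lambda T: np.sqrt(400_000 / lam2(T))    # (7.20): omega^2 = lambda0/lambda2
# smallest positive tau from the mapping (7.8), branch j = 1:
tau = lambda T: (2.0 / omega(T)) * (np.arctan(omega(T) * T) + np.pi)

print(T1, omega(T1), tau(T1))   # ≈ -0.006142, 728.2 rad/s, 0.0049 s
print(T2, omega(T2), tau(T2))   # ≈ -0.000303, 636.3 rad/s, 0.0093 s
```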

The first clustering is already achieved. For b = 0.005, the kernel consists of two discrete points (as opposed to a curve in two-delay cases, such as in Case II below) τ1 = 0.0049 s and τ2 = 0.0093 s, for which the characteristic equation


has two imaginary roots. These are the only two imaginary roots that any τ ∈ ℜ+ can ever produce for the given b = 0.005. Each τ1 (or τ2) resulting in ω1 (or ω2) also describes a set of countably infinite delays, called the "offspring" of the original τ1 (or τ2). They are given by the single-delay version of Equation 7.8 as

τ1j = τ1 + (2π/ω1) j, with cluster identifier ω1 = 728.21 rad/s, j = 1, 2, …,
τ2j = τ2 + (2π/ω2) j, with cluster identifier ω2 = 636.33 rad/s, j = 1, 2, ….

The second clustering feature (Proposition 7.2) is slightly more subtle. The root tendencies associated with the transitions of τ are

τ1j − ε → τ1j → τ1j + ε: all RT = +1 (i.e., destabilizing),
τ2j − ε → τ2j → τ2j + ε: all RT = −1 (i.e., stabilizing).

Let us examine this invariance property for the {τ1j} cluster, every element of which renders the same s = ω1 i characteristic root. The differential form of (7.16) is

dCE(s, τ, b) = (∂CE/∂s) ds + (∂CE/∂τ) dτ = (2ms + c + bC cos β τ e^(−τs)) ds + bC cos β s e^(−τs) dτ = 0,

which results in

ds/dτ = − (bC cos β s e^(−τs)) / (2ms + c + bC cos β τ e^(−τs)).  (7.23)

It is easy to show two features in (7.23): (1) e^(−τs) remains unchanged for s = ω1 i and τ = τ1j, j = 0, 1, 2, …, and (2) RT1 = sgn[Re(ds/dτ)]|_(s = ω1 i) is independent of τ despite the varying τ term in the denominator, and this root tendency is +1 for all τ1j, j = 0, 1, 2, …. The invariance feature for the (τ2, ω2) pair can be obtained in a similar way; it turns out to be stabilizing, i.e., RT2 = −1. This demonstrates the second clustering feature as per Proposition 7.2. We can now declare the stability posture of the system for a given b (say 0.005 m), and the deployment of CTCR is completed for this d-o-c (Figure 7.6). Notice the invariant RT from the points C1, C1′, C1″, … (all destabilizing) and C2, C2′, C2″, … (all stabilizing) as τ increases. As a consequence, the number of unstable roots, NU, can be declared in each region very easily, as sparingly shown on the figure. Obviously, the cutting is stable when NU = 0; the stability switchings occur as 0 < τ < τ1 stable, τ1 < τ < τ2 unstable, τ2 < τ < τ1 + 2π/ω1 stable, etc. We mark the stable intervals with a thicker line style for ease of recognition. If we sweep b ∈ [0 … 30 mm], we obtain the complete stability chart of Figure 7.6. Stable cutting appears below the dark curve (also known as the chatter bound). The conventional chatter stability lobes (as the machine tools community calls them) in the rpm domain can readily be obtained using the τ = 60/N coordinate conversion.
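The invariance can also be confirmed numerically from (7.23): evaluating ds/dτ at s = ωi for several offspring of each kernel point returns the same sign every time. The sketch below uses the chapter's own parameter values, with the kernel delays in their rounded form.

```python
import numpy as np

m, c = 50.0, 2e3                       # modal mass [kg] and damping [kg/s]
K = 2e9 * np.cos(np.deg2rad(70))       # C*cos(beta) [N/m]
b = 0.005                              # chip width [m]

def root_tendency(w, tau0, j):
    """RT = sgn Re(ds/dtau) from (7.23), at the jth offspring of (tau0, w)."""
    tau = tau0 + 2.0 * np.pi * j / w   # offspring delay; e^(-tau*s) is unchanged
    s = 1j * w
    e = np.exp(-tau * s)
    ds_dtau = -(b * K * s * e) / (2 * m * s + c + b * K * tau * e)
    return np.sign(ds_dtau.real)

print([root_tendency(728.21, 0.0049, j) for j in range(4)])  # all +1 (destabilizing)
print([root_tendency(636.33, 0.0093, j) for j in range(4)])  # all -1 (stabilizing)
```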

FIGURE 7.6  STM chatter stability chart in delay domain. (From Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.)

7.4.2  Case Study 2: Deployment of CTCR on Variable-Pitch Milling Cutters
We report the following experimental validation of CTCR on an equally interesting high-tech machining process: milling with variable-pitch cutters, following the scientific data provided in Refs. [33,35–37]. The practice of variable-pitch cutters originates from the desire to attenuate regenerative chatter. Instead of four equidistant flutes located around the cutting tool (4 × 90°, as described in Figure 7.7a), a variable pitch is used (θ1, θ2, as shown in Figure 7.7b). Unlike conventional uniform-pitch milling (Figure 7.7a), this cutter distributes the cutting energy to different frequencies and therefore suppresses the onset of undesirable chatter behavior until very high rates of metal removal are attempted. This is a very advantageous property from the standpoint of increased efficiency. Many interesting variations of this clever idea have been developed and put to practice over the last 40 years. Due to the nonuniform pitch distribution, multiple regenerative effects appear in the governing equation, with multiple time delays that are proportional to the respective pitch angles. The main problem reduces to the stability analysis of a multiple-delay system as in (7.5). We adopt the core problem from Ref. [33] as follows: A four-fluted uniform-pitch cutter is used in milling an Al356 alloy workpiece. The cutter has 19.05 mm diameter, 30° helix, and 10° rake angles. The stability chart indicates that this milling process is unstable for axial d-o-c a = 5 mm and spindle speed N = 5000 rpm. A natural question follows: Which pitch angles should be selected for the best chatter stability

FIGURE 7.7  Cross section of the end mill: (a) uniform pitch and (b) variable pitch.


margins when variable-pitch milling is considered? Reference [34] further develops this case study exactly, except that it frees the pitch ratio, thereby introducing a truly multiple time-delayed dynamics; in other words, the selection of the two "pitch angles" is completely relaxed. The steps of this exercise and the numerical results are explained next. The system characteristic equation is taken as [33, Equation 15]:

det[ I − (Kt a / 4π) (4 − 2 (e^(−τ1 s) + e^(−τ2 s))) Φ0(s) ] = 0,  (7.24)

with Kt = 697 MPa, a = axial d-o-c [m], τ1 [s] = (θ1[deg]/360)(60/N[RPM]) the delay occurring due to the pitch angle θ1, τ2 = (θ2/θ1) τ1 the second delay due to θ2, N the spindle speed, and s the Laplace variable. The matrix Φ0, containing the transfer functions and the mean cutting directions, is defined as

Φ0 = [ φxx αxx + φyx αxy    φxy αxx + φyy αxy
       φxx αyx + φyx αyy    φxy αyx + φyy αyy ],  (7.25)

where

αxx = 0.423,  αxy = −1.203,  αyx = 1.937,  αyy = −1.576

are the fundamental Fourier components of the periodically varying directional coefficient matrix. The transfer functions in (7.25) are populated from [33, Table 1] as

φxx = 0.08989/(s^2 + 159.4 s + 0.77 × 10^7) + 0.6673/(s^2 + 395.2 s + 0.1254 × 10^8) + 0.07655/(s^2 + 577.2 s + 0.2393 × 10^8),
φxy = φyx = 0,
φyy = 0.834/(s^2 + 162.2 s + 0.1052 × 10^8).

Including these expressions and parameters in the characteristic equation and expanding it into a scalar expression, we reach the starting point of the CTCR paradigm. For a = 4 mm d-o-c, the characteristic equation of this dynamics is

CE(s, τ1, τ2) = 625 s^8 + 808,750 s^7 + 35,068.83 × 10^6 s^6 + 31,376.027 × 10^9 s^5 + 0.686 × 10^18 s^4 + 0.385 × 10^21 s^3 + 0.565 × 10^25 s^2 + 0.150 × 10^28 s + 0.166 × 10^32
+ (−0.266 × 10^9 s^6 − 0.324 × 10^12 s^5 − 0.127 × 10^17 s^4 − 0.924 × 10^19 s^3 − 0.18 × 10^24 s^2 − 0.572 × 10^26 s − 0.756 × 10^30)(e^(−τ1 s) + e^(−τ2 s))
+ (284,943.13 × 10^9 s^4 + 0.212 × 10^18 s^3 + 0.889 × 10^22 s^2 + 0.253 × 10^25 s + 0.538 × 10^29) e^(−(τ1+τ2) s)
+ (14,247.16 × 10^10 s^4 + 0.106 × 10^18 s^3 + 0.445 × 10^22 s^2 + 0.126 × 10^25 s + 0.269 × 10^29)(e^(−2τ1 s) + e^(−2τ2 s)) = 0.  (7.26)

The parametric form of (7.26), that is, the CE(s, τ1, τ2, a) expression, is prohibitive to display due to space limitations; hence the substitution of a = 4 mm. Please note that all the numerical values above are given in truncated form to conserve space.



Notice the critical nuance between Equations 7.4 and 7.14. The former represents a truly two-spindle, two-cutter setting, while 7.14 is for a single-spindle turning process or a milling process with uniformly distributed flutes. The characteristic equation here becomes the two-time-delayed quasi-polynomial (7.26), which is, in fact, much more complex than 7.16 due to the commensurate delay formation (i.e., the e^(−2τ1 s), e^(−2τ2 s) terms are present). The stability assessment of this equation is a formidable task. We put CTCR to the test and present the results below. The CTCR takes over from Equation 7.26 and creates the complete stability outlook in (τ1, τ2) space, as in Figure 7.8a. The kernel is marked thicker to discriminate it from its offspring. The grids of D0 (kernel) and D1, D2, D3, … (offspring sets) are marked for ease of observation. Notice the equidistant grid size 2π/ω as per Equation 7.6: D0D1 = D1D2 = D2D3 = D3D4, etc. The (τ1, τ2) delays at all of these sibling points impart the same ωi imaginary root to (7.26). Figure 7.8b shows the possible chatter frequencies of this system for all (τ1, τ2) ∈ ℜ+ and for varying pitch ratios in (0, ∞), whether they are operationally feasible or not. It is clear that this system can exhibit only a restricted set of imaginary roots (from Figure 7.8b, 3250 rad/s [517 Hz] < ω < 3616 rad/s [575 Hz]). All of these chatter frequencies are created by the kernel, and no (τ1, τ2) point on the offspring can cause an additional chatter frequency outside the given set. Figure 7.8a shows the stable (shaded) and unstable regions in (τ1, τ2) space for a given axial d-o-c (a = 4 mm). It contains some further information as well: each point (τ1, τ2) in this figure represents a spindle speed, and considering the relation τ1 + τ2 = 30/N, all the constant spindle speed lines have slope −1, as annotated on the figure.
The constant pitch ratio lines pass through the origin, where the pitch ratio is τ2/τ1. Notice that the most desirable pitch ratios are close to 1 for effective chip removal purposes; very high or very low pitch ratios are not desirable, and 55° < θ1 < 90°, as declared in [33]. The pitch ratios τ2/τ1 ∈ [1.374, 2.618] offer stable operation (marked as points A and B on Figure 7.8a) for 5000 rpm. The corresponding pitch angles are (θ1A = 75.8°, θ2A = 104.2°) and (θ1B = 49.8°, θ2B = 130.2°). They coincide precisely with the results declared in [33, Figure 3]. Figure 7.8a provides a very powerful tool in the hands of the manufacturing engineer. One can select a uniform-pitch cutter and 7500 rpm speed (point O2) as opposed to a variable-pitch cutter (pitch ratio 11/7) and 5000 rpm (point O1), increasing the metal removal rate by 50%. This figure is for a constant d-o-c, a. A 3D stability plot can be produced by scanning the values of a in, for instance, the a = 1…6 mm, τ1 ∈ ℜ+, τ2 ∈ ℜ+ domain. A cross section of this 3D plot with constant-N planes is comparable with [33, Figure 3]. We next look at the chatter stability variations for two different settings: (a) uniform-pitch cutters (θ1 = θ2 = 90°), where the pitch ratio is unity, and (b) variable-pitch cutters (θ1 = 70°, θ2 = 110°), where the pitch ratio is 7/11, both with the Al356 workpiece. It is clear from Figure 7.8a that the system goes in and out of stable behavior as the spindle speed decreases (i.e., τ1 + τ2 increases). The user can select the optimum operating condition within the many available regions (such as between the points O1 and O2, as discussed earlier). This optimization procedure is presented in Section 7.5.
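The pitch-angle arithmetic quoted above is easy to verify: for a four-flute cutter with paired pitch angles, θ1 + θ2 = 180°, so the pitch ratio alone fixes both angles, and τ1 + τ2 = 30/N holds regardless of how the pitch is split. A small self-contained check:

```python
def pitch_angles(r):
    """Pitch angles [deg] of a four-flute cutter (theta1, theta2, theta1, theta2)
    for a pitch ratio r = theta2/theta1, using theta1 + theta2 = 180 deg."""
    theta1 = 180.0 / (1.0 + r)
    return theta1, 180.0 - theta1

def delays(theta1, theta2, n_rpm):
    """Tooth-passage delays [s]: tau_i = (theta_i/360)*(60/N)."""
    tau1 = (theta1 / 360.0) * (60.0 / n_rpm)
    return tau1, (theta2 / theta1) * tau1

# End points A and B of the stable pitch-ratio interval at 5000 rpm:
print(pitch_angles(1.374))    # ≈ (75.8, 104.2) deg
print(pitch_angles(2.618))    # ≈ (49.8, 130.2) deg
t1, t2 = delays(75.8, 104.2, 5000)
print(t1 + t2)                # 0.006 s = 30/5000, independent of the split
```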

7.4.3  Case Study 3: Six-Flute Milling Cutter Design
The core problem in this case study is stated as follows. We wish to design a six-flute milling cutter. Given a fixed tool diameter and tool holding structure, for a desired axial d-o-c, a, find which pitch angles and spindle speeds impart stable cutting. We refer to Figure 7.9 for the assumed distribution of the pitch angles θ1, θ1 + φ, θ1 + 2φ. The fixed increment φ separates the three pitch angles. The corresponding tooth passage times are given as τ1 = θ1/Ω , τ2 = τ1 + h, τ3 = τ1 + 2h, where h = φ/Ω with Ω [rad/s] being the spindle speed. The characteristic equation is taken from [36, Equation 17], where the preliminary machining parameters and dynamic characteristics are also described. For a 2 mm d-o-c, it is given by

FIGURE 7.8  (a) Stability regions for 4 mm axial d-o-c, with four-flute cutter milling on Al 356 (kernel, thick; offspring, thin; stable zones, shaded). (From Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.) (b) Chatter frequencies (exhaustive) [rad/s] vs. the kernel (τ1, τ2). (From Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.)




FIGURE 7.9  Cross section of variable-pitch six-flute end mill cutter.

CE(s, τ1, h) = 125 s^6 + 89,600 s^5 + 0.3947 × 10^10 s^4 + 0.1823 × 10^13 s^3 + 0.4065 × 10^17 s^2 + 0.9062 × 10^19 s + 0.1357 × 10^24
+ (−0.2757 × 10^8 s^4 − 0.1711 × 10^11 s^3 − 0.6117 × 10^15 s^2 − 0.1616 × 10^18 s − 0.3068 × 10^22) e^(−τ1 s)
+ (0.647 × 10^13 s^2 + 0.1212 × 10^16 s + 0.5353 × 10^20) e^(−2τ1 s)
+ (−27,569,203.11 s^4 − 0.1711 × 10^11 s^3 − 0.6117 × 10^15 s^2 − 0.1616 × 10^18 s − 0.3068 × 10^22)(e^(−(τ1+h) s) + e^(−(τ1+2h) s))
+ (0.1294 × 10^14 s^2 + 0.2425 × 10^16 s + 0.1071 × 10^21)(e^(−(2τ1+h) s) + e^(−(2τ1+3h) s))
+ (0.1941 × 10^14 s^2 + 0.3637 × 10^16 s + 0.1606 × 10^21) e^(−2(τ1+h) s)
+ (0.647 × 10^13 s^2 + 0.1212 × 10^16 s + 0.5353 × 10^20) e^(−(2τ1+4h) s) = 0.  (7.27)

One can see the fundamental similarity between (7.5) and (7.27). The main difficulties in resolving the stable regions arise on several fronts: (1) the rationally independent delays τ1 and h both appear in cross-talking forms (e.g., e^(−(τ1+h) s)); (2) their commensurate forms also appear (e.g., e^(−2τ1 s)); and (3) the coefficients of the equation are badly ill-conditioned, in that they exhibit a spread of about 25 orders of magnitude. The primary tool we utilize is, again, the stability assessment method CTCR. It results in the stability tableaus shown in Figures 7.10 and 7.11, where the stable operating regions are shaded. The constant spindle speed lines are marked using the relation τ1 + h = 10/N, N [rpm]. Notice the kernel curve, which is shown in thick lines, and the offspring curves in thin lines. To facilitate the practical usage of Figure 7.10, we mark the spindle speeds N = 625 … 5000 rpm. For instance, if the machinist wishes to operate at 3000 rpm with a = 2 mm axial d-o-c without changing the metal removal rate (i.e., very critical), an unstable (i.e., chatter-prone) uniform-pitch operation at τ1 = 3.33 ms, h = 0, can easily be replaced by a stable (i.e., chatter-rejecting) operation at τ1 = 2.0 ms and h = 1.33 ms (variable-pitch format). This is a considerable flexibility in the selection process for the user. Note that the stability pictures in Figures 7.10 and 7.11 can be efficiently produced for a variety of d-o-c selections, and a similar design discussion can be performed. Just to give the reader an idea of the computational complexity, Figures 7.10 and 7.11 take about 40 s of CPU time on a computer with a 3.2 GHz Intel Centrino chip and 512 MB RAM. Using this mechanism, one can select an appropriate cutting tool geometry (i.e., the pitch variations) and determine the chatter-free maximal d-o-c for increased productivity.
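The 3000-rpm alternatives above can be sanity-checked with simple arithmetic: for six flutes, τ1 + h must equal 10/N, and the delays convert back to pitch angles that close the full circle. The values θ1 = 36° and φ = 24° below are implied by the quoted delays; they are computed here rather than taken from the chapter.

```python
N = 3000.0                      # spindle speed [rpm]
avg_tooth_period = 10.0 / N     # [s]; equals 60/(6*N) for a six-flute cutter
print(avg_tooth_period)         # 0.003333... s

def to_deg(t):
    """Convert a tooth-passage delay [s] to a pitch increment [deg] at N rpm."""
    return t * 360.0 * N / 60.0

# uniform pitch: tau1 = 3.33 ms, h = 0       ->  theta1 = 60 deg, phi = 0
print(to_deg(10.0 / N), to_deg(0.0))         # 60.0 0.0
# variable pitch: tau1 = 2.0 ms, h = 1.33 ms ->  theta1 = 36 deg, phi = 24 deg
print(to_deg(2.0e-3), to_deg(4.0 / 3 * 1e-3))
# the six pitch angles (each distinct angle appears twice) must sum to 360 deg:
theta1, phi = 36.0, 24.0
print(2 * (theta1 + (theta1 + phi) + (theta1 + 2 * phi)))   # 360.0
```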
According to the discussions in this section, Figures 7.10 and 7.11 offer a unique representation of the ultimate goal at hand, and they are obtained using the stability analysis technique CTCR.

© 2011 by Taylor and Francis Group, LLC

[Figure 7.10 appears here: stability chart in the (τ1 (s), h (s)) plane; stable regions shaded; constant spindle-speed lines for N = 625–5000 rpm marked.]

FIGURE 7.10  Stability picture of six-flute cutter with d-o-c = 2 mm.

[Figure 7.11 appears here: stability chart in the (τ1 (s), τ2 (s)) plane; stable regions shaded; spindle-speed lines marked.]

FIGURE 7.11  Stability picture of four-flute cutter with d-o-c = 2 mm.

7.5  Optimization of the Process
In the shaded regions of Figure 7.8a, the chatter dynamics is stable; thus, these regions are preferred for operation. We then search for a measure of chatter rejection within these stable zones. It is well known that this property is represented by the real part of the dominant characteristic root of the system characteristic equation: the more negative this real part is, the faster the chatter rejection (i.e., the faster the settling after disturbances).
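As a rough quantitative handle on this measure (our own back-of-the-envelope sketch, not from the chapter): a dominant root with real part ℜe(s_dom) < 0 gives a disturbance envelope e^(ℜe(s_dom)·t), so the 2% settling time is approximately ln(50)/|ℜe(s_dom)|:

```python
import math

def settling_time_2pct(re_s_dom):
    """Approximate 2% settling time for a dominant root with real part re_s_dom < 0.

    The decay envelope exp(re_s_dom * t) reaches 0.02 at t = ln(50)/|re_s_dom|.
    """
    assert re_s_dom < 0, "dominant root must lie in the left-half plane"
    return math.log(50.0) / abs(re_s_dom)

# e.g., |Re(s_dom)| = 100 1/s gives settling in roughly 39 ms
t_settle = settling_time_2pct(-100.0)
```

This is why maximizing |ℜe(s_dom)| inside a stable region directly shortens the chatter-settling transients.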


New Methodology for Chatter Stability Analysis in Simultaneous Machining


7.5.1  Optimization Problem
Two conflicting objectives in metal cutting are

1. Increased metal removal rate, which is directly proportional to the axial d-o-c, a, and the spindle speed N [rpm] = 30/(τ1 + τ2), as in Case Study II above.
2. Generating a desirably chatter-free surface quality on the workpiece.

In order to serve both objectives, we introduce the following objective function:

J = −α ℜe(s_dom) + β a/(τ1 + τ2)




in which α and β are constant positive weights that can be selected based on the importance given to the conflicting items. This is a three-dimensional optimization problem, in which τ1, τ2, and a are the independent parameters, and |ℜe(s_dom)| is a measure of chatter rejection, as explained earlier. The CTCR paradigm helps us obtain the stability boundaries of this system in the time-delay domain, but obtaining the real part of the rightmost characteristic roots requires that we look inside each stability region. Finding the real part of the rightmost characteristic root of a quasi-polynomial is still an open problem in mathematics. One of the few numerical schemes available in the literature to "approximate" the rightmost (dominant) roots for different values of the time delays is given in [42]; it will be used for this optimization problem. It is clear that the variation of τ1 and τ2, as well as of the axial d-o-c, a, is continuous. Figure 7.12 displays an example case study of a four-flute milling cutter, taken from [37]. For that figure, a grid-point evaluation of the objective function is performed, which yields the respective chatter rejection abilities. One can then seek the most desirable points in the stable operating regions. At any point (τ1, τ2), we can characterize the tool geometry (i.e., the pitch angles θ1 and θ2) as well as the spindle speed N = 30/(τ1 + τ2). Notice that the pitch ratios τ1/τ2 = θ1/θ2 are varied between [0.5 … 1] due to physical constraints: too small pitch angles cause practical difficulty in extracting the metal chips after cutting. The best
[Figure 7.12 appears here: objective function values (×10⁻³ color scale) over the (τ1 (s), τ2 (s)) plane.]

FIGURE 7.12  Objective function variations (sidebar) for various pitch angle (delay) compositions, for 5 mm d-o-c.


Table 7.2  Description of the Optimum Points for Different d-o-c

a (mm)  Jmax (×10³)  |ℜe(s_dom)| (1/s)  τ1 (ms)  τ2 (ms)  θ1 (deg)  θ2 (deg)  N (rpm)
3       36.6088       97.1846           0.225    0.180    99.5      80.5      74,534
3.5     36.8306      117.9176           1.50     0.90     112.5     67.5      12,500
4       40.2018      100.8796           0.245    0.1575   109.6     70.4      74,534
4.5     38.9324       94.4836           0.2675   0.1575   113.3     66.7      70,588
5       38.6183       91.4836           0.290    0.1575   116.6     63.4      67,039
5.5     39.4497       94.0572           0.20     0.3375   67        113       55,814

Source: Olgac, N. and Sipahi, R., ASME J. Manuf. Sci. Eng., 127, 791, 2005. With permission.

operating point in each stable region is numerically determined for a given d-o-c, a. We then create the matrix of objective function values at various combinations (grid points) of (τ1, τ2) and collect the corresponding best operating points (maxima of the objective function, Jmax) for each d-o-c in Table 7.2. The highlighted columns show the best cutter design (θ1 and θ2) and the most desirable spindle speed N for the different depths-of-cut. The numerical values of these points are shown in Table 7.2 along with the corresponding values of the objective function, Jmax. These maximum values of J represent the best selections of operating conditions, which result in maximum metal removal rate while maximizing the chatter rejection capability in parallel. Accordingly, we can conclude that the best milling cutter should be designed with θ1 = 109.6°, θ2 = 70.4°, so that it accommodates the highest productivity. Obviously, this comparison is only valid under the given objective function and its weighting factors (which are taken as α = 0.3 and β = 1). So, for this particular operation, when performed at the setting of a = 4 mm, N = 74,534 rpm with θ1 = 109.6°, θ2 = 70.4° pitch angles on the cutter, one could produce 8122.1 mm³/s of material removal with desirable surface quality (considering a 0.05 mm/tooth feed rate and approximately 6 HP of spindle power). We wish to draw the reader's attention to the fact that Figure 7.12 is symmetric with respect to τ1 and τ2. Departing from a geometry-based intuition, we make the qualified statement that τ1 and τ2 (θ1 and θ2, in essence) are commutable in this process without changing the results, and the mathematical expressions validate this statement.
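The selection logic above can be checked directly against Table 7.2. A minimal sketch (our own; values transcribed from the table): we confirm that the a = 4 mm row carries the largest Jmax, and that the tabulated spindle speeds follow N = 30/(τ1 + τ2). The a = 3 mm row rounds differently from this relation, so the speed check covers the remaining rows:

```python
# Consistency checks on Table 7.2: best row by Jmax, and N = 30/(tau1 + tau2).

rows = [
    # a (mm), Jmax (x10^3), |Re(s_dom)| (1/s), tau1 (ms), tau2 (ms), N (rpm)
    (3.0, 36.6088,  97.1846, 0.225,  0.180,  74534),
    (3.5, 36.8306, 117.9176, 1.50,   0.90,   12500),
    (4.0, 40.2018, 100.8796, 0.245,  0.1575, 74534),
    (4.5, 38.9324,  94.4836, 0.2675, 0.1575, 70588),
    (5.0, 38.6183,  91.4836, 0.290,  0.1575, 67039),
    (5.5, 39.4497,  94.0572, 0.20,   0.3375, 55814),
]

def speed_rpm(tau1_ms, tau2_ms):
    """Spindle speed from the two pitch delays (given in ms)."""
    return 30.0 / ((tau1_ms + tau2_ms) * 1e-3)

best = max(rows, key=lambda r: r[1])  # row with the largest Jmax
```

The best row indeed corresponds to a = 4 mm, the operating point singled out in the text.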

7.6  Conclusion
This chapter treats machine tool chatter from an analytical perspective, bridging the mathematical novelties with the physics of the operation. An ideally suited mathematical procedure, CTCR, is presented for determining the complete stability posture of the regenerative chatter dynamics in SM. The novelty lies in the exhaustive declaration of the stability regions and of the complete set of chatter frequencies for the given process. The outcome of CTCR forms a very useful tool for selecting the operating conditions, such as the d-o-c and spindle speed, in SM applications. We also present an example case study addressing the same selection process on a simpler problem: conventional STM chatter. Another case study treats the chatter dynamics of the currently practiced "variable-pitch milling" operation. In this particular study, we deploy CTCR from the design of the milling cutter geometry to the spindle speed and d-o-c selections, and further to the optimization of these decisions.

Acknowledgments

This research has been supported in part by awards from the DoE (DE-FG02-04ER25656), NSF CMS 0439980, NSF DMI 0522910, and NSF ECCS 0901442.




References

1. I. Lazoglu, M. Vogler, S. G. Kapoor, and R. E. DeVor, Dynamics of the simultaneous turning process, Transactions of North American Manufacturing Research Institute of SME, 26, 135–140, 1998.
2. C. Carland, Use of multi-spindle turning automatics, International Industrial and Production Engineering Conference (IPE), Toronto, Canada, pp. 64–68, 1983.
3. C. Carland, 2 Spindle are Better than 1, Cutting Tool Engineering, 49(2), 186–192, 1997.
4. A. Allcock, Turning towards fixed-head 'Multis', Machinery and Production Engineering, 153(3906), 47–54, 1995.
5. C. Koepfer, Can this new CNC multispindle work for your shop?, Modern Machine Shop, 69, 78–85, 1996.
6. J. Tlusty, Machine dynamics, in Handbook of High Speed Machining Technology, Chapman and Hall, New York, 1985.
7. M. E. Merchant, Basic mechanics of the metal-cutting process, ASME Journal of Applied Mechanics, 66, A-168, 1944.
8. S. Doi and S. Kato, Chatter vibration of lathe tools, Transactions of ASME, 78, 1127, 1956.
9. S. A. Tobias, Machine Tool Vibration, Wiley, New York, 1961.
10. J. Tlusty, L. Spacek, M. Polacek, and O. Danek, Selbsterregte Schwingungen an Werkzeugmaschinen, VEB Verlag Technik, Berlin, Germany, 1962.
11. H. E. Merritt, Theory of self excited machine tool chatter, Journal of Engineering for Industry, 87, 447, 1965.
12. R. L. Kegg, Cutting dynamics in machine tool chatter, Journal of Engineering for Industry, 87, 464, 1965.
13. Y. Altintas and E. Budak, Analytical prediction of stability lobes in milling, Annals of the CIRP, 44, 357–362, 1995.
14. S. Smith and J. Tlusty, Efficient simulation program for chatter in milling, Annals of the CIRP, 42, 463–466, 1993.
15. S. Smith and J. Tlusty, Update on high speed milling dynamics, Transactions of the ASME, Journal of Engineering for Industry, 112, 142–149, 1990.
16. B. C. Kuo, Automatic Control Systems, Prentice Hall, Englewood Cliffs, NJ, 1987.
17. C. L. Phillips and R. D. Harbor, Feedback Control Systems, Prentice-Hall, Englewood Cliffs, NJ, 1988.
18. C. S. Hsu, Application of the tau-decomposition method to dynamical systems subjected to retarded follower forces, ASME Journal of Applied Mechanics, 37, 258–266, 1970.
19. C. S. Hsu and K. L. Bhatt, Stability charts for second-order dynamical systems with time lag, ASME Journal of Applied Mechanics, 33, 119–124, 1966.
20. J. Chen, G. Gu, and C. N. Nett, A new method for computing delay margins for stability of linear delay systems, Systems & Control Letters, 26, 107–117, 1995.
21. K. L. Cooke and P. van den Driessche, On zeros of some transcendental equations, Funkcialaj Ekvacioj, 29, 77–90, 1986.
22. Z. V. Rekasius, A stability test for systems with delays, Proceedings of the Joint Automatic Control Conference, San Francisco, CA, Paper No. TP9-A, 1980.
23. R. Sipahi and N. Olgac, Active vibration suppression with time delayed feedback, ASME Journal of Vibration and Acoustics, 125(3), 384–388, 2003. Best Student Paper Award at ASME-IMECE, New Orleans, LA, 2002.
24. R. Sipahi and N. Olgac, Degenerate cases in using the direct method, Transactions of ASME, Journal of Dynamic Systems, Measurement, and Control, 125, 194–201, 2003.
25. R. Sipahi and N. Olgac, Active vibration suppression with time delayed feedback, ASME Journal of Vibration and Acoustics, 125, 384–388, 2003.


26. N. Olgac and R. Sipahi, An exact method for the stability analysis of time delayed LTI systems, IEEE Transactions on Automatic Control, 47(5), 793–797, 2002.
27. R. Sipahi and N. Olgac, Stability robustness of retarded LTI systems with single delay and exhaustive determination of their imaginary spectra, SIAM Journal on Control and Optimization, 45(5), 1680–1696, 2006.
28. N. Olgac and R. Sipahi, A practical method for analyzing the stability of neutral type LTI-time delayed systems, Automatica, 40, 847–853, 2004.
29. N. Olgac and R. Sipahi, A unique methodology for the stability robustness of multiple time delay systems, Systems & Control Letters, 55(10), 819–825, 2006.
30. H. Fazelinia, R. Sipahi, and N. Olgac, Stability analysis of multiple time delayed systems using 'Building Block' concept, IEEE Transactions on Automatic Control, 52(5), 799–810, 2007.
31. A. F. Ergenc, N. Olgac, and H. Fazelinia, Extended Kronecker summation for cluster treatment of LTI systems with multiple delays, SIAM Journal on Control and Optimization, 46(1), 143–155, 2007.
32. R. Sipahi and I. I. Delice, Extraction of 3D stability switching hypersurfaces of a time delay system with multiple fixed delays, Automatica, 45, 1449–1454, 2009.
33. J. K. Hale and W. Huang, Global geometry of the stable regions for two delay differential equations, Journal of Mathematical Analysis and Applications, 178, 344–362, 1993.
34. G. Stepan, Retarded Dynamical Systems: Stability and Characteristic Function, Longman Scientific & Technical, co-publisher John Wiley & Sons Inc., New York, 1989.
35. N. MacDonald, An interference effect of independent delays, IEE Proceedings, 134, 38–42, 1987.
36. S.-I. Niculescu, On delay robustness analysis of a simple control algorithm in high-speed networks, Automatica, 38, 885–889, 2002.
37. R. Sipahi and N. Olgac, New perspective in process optimization of variable pitch milling, International Journal of Materials and Product Technology, 35(1/2), 47–63, 2009.
38. N. Olgac and R. Sipahi, A unique methodology for chatter stability mapping in simultaneous machining, ASME Journal of Manufacturing Science and Engineering, 127, 791–800, 2005.
39. N. Olgac and M. Hosek, A new perspective and analysis for regenerative machine tool chatter, International Journal of Machine Tools & Manufacture, 38, 783–798, 1998.
40. N. Olgac and R. Sipahi, Dynamics and stability of multi-flute variable-pitch milling, Journal of Vibration and Control, 13(7), 1031–1043, 2007.
41. Y. Altintas, S. Engin, and E. Budak, Analytical stability prediction and design of variable pitch cutters, ASME Journal of Manufacturing Science and Engineering, 121, 173–178, 1999.
42. D. Breda, S. Maset, and R. Vermiglio, Pseudospectral differencing methods for characteristic roots of delay differential equations, SIAM Journal on Scientific Computing, 27, 482–495, 2005.
43. D. Filipovic and N. Olgac, Delayed resonator with speed feedback, design and performance analysis, Mechatronics, 12(3), 393–413, 2002.

II  Control System Design

8 Internal Model Control  James C. Hung
  Basic IMC Structures  •  IMC Design  •  Discussion  •  References
9 Dynamic Matrix Control  James C. Hung
  Dynamic Matrix  •  Output Projection  •  Control Computation  •  Remarks  •  References
10 PID Control  James C. Hung and Joel David Hewlett
  Introduction  •  Pole Placement  •  Ziegler–Nichols Techniques  •  Remarks  •  References
11 Nyquist Criterion  James R. Rowland
  Introduction and Criterion Examples  •  Phase Margin and Gain Margin  •  Digital Control Applications  •  Comparisons with Root Locus  •  Summary  •  References
12 Root Locus Method  Robert J. Veillette and J. Alexis De Abreu Garcia
  Motivation and Background  •  Root Locus Analysis  •  Compensator Design by Root Locus Method  •  Examples  •  Conclusion  •  References
13 Variable Structure Control Techniques  Asif Šabanović and Nadira Šabanović-Behlilović
  Sliding Mode in Continuous-Time Systems  •  Design  •  Some Applications of VSS  •  Conclusion  •  References
14 Digital Control  Timothy N. Chang and John Y. Hung
  Introduction  •  Discretization of Continuous-Time Plant  •  Discretization of Continuous-Time System with Delay  •  Digital Control Design  •  Multirate Controllers  •  Conclusions  •  References
15 Phase-Lock-Loop-Based Control  Guan-Chyun Hsieh
  Introduction  •  Basic Concept of PLL  •  Applications of PLL-Based Control  •  Analog, Digital, and Hybrid PLLs  •  References
16 Optimal Control  Victor M. Becerra
  Introduction  •  Formulation of Optimal Control Problems  •  Continuous-Time Optimal Control Using the Variational Approach  •  Discrete-Time Optimal Control  •  Dynamic Programming  •  Computational Optimal Control  •  Examples  •  References
17 Time-Delay Systems  Emilia Fridman
  Models with Time-Delay  •  Solution Concept and the Step Method  •  Linear Time-Invariant Systems and Characteristic Equation  •  General TDS and the Direct Lyapunov Method  •  LMI Approach to the Stability of TDS  •  Control Design for TDS  •  On Discrete-Time TDS  •  References
18 AC Servo Systems  Yong Feng, Liuping Wang, and Xinghuo Yu
  Introduction  •  Control System of PMSM  •  Observer for Rotor Position and Speed of PMSMs  •  Control of Induction Motors  •  Conclusions  •  References
19 Predictive Repetitive Control with Constraints  Liuping Wang, Shan Chai, and Eric Rogers
  Introduction  •  Frequency Decomposition of Reference Signal  •  Augmented Design Model  •  Discrete-Time Predictive Repetitive Control  •  Application to a Gantry Robot Model  •  Conclusions  •  References
20 Backstepping Control  Jing Zhou and Changyun Wen
  Introduction  •  Backstepping  •  Adaptive Backstepping  •  Adaptive Backstepping with Tuning Functions  •  State Feedback Control  •  Backstepping Control of Unstable Oil Wells  •  References
21 Sensors  Tiantian Xie and Bogdan M. Wilamowski
  Introduction  •  Distance and Displacement Sensors  •  Pressure Sensors  •  Accelerometer  •  Temperature Sensors  •  Radiation Sensors  •  Magnetic Field Sensors  •  Sensorless Control System  •  References
22 Soft Computing Methodologies in Sliding Mode Control  Xinghuo Yu and Okyay Kaynak
  Introduction  •  Key Technical Issues in SMC  •  Key SC Methodologies  •  Sliding Mode Control with SC Methodologies  •  Conclusions  •  References

8  Internal Model Control

James C. Hung
The University of Tennessee, Knoxville

8.1  Basic IMC Structures
8.2  IMC Design
8.3  Discussion
References

Internal model control (IMC) is an attractive method for designing a control system if the plant is inherently stable. The method was first formally reported in Garcia and Morari (1982); since then, additional research results on this subject have been reported in journals and conferences. The method has been formulated using the Laplace transform, and therefore is a frequency-domain technique. The concept is summarized here for a single-input–single-output (SISO) system.

8.1  Basic IMC Structures

A single-degree-of-freedom (SDF) IMC structure is shown in Figure 8.1. Notice that the plant model, Gpm, is explicitly included in the overall controller. When the actual plant, Gp, matches the plant model, Gpm, there is no feedback signal and the control is open loop. Under this condition, the system responses to reference input r and to disturbance d are given, respectively, by

Yr(s) = Ga Gp R(s) and Yd(s) = (1 − Ga Gp) D(s)

that is,

Y(s) = Ga Gp R(s) + (1 − Ga Gp) D(s)

The design of a cascade compensator Ga for a specified response to input r is simple. The system is a feedback system for disturbance d, thus attenuating its effect. It should be pointed out that, when there are differences between the plant and the model, a feedback signal exists and can be used for achieving system robustness. Clearly, the stability of both the plant and the compensator is necessary and sufficient for the system to be stable. In fact, corresponding to each IMC controller Ga of Figure 8.1, there exists an equivalent controller Gc of the conventional control structure shown in Figure 8.2:

Gc = Ga/(1 − Ga Gpm)

[Figure 8.1 appears here: block diagram of the SDF IMC loop.]
Figure 8.1  SDF internal model control.

[Figure 8.2 appears here: block diagram of a conventional control loop.]
Figure 8.2  Conventional control.

Independent reference input response and disturbance rejection can be achieved by adopting a two-degree-of-freedom (TDF) IMC structure, as shown in Figure 8.3. In this structure, the controller consists of the plant model Gpm and two compensators, Ga and Gb, which can be adjusted independently to achieve the desired individual responses. When Gp = Gpm, the system responses to reference input r and to disturbance d are given, respectively, by

Yr(s) = Ga Gp R(s) and Yd(s) = (1 − Ga Gb Gp) D(s)

Notice that these responses are linear in Ga and Gb, making compensator design easy. When there are differences between the plant and the model,

Yr(s) = Ga Gp R(s)/Q and Yd(s) = (1 − Ga Gb Gp) D(s)/Q, where Q = 1 + Ga Gb (Gp − Gpm)

Q is the characteristic polynomial of the system. Robust stability is achieved by an appropriate choice of Gb. The conditions for stability are as follows: all roots of the characteristic equation are in the left-half plane, and there is no pole-zero cancellation among Ga, Gb, and (Gp − Gpm) in the right-half plane.

[Figure 8.3 appears here: block diagram of the TDF IMC loop.]
Figure 8.3  TDF internal model control.

8.2  IMC Design

Two-degree-of-freedom IMC design usually begins with the design of the forward compensator Ga for a desired input–output response. Then the feedback compensator Gb is designed to achieve a specified system stiffness with respect to disturbances and a specified robustness with respect to model error.

8.3  Discussion

In the IMC approach, the controller is easy to design, and the IMC structure provides an easy way to achieve system robustness. A stable plant using a stable compensator results in a stable IMC system. One limitation of the IMC method is that it cannot handle plants that are open-loop unstable. On the other hand, an unstable plant may be stabilized by a controller using the conventional control configuration, and an equivalent controller can be determined for the conventional control configuration giving the same response characteristics. As an example, consider a system in the conventional control configuration consisting of an unstable plant

Gp = Gpm = 1/((s − 1)(s + 5))

and a cascade controller Gc = 18. The closed-loop system is stable, having the transfer function

GCL = 18/(s² + 4s + 13)

The equivalent single-degree-of-freedom compensator Ga of the IMC is given by

Ga = Gc/(1 + Gc Gpm) = 18(s − 1)(s + 5)/(s² + 4s + 13)

but it does not help to stabilize the system: the corresponding IMC control system is not stable, and any noise entering the system at the plant input will drive the plant output to infinity. Here one sees that the stability of a system is, in general, implementation dependent. For the MIMO case and for an in-depth treatment of IMC, readers are referred to Morari and Zafiriou (1989) and the reference list therein.

References

Garcia, C. E. and Morari, M. 1982. Internal model control. 1. A unifying review and some new results. Ind. Eng. Chem. Process Des. Dev. 21(2):308–323.
Morari, M. and Zafiriou, E. 1989. Robust Process Control. Prentice-Hall, Englewood Cliffs, NJ.
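The SDF-IMC/conventional equivalence Gc = Ga/(1 − Ga Gpm) can be checked numerically on the chapter's example plant. A minimal sketch (our own; the complex evaluation points are arbitrary, chosen away from any pole):

```python
# Numerical check of the IMC equivalence for the example:
# Gp = Gpm = 1/((s-1)(s+5)), Gc = 18, and Ga = Gc/(1 + Gc*Gpm).
# With a perfect model the SDF-IMC response Y/R = Ga*Gp should equal the
# conventional closed loop Gc*Gp/(1 + Gc*Gp).

def Gp(s):
    return 1.0 / ((s - 1.0) * (s + 5.0))

def Ga(s):
    Gc = 18.0
    return Gc / (1.0 + Gc * Gp(s))  # equivalent IMC compensator

def conventional_closed_loop(s):
    Gc = 18.0
    return Gc * Gp(s) / (1.0 + Gc * Gp(s))

# Compare the two transfer functions at a few complex frequencies.
for s in (1j, 2 + 3j, 0.5 - 1j, 10j):
    assert abs(Ga(s) * Gp(s) - conventional_closed_loop(s)) < 1e-12
```

The equivalence holds pointwise, yet the IMC realization is still unusable here: the closed-loop poles −2 ± 3j are stable, but Ga carries the right-half-plane factor (s − 1), so internal signals are unbounded.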

9  Dynamic Matrix Control

James C. Hung
The University of Tennessee, Knoxville

9.1  Dynamic Matrix
9.2  Output Projection
9.3  Control Computation
9.4  Remarks
References

Dynamic matrix control (DMC) is a method suitable for the control of processes. The method was developed by control professionals in the oil industry in the 1970s; Cutler (1982) gives a very lucid description of the concept. The method is based on the step response characteristics of a process, and therefore is a time-domain technique. It is intended for linear processes, including both single-input–single-output (SISO) and multi-input–multi-output (MIMO) cases, which are open-loop stable and do not have pure integration. It is suitable for control by a computer and, since the concept is intuitively transparent, is attractive to technical people with a limited background in control theory.

The method provides a continuous projection of a system's future output for the time horizon required for the system to reach a steady state. The projected outputs are based on all past changes in the measured input variables. The control effort is then computed to alter the projected output to satisfy a chosen performance specification. The concept is summarized here for a SISO system.

9.1  Dynamic Matrix

A key tool of the DMC is the dynamic matrix, which can be constructed from a process' unit-step response data. Consider a SISO discrete-data process. Its output at the end of the kth time interval is given by the following convolution summation:

y_k = Σ_{i=1}^{k} h_{k−i+1} δu_i

where
h_i is the discrete-data step response
δu_i is the incremental step input at the beginning of the ith time interval

For k = 1 to n, the following dynamic matrix equation can be formed:

[y_1]   [h_1     0         0    ...           ] [δu_1]
[y_2] = [h_2     h_1       0    ...           ] [δu_2]
[ ⋮ ]   [ ⋮       ⋮                           ] [ ⋮  ]
[y_n]   [h_n     h_{n−1}   ...   h_{n−m+1}    ] [δu_m]

or

y = H δu

The vector on the left-hand side of the equation is called the output projection vector. The n × m matrix H shown is the so-called dynamic matrix of the SISO process, and its elements are called dynamic coefficients. This matrix is used for projecting the future output.

9.2  Output Projection

Assume that the control computation is repeated in every time interval. Under this condition, only the most current input change δu_1 is involved in the output projection computation, and the H matrix becomes a column vector h. The iterative projection computation steps proceed as follows:

1. Initialize the output projection vector y by setting all its elements to the currently observed value of the output.
2. At the beginning of a subsequent time interval, shift the projection forward one time interval and change its designation to y*.
3. Compute the change in the input from the last time interval to the present.
4. Compute the changes in the output vector by using the input change and the dynamic matrix equation.
5. Compute the updated output projection vector y using the updating equation y = y* + h δu_1.
6. Loop back to Step 2.

In general, the projected output values do not match the observed values due to disturbances. The discrepancy provides the feedback data for the output vector adjustment. Since the process is assumed to be open-loop stable, errors due to erroneous initialization will diminish with time.

9.3  Control Computation

Let the process setpoint (reference input) be r. Then the output error at time instant i is given by

e_i = r − y_i

For i = 1 to n, the e_i form an error vector e. The incremental control input vector δu needed to eliminate the error e can be obtained by one of a variety of methods, depending on the performance criterion chosen. For example, it can be obtained by solving the equation

e = H δu

Since the output response from the last input must have the time to reach steady state, the H matrix will have more rows than columns. A solution for δu can be obtained by using the classical least-squares formula

δu = (HᵀH)⁻¹ Hᵀ e

Figure 9.1 shows the block diagram of a DMC system. The DMC block performs output projection and control computation.

[Figure 9.1 appears here: block diagram with reference r entering a DMC controller whose output u drives the process, producing y.]
Figure 9.1  Dynamic matrix control (DMC).

9.4  Remarks

The extension to MIMO processes is conceptually straightforward. For a two-input–two-output process, the dynamic matrix equation has the form

[y_1; y_2] = H [δu_1; δu_2]

where y_1 and y_2 are the two output projection vectors, δu_1 and δu_2 are the two incremental step-input vectors, and H is the dynamic matrix of the process.

Merits of DMC are as follows:
1. It is easy to understand and easy to apply without involving sophisticated mathematics.
2. Coefficients of the dynamic matrix can be obtained by testing, without relying solely on the mathematical model of the process.
3. Since the transport lag characteristics can easily be included in the dynamic matrix, the design of control is independent of whether there is transport lag or not.
4. When disturbances are observable, feed-forward control can conveniently be implemented via the DMC structure.

A demerit of DMC is that it is limited to open-loop bounded-input–bounded-output (BIBO) stable type of processes.

Theoretical work has been done on DMC by control researchers since the early 1980s. The additional results obtained have further revealed the capability of the method and its comparison to other control methods. Prett and Garcia (1988) is a good source for additional information.

References

Cutler, C. R. 1982. Dynamic matrix control of imbalanced systems. ISA Trans. 21(1):1–6.
Prett, D. M. and Garcia, C. E. 1988. Fundamental Process Control. Butterworths, Boston, MA, Chapters 5 and 6.
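The machinery of Sections 9.1 through 9.3 can be sketched in a few lines. This is our own minimal illustration (the step-response data and the sizes n = 6, m = 2 are made up), forming the dynamic matrix from unit-step samples and solving the least-squares problem e = H δu via the normal equations:

```python
import math

def dynamic_matrix(h, m):
    """n x m dynamic matrix: H[i][j] = h[i-j] for i >= j, else 0 (0-based)."""
    n = len(h)
    return [[h[i - j] if i >= j else 0.0 for j in range(m)] for i in range(n)]

def least_squares_2col(H, e):
    """Solve (H^T H) du = H^T e in closed form for the m = 2 case."""
    a = sum(row[0] * row[0] for row in H)
    b = sum(row[0] * row[1] for row in H)
    d = sum(row[1] * row[1] for row in H)
    r0 = sum(row[0] * ei for row, ei in zip(H, e))
    r1 = sum(row[1] * ei for row, ei in zip(H, e))
    det = a * d - b * b
    return ((d * r0 - b * r1) / det, (a * r1 - b * r0) / det)

# Illustrative first-order unit-step response h_i = 1 - exp(-i/2), i = 1..6.
h = [1.0 - math.exp(-i / 2.0) for i in range(1, 7)]
H = dynamic_matrix(h, m=2)  # 6 x 2: more rows than columns, as required
e = [1.0] * 6               # setpoint error vector
du = least_squares_2col(H, e)
```

For larger m one would use a general linear-algebra routine, but the structure is the same: more rows than columns, solved in the least-squares sense.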

10  PID Control

James C. Hung
The University of Tennessee, Knoxville

Joel David Hewlett
Auburn University

10.1  Introduction
10.2  Pole Placement
10.3  Ziegler–Nichols Techniques
  First Technique  •  Second Technique
10.4  Remarks
References

10.1  Introduction

PID control was developed in the early 1940s for controlling processes of the first-order-lag-plus-delay (FOLPD) type. The type is modeled by the transfer function

Gp(s) = K e^(−τs)/(1 + Ts)    (10.1)

where
K is the d-c gain of the process
T is the time constant of the first-order lag
τ is the transport-lag or delay

The controller generates a control signal that is proportional to the system error, its time integral, and its time derivative. A PID controller is often modeled in one of the following two forms:

GC(s) = Kp + KI/s + KD s    (10.2)

GC(s) = Kp (1 + 1/(TI s) + TD s)    (10.3)

The effect of each term on the closed-loop system is intuitively clear: the proportional term is to affect the system error and the system stiffness, the integral term is mainly for eliminating the steady-state error, and the derivative term is for damping the oscillatory response. Dropping appropriate terms in these equations gives P, PD, or PI controllers. Note that having the pure integration term makes the PID controller an active network element. Determination of the controller parameters for a given process is the controller design.

In Figure 10.1  Two configurations for PID controlled systems: (a) cascade compensation and (b) cascade and feedback compensation.4) where K is the gain τ1 and τ2 are time constants associated with the system’s two poles When the transfer function for the PID controller (10.1. 10.6) © 2011 by Taylor and Francis Group. Two commonly used PID control configurations are shown in Figure 10. in motor control. the closed loop poles of the system can be chosen arbitrarily using analytical methods (Hägglund and Åström. the characteristic equation can be expressed as where ξ is the damping ratio ω is the natural frequency (s + aω)(s 2 + 2ξωs + ω 2 ) = 0 (10.10 -2 R – (a) R – U U Control and Mechatronics C PID Plant GP PI Plant GP C D (b) Figure 10. Take for example the ­ ­ general second-order case in which the transfer function is of the form Gp (s ) = K (sτ1 + 1)(sτ 2 + 1) (10. so the derivative output is already available. LLC . the result is a thirdorder closed-loop system of the form GCL (s) = Ks /τ1τ 2  τ1 + τ 2 + KK pTD   1 + K pK  K pK s2 + s2  + s +  τ1τ 2    τ1τ 2   τ1τ 2TI (10.5) For a general third-order closed-loop system.1b. The latter is sometimes more convenient for practical reasons.4). For example.1a. the entire PID forms a cascade compensator.2  Pole Placement For processes which have well-defined low-order models. 1996).3) is combined with (10. the derivative part is in the feedback. the motor shaft may already have a tachometer. while in Figure 10.

Equating the coefficients of (10.5) to (10.6) and choosing ξ, a, and ω to yield the desired closed-loop poles leaves three equations and three unknowns:

  (a + 2ξ)ω = (τ1 + τ2 + K K_p T_D)/(τ1τ2)
  (1 + 2ξa)ω² = (1 + K_p K)/(τ1τ2)    (10.7)
  aω³ = K_p K/(τ1τ2 T_I)

Through some manipulation of (10.7), the following solution is obtained:

  K_p = (τ1τ2 ω²(1 + 2ξa) − 1)/K
  T_I = (τ1τ2 ω²(1 + 2ξa) − 1)/(a τ1τ2 ω³)    (10.8)
  T_D = (τ1τ2 ω(a + 2ξ) − τ1 − τ2)/(τ1τ2 (1 + 2ξa)ω² − 1)

While ξ and ω may be chosen arbitrarily, some care must be taken to ensure that the chosen values are reasonable. For instance, looking at (10.8), it is clear that for

  ω = (τ1 + τ2)/(τ1τ2 (a + 2ξ))    (10.9)

T_D = 0, resulting in purely PI control. Furthermore, for

  ω < (τ1 + τ2)/(τ1τ2 (a + 2ξ))    (10.10)

the resulting solution is unobtainable since this forces T_D to take on a negative value. Therefore, (10.9) represents a lower bound for realizable choices of ω.

10.3 Ziegler–Nichols Techniques

Assuming the system is in fact of the FOLPD type, two celebrated Ziegler–Nichols techniques, developed in the early 1940s for tuning PID control of FOLPD systems (Ziegler and Nichols, 1942), may be applied. Both techniques are very easy to execute, making them very popular.
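The pole-placement solution (10.8), together with the realizability bound (10.9), can be sketched directly in Python (the function name and the numerical example are ours, not from the chapter):

```python
def pid_pole_placement(K, tau1, tau2, xi, omega, a):
    """Solve (10.8): PID parameters placing the closed-loop poles of
    Gp(s) = K/((tau1*s + 1)(tau2*s + 1)) at the roots of
    (s + a*omega)(s^2 + 2*xi*omega*s + omega^2)."""
    # Realizability check from (10.9)-(10.10): TD >= 0 requires
    # omega at or above the lower bound.
    if omega < (tau1 + tau2) / (tau1 * tau2 * (a + 2 * xi)):
        raise ValueError("omega below the bound (10.9); TD would be negative")
    num = tau1 * tau2 * omega ** 2 * (1 + 2 * xi * a) - 1
    Kp = num / K
    TI = num / (a * tau1 * tau2 * omega ** 3)
    TD = (tau1 * tau2 * omega * (a + 2 * xi) - tau1 - tau2) / num
    return Kp, TI, TD
```

Substituting the returned (K_p, T_I, T_D) back into the coefficients of (10.5) reproduces, term by term, the desired polynomial coefficients (a + 2ξ)ω, (1 + 2ξa)ω², and aω³ of (10.6), which is how the sketch is verified.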

10.3.1 First Technique

The first technique can be performed online and does not require knowledge of the model parameters. It involves an adjustment procedure, which can be carried out by a technician. Equation 10.2 is the controller model for this technique. The result is often satisfactory but can be crude. The tuning procedure is given as follows:

1. To begin, set K_p, K_I, and K_D all to zero. Increase the value of K_p until oscillation occurs. At this setting, record K_max = K_p and T = oscillation period.
2. Setting controller parameters:
   For P controller, set K_p = 0.5K_max.
   For PI controller, set K_p = 0.45K_max, K_I ≤ 1.2K_p/T.
   For PD controller, set K_p = 0.65K_max, K_D ≥ 0.125K_pT.
   For PID controller, set K_p = 0.6K_max, K_I ≤ 2K_p/T, K_D ≥ 0.125K_pT.

10.3.2 Second Technique

The second technique consists of two steps:

1. The determination of the model parameters K, τ, and T from a step response test. Graphical techniques as well as computer techniques are available for such determination.
2. Setting controller parameters. Equation 10.3 is the controller model for this technique.

The process parameters are determined from the step response. The result obtained from the second technique is more refined, and the steps can be computerized, which have been incorporated into some of the commercial PID controllers (Rovira et al., 1969).
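The first-technique settings listed above can be encoded as a small lookup. This is a sketch under those settings, taking the measured ultimate gain K_max and oscillation period T and returning the three parallel-form gains (the function name is ours; the inequalities in the table are treated as equalities at their bounds):

```python
def zn_first_technique(Kmax, T, controller="PID"):
    """Controller settings from the measured ultimate gain Kmax and the
    oscillation period T (first Ziegler-Nichols technique).
    Returns (Kp, KI, KD); terms not used by the controller type are zero."""
    if controller == "P":
        return 0.5 * Kmax, 0.0, 0.0
    if controller == "PI":
        Kp = 0.45 * Kmax
        return Kp, 1.2 * Kp / T, 0.0
    if controller == "PD":
        Kp = 0.65 * Kmax
        return Kp, 0.0, 0.125 * Kp * T
    Kp = 0.6 * Kmax                       # PID
    return Kp, 2.0 * Kp / T, 0.125 * Kp * T
```

For instance, an experiment giving K_max = 10 and T = 2 s yields K_p = 6, K_I = 6, K_D = 1.5 for a PID controller.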

FIGURE 10.2 A graphical FOLPD parameter identification method. From the unit-step response, K = c∞ − c(0). Locate t_0.284 and t_0.632, the time instants when c(t) = 0.284c∞ and c(t) = 0.632c∞, respectively, and solve T and τ from the two equations

  t_0.284 = τ + T/3 and t_0.632 = τ + T

FIGURE 10.3 A least-square based FOLPD parameter identification algorithm. The unit-step response equation is c(t_i) = K[1 − e^(−(t_i − τ)/T)], with K = c∞ − c(0).
1. Let x(t_i) = ln[1 − c(t_i)/K] = (τ − t_i)(1/T), i = 1, 2, 3, …
2. Choose τ = τ1, and determine T by least-square using n measurements x(t_i), i = 1 to n:
  1/T = (hᵀh)⁻¹hᵀx, where h = [(τ1 − t_1) … (τ1 − t_n)]ᵀ and x = [x(t_1) … x(t_n)]ᵀ
3. Compute y(t) = K[1 − e^(−(t − τ1)/T)] and J = Σ_{i=1}^{n} [c(t_i) − y(t_i)]².
Repeat the above process using different τ's until a minimum J is reached.

Figure 10.2 shows typical graphical techniques and Figure 10.3 gives a least-square-based computer algorithm. Both techniques have become classics. Details of PID tuning and some further development can be found in Cheng and Hung (1985) and Franklin et al. (1994). The parameter setting is done by using the formula and the table contained in Figure 10.4, where the values in the table were established empirically (Cheng and Hung, 1985).
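The least-square algorithm of Figure 10.3 can be sketched as follows. The grid search over trial values of τ and the filtering of samples to those with t_i > τ are our implementation choices; the per-τ fit is the one-parameter least-square formula of step 2:

```python
import math

def fit_folpd(t, c, c_inf, tau_grid):
    """Least-square FOLPD fit (algorithm of Figure 10.3).
    For each trial tau, x(ti) = ln(1 - c(ti)/K) = (tau - ti)/T is linear
    in (tau - ti); 1/T follows from 1/T = (h'h)^-1 h'x.  The trial tau
    minimizing the output error J is kept."""
    K = c_inf - c[0]
    best = None
    for tau in tau_grid:
        h, x = [], []
        for ti, ci in zip(t, c):
            if ti > tau and ci < c_inf:
                h.append(tau - ti)
                x.append(math.log(1.0 - ci / K))
        invT = sum(hi * xi for hi, xi in zip(h, x)) / sum(hi * hi for hi in h)
        T = 1.0 / invT
        J = sum((ci - K * (1.0 - math.exp(-(ti - tau) / T))) ** 2
                for ti, ci in zip(t, c) if ti > tau)
        if best is None or J < best[0]:
            best = (J, K, T, tau)
    return best[1], best[2], best[3]
```

Fitting synthetic samples of a true FOLPD response with K = 2, T = 3, τ = 1 recovers those parameters when τ = 1 is in the search grid, since the model error J then vanishes.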

FIGURE 10.4 Formulas and data for tuning the PID family of controllers. The controller parameters K_p, T_I, and T_D are computed from the identified K, τ, and T through empirical constants A_P, B_P, A_I, B_I, A_D, and B_D, tabulated for P, PI, and PID controllers; each table entry is given as (value for regulator)/(value for tracker), denoted R/T.

Example 10.1

The unit-step response of a process is shown in Figure 10.5, where values of the identified parameters are also shown. The computer algorithm of Figure 10.3 is used to identify the parameters of the FOLPD model, giving K = 2 together with the values of T and τ shown in the figure. The unit-step response of the identified model is also shown in Figure 10.5.

FIGURE 10.5 Unit-step responses of a process and its identified FOLPD model.

The computer formulas of Figure 10.4 are then used to tune the PID controller, yielding one set of values of K_p, T_I, and T_D for regulator control and another set for tracking control. The closed-loop regulator system responding to a disturbance is shown in Figure 10.6, and the closed-loop tracking system responding to a unit-step input is shown in Figure 10.7. Comparing the two sets of parameter values, one sees that the tracking control system is more damped than the regulator system.

FIGURE 10.6 Unit-step disturbance response of a PID controlled regulator.

FIGURE 10.7 Unit-step input response of a PID controlled tracking system.

10.4 Remarks

The classical PID techniques have been very useful for process control since many industrial processes can be well approximated by FOLPD models. In fact, the true transfer function of the process in the above example is not a FOLPD one. It is given by

  G_p(s) = 2 / ((s + 1)²(s + 2)(s + 0.5))

The techniques are also effective for processes modeled by other forms of transfer functions, as long as their input–output characteristics resemble that of FOLPD. In general, these techniques can be applied to any linear process whose step response does not have overshoot.

But the step response of this transfer function resembles that of a FOLPD one, so PID control using the Ziegler–Nichols tuning technique can be applied.

The popularity of PID control is due to the fact that a technician, who has no exposure to control theory, can do the tuning of a PID controller without knowledge of the transfer function. It should be pointed out that a PID controller can also be designed using Bode-plot, root-locus, pole-placement, or other techniques to achieve more sophisticated objectives. However, one then needs to know the plant transfer function. Knowing the transfer function, he or she may consider all classical compensation networks (lead, lag, lead-lag networks, etc.) and has much more freedom in designing a controller. The classical PID control can make a closed-loop system have a satisfactory damping ratio ξ and zero steady-state error in response to a set-point input. However, Ziegler–Nichols tuning does not address other important concerns of control systems, such as risetime, bandwidth, and robustness. Therefore, the approach is for very limited objectives.

References

Cheng, G. S. and Hung, J. C. (1985). A least-square based self-tuning of PID controller. Proceedings of IEEE Southeastcon, Raleigh, NC, IEEE Catalog no. 85CH2161-8.
Franklin, G., Powell, J. D., and Emami-Naeini, A. (1994). Feedback Control of Dynamic Systems, 3rd edn. Addison-Wesley, Reading, MA.
Hägglund, T. and Åström, K. J. (1996). The Control Handbook, p. 325. CRC Press, Boca Raton, FL.
Rovira, A. A., Murrill, P. W., and Smith, C. L. (1969). Tuning controllers for set-point changes. Instruments and Control Systems, 42(12):67.
Ziegler, J. G. and Nichols, N. B. (1942). Optimum settings for automatic controllers. ASME Transactions, 64(11):759.

11
Nyquist Criterion

James R. Rowland
University of Kansas

11.1 Introduction and Criterion Examples
11.2 Phase Margin and Gain Margin
11.3 Digital Control Applications
11.4 Comparisons with Root Locus
11.5 Summary
References

11.1 Introduction and Criterion Examples

The Nyquist criterion is a frequency-based method used to determine the stability or relative stability of feedback control systems [1–4]. Let GH(s) be the transfer function of the open-loop plant and sensor in a single-loop negative-feedback continuous-time system. A contour that includes the entire righthalf of the s-plane is first mapped into the GH(s)-plane by using either s-plane vectors or Bode diagrams of magnitude and phase versus frequency ω. This polar plot mapping is often considered to be the most difficult step in the procedure. The number of closed-loop poles in the righthalf s-plane Z is calculated as the sum of P and N, where P is the number of open-loop poles in the righthalf s-plane and N is the number of encirclements of the −1 + j0 point in the GH(s)-plane. It is important to specify N as positive if the encirclements are in the same direction as the contour in the s-plane and negative if in the opposite direction. The closed-loop system is stable if and only if Z = 0.

Example 11.1

Consider a negative unity feedback system having G(s) = K/(s + 1)³. The three open-loop poles are located at s = −1; thus, none of the open-loop poles are inside the righthalf s-plane contour and, therefore, P = 0. For the section of the s-plane contour along the positive jω-axis shown in Figure 11.1a, the corresponding G(s)-plane contour is the polar plot of G(jω), as ω varies from 0 to +∞, as shown in Figure 11.1b for K = 10. The frequency at which the phase of G(jω) is −180° is designated as the phase crossover frequency ωpc, which for this example is √3 = 1.732 rad/s. Observe that ωpc could be obtained alternatively by setting the imaginary part of G(jω) equal to 0. The real part of G(jωpc) = G(j√3) = −1 when K = 8. Next, the infinite arc that encloses the entire righthalf s-plane maps into a point at the origin in the G(s)-plane, and the remaining section (the negative jω-axis) is obtained as the mirror reflection of Figure 11.1b about the real axis. Hence, the G(s)-plane contour for positive K is as shown in Figure 11.1c; the G(s)-plane plot for K < 0 is the mirror reflection of Figure 11.1c about the jω-axis, as shown in Figure 11.1d. Different ranges of gain K can yield different numbers of encirclements and, hence, different stability results; thus, the application of the Nyquist criterion for the stability of a closed-loop system can be presented succinctly in a table for different ranges of K. Table 11.1 shows stability results for different ranges of K.

FIGURE 11.1 Nyquist criterion application to Example 11.1.

TABLE 11.1 Nyquist Criterion Results for Example 11.1

  Range of K      P    N    Z    Stability
  0 < K < 8       0    0    0    Yes
  8 < K < ∞       0    2    2    No
  −1 < K < 0      0    0    0    Yes
  −∞ < K < −1     0    1    1    No

The G(s)-plane contour for positive K in Figure 11.1c passes through the −1 + j0 point for K = 8, which is the upper limit of the range shown in Row 1 of the table. In words, the −1 + j0 point is not encircled (N = 0) for K < 8, and it is encircled twice (N = 2) for K > 8. These results are shown in Rows 1 and 2 of the table. For the negative K case, the real part of G(j0) = −1 when K = −1; similarly, the results in Rows 3 and 4 are obtained for the ranges of negative K. In brief, the Nyquist criterion has shown (Rows 1 and 3) that the closed-loop system is stable for −1 < K < 8.

Example 11.2

Let a negative unity feedback system have a plant transfer function G(s) = K/[s(s + 2)²]. The open-loop poles are located at s = −2, −2, and the origin. The s-plane contour along the positive jω-axis from Example 11.1 is modified by inserting an infinitesimal arc of radius ε that excludes the open-loop pole at the origin from inside the contour. Therefore, again no open-loop poles are inside the righthalf s-plane contour and, thus, P = 0. The G(s)-plane contour corresponding to this arc is an infinite arc extending from the positive real axis clockwise through the fourth quadrant and connecting with the polar plot of G(jω), as ω varies from ε to +∞. The complete G(s)-plane contour is shown in Figure 11.2b for K = 20, and ωpc is equal to 2 rad/s. The real part of G(jωpc) = G(j2) = −1 when K = 16, which is the upper limit of the range shown in Row 1 of the table; that is, the G(s)-plane contour in Figure 11.2b passes through the −1 + j0 point for K = 16. Table 11.2 shows stability results for different ranges of K. In words, the −1 + j0 point is not encircled
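The phase-crossover calculation of Example 11.1 is easy to reproduce numerically. In this sketch (the bracketing interval for the bisection is our choice), the condition Im G(jω) = 0 locates ωpc, and the critical gain follows from requiring Re G(jωpc) = −1:

```python
def G(s, K):
    # Open-loop transfer function of Example 11.1
    return K / (s + 1) ** 3

def im_G(w):
    # Imaginary part of G(j*w) for K = 1; its zero crossing gives w_pc
    return G(1j * w, 1.0).imag

# Bisection on the bracket (1, 3), where im_G changes sign
lo, hi = 1.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if im_G(lo) * im_G(mid) <= 0:
        hi = mid
    else:
        lo = mid
w_pc = 0.5 * (lo + hi)                    # phase crossover frequency

# Re G(j*w_pc) scales linearly with K, so the marginal gain is
K_critical = -1.0 / G(1j * w_pc, 1.0).real
```

The computed values agree with the example: ωpc = √3 ≈ 1.732 rad/s and K* = 8, the boundary between Rows 1 and 2 of Table 11.1.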

(N = 0) for K < 16, and it is encircled twice (N = 2) for K > 16. These results are shown in Rows 1 and 2 of the table. For the negative K case, there is one encirclement of −1 + j0 for all K < 0 and consequently Z = P + N = 0 + 1 = 1, as shown in Row 3. In summary, the Nyquist criterion has shown (Row 1) that the closed-loop system is stable for 0 < K < 16.

FIGURE 11.2 Nyquist criterion application to Example 11.2.

TABLE 11.2 Nyquist Criterion Results for Example 11.2

  Range of K      P    N    Z    Stability
  0 < K < 16      0    0    0    Yes
  16 < K < ∞      0    2    2    No
  −∞ < K < 0      0    1    1    No

Example 11.3

To illustrate the Nyquist criterion for a system having an unstable open-loop plant, consider the negative unity feedback system with G(s) = K(s + 1)/[s(s − 1)]. The s-plane contour along the positive jω-axis is again modified with an infinitesimal arc of radius ε. The infinite arc extends from the negative real axis clockwise through the second quadrant and connects with the polar plot of G(jω), as ω varies from ε to +∞. The complete G(s)-plane contour is shown in Figure 11.3 for K = 2.

FIGURE 11.3 Nyquist criterion application to Example 11.3 for K = 2.

Table 11.3 shows stability results for different ranges of K.

TABLE 11.3 Nyquist Criterion Results for Example 11.3

  Range of K      P    N     Z    Stability
  0 < K < 1       1    +1    2    No
  1 < K < ∞       1    −1    0    Yes
  −∞ < K < 0      1    0     1    No

The open-loop poles are located at s = +1 and the origin. Since one open-loop pole (s = +1) lies inside the righthalf s-plane contour, P = 1. The G(s)-plane contour for positive K in Figure 11.3 passes through the −1 + j0 point for K = 1, which is the upper limit of the range shown in Row 1 of the table. In words, the −1 + j0 point is encircled once in the same direction as the s-plane contour (N = +1) for K < 1, and it is encircled once in the opposite direction as the s-plane contour (N = −1) for K > 1. These results are shown in Rows 1 and 2 of the table. For the negative K case, there are no encirclements of −1 + j0 for all K < 0 and consequently Z = P + N = 1 + 0 = 1, as shown in Row 3. In summary, the Nyquist criterion has shown (Row 2) that the closed-loop system is stable for 1 < K < ∞.

11.2 Phase Margin and Gain Margin

Relative stability can be described by using measures that designate how close the G(s)-plane contour comes to an encirclement of the −1 + j0 point for open-loop stable systems. Two measures of relative stability are phase margin (PM) and gain margin (GM). The PM is determined as the absolute difference between −180° and the phase at the gain crossover frequency ωgc defined by the equation |G(jωgc)| = 1. In words, PM is the amount of phase that could be added to an open-loop plant transfer function before the system becomes unstable. A typical desired PM is 40° or greater, which requires that the phase at ωgc must be less negative than −140°.

Example 11.4

Let K = 15 for a negative unity gain feedback system having a plant transfer function G(s) = K/[s(s + 4)]. Using |G(jωgc)| = 1 yields ωgc = 3 rad/s, from which the phase of G(jωgc) = G(j3) = −126.9° and the PM is calculated as 53.1°. Figure 11.4 shows the PM marked on the polar plot in the G(jω)-plane. Since there is no finite ω for which the phase is −180° (that is, ωpc = ∞), the value of N is 0 because there is no encirclement; in such cases, the GM is infinite.

The GM is defined as the reciprocal of |G(jωpc)| or alternatively as K*/K, where K is the actual gain and K* is the gain for which the system becomes unstable. In words, GM is the amount of gain that can be multiplied by the actual plant transfer function gain before the system becomes unstable. A typical desired GM is 5 or greater.

Example 11.5

Consider again the system of Example 11.1 having a plant G(s) = K/(s + 1)³, and let K = 5. The phase crossover frequency was calculated in Example 11.1 as ωpc = √3 = 1.732 rad/s. Since K* = 8 from Example 11.1, the GM can easily be calculated as GM = K*/K = 8/5 or as 1/|G(jωpc)| = 1/|G(j√3)| = 1.60. This GM is shown in Figure 11.5. Using |G(jωgc)| = 1 yields ωgc = √(5^(2/3) − 1) = 1.387 rad/s, from which the PM is calculated as 17.4°. Figure 11.5 shows the PM on the polar plot in the G(jω)-plane.
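The PM and GM of Example 11.5 can be reproduced with a few lines of Python (a sketch; the closed-form ωgc follows from |G(jω)| = K/(1 + ω²)^(3/2) = 1):

```python
import cmath
import math

K = 5.0

def G(w):
    # G(j*w) for the plant K/(s + 1)^3 of Example 11.5
    return K / (1j * w + 1.0) ** 3

# Gain crossover from |G(j*w)| = 1, i.e. (1 + w^2)^(3/2) = K
w_gc = math.sqrt(K ** (2.0 / 3.0) - 1.0)
pm_deg = 180.0 + math.degrees(cmath.phase(G(w_gc)))

# Gain margin from the phase crossover w_pc = sqrt(3) of Example 11.1
w_pc = math.sqrt(3.0)
gm = 1.0 / abs(G(w_pc))                  # equals K*/K = 8/5
```

The computed gain margin is exactly 1.60, and the phase margin falls just above 17°, matching the values quoted in the example.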

FIGURE 11.4 Phase margin and gain margin for Example 11.4.

FIGURE 11.5 Phase margin and gain margin for Example 11.5.

11.3 Digital Control Applications

The application of the Nyquist criterion to digital control systems differs from the application to continuous-time systems. Single-loop digital control systems having a single sampler inside the loop yield the characteristic equation 1 + GH(z) = 0, where GH(z) is the pulsed transfer function expressed in terms of z-transforms. The sampler is often followed by a zero-order hold device for smoothing. In applying the Nyquist criterion, an important difference from continuous-time systems is that the Bode diagrams, which were useful for mapping the jω-axis into the G(s)-plane, cannot be used for digital control systems. Instead, z-plane vectors are used to map the unit circle in the z-plane into a corresponding contour in the GH(z)-plane.

A second difference between digital control systems and continuous-time systems is the value of Z required for stability. Continuous-time feedback systems are stable if and only if Z = 0, i.e., the number of closed-loop poles inside the righthalf s-plane is zero. Digital control systems are stable if and only if Z is equal to the order of the system, i.e., all of the closed-loop poles lie inside the unit circle in the z-plane.

Example 11.6

Consider the closed-loop digital control system having an open-loop pulsed transfer function given by G(z) = K/[z(z − 0.8)]. Use z-plane vectors as θ varies from 0° to 180° clockwise around the unit circle (Figure 11.6b), resulting in the G(z)-plane mapping for K = 1 shown in Figure 11.6c. The magnitude of G(z) is unity at the phase crossover angle θpc = cos⁻¹(0.4), and the phase of G(z) is −360° and its magnitude is 0.556 for θ = 180°. A mirror reflection about the real axis in the G(z)-plane completes the mapping for θ between 180° and 360°. Table 11.4 shows the Nyquist criterion results for different ranges of K, both positive and negative.

FIGURE 11.6 Nyquist criterion application to the digital control system of Example 11.6.

TABLE 11.4 Nyquist Criterion Results for Example 11.6

  Range of K          P    N     Z    Stability
  0 < K < 1           2    0     2    Yes
  1 < K < ∞           2    −2    0    No
  −0.2 < K < 0        2    0     2    Yes
  −1.8 < K < −0.2     2    −1    1    No
  −∞ < K < −1.8       2    −2    0    No

In summary, the Nyquist criterion has shown (Rows 1 and 3) that the closed-loop system is stable for −0.2 < K < 1.

Example 11.7

As a final example, consider a closed-loop digital control system with negative unity feedback having an open-loop pulsed transfer function G(z) = K(z + 1)/[z(z − 1)], which places an open-loop pole on the unit circle. Again, use z-plane vectors as θ varies from 0° to 180° clockwise around the unit circle (Figure 11.7a), including an infinitesimal arc around the open-loop pole at z = 1. Thus, all open-loop poles are inside the

contour in the z-plane. This infinitesimal arc in the z-plane results in an infinite arc that starts on the positive real axis in the G(z)-plane and moves clockwise through the fourth quadrant, connecting with the z-plane vector plot for θ = ε. The complete G(z)-plane plot is shown in Figure 11.7b, and Nyquist criterion results are provided in Table 11.5.

FIGURE 11.7 Nyquist criterion application to the digital control system of Example 11.7.

TABLE 11.5 Nyquist Criterion Results for Example 11.7

  Range of K      P    N     Z    Stability
  0 < K < 1       2    0     2    Yes
  1 < K < ∞       2    −2    0    No
  −∞ < K < 0      2    −1    1    No

11.4 Comparisons with Root Locus

Root locus, described elsewhere in this book, provides a plot of closed-loop poles in the s-plane for continuous-time feedback systems as the gain K varies. Imaginary axis crossings occur for gains K* at the imaginary axis location s = jω*, which can be found by using the Routh table. On the other hand, the Nyquist criterion is based on an s-plane contour that contains the jω-axis and an infinite arc around the righthalf s-plane. A phase crossover frequency ωpc is determined when applying the Nyquist criterion. The value of ω* from root locus is the same as the phase crossover frequency ωpc from the Nyquist criterion. Figure 11.8 shows this relationship between root locus and Nyquist criterion procedures.

FIGURE 11.8 Relationships between root locus and the Nyquist criterion.

. J. the PM calculations are exclusive for frequencybased methods (Nyquist criterion and Bode diagrams).C. and A. except that the contour in the z-plane is the unit circle and the closedloop system is stable if and only if Z equals to the order of the system.C. Feedback Control of Dynamic Systems.4 and 11. References 1. For continuous-time systems. the reader should consider determining root locus sketches for the Nyquist criterion examples presented here. 2002. 5th edn. 11th edn. Prentice-Hall. Bishop. Nise. 5th edn. However. Three examples were given for continuous-time systems and two for digital control systems. Control Systems Engineering. R. Modern Control Systems. 2. Franklin. The GM and PM calculations were also shown. NJ. LLC ..11-8 Control and Mechatronics Finally. The same ranges of stability would be obtained. Powell. John Wiley & Sons. New York. The closed-loop system is stable if and only if Z equals zero. Upper Saddle River.5 can also be obtained by root locus plots. 2008. The same procedure applies for digital control systems. B. 11. New York. Prentice-Hall. 3. G. Lebanon. 2005. Kuo and F.5  Summary The Nyquist criterion was described for both continuous-time and digital control systems having a single feedback loop.. Golnaraghi. Automatic Control Systems.. Emami-Naeini. N.S. the righthalf s-plane was mapped into the GH(s)-plane and the number of encirclements N of the −1 + j0 point recorded. 4. 2007. GMs in Examples 11. IN.D. © 2011 by Taylor and Francis Group. Moreover. John Wiley & Sons. Ranges of K for different N (and hence Z ) are often listed in a table. The number of closed-loop poles in the righthalf s-plane Z is equal to the sum of N and the number of open-loop poles (P) inside the righthalf s-plane.H. 8th edn. Dorf and R.

. the real part of the poles determines the rate of decay......... Any pair of complex-conjugate poles in the left half-plane corresponds to an exponentially decaying oscillatory mode of response... Loosely speaking.12 Root Locus Method 12................. 12-1 12....................... Any negative real pole of the s-domain transfer function of a continuous-time system corresponds to an exponential mode of response.......................................................................... LLC .......... pp. The compensator poles and zeros are chosen to shape the branches of the root locus.3 Compensator Design by Root Locus Method...... Root locus analysis was first developed by Evans (1948) as a means of determining the set of positions (loci) of the poles of a closed-loop system transfer function as a scalar parameter of the system varies over the interval from −∞ to ∞.. A more detailed discussion relating the pole and zero locations of a system transfer function to the time-domain system response and control design specifications is presented in Franklin et al.......5 Conclusion............2 Root Locus Analysis.4 Examples.....1  Motivation and Background The form of the natural response of any linear dynamic system is determined by the complex plane locations of the poles and zeros of the system transfer function.......................... the rate of decay of the exponential depends on the magnitude of the pole.................. 12-10 Effect of Including Additional Poles in the Open-Loop Transfer Function  •  Effect of Including Additional Zeros in the Open-Loop Transfer Function  •  Effect of a Lead Compensator  •  Lead Compensator Design Problem Definition  •  Development of Rules for Constructing Root Locus  •  Steps for Sketching the Root Locus Robert J.... 12-2 12.......... The pole locations of a feedback control system are a crucial consideration in its design.......... 
frequency-domain methods, which provide information on the magnitude and phase of the system frequency response. It constitutes a powerful tool in the design of a compensator for a feedback control system. It is especially useful for systems that are unstable or marginally stable in open loop since the frequency-domain methods are more cumbersome for such systems.

12.2 Root Locus Analysis

12.2.1 Problem Definition

Consider the closed-loop feedback control system shown in Figure 12.1.

FIGURE 12.1 Closed-loop system.

The quantity KG(s)H(s) is referred to as the open-loop transfer function of the system. Usually, the poles and zeros of the open-loop transfer function are known or are easily determined; moreover, these poles and zeros do not depend on the gain K. If G(s) and H(s) are each expressed as the ratio of polynomials in s as

  G(s) = n_G(s)/d_G(s), H(s) = n_H(s)/d_H(s)    (12.1)

then the poles of the open-loop transfer function are the solutions of

  d_G(s)d_H(s) = 0    (12.2)

and the zeros of the open-loop transfer function are the solutions of

  n_G(s)n_H(s) = 0    (12.3)

The transfer function of the closed-loop system is

  T(s) = KG(s)/(1 + KG(s)H(s)) = K n_G(s)d_H(s)/(d_G(s)d_H(s) + K n_G(s)n_H(s))    (12.4)

The poles of T(s) are the solutions of the characteristic equation

  1 + KG(s)H(s) = 0    (12.5)

or of the equivalent characteristic equation

  d_G(s)d_H(s) + K n_G(s)n_H(s) = 0    (12.6)

which clearly depend on the gain K. It is of interest to determine how the closed-loop poles move in the s plane as K varies. The root locus method is a means of plotting these poles as K ranges from −∞ to ∞.

Definition 12.1: The root locus (sometimes called the complete root locus) of the system in Figure 12.1 is the set of loci of the poles of the closed-loop system as the gain K ranges from −∞ to ∞.

In order to determine and plot the root locus of the system, write Equation 12.5 as

  KG(s)H(s) = −1    (12.7)

or

  KG(s)H(s) = 1∠(2k + 1)π, k = 0, ±1, ±2, …    (12.8)

which is equivalent to the two simultaneous conditions

  |KG(s)H(s)| = 1    (12.9)

and

  ∠G(s)H(s) = (2k + 1)π, k = 0, ±1, ±2, …, for K > 0
  ∠G(s)H(s) = 2kπ, k = 0, ±1, ±2, …, for K < 0    (12.10)

Equation 12.10 is known as the angle condition, and Equation 12.9 is known as the magnitude condition. Equations 12.9 and 12.10 are sometimes given as an alternative definition of the root locus. It is the angle condition that determines the locus: since K can be any real number, any point s automatically satisfies the magnitude condition for some K; the magnitude condition is mostly useful for determining the value of K that corresponds to a point s already known to lie on the root locus. According to the angle condition, a point s is on the root locus if the angle of G(s)H(s) is any multiple of π. If the angle is an odd multiple of π, then s is said to be on the standard root locus (sometimes called simply the root locus), which is the set of solutions of Equation 12.5 for K > 0. If the angle is an even multiple of π, then s is said to be on the complementary (or negative) root locus, which is the set of solutions of Equation 12.5 for K < 0.

We concentrate here on the construction of the standard root locus. For the standard root locus, the angle condition reduces to

  ∠G(s)H(s) = (2k + 1)π, k = 0, ±1, ±2, …    (12.11)

The development of the complementary (or negative) root locus is similar to that of the standard root locus and is not discussed here. For a complete treatment of the complementary root locus, interested readers are referred to Kuo (1995, pp. 472–509). A summary of the guidelines for sketching the complementary root locus is given in Franklin et al. (2009, pp. 266–270).

12.2.2 Development of Rules for Constructing Root Locus

Considering the availability of special-purpose software for control systems analysis, it can be argued that the most convenient approach to plotting a root locus is to use a computer. Nevertheless, the ability to sketch a root locus by hand is essential. It affords confidence that a computer-generated solution is free of errors. Moreover, it improves the intuition necessary for designing a feedback controller, a process of altering a system to obtain the desired root locus. The intuition and understanding that attend the ability to sketch the root locus by hand and the use of the computer for precise root locus calculations are complementary tools in feedback control design. It is possible, therefore, to get a clear idea about the general form of the root locus
1  Closed-loop system.1) then the poles of the open-loop transfer function are the solutions of dG (s)dH (s) = 0. H (s ) = H . The quantity KG(s)H(s) is referred to as the open-loop transfer function of the system. Usually. (12.2. 12. (12. LLC .2) and the zeros of the open-loop transfer function are the solutions of nG (s)nH (s) = 0. It is especially useful for systems that are unstable or marginally stable in open loop since the frequency-domain methods are more cumbersome for such systems.4) The poles of T(s) are the solutions of the characteristic equation 1 + KG(s)H (s) = 0. 1 + KG(s)H (s) dG (s)dH (s) + KnG (s)nH (s) (12. If G(s) and H(s) are each expressed as the ratio of polynomials in s as G(s) = nG (s) n (s ) . dG (s) d H (s ) (12. Moreover. The root locus method is a means of plotting these poles as K ranges from −∞ to ∞.6) which clearly depend on the gain K. which provide information on the magnitude and phase of the system frequency response. + – K G(s) H(s) Figure 12.3) The transfer function of the closed-loop system is T (s ) = KG(s) KnG (s)dH (s) = .1  Problem Definition Consider the closed-loop feedback control system shown in Figure 12.2 Root Locus Analysis 12. (12. It is of interest to determine how the closed-loop poles move in the s plane as K varies.12-2 Control and Mechatronics frequency-domain methods.1. (12. these poles and zeros do not depend on the gain K. © 2011 by Taylor and Francis Group.5) or of the equivalent characteristic equation dG (s)dH (s) + KnG (s)nH (s) = 0. the poles and zeros of the openloop transfer function are known or are easily determined.

pp.  K >0 . According to the angle condition. Equation 12. then s is said to be on the complementary (or negative) root locus. any point s automatically satisfies the magnitude condition. which is the set of solutions of Equation 12.10 is known as the angle condition. The development of the complementary (or negative) root locus is similar to that of the standard root locus and is not discussed here.1  The root locus (sometimes called the complete root locus) of the system in Figure 12.… . 12. In addition. (12.5 for K < 0. It is possible.9 and 12. LLC . ± 2. The intuition and understanding that attend the ability to sketch the root locus by hand and the use of the computer for precise root locus calculations are complementary tools in feedback control design. a point s is on the root locus if the angle of G(s)H(s) is any multiple of π. to get a clear idea about the general form of the root locus © 2011 by Taylor and Francis Group. if the angle is an even multiple of π . In order to determine and plot the root locus of the system. k = 0. ± 1.2. ± 1. For the standard root locus. k = 0.8) KG(s)H (s) = −1 (12. (12. write Equation 12. ± 2. 472–509).… . K <0 (12.9) Equations 12. 266–270).10 are sometimes given as an alternative definition of the root locus. therefore. the ability to sketch a root locus by hand is essential. interested readers are referred to Kuo (1995.10) KG(s)H (s) = 1 (12.Root Locus Method 12-3 Definition 12. Since K can be any real number.1 is the set of loci of the poles of the closed-loop system as the gain K ranges from −∞ to ∞. (2009. it can be argued that the most convenient approach to plotting a root locus is to use a computer. It is the angle condition that determines the locus. the angle condition reduces to ∠G(s)H (s) = (2k + 1)π.5 for K > 0. and Equation 12. which is the set of solutions of Equation 12. Nevertheless. For a complete treatment of the complementary root locus. k = 0. 
the magnitude condition is mostly useful for determining the value of K that corresponds to a point s already known to lie on the root locus. it improves the intuition necessary for designing a feedback controller—a process of altering a system to obtain the desired root locus.11) We concentrate here on the construction of the standard root locus. with hardly any calculations. ± 2.  ∠G(s)H (s) =   2k π .7) which is equivalent to the two simultaneous conditions and (2k + 1)π.9 is known as the magnitude condition.… . If the angle is an odd multiple of π .2  Development of Rules for Constructing Root Locus Considering the availability of special-purpose software for control systems analysis. A summary of the guidelines for sketching the complementary root locus is given in Franklin et al. then s is said to be on the standard root locus (sometimes called simply the root locus). ± 1. It affords confidence that a computer-generated solution is free of errors.5 as or KG(s)H (s) = 1∠(2k + 1)π.

Usually, a useful approximate sketch can be made without any tedious calculations. However, except for simple systems, precise calculations of the detailed features of the locus are best done by computer.

The rules for plotting the root locus are now developed. This presentation starts with the rules that are the simplest and therefore the most useful for sketching the root locus by hand and proceeds to the rules that are more complex and tedious to apply. The development is intended to be intuitively convincing and complete, but not too mathematical. Other developments, with varying degrees of mathematical rigor, may be found in Franklin et al. (2009, pp. 225–235), Nise (2008, pp. 381–395), Kuo (1995, pp. 477–505), and D'Azzo and Houpis (1988, pp. 228–234).

Rule 12.1  The number of branches of the root locus is equal to the number of poles of the open-loop transfer function KG(s)H(s).

As K varies, each of the closed-loop poles moves along a continuous path in the complex plane, called a "branch" of the root locus. The number of branches is therefore equal to the number of closed-loop poles for any value of K. But the number of closed-loop poles is equal to the number of open-loop poles since the characteristic polynomials in Equations 12.2 and 12.6 have the same degree. (It is assumed that both G(s) and H(s) are proper, i.e., neither G(s) nor H(s) has more zeros than poles.)

Rule 12.2  The root locus is symmetric about the real axis in the s plane.

For any physical system, the coefficients of the open-loop transfer function, and hence of the characteristic polynomial, are real. Therefore, any complex closed-loop poles occur in conjugate pairs, and so do the root locus branches.

Rule 12.3  The root locus starts (for K = 0) at the poles of G(s)H(s) and ends (for K = ∞) at the zeros of G(s)H(s), including the zeros at infinity.

Let the open-loop transfer function have n poles and m zeros. It seems reasonable from an intuitive point of view that the branches of the root locus should originate from the open-loop poles: when K = 0, there is no feedback to alter the open-loop system dynamics. This is also easy to establish mathematically. Simply observe that setting K = 0 makes the closed-loop characteristic equation (12.6) identical with the open-loop characteristic equation (12.2).

Rule 12.3 also claims that m of the branches of the root locus terminate at the finite zeros of G(s)H(s), while the other n − m branches go off to infinity as K → ∞. This claim also seems reasonable: according to the magnitude condition, as K approaches infinity, |G(s)H(s)| must approach zero. To establish the claim mathematically, Equation 12.6 can be rewritten as

(1/K) dG(s)dH(s) + nG(s)nH(s) = 0.  (12.12)

For s equal to any of the open-loop zeros, both terms on the left-hand side of Equation 12.12 vanish as K → ∞, the first one since dG(s)dH(s) is bounded and 1/K → 0, and the second one since the open-loop zeros are the roots of nG(s)nH(s). Therefore, m of the branches of the root locus must terminate at the open-loop zeros. To see that the other n − m branches go to infinity, note that the left-hand side of Equation 12.12 is a polynomial of degree n whose n − m highest order coefficients decrease as K increases. As these coefficients go to zero, the polynomial approximates nG(s)nH(s) in a larger and larger region in the s plane. The additional n − m roots must lie outside this growing region, where the highest order terms of the polynomial still dominate. Thus, as K → ∞, these n − m roots go off to infinity.

Rule 12.4  A given point on the real axis is included in the root locus if the number of real open-loop poles and zeros to the right of that point is odd.

This rule is a direct consequence of the angle condition. Consider, for example, the open-loop system pole-zero configuration shown in Figure 12.2a. The intervals on the real axis included in the root locus, as given by Rule 12.4, are indicated by dark line segments. To derive this result, write the open-loop transfer function as

G(s)H(s) = A(s − z1)/[(s − p1)(s − p2)(s − p3)(s − p4)(s − p5)].  (12.13)

It is assumed that A > 0, which implies simply that the leading coefficients of the numerator and denominator polynomials of G(s)H(s) have the same sign. Then the angle of the open-loop transfer function is given by

∠G(s)H(s) = ∠(s − z1) − ∠(s − p1) − ∠(s − p2) − ∠(s − p3) − ∠(s − p4) − ∠(s − p5).  (12.14)

Now consider the test point s1 on the real axis, shown in Figure 12.2b, and the vectors s1 − zi and s1 − pi. The angle of the open-loop transfer function at s1 is computed according to Equation 12.14 from the angles of these vectors as

∠G(s1)H(s1) = 0 − 180° − 0 − 0 − θ1 − (−θ1) = −180°.  (12.15)

Therefore, s1 satisfies the angle condition, Equation 12.11, and lies on the root locus. By contrast, consider the test point s2 shown in Figure 12.2c. The angle of the open-loop transfer function at s2 is

∠G(s2)H(s2) = 0 − 180° − 180° − 0 − θ2 − (−θ2) = −360°.  (12.16)

Therefore, s2 does not satisfy the angle condition and does not lie on the root locus.

In general, for any test point on the real axis, complex-conjugate pairs of poles or zeros contribute zero net angle, and real poles and zeros to the left of the test point also contribute zero angle. Real poles and zeros to the right of the test point contribute angles of ±180° each. Therefore, the angle of G(s)H(s) at a test point on the real axis depends only on the real poles and zeros to the right of the test point, and the test point satisfies the angle condition if the number of those poles and zeros is odd.

Figure 12.2  The angle condition for points on the real axis: (a) the real-axis root locus, (b) test point on the locus, and (c) test point not on the locus.
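Rule 12.4 reduces to a parity count, which can be sketched in a few lines (Python, illustrative names):

```python
def on_real_axis_locus(x, zeros, poles, tol=1e-9):
    """Rule 12.4: a real-axis point x is on the standard root locus iff the
    number of real open-loop poles and zeros strictly to its right is odd."""
    to_right = [q for q in list(zeros) + list(poles)
                if abs(complex(q).imag) < tol and complex(q).real > x]
    return len(to_right) % 2 == 1

# Example: G(s)H(s) = (s + 1)/(s(s + 2)(s + 4)).
# The real-axis locus is the union of [-1, 0] and [-4, -2].
print([on_real_axis_locus(x, [-1], [0, -2, -4])
       for x in (-0.5, -1.5, -3.0, -5.0)])   # [True, False, True, False]
```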

If the leading coefficient A of the open-loop transfer function G(s)H(s) is negative, then the same reasoning leads to the opposite conclusion: the root locus then lies on the real axis at points to the left of an even number of real open-loop poles and zeros. This, in fact, is the same result that would be obtained when considering A > 0 but K < 0, which would correspond to the complementary root locus. From this point forward, it is assumed that the leading coefficient is positive (A > 0), and only the standard root locus (K > 0) is considered.

Rule 12.5  The n − m branches of the root locus that go to infinity for large K asymptotically approach straight lines at angles

θk = (2k + 1)π/(n − m), k = 0, 1, 2, …, n − m − 1,  (12.17)

where n is the number of open-loop poles and m is the number of open-loop zeros, with n > m.

This rule is illustrated in Figure 12.3, where the asymptotes of the root locus branches are shown for several values of n − m. It is derived by applying the angle condition for test points s of large magnitude, that is, test points that are far from the origin and far from all of the finite open-loop poles pi and zeros zi, as shown in Figure 12.4. For such a test point, each angle ∠(s − zi) and ∠(s − pi) is approximately equal to ∠s. The angle condition, Equation 12.11, may be written as

Σ_{i=1}^{m} ∠(s − zi) − Σ_{i=1}^{n} ∠(s − pi) = −(2k + 1)π  (12.18)

for some integer k. For large s, Equation 12.18 is approximated by

Σ_{i=1}^{m} ∠s − Σ_{i=1}^{n} ∠s = −(2k + 1)π  (12.19)

or

m∠s − n∠s = −(2k + 1)π.  (12.20)

Solving for ∠s yields the given condition, Equation 12.17.

Figure 12.3  Asymptote angles of the root locus branches: (a) n − m = 1, (b) n − m = 2, (c) n − m = 3, and (d) n − m = 4.

Figure 12.4  The angle condition for test points far from the origin.

If n − m = 0, then there are no asymptotes, and if n − m = 1, then there is a single asymptote at an angle of 180°. In either of these cases, the "intersection of the asymptotes" is irrelevant; therefore, only the situation where n − m ≥ 2 needs to be considered.

Rule 12.6  The intersection of the asymptotes lies on the real axis and is given by

σ = (Σ_{i=1}^{n} pi − Σ_{i=1}^{m} zi)/(n − m),  (12.21)

where pi and zi denote the open-loop poles and zeros, respectively.

The illustrations in Figure 12.3 indicate that, for n − m ≥ 2, the roots that go to infinity are balanced about some point on the real axis. This point, denoted σ, is the intersection of the asymptotes and is, therefore, the "center of gravity" or average of those roots. The center of gravity of the roots that go to infinity may be found by taking advantage of a simple property of polynomials: for any polynomial

q(s) = s^n + a1 s^(n−1) + ⋯ + an,  (12.22)
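Rules 12.5 and 12.6 together fix the asymptotes of the locus, and both are one-line computations. A small sketch (illustrative names, angles reported in degrees):

```python
def asymptotes(zeros, poles):
    """Rules 12.5-12.6: asymptote angles theta_k = (2k + 1)*180/(n - m) degrees
    and their real-axis intersection sigma = (sum(p_i) - sum(z_i))/(n - m)."""
    n, m = len(poles), len(zeros)
    angles = [(2 * k + 1) * 180.0 / (n - m) for k in range(n - m)]
    sigma = (sum(complex(p).real for p in poles)
             - sum(complex(z).real for z in zeros)) / (n - m)
    return angles, sigma

# Example: G(s)H(s) = 1/(s(s + 1)(s + 2)), so n - m = 3.
print(asymptotes([], [0, -1, -2]))   # ([60.0, 180.0, 300.0], -1.0)
```

Because complex poles and zeros occur in conjugate pairs, summing real parts gives the same σ as summing the (complex) roots themselves.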

the sum of the roots is simply the negative of the coefficient a1. (This is easily verified by writing q(s) in factored form.)

In the case where n − m ≥ 2, the second coefficient of the closed-loop characteristic polynomial

qcl(s) = dG(s)dH(s) + K nG(s)nH(s)  (12.23)

is the same as that of the open-loop characteristic polynomial dG(s)dH(s). Therefore, the sum of the closed-loop poles does not depend on K. For K = 0, the closed-loop poles ri are at the open-loop pole locations; the sum of the closed-loop poles is established, therefore, as

Σ_{i=1}^{n} ri = Σ_{i=1}^{n} pi  (12.24)

for all K. As K → ∞, m of the roots are found at the open-loop zeros zi, and the other n − m roots are on the asymptotes, balanced about σ; the sum of the roots is, therefore, also given by

Σ_{i=1}^{n} ri = Σ_{i=1}^{m} zi + (n − m)σ.  (12.25)

Comparison of Equations 12.24 and 12.25 yields

Σ_{i=1}^{m} zi + (n − m)σ = Σ_{i=1}^{n} pi,  (12.26)

which in turn yields Equation 12.21.

Rule 12.7  The points at which the root locus intersects the jω axis, and the corresponding values of K, may be determined from the magnitude and phase plots of the open-loop transfer function.

This method is based on the angle and magnitude conditions for the root locus. An accurate (computer-generated) phase plot of the open-loop transfer function will show the frequency or frequencies ω at which the angle of G(jω)H(jω) equals an odd multiple of 180°. By the angle condition, these frequencies are the values on the imaginary axis intersected by the root locus. The magnitudes of G(jω)H(jω) at these frequencies can then be read from an accurate magnitude plot. By the magnitude condition, these magnitudes are the reciprocals of the values of K that correspond to the jω-axis crossings.

Rule 12.8  The angle of departure of the root locus from a complex pole p1 is given by

θ_p1 = Σ_{i=1}^{m} ∠(p1 − zi) − Σ_{i=2}^{n} ∠(p1 − pi) + 180°,  (12.27)

where p1, …, pn are the open-loop poles and z1, …, zm are the open-loop zeros.

The angle of arrival of the root locus at a complex zero z1 is given by

θ_z1 = Σ_{i=1}^{n} ∠(z1 − pi) − Σ_{i=2}^{m} ∠(z1 − zi) − 180°.  (12.28)

Equations 12.27 and 12.28 are found by applying the angle condition to test points near the complex pole or zero. This calculation is especially useful for determining the position of a short branch of the root locus between a complex pole and a complex zero lying close together. Figure 12.5 shows two pole-zero configurations for which the position of the locus between a pole and a zero is critical to determining the closed-loop system stability.

As an example, for Figure 12.5a, the angle condition near the zero z1 is

θ_z1 + ∠(z1 − z2) − ∠(z1 − p1) − ∠(z1 − p2) − ∠(z1 − p3) = −180°  (12.29)

or

θ_z1 + 90° − (−90°) − 90° − 90° = −180°,  (12.30)

which gives

θ_z1 = −180°.  (12.31)

This calculation implies that the branch approaches the zero from the left. In fact, the branch lies entirely in the left half-plane, and the closed-loop system is stable. For Figure 12.5b, the only difference in the calculation is that ∠(z1 − p1) = 90° instead of −90°. Equation 12.29 then yields

θ_z1 = 0.  (12.32)

This means that the branch approaches the zero from the right. The branch lies entirely in the right half-plane, and, as a result, the closed-loop system is unstable. The subtle difference between the two systems in Figure 12.5 thus results in two opposite conclusions concerning the closed-loop system stability. The computation of the angle of departure or the angle of arrival is helpful in this case for accurately reaching these conclusions.

Figure 12.5  Root locus plots demonstrating angles of departure and arrival: (a) θ_z1 = −180° and (b) θ_z1 = 0.
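Equation 12.27 is just a sum of vector angles and is easy to evaluate numerically. A hedged sketch (illustrative names; the example system is not one from the text):

```python
import cmath

def departure_angle(p1, zeros, other_poles):
    """Eq. 12.27: angle of departure (in degrees) of the root locus from the
    complex pole p1, given the open-loop zeros and the remaining poles."""
    ang = sum(cmath.phase(p1 - z) for z in zeros)
    ang -= sum(cmath.phase(p1 - p) for p in other_poles)
    return (cmath.pi + ang) * 180.0 / cmath.pi

# Example: G(s)H(s) = 1/(s(s^2 + 2s + 2)), poles at 0 and -1 +/- j.
# Departure from p1 = -1 + j: 180 - 135 - 90 = -45 degrees.
theta = departure_angle(-1 + 1j, [], [0, -1 - 1j])
print(theta % 360.0)   # approximately 315, i.e. -45 degrees
```

The same function evaluates the arrival angle of Equation 12.28 if the roles of the poles and zeros are exchanged and the sign of the 180° term is flipped.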

Rule 12.9  Points where the root locus breaks away from the real axis (breakaway points) or breaks into the real axis (break-in points) are the real values of s on the root locus that satisfy

d[G(s)H(s)]/ds = 0.  (12.33)

At a breakaway point, two branches of the root locus on the real axis meet for a certain value of K, resulting in a repeated root, and then leave the axis as a complex-conjugate pair as K increases. Hence, the value of K corresponding to the roots reaches a relative maximum at the breakaway point. By the magnitude condition, this implies that G(s)H(s) reaches a relative minimum on the real axis at a breakaway point. Similarly, G(s)H(s) reaches a relative maximum at a break-in point. Hence, a breakaway or break-in point is a point on the real locus for which the derivative of G(s)H(s) is zero. This condition may be checked directly by hand, or the function G(s)H(s) may simply be plotted for real s to find its relative minima and maxima on the real axis.

In a few cases, branches of the root locus meet, and split up again, at a point off the real axis. Such a point, at which two or more branches of the root locus meet, is called a complex breakpoint. The same derivative rule presented for finding real breakaway and break-in points may also be used for finding complex breakpoints.

Note that the range of gains for which the system is stable may also be determined by applying the Routh–Hurwitz test (Nise 2008, 291–302) to the closed-loop characteristic polynomial. This test can be especially useful if the root locus has multiple jω-axis crossings.

12.2.3  Steps for Sketching the Root Locus

As a summary, the steps for sketching the root locus of a system are now presented. These steps do not always need to be taken sequentially but may be used as general guidelines for the root locus method of analysis. It is assumed that the open-loop transfer function is proper:

1. Write the open-loop transfer function with numerator and denominator polynomials in factored form. Plot the open-loop poles and zeros in the s plane.
2. Start a root locus branch at each open-loop pole (Rule 12.3).
3. Considering only the real axis, fill in all intervals that lie to the left of an odd number of real poles and zeros (Rule 12.4).
4. Sketch the asymptotes of the n − m branches that go to infinity (Rules 12.5 and 12.6).
5. Keeping in mind that the locus is symmetric with respect to the real axis (Rule 12.2), try to visualize how each branch will arrive at a zero or approach an asymptote as it goes to infinity (Rule 12.3).
6. If necessary to further determine the positions of certain branches of the locus, compute approximate angles of departure from complex poles or arrival at complex zeros (Rule 12.8).
7. If the exact location (or the existence) of a breakaway or break-in point is critical, use the derivative rule (Rule 12.9) or a computer plot of the locus to determine it.
8. If the exact frequency or gain corresponding to a jω-axis crossing is critical, or if uncertain whether a jω-axis crossing occurs, plot accurate magnitude and phase plots to determine it (Rule 12.7).

Remember that the angle condition may be checked at any point in the complex plane to determine whether that point lies on the root locus. If a given point in the vicinity of the open-loop poles and zeros satisfies the angle condition only approximately (not exactly), then it is safe to assume that it lies near the root locus, at least in the sense that a modest perturbation in the plant could result in a closed-loop pole there.

12.3  Compensator Design by Root Locus Method

An essential aspect of feedback control design is the introduction of a compensator to alter the characteristics of the open-loop transfer function. In a few cases, a satisfactory design may result simply from introducing a gain into the loop; a straightforward root locus analysis is then sufficient to determine the value of the gain required to achieve the desired transient response. In most cases, however, a satisfactory design requires the introduction of a dynamic compensator.
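The derivative condition of Rule 12.9 can be solved numerically as well. The sketch below (illustrative names) forms the numerator of d[G(s)H(s)]/ds, namely n′(s)d(s) − n(s)d′(s) for G(s)H(s) = n(s)/d(s), and keeps the real candidate roots; as the text notes, each candidate must still lie on the real-axis locus (Rule 12.4) to be an actual breakaway or break-in point.

```python
import numpy as np

def breakpoint_candidates(num, den, tol=1e-9):
    """Rule 12.9: real solutions of d[G(s)H(s)]/ds = 0, i.e. of
    n'(s) d(s) - n(s) d'(s) = 0, where G(s)H(s) = n(s)/d(s).
    num and den are coefficient lists in descending powers of s."""
    n, d = np.poly1d(num), np.poly1d(den)
    roots = (n.deriv() * d - n * d.deriv()).roots
    # keep real candidates; they must also satisfy Rule 12.4 to be breakpoints
    return sorted(r.real for r in roots if abs(r.imag) < tol)

# Example: G(s)H(s) = 1/(s(s + 2)) has a breakaway point at s = -1,
# the midpoint of the two real poles.
print(breakpoint_candidates([1.0], [1.0, 2.0, 0.0]))   # [-1.0]
```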

A root locus design method consists of selecting the compensator poles and zeros to position the locus branches and then selecting the compensator gain to place the roots at some desired positions along the branches. In other words, the overall system transient response depends upon the selection of the pole and zero locations, as well as the gain, of the compensator. The first part of the design is the more complicated one, requiring first of all an understanding of how the root locus is affected by the inclusion of additional poles or zeros in a given loop transfer function.

12.3.1  Effect of Including Additional Poles in the Open-Loop Transfer Function

The addition of poles to the open-loop transfer function increases the number of branches in the root locus and pushes some branches to the right in the s plane. This is a consequence of the angle condition. It is clear from Equation 12.14 that a pole of the open-loop transfer function contributes a negative angle to G(s)H(s) for points s in the upper half-plane. Therefore, for a given test point in the upper half-plane, an additional open-loop pole will cause the angle of G(s)H(s) to become more negative. That is, the angle of G(s)H(s) will reach −180° for test points at smaller angles in the s plane; as a result, some branches of the root locus will lie farther to the right.

The effect on the root locus of adding poles to two different open-loop transfer functions is illustrated in Figure 12.6. For each system, the original root locus is shown, followed by the root locus with first one and then two additional poles. For both systems, some branches of the root locus move to the right as open-loop poles are added.

Figure 12.6  The effect of additional poles on the root locus.

Adding poles to a system has an undesirable effect on the root locus: the feedback tends to be more destabilizing. Systems with low relative degree (i.e., n − m ≤ 1) can be stable for all values of the feedback gain. For a system with a relative degree n − m ≥ 2, however, care must be taken to keep the feedback gain sufficiently low that all of the closed-loop poles lie in the left half-plane. On the other hand, frequency-domain analysis shows that the addition of a pole or poles near the origin can improve the low-frequency performance of the closed-loop system.

12.3.2  Effect of Including Additional Zeros in the Open-Loop Transfer Function

The addition of zeros to the open-loop transfer function pulls the closed-loop poles to the left in the s plane. In short, the effect of additional zeros is roughly the opposite of that of additional poles. In any case, each zero attracts a branch of the root locus to terminate there for K = ∞.

The effect on the root locus of adding zeros to two different open-loop transfer functions is illustrated in Figure 12.7. For each system, the original root locus is shown, followed by the root locus with first one and then two additional zeros. For both systems, some branches of the root locus move to the left as more open-loop zeros are added.

Adding left half-plane zeros to a system has a desirable effect on the root locus. Even a system with some open-loop poles in the right half-plane can be stabilized using feedback if there are some zeros in the left half-plane. On the other hand, a zero that has a stabilizing effect on the root locus may cause a deterioration of the low-frequency performance of the system. Such an undesirable effect can be discerned by the use of frequency-domain analysis techniques.

Figure 12.7  The effect of additional zeros on the root locus.

Figure 12.8  The effect of a lead compensator on the root locus.

12.3.3  Effect of a Lead Compensator

If the branches of the root locus of a given system lie too far to the right, they could be moved to the left if it were possible to introduce an additional zero to the open-loop transfer function. Of course, a proper compensator has at least as many poles as zeros, so adding zeros also requires adding poles. In many cases, when the effect of a zero is desired, the design calls for a lead compensator. A lead compensator has a transfer function of the form

Gc(s) = K(s + a)/(s + b)  (12.34)

with 0 < a < b. It is called a lead compensator because its frequency response Gc(jω) has a positive angle (phase lead) for all frequencies ω > 0.

The effect of a lead compensator on the root locus of two different open-loop transfer functions is illustrated in Figure 12.8. (These are the same two systems used to illustrate the effect of additional zeros in Figure 12.7.) For each system, the original root locus is shown, followed by the root locus with the lead compensator. In effect, the zero of the compensator, being locally more influential than the pole, initially draws the root locus to the left. The angles of the asymptotes of the root locus branches remain unchanged, but the intersection of the asymptotes is moved to the left.

12.3.4  Lead Compensator Design

The goal of the lead compensator design is to improve the transient response of the system over that of the uncompensated system. Usually, the speed and damping of the system response can be correlated to the locations of the dominant closed-loop poles, which are the poles nearest the origin of the s plane. Generally speaking, the larger the magnitude of the dominant poles, the faster the system response; and the larger the angle of the dominant poles, measured counterclockwise from the positive jω axis, the more heavily damped (i.e., the less oscillatory) the system response.

In order to obtain adequate speed of response and damping, the goal of the lead compensator design may be restated as the placement of the dominant poles at a desired magnitude and angle. Given an open-loop transfer function Gp(s), and assuming the transient response of the uncompensated closed-loop system is not satisfactory, one method for the design of a lead compensator may proceed as follows:

1. Choose the desired locations (magnitude and angle) of the dominant poles in the s plane. Assume the dominant poles are a complex-conjugate pair, and call the desired upper half-plane location s1.
2. Find ∠Gp(s1).
3. Choose the parameters a and b (i.e., the pole and the zero) of the compensator such that

∠[(s1 + a)/(s1 + b)] + ∠Gp(s1) = ±180°.  (12.35)

Then, according to the angle condition, the point s1 will lie on the compensated root locus.
4. Choose the parameter K (i.e., the gain parameter) of the compensator such that

|K(s1 + a)/(s1 + b)| |Gp(s1)| = 1.  (12.36)

Then, according to the magnitude condition, the point s1 will be a pole of the compensated closed-loop system.

The angle of the compensator at s1 is found graphically as the angle between the line segments connecting s1 with the compensator pole and zero; see Figure 12.9, where this angle is labeled ϕ. It is clear that the choice of the compensator pole and zero that yield a particular angle is not unique; there is, therefore, some design freedom inherent in Step 3 above. An additional criterion may be imposed to fix the two parameters a and b. For example, a may be chosen to cancel a real pole of the plant near (not on) the jω axis, but not too close to a zero of the forward transfer function G(s), and then b determined to satisfy Equation 12.35. Or a and b may be chosen together to minimize the required value of K in Step 4 while still achieving the desired angle. A graphical method for this approach is given in D'Azzo and Houpis (1988, 370–371).

Note that the steady-state performance of the system is not considered in the design of the lead compensator as presented. The steady-state performance depends not on the closed-loop pole locations but on the open-loop dc gain of the system, which cannot easily be visualized using the root locus technique. If necessary, the steady-state performance of the system may be improved by use of a lag compensator. The lag compensator design, however, is best done by frequency-domain techniques, where the open-loop gains can be clearly seen.

Figure 12.9  Graphical determination of the angle of a lead compensator at a point s1.

12.4  Examples

The use of the root locus method as a design technique is now illustrated by use of two examples. The details of the construction of the loci are omitted for the most part since they are routine and can be filled in by the reader, either by hand or by computer.
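Steps 2 through 4 can also be carried out numerically rather than graphically. The sketch below (function and variable names are illustrative, not from the text) fixes the compensator zero at s = −a, solves the angle condition (12.35) for the pole location b, and then applies the magnitude condition (12.36) for K; it assumes the required pole-vector angle is not exactly 90°, where the tangent is undefined.

```python
import cmath
import math

def lead_design(s1, Gp, a):
    """Place a closed-loop pole at the desired point s1 (upper half-plane)
    using Gc(s) = K (s + a)/(s + b), with the zero location a chosen already."""
    # The pole vector (s1 + b) must have angle theta so that
    # angle(s1 + a) - angle(s1 + b) + angle(Gp(s1)) = -180 degrees.
    theta = cmath.phase(s1 + a) + cmath.phase(Gp(s1)) + math.pi
    b = s1.imag / math.tan(theta) - s1.real        # geometry of the pole vector
    K = abs(s1 + b) / (abs(s1 + a) * abs(Gp(s1)))  # magnitude condition (12.36)
    return b, K

# Inertial plant Gp(s) = 1/s^2 with zero at s = -1 and desired dominant
# pole s1 = -0.76 + j2.04; this recovers values near b = 3 and K = 7,
# the compensator used in the first example below.
b, K = lead_design(-0.76 + 2.04j, lambda s: 1.0 / s**2, 1.0)
print(round(b, 2), round(K, 2))   # approximately 3 and 7
```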

12.4.1  Compensation of an Inertial System

Consider the feedback system shown in Figure 12.10, with G(s) = Gp(s)Gc(s) and H(s) = 1. The plant is represented by

Gp(s) = 1/s²,  (12.37)

which describes an inertial system, such as a mass in linear motion with a force input along the direction of motion. The compensator is to have the form

Gc(s) = K(s + 1)/(s + p),  (12.38)

where K > 0 and p > 0 are design parameters. The design procedure consists of choosing p so that the branches of the root locus assume a desired shape and then choosing K to place the closed-loop poles at the desired positions along the branches. The root locus is shown in Figure 12.11 for several values of p, and some of the corresponding step responses are shown in Figures 12.12 and 12.13.

Figure 12.10  Form of closed-loop system for examples.

For p = 1, the pole and zero of Gc(s) cancel, and the compensator becomes just a gain K. The open-loop transfer function is

G(s) = Gp(s)Gc(s) = K/s²,  (12.39)

which has no zeros and two poles at the origin. The root locus is given in Figure 12.11a. As the gain K is increased, the closed-loop poles simply move along the jω axis away from their starting points at the origin. The system response will be characterized by undamped oscillations. Increasing gain results in increasing frequency of oscillations, so the frequency of the oscillations can be increased or decreased by varying the gain K, but the system is not effectively stabilized for any value of K.

For p = 3, the open-loop transfer function is

G(s) = K(s + 1)/[s²(s + 3)].  (12.40)

Gc(s) takes the form of a lead compensator, and this choice corresponds to the usual first step for a compensator design. The root locus is shown in Figure 12.11b. The relative degree of G(s) is two, so the asymptotes of the loci are at angles of ±90° in the complex plane, and the vertical asymptotes intersect the real axis at σ = −1. The zero of the compensator attracts the locus somewhat, and all of the closed-loop poles are in the left half-plane for all values of K > 0. (Frequency-domain analysis would show that the compensator has added stabilizing phase lead to the open-loop transfer function.)

If the compensator parameters are chosen as p = 3 and K = 7, then the closed-loop poles lie at −1.47 on the real axis and at −0.76 ± j2.04 along the complex-conjugate branches of the root locus. These complex-conjugate positions have been chosen at the point of greatest damping along the root locus of Figure 12.11b. The step response of the resulting closed-loop system, shown in Figure 12.12, is oscillatory and shows over 50% overshoot.
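The quoted pole locations follow from the closed-loop characteristic polynomial s²(s + p) + K(s + 1) = s³ + ps² + Ks + K; a quick numerical check:

```python
import numpy as np

# Closed-loop characteristic polynomial for the inertial plant with the
# lead compensator of the example: p = 3, K = 7.
p, K = 3.0, 7.0
poles = np.roots([1.0, p, K, K])
print(np.sort_complex(poles))   # approximately -1.47, -0.76 +/- j2.04
```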

To obtain a better-damped response, the pole of the compensator can be moved farther to the left. Figure 12.11c shows the root locus plot for p = 6. With the compensator pole more distant, the compensator zero attracts the locus more strongly to the left. (Frequency-domain analysis would show that the stabilizing phase lead of the compensator is increased.) The relative degree of G(s) is still two, so the root locus asymptotes are still at angles of ±90°. If the compensator parameters are chosen as p = 6 and K = 15, then the closed-loop poles lie at −2.32 on the real axis and at −1.84 ± j1.75 along the complex-conjugate branches of the root locus. Again, these complex-conjugate positions have been chosen at the point of greatest damping along the root locus of Figure 12.11c.

Figure 12.11  Root locus plots for inertial plant with lead compensator: (a) p = 1, (b) p = 3, (c) p = 6, (d) p = 9, and (e) p = 12.

The step response, shown in Figure 12.13, is well damped but still shows considerable overshoot resulting from the zero at s = −1.

For p > 9, the complex-conjugate branches of the locus are attracted to the left and downward sufficiently that they meet the real axis, resulting in break-in and breakaway points to the left of the compensator zero, as shown in Figure 12.11d through e. It might seem desirable to fix p > 9 and to choose the gain K to obtain repeated poles at the break-in point on the negative real axis, and thus achieve critical damping. However, there are at least two disadvantages to such a design. First, the system response will still show substantial overshoot because of the zero at s = −1, regardless of the increased damping associated with the pole locations. Second, the actual closed-loop pole locations will be extremely sensitive to the value of K, so that the poles will not be accurately placed in practice.

Figure 12.12  Step response of inertial plant with lead compensator, p = 3, K = 7. Closed-loop pole locations are −1.47 and −0.76 ± j2.04.

Figure 12.13  Step response of inertial plant with lead compensator, p = 6, K = 15. Closed-loop pole locations are −2.32 and −1.84 ± j1.75.

12.4.2  PID Compensation of an Oscillatory System

Consider again the feedback system shown in Figure 12.10. For this example, let the plant transfer function be given by

Gp(s) = K/(s² + 1), K = 1.  (12.41)

This transfer function has two poles on the imaginary axis and may represent an undamped spring-mass system with an external force input acting on the mass. The root locus method will be used in the design of a PID (proportional + integral + derivative) controller for this system. The assumed form of the compensator is

Gc(s) = Kd s + Kp + Ki/s,  (12.42)

where Kd, Kp, and Ki are the design parameters. Since Gc(s) is not a proper transfer function, the PID controller can be realized only approximately in practice. Specifically, the derivative term will have to be implemented as

Kd s ≈ Kd s/(εs + 1),  (12.43)

where ε is a positive parameter much smaller than 1. The parameter ε will be ignored in the presentation of this example.

With this compensator, the open-loop transfer function of Figure 12.10 is

Gp(s)Gc(s) = (Kd s² + Kp s + Ki)/[s(s² + 1)],  (12.44)

which gives the closed-loop transfer function T(s) as

T(s) = Gp(s)Gc(s)/[1 + Gp(s)Gc(s)] = (Kd s² + Kp s + Ki)/[s³ + Kd s² + (1 + Kp)s + Ki].  (12.45)

The system is third order; therefore, by choice of the three compensator parameters, the three closed-loop poles may be placed arbitrarily. In a typical design procedure, Kp and Kd are chosen to meet transient response specifications and Ki is chosen to satisfy steady-state requirements. In this example, the design approach will be first to define the ratio

Ki/Kp = α,  (12.46)

so that the closed-loop transfer function may be written as

T(s) = [Kd s² + Kp(s + α)]/[s(s² + Kd s + 1) + Kp(s + α)].  (12.47)

The poles of the closed-loop system are determined by the characteristic equation

s(s² + Kd s + 1) + Kp(s + α) = 0 (12.48)

If α and Kd are considered as constant parameters, and Kp is allowed to vary from 0 to ∞, then finding the solutions of Equation 12.48 is equivalent to a root locus problem. The equivalent numerator polynomial

n(s) = (s + α) (12.49)

fixes the zero of the equivalent open-loop transfer function at s = −α. This zero corresponds to the integral term in the compensator, which guarantees that the closed-loop system output will track constant inputs with zero steady-state error. The equivalent denominator polynomial

d(s) = s(s² + Kd s + 1) (12.50)

sets the poles of the equivalent open-loop transfer function. Two of these poles depend on the parameter Kd. The other one always lies at s = 0.

Figure 12.14a through d shows some root locus plots of the closed-loop characteristic equation (12.48). The branches of each root locus plot represent the closed-loop pole locations as Kp varies from 0 to ∞, with α and Kd held constant. The value of α (i.e., the zero in the root locus plots) is kept fixed at 0.5. Four different values of Kd are used for the four plots, resulting in four different locations for the open-loop complex poles. Note that the two zeros of T(s) are not seen directly on the root locus plots but may be found directly as the roots of the polynomial

nT(s) = Kd s² + Kp(s + α) (12.51)

In this example, the design may be done without much regard for these zeros, as they turn out not to dominate the system response.

For Kd = 0 (Figure 12.14a), the controller provides no damping. Two branches of the root locus lie entirely in the right half-plane, meaning that the system is unstable for every Kp > 0. As Kd is increased from 0 to 2 (Figure 12.14a through d), the complex-conjugate pair of open-loop poles follows a semicircular path at a unit distance from the origin. As Kd is increased, the damping ratio associated with the open-loop poles increases, and the complex branches of the root loci lie farther to the left. For Kd = 2 (Figure 12.14e), the open-loop poles are repeated at s = −1, and the root locus lies entirely in the left half-plane, meaning that the system is stable for every Kp > 0.

Let the derivative gain be chosen as Kd = 2, as shown in Figure 12.14e. Then different values of the proportional gain Kp produce closed-loop poles at different positions along the locus of Figure 12.14e. Figure 12.15a through d shows four different step-response plots for the closed-loop system, corresponding to four different gains Kp.

For Kp = 1, the closed-loop poles lie at −0.35 on the real axis and at −0.82 ± j0.86 along the complex-conjugate branches of the root locus. The zeros of T(s) lie at −0.25 ± j0.43. The real-axis pole is associated with a slow exponential decay in the error, which causes a relatively long settling time, as shown in Figure 12.15a.

For Kp = 2, the closed-loop poles lie at −0.43 on the real axis and at −0.78 ± j1.31 along the complex-conjugate branches of the root locus. The zeros of T(s) lie at −0.50 ± j0.50. As the real-axis pole has moved to the left, the settling time is reduced, as shown in Figure 12.15b. As Kp increases, the speed of response increases as the real pole moves to the left, but the overshoot increases as the damping ratio associated with the complex-conjugate poles decreases.
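The equivalent root locus problem can be explored numerically. The sketch below (ours; standard library only) solves the characteristic cubic of Equation 12.48 for given Kd and Kp — bisection for the guaranteed negative real root, then deflation for the complex pair.

```python
# Illustrative numeric root locus for s^3 + Kd s^2 + (1+Kp) s + alpha*Kp = 0.
import cmath

def char_roots(Kd, Kp, alpha):
    a2, a1, a0 = Kd, 1.0 + Kp, alpha * Kp
    f = lambda s: s**3 + a2 * s**2 + a1 * s + a0
    # f(0) = a0 > 0 and f(-inf) < 0, so a real root lies on the negative axis
    lo, hi = -1.0, 0.0
    while f(lo) > 0:
        lo *= 2.0
    for _ in range(100):                  # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    r = 0.5 * (lo + hi)
    # deflate: cubic = (s - r)(s^2 + b s + c)
    b, c = a2 + r, -a0 / r
    d = cmath.sqrt(b * b - 4.0 * c)
    return [r, (-b + d) / 2.0, (-b - d) / 2.0]

print(char_roots(2.0, 4.0, 0.5))  # approx. -0.47 and -0.77 +/- j1.92
```

Sweeping Kp for Kd = 0 shows the right-half-plane branches (unstable for every Kp > 0), while Kd = 2 keeps every tested locus point in the left half-plane, matching the discussion of Figure 12.14.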

For Kp = 4, the closed-loop poles lie at −0.47 on the real axis and at −0.77 ± j1.92 along the complex-conjugate branches of the root locus. The zeros of T(s) are repeated at −1. The settling time is further reduced, although the complex-conjugate poles are beginning to produce some oscillation in the response, as shown in Figure 12.15c.

For Kp = 8, the closed-loop poles lie at −0.48 on the real axis and at −0.76 ± j2.77 along the complex-conjugate branches of the root locus. The zeros of T(s) lie at −0.59 and −3.41. The complex-conjugate poles now cause more oscillation and hence a slight increase in the settling time, and the zero at −0.59 contributes to the large first peak in the response, as shown in Figure 12.15d.

Figure 12.14 Root locus plots for oscillatory plant with PID compensator, with Ki/Kp = 0.5: (a) Kd = 0, (b) Kd = 0.5, (c) Kd = 1.0, (d) Kd = 1.5, and (e) Kd = 2.0.
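The step responses of Figure 12.15 can be approximated with a simple Euler integration of T(s) in controllable canonical form. This is an illustrative sketch (ours), not the tool used in the chapter.

```python
# Step response of T(s) from Equation 12.45 by Euler integration of its
# controllable canonical realization (illustrative sketch).

def step_response(Kd, Kp, Ki, t_end=20.0, dt=1e-3):
    a0, a1, a2 = Ki, 1.0 + Kp, Kd        # denominator s^3 + a2 s^2 + a1 s + a0
    b0, b1, b2 = Ki, Kp, Kd              # numerator  b2 s^2 + b1 s + b0
    x0 = x1 = x2 = 0.0                   # phase variables
    ys = []
    for _ in range(int(t_end / dt)):
        dx2 = -a0 * x0 - a1 * x1 - a2 * x2 + 1.0   # unit-step input
        x0, x1, x2 = x0 + dt * x1, x1 + dt * x2, x2 + dt * dx2
        ys.append(b0 * x0 + b1 * x1 + b2 * x2)
    return ys

y = step_response(Kd=2.0, Kp=4.0, Ki=2.0)
print(max(y), y[-1])   # a clear first peak above the final value of 1
```

With the chapter's final gains (Kd = 2, Kp = 4, Ki = 2), the simulated response settles to 1 with an overshoot of roughly 20%, consistent with the oscillation noted for Figure 12.15c.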

4 1. (b) Kp = 2.4 0. LLC . The root locus method is by itself a useful tool in feedback control design. Its focus is naturally on the system transient response. a complete design must also account for the system steady-state response and the stability margins.8 0.15  Step response plots for oscillatory plant with PID compensator.4 0.8 0. the root locus method is more powerful when used in conjunction with frequency-response plots.2 1 Amplitude 0.5.4 1. As an alternative.2 0 (b) Kp = 2 1. however. if Kd is increased too much.4 0.8 0.Root Locus Method 1. © 2011 by Taylor and Francis Group.2 0 0 1 2 3 4 5 Time (s) 6 7 8 9 12-21 1. and (d) Kp = 8. which is closely related to the closed-loop pole and zero locations. However. although this will tend to produce a slower decay in the exponential mode of the response. Ki/Kp = 0.5. Without tedious calculations. Kd = 2. which may not be easily seen from the root locus plots. The damping of the response may generally be improved by increasing Kd.5  Conclusion The root locus plots reveal the general effects of feedback compensator parameters on the poles of a closed-loop system transfer function.6 0. Kd = 2: (a) Kp = 1.4 0. Thus. hand-sketched root locus plots yield considerable useful information on the closed-loop pole locations.2 1 Amplitude 0. (c) Kp = 4. These parameters give a closed-loop system with the step response shown in Figure 12. which show the loop gain and phase directly. the zeros of T(s) will become complex and move to the right.15c. The design may be concluded by choosing α = Ki/Kp = 0. 12.6 0.2 (c) Kp = 4 0 (d) Kp = 8 0 1 2 3 4 5 Time (s) 6 7 8 9 Figure 12.4 1.2 0 0 1 2 3 4 5 Time (s) 6 7 8 9 0.4 1.6 0.8 0.2 1 Amplitude 0. possibly causing severe overshoot in the step response. and Kp = 4. Computer-generated root locus plots yield more detailed and precise information that may be needed to exactly specify the desired compensator parameters. 
the damping may be improved by decreasing the parameter α .2 1 Amplitude 0 (a) Kp = 1 1 2 3 4 5 Time (s) 6 7 8 9 1.6 0.

References

D'Azzo, J. J. and Houpis, C. H., Linear Control System Analysis and Design: Conventional and Modern, 3rd edn., McGraw-Hill, New York, 1988.
Evans, W. R., Graphical analysis of control systems, Trans. Am. Inst. Electrical Eng., 67:547–551, 1948.
Franklin, G. F., Powell, J. D., and Emami-Naeini, A., Feedback Control of Dynamic Systems, 6th edn., Pearson Prentice Hall, Upper Saddle River, NJ, 2009.
Kuo, B. C., Automatic Control Systems, 7th edn., Prentice Hall, Englewood Cliffs, NJ, 1995.
Nise, N. S., Control Systems Engineering, 5th edn., Wiley, New York, 2008.

13 Variable Structure Control Techniques

Asif Šabanović, Sabanci University
Nadira Šabanović-Behlilović, Sabanci University

13.1 Sliding Mode in Continuous-Time Systems: Equivalent Control and Equations of Motion • Existence and Stability of Sliding Modes
13.2 Design: Control in Linear System • Discrete-Time SMC • Sliding Mode–Based Observers
13.3 Some Applications of VSS: Control in Power Converters • SMC in Electrical Drives • Sliding Modes in Motion Control Systems
13.4 Conclusion
References

The study of discontinuous control systems has remained at a high level of activity throughout the development of control system theory. Relay regulators were present in the very early days of feedback control applications [FL53,TS55]. The unremitting interest in discontinuous control systems, enhanced by their effective application to problems diverse in their physical nature and functional purpose, is a basic argument in favor of the importance of this area. The problem of a dynamical plant with discontinuous control has been approached by mathematicians, physicists, and engineers to solve problems that arise in their own fields of interest. The study of dynamical systems with discontinuous control is therefore a multifaceted problem, which embraces both control theory and application aspects.

The discontinuity of control results in a discontinuity of the right-hand side of the differential equations describing the system motion. If such discontinuities are deliberately introduced on certain manifolds in the system state space, motion in a sliding mode may occur in the system. This type of motion features some attractive properties and has long since been applied in relay systems. While in the sliding mode, the trajectories of the state vector belong to a manifold of lower dimension than the state space, so the order of the differential equations describing the motion is reduced. Under certain conditions, a system with sliding mode may become invariant to variations of the dynamic properties of the controlled plant. In most practical systems, the motion in sliding mode is control independent and is determined by the plant parameters and the equations of the discontinuity surfaces. The control is used to restrict the motion of the system to the sliding mode subspace, while the desired character of the motion in the intersection of the discontinuity surfaces is provided by an appropriate selection of the equations of the discontinuity surfaces. This allows the initial control problem to be decoupled into problems of lower dimension, dealing separately with the enforcement of the sliding mode and with the desired equations of motion in the sliding mode.

Sliding modes are the basic motion in variable structure systems (VSS) [EM70,VI92]. In some fields (power electronics and electrical drives), switching is a "way of life," and the systems treated in these fields are discontinuous by themselves, without any artificial introduction of switching.

In this chapter, basic methods of the analysis and design of VSS with sliding modes will be discussed. Our intention is not to give a full account of VSS methods, but to introduce and explain those that will be used later in the chapter. The discussion of VSS applications will focus on switching converters and electrical drives—systems that structurally contain a discontinuous element—and on motion control systems.

13.1 Sliding Mode in Continuous-Time Systems

VSS are originally defined in continuous time for dynamic systems described by ordinary differential equations with a discontinuous right-hand side. In general, the motion of a control system with discontinuous right-hand side may be described by the differential equation

ẋ = f(x, t, u),  x ∈ ℜⁿ, u ∈ ℜᵐ
ui = ui⁺(x, t) if σi(x) > 0;  ui = ui⁻(x, t) if σi(x) < 0,  i = 1, …, m (13.1)
xᵀ = [x1 … xn],  uᵀ = [u1 … um],  σᵀ = [σ1 … σm]

It is assumed that f(x, t, u), ui⁺, ui⁻, and σi are continuous functions of the system state. The function f(x, t, u) on different sides of a discontinuity surface (x1 ≠ x2) satisfies f(x1, t, ui⁺) ≠ f(x2, t, ui⁻), and consequently the conditions for uniqueness of the solution of ordinary differential equations do not hold in this case. The motion of system (13.1) may exhibit different behaviors depending on the changes in the control input: it may stay on one of the discontinuity surfaces σi(x) = 0 or on their intersections. In such a system, due to changes in the control input, the so-called sliding mode motion can arise. The motion will stay on the surface σi(x) = 0 if the distance from the surface and its rate of change are of opposite signs [EM70]. This motion is represented by state trajectories lying in the sliding mode manifold and by high-frequency (theoretically infinite) switching of the control inputs. The fact that the motion belongs to a manifold in the state space of dimension lower than the order of the system results in a reduction of the order of the equations of motion, which allows a simple procedure for finding the equations of motion in sliding mode.

It has been shown that, in the cases in which the so-called regularization method [VI92] yields an unambiguous result, the equations of motion in a discontinuity surface exist and can be derived from the equations of the discontinuity surface and the system motion outside of the discontinuity surface. The regularization method consists in replacing the ideal equations of motion (13.1) by a more accurate model ẋ = f(x, t, ũ), which takes into account the nonidealities (hysteresis, delay, unmodeled dynamics, etc.) in the implementation of the control input ũ. The equations ẋ = f(x, t, ũ) have a solution in the conventional sense, but the motion no longer stays in the manifold σi(x) = 0; it runs in some Δ-vicinity (boundary layer) of that manifold. If the thickness Δ of the boundary layer tends to zero, the motion in the boundary layer tends to the motion of the system with ideal control. The equations of motion obtained as a result of such a limit process are regarded as the ideal sliding mode. The procedure may lead to a unique solution for the sliding mode equations as a function of the motion outside the manifold σi(x) = 0. For systems linear with respect to the control, the regularization allows substantiation of the so-called equivalent control method.
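A minimal simulation (ours, not from the text) illustrates the sliding mode just described: a double integrator ẍ = u under the relay control u = −M sign(σ) with the switching surface σ = cx + ẋ. Once σ = 0 is reached, the motion obeys the reduced first-order equation ẋ = −cx, exactly the order reduction discussed above.

```python
# Sketch with illustrative gains: relay-controlled double integrator.

def simulate(c=1.0, M=2.0, dt=1e-3, t_end=8.0, x=1.0, v=0.0):
    for _ in range(int(t_end / dt)):
        sigma = c * x + v
        u = -M if sigma > 0 else M       # relay control
        x, v = x + dt * v, v + dt * u
    return x, v

x, v = simulate()
sigma = 1.0 * x + v                      # c = 1 as in the call above
print(abs(x), abs(sigma))                # both near zero: the state slid along sigma = 0
```

After a finite reaching phase, σ chatters in a band of width O(M·dt) around zero (the boundary-layer picture of the regularization argument), while x decays exponentially along the surface.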

13.1.1 Equivalent Control and Equations of Motion

A formal procedure is as follows: the projection of the system motion in the manifold σi(x) = 0, expressed as σ̇(x) = Gf(x, t, u = ueq) = 0 with G = [∂σ/∂x], is solved for the control u = ueq(G, f, x, t), and this solution is plugged into the equation ẋ = f(x, t, ueq). Equations obtained as a result of such a procedure are accepted as sliding mode equations in the manifold σi(x) = 0.

Further analysis will be restricted to systems linearly dependent on the control:

ẋ = f(x) + B(x)u(x) + Dh(x, t),  x ∈ ℜⁿ, u ∈ ℜᵐ, h ∈ ℜᵖ
ui = ui⁺(x, t) if σi(x) > 0;  ui = ui⁻(x, t) if σi(x) < 0,  i = 1, …, m (13.2)
xᵀ = [x1 … xn],  uᵀ = [u1 … um],  σᵀ = [σ1 … σm],  hᵀ = [h1 … hp]

In accordance with the equivalent control method, the projection of the motion of system (13.2) in the manifold σ(x) = 0 can be expressed as

σ̇(x, t) = Gf + GBu + GDh,  G = ∂σ(x, t)/∂x ∈ ℜ^(m×n) (13.3)

The rows of the matrix G are the gradients of the functions σi(x). If det(G(x)B(x)) ≠ 0 for all x, the equivalent control for system (13.2) in the manifold σ(x) = 0 can be calculated from (13.3) as

σ̇(x) = 0 ⇒ ueq = −(GB)⁻¹G(f + Dh) (13.4)

The equivalent control (13.4) forces the projection of the system velocity vector on the directions of the gradients of the surfaces defining the sliding mode manifold to be zero. Substitution of the equivalent control into (13.2) yields the equation

ẋ = f + Bueq + Dh = (I − B(GB)⁻¹G)(f + Dh) (13.5)

with

σ(x) = 0 (13.6)

By virtue of the equation σ̇(x) = Gf(x, t, ueq) = 0, it is obvious that the motion in sliding mode will proceed along trajectories that lie in the manifold σ(x) = 0, if the initial conditions satisfy σ(x(0)) = 0. Equation 13.5, along with the equation of the sliding mode manifold (13.6), describes the ideal sliding mode motion. Equation 13.6 allows m components of the state vector x to be determined as functions of the remaining n − m components. By inserting these m components into (13.5), the sliding mode equations can be obtained as an (n − m)-dimensional system with trajectories in the sliding mode manifold.

The projection matrix P = (I − B(GB)⁻¹G) in Equation 13.5 satisfies PB = (B − B(GB)⁻¹GB) = 0. If the vector (f + Dh) can be partitioned as f + Dh = Bλ + ξ, then Equation 13.5 can be rewritten in the following form:

f + Dh = Bλ + ξ ⇒ ẋ = (I − B(GB)⁻¹G)ξ (13.7)

The sliding mode equations are thus invariant with respect to the components of the vector (f + Dh) that lie in the range space of the system input matrix B—the so-called matching conditions [DB69]. This result offers a way of assessing the invariance of the sliding mode motion to different elements of the system (13.2) dynamics (parameters, disturbances, etc.).
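The matching condition PB = 0 can be verified numerically on a small example of our own: n = 2, m = 1, input column B = [0, 1]ᵀ, and gradient G = [1, 1].

```python
# Numeric check (illustrative) that P = I - B (GB)^{-1} G annihilates B,
# so anything entering through B drops out of the sliding mode equation (13.7).

B = [0.0, 1.0]          # input column (n = 2, m = 1)
G = [1.0, 1.0]          # gradient of sigma = x1 + x2

GB = G[0] * B[0] + G[1] * B[1]                       # scalar since m = 1
P = [[(1.0 if i == j else 0.0) - B[i] * (1.0 / GB) * G[j] for j in range(2)]
     for i in range(2)]

PB = [P[0][0] * B[0] + P[0][1] * B[1],
      P[1][0] * B[0] + P[1][1] * B[1]]
print(PB)  # [0.0, 0.0]
```

Any matched disturbance Bλ is therefore invisible in the reduced dynamics (13.7), whatever λ is.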

Equations 13.5 and 13.6 describe an (n − m)-order continuous dynamical system, but the selection of a sliding mode manifold providing the desired behavior requires selection of the matrix G; Equation 13.5 depends on G and thus on the choice of the manifold σ(x) = 0. This shows the potential benefits of establishing sliding mode in dynamic systems. In order to use this potential effectively in control system design, the question of the sliding mode existence conditions should be clarified, along with the sliding mode design methods.

13.1.2 Existence and Stability of Sliding Modes

The existence of sliding mode has to do with the stability of the motion in the sliding mode manifold. So far, we have been discussing the equations of motion and their features under the assumption that the initial state lies in the sliding mode manifold. The stability of the nonlinear dynamics (13.2) in the sliding mode manifold can be analyzed in the Lyapunov stability framework [VI92,RD94,KD96] by finding a continuously differentiable positive definite function V(σ, x) such that its time derivative along the trajectories of system (13.2) is negative definite. Without loss of generality, let the Lyapunov function candidate be selected as

V(σ, x) = σᵀσ/2 (13.8)

In order to satisfy the Lyapunov stability conditions, the derivative V̇(σ, x) must be negative definite on the trajectories of system (13.2). Let the stability requirement be expressed as

V̇(σ, x) = σᵀσ̇ ≤ −ζ(V, σ, x) (13.9)

for an appropriate function ζ(V, σ, x) > 0. It is obvious that, by selecting the structure of ζ(V, σ, x), one can determine a specific rate of change of V(σ, x) and can thus shape the transient from the initial state to the sliding mode manifold. Equation (13.3), describing the projection of the system motion in the sliding mode manifold, can be rearranged into

σ̇ = GB(u + (GB)⁻¹G(f + Dh)) = GB(u − ueq) (13.10)

By selecting the control, with M(x) being a scalar function, as

u = ueq − (GB)⁻¹M(x)sign(σ),  M(x) > 0,  [sign(σ)]ᵀ = [sign(σ1) … sign(σm)] (13.11)
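The finite-time convergence implied by this choice can be checked on a scalar example of our own: with constant M, (13.10) and (13.11) give σ̇ = −M sign(σ), so |σ| decreases at the constant rate M and the manifold is reached at tr = |σ(0)|/M.

```python
# Illustrative scalar reaching-time check for sigma' = -M*sign(sigma).

def reaching_time(sigma0=1.5, M=3.0, dt=1e-4):
    sigma, t = sigma0, 0.0
    while abs(sigma) > M * dt:           # stop inside the one-step chattering band
        sigma -= dt * M * (1.0 if sigma > 0 else -1.0)
        t += dt
    return t

t_r = reaching_time()
print(t_r)   # close to 1.5/3.0 = 0.5
```

This is the essential difference from asymptotic (e.g., linear) feedback: the Lyapunov function here hits zero in finite time rather than decaying exponentially.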

the derivative of the Lyapunov function (13.8) on the trajectories of system (13.2) can be expressed as

V̇(σ, x) = −M(x)σᵀsign(σ) < 0 (13.12)

With control (13.11), the stability conditions (13.9) are satisfied, and consequently sliding mode exists in the manifold σ(x) = 0. Control (13.11) is component-wise discontinuous, and the Lyapunov function reaches zero in finite time. The selection of the function M(x) determines the rate of change of the Lyapunov function; it can be selected as a linear or a nonlinear function of the state, thus enforcing different transients in reaching the equilibrium.

13.2 Design

Decreasing the dimensionality of the equations of motion in sliding mode permits the design problem to be decoupled into two independent steps, each involving a system of dimensionality lower than that of the original system:

a. In the first step, the desired dynamics is determined from (13.6) by selecting the sliding mode manifold, satisfying some performance criterion.
b. In the second step, the control input should be selected to enforce sliding mode or, stated differently, to make the projection of the system motion in the sliding mode manifold (13.3) stable.

In order to make the selection of the sliding mode manifold simpler, it is useful to transform system (13.2) with matched disturbance Dh = Bλ into the so-called regular form [LU81]

ẋ1 = f1(x1, x2)
ẋ2 = f2(x1, x2) + B2(x1, x2)(u + λ),  x1 ∈ ℜ^(n−m), x2 ∈ ℜᵐ (13.13)

In writing (13.13), a matched disturbance and a nonsingular matrix B2 ∈ ℜ^(m×m) are assumed. The first equation is of order n − m, and it does not depend on the control. In this equation, the vector x2 can be handled as a virtual control. Assume that x2, satisfying some performance criterion, is given by

x2 = −σ0(x1) (13.14)

Then the dynamics described by the first equation in (13.13) may be rewritten in the following form:

ẋ1 = f1(x1, −σ0(x1)) (13.15)

The second equation in (13.13) describes an mth-order system with an m-dimensional control vector and a full-rank gain matrix B2. In the second step, a discontinuous control u should be selected enforcing sliding mode in the manifold

σ = x2 + σ0(x1) = 0 (13.16)

The projection of the motion of system (13.13) in manifold (13.16) can be written as

σ̇ = ẋ2 + σ̇0(x1) = f2(x1, x2) + B2(u + λ) + G0 f1(x1, x2),  G0 = ∂σ0/∂x1 (13.17)
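The two-step design can be sketched in simulation (ours, with illustrative gains): for ẋ1 = x2, ẋ2 = u + λ(t), the virtual control x2 = −cx1 defines σ = x2 + cx1, and a relay enforcing σ = 0 rejects the matched disturbance λ(t) entirely, as promised by the matching argument of Section 13.1.1.

```python
# Regular-form design sketch: second-order plant with matched disturbance.
import math

def simulate(c=1.0, M=3.0, dt=1e-3, t_end=10.0):
    x1, x2, t = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        lam = 0.5 * math.sin(2.0 * t)    # unknown matched disturbance
        sigma = x2 + c * x1              # sigma = x2 + sigma0(x1), sigma0 = c*x1
        u = -M if sigma > 0 else M       # relay enforcing the manifold
        x1, x2 = x1 + dt * x2, x2 + dt * (u + lam)
        t += dt
    return x1, x2

x1, x2 = simulate()
print(abs(x1), abs(x2))  # both near zero despite the disturbance
```

On the manifold, x1 obeys the chosen first-order dynamics ẋ1 = −cx1 regardless of λ(t); only the relay amplitude M must dominate the disturbance bound.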

By selecting the Lyapunov function candidate as in (13.8), stability in manifold (13.16) will be enforced if the control is selected as

u = ueq − B2⁻¹M(x1, x2)sign(σ),  M(x1, x2) > 0
ueq = −λ − B2⁻¹(f2 + G0 f1),  [sign(σ)]ᵀ = [sign(σ1) … sign(σm)] (13.18)

The manifold will be reached in finite time, and the motion in sliding mode is described by (13.15)—thus satisfying the specification of the dynamics of the closed-loop system. The problem with control (13.18) is the requirement to have information on the equivalent control—thus full information on the system's structure and parameters. In most real control systems, some estimate ûeq of the equivalent control and of the matrix B2 will be available. In this situation, the structure of the control can be selected in the form

u = ûeq − M(x1, x2)sign(σ),  M(x1, x2) > 0 (13.19)

In (13.19), the estimate of the equivalent control is used instead of the exact value, and the discontinuous part does not depend on the unknown gain matrix. As shown in [VI92,ED98], the stability of the sliding mode can be proved as follows. Let the scalar function L > 0 be the component-wise upper bound of B2(ûeq − ueq). Then the time derivative of the Lyapunov function (13.8) on the trajectories of system (13.13) with control (13.19) and with M(x1, x2) ≥ L is negative definite, and consequently sliding mode will be enforced in the manifold σ = x2 + σ0(x1) = 0.

13.2.1 Control in Linear System

The discontinuous control strategies become transparent when applied to linear systems. In this section, we will discuss the realization of the enforcement of sliding mode in a linear system:

ẋ = Ax + Bu + Dh,  x ∈ ℜⁿ, u ∈ ℜᵐ, h ∈ ℜᵖ
A ∈ ℜ^(n×n), B ∈ ℜ^(n×m), D ∈ ℜ^(n×p) (13.20)

Further, we will assume that the pair (A, B) is controllable, rank(B) = m, and that the disturbance satisfies the matching conditions Dh = Bλ. By reordering the components of the vector x, the matrix B may be partitioned into two matrices B1 ∈ ℜ^((n−m)×m) and B2 ∈ ℜ^(m×m) with rank(B2) = m. In this case, there exists a nonsingular transformation x̄ = Tx such that

TB = [B1; B2] → [0; B2] (13.21)

and consequently the transformed system can be written in the following form

ẋ1 = A11x1 + A12x2
ẋ2 = A21x1 + A22x2 + B2(u + λ),  x1 ∈ ℜ^(n−m), x2 ∈ ℜᵐ (13.22)

with Aij (i, j = 1, 2) being constant matrices and the pair (A11, A12) being controllable.
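Selecting G in σ = x2 + Gx1 = 0 amounts to pole placement for the reduced dynamics ẋ1 = (A11 − A12G)x1. A small sketch (ours, for a triple-integrator chain, n = 3, m = 1) makes the correspondence explicit: with A11 = [[0,1],[0,0]] and A12 = [0,1]ᵀ, the row G = [g1, g2] yields sliding dynamics with characteristic polynomial s² + g2 s + g1.

```python
# Illustrative: eigenvalues of the sliding dynamics for G = [g1, g2]
# on a triple integrator, from the polynomial s^2 + g2*s + g1.
import cmath

g1, g2 = 1.0, 2.0                        # target: repeated eigenvalue at -1
disc = cmath.sqrt(g2 * g2 - 4.0 * g1)
eigs = [(-g2 + disc) / 2.0, (-g2 - disc) / 2.0]
print(eigs)  # both equal to -1
```

Any desired set of n − m stable eigenvalues can be produced this way, which is the first design step; the relay or unit-vector control of (13.24) then handles the reaching phase.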

The sliding mode design procedure allows handling x2 as a virtual control input to the first equation in (13.22). By selecting x2 = −Gx1, the first equation in (13.22) becomes ẋ1 = (A11 − A12G)x1 and, due to the controllability of the pair (A11, A12), the matrix G may be selected in such a way that the (n − m) eigenvalues are assigned desired values. The control should then be selected to enforce the desired value x2 = −Gx1 or, equivalently, to enforce sliding mode in the manifold

σ = x2 + Gx1 = 0 (13.23)

Sliding mode may be enforced in manifold (13.23) by selecting the control

u = ueq − αB2⁻¹σ/‖σ‖,  α > 0,  ‖σ‖ = (σᵀσ)^(1/2)
ueq = −λ − B2⁻¹[(A21 + GA11)x1 + (A22 + GA12)x2] (13.24)

The derivative of the Lyapunov function V = σᵀσ/2 is then V̇ = −ασᵀσ/‖σ‖ < 0, thus guaranteeing the sliding mode. The manifold will be reached in finite time, and the motion in sliding mode is described by (13.23). Control (13.24) has discontinuities in manifold (13.23) but not in each individual surface σi = 0, (i = 1, …, m).

13.2.2 Discrete-Time SMC

The direct application of discontinuous control in sampled-data systems will inevitably lead to chattering, due to the finite switching frequency limited by the sampling rate and to the problem of realizing the continuity of the equivalent control. The salient feature of the sliding mode motion is that the system trajectories reach the selected manifold in finite time and remain in the manifold thereafter. With this understanding, in discrete-time systems the concept means that, after reaching the sliding mode manifold, the distance from it at each subsequent sampling instant should be zero. The concept will again be explained on the linear time-invariant plant (13.20). By applying a sample-and-hold process with sampling period T, so that u(t) = u(kT) and the disturbance h(t) = h(kT) are constant during a sampling interval, (k = 1, 2, …), and integrating the solution over the interval t ∈ [kT, (k + 1)T], the discrete-time model of plant (13.20) may be represented as

x(k+1) = Āxk + B̄uk + D̄hk,  Ā = e^(AT),  B̄ = ∫₀ᵀ e^(A(T−τ))B dτ,  D̄ = ∫₀ᵀ e^(A(T−τ))D dτ (13.25)

Let the sliding mode manifold be defined as σk = Gxk. The design then requires determining the control input at sampling time kT such that σ(kT + T) = 0 with initial condition σ(kT) = 0. The change of the sliding mode function can be expressed as

σ(k+1) = Gx(k+1) = GĀxk + GB̄uk + GD̄hk = σk + G(Ā − I)xk + GD̄hk + GB̄uk (13.26)

If det(GB̄) ≠ 0, the control enforcing σ(k+1) = 0, obtained from (13.26), is called the equivalent control [DR89,VI93,KD96]:

uk^eq = −(GB̄)⁻¹(σk + G(Ā − I)xk + GD̄hk) (13.27)
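The one-step property of the discrete-time equivalent control (13.27) can be verified on a sampled double integrator (our example, disturbance-free). Note how large the unconstrained control can be, which is exactly what motivates the limitation (13.28).

```python
# Illustrative check of (13.26)-(13.27) on an exactly discretized
# double integrator: sigma_{k+1} is driven to zero in a single step.
T = 0.01
Abar = [[1.0, T], [0.0, 1.0]]            # exp(A T) for the double integrator
Bbar = [T * T / 2.0, T]                  # integral of exp(A (T - tau)) B
G = [1.0, 1.0]

x = [1.0, -0.5]
sigma = G[0] * x[0] + G[1] * x[1]
GB = G[0] * Bbar[0] + G[1] * Bbar[1]
# G (Abar - I) x with Abar - I = [[0, T], [0, 0]]
GAmIx = G[0] * T * x[1]
u = -(sigma + GAmIx) / GB                # Equation (13.27), no disturbance term

x = [Abar[0][0] * x[0] + Abar[0][1] * x[1] + Bbar[0] * u,
     Abar[1][0] * x[0] + Abar[1][1] * x[1] + Bbar[1] * u]
print(G[0] * x[0] + G[1] * x[1])         # sigma_{k+1} = 0 (to rounding)
```

For this state and a 10 ms sampling period, the required u is on the order of −50, so a realistic actuator would have to apply the saturated form (13.28) for several samples before the deadbeat behavior takes over.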

The required magnitude of the control (13.27) may be large, and a limitation should be applied. If the control resources are defined by |uk^eq| ≤ M, then the control can be implemented as

uk = uk^eq if |uk^eq| ≤ M;  uk = M uk^eq/|uk^eq| if |uk^eq| > M (13.28)

It can be proven that such a control will enforce sliding mode after a finite number of sampling intervals. It is important to note that the matching conditions are now defined in terms of the matrices G, B̄, and D̄ and have to be examined for the discretized system. The most significant difference between continuous-time and discrete-time sliding modes is that motion in the sliding mode manifold may occur in discrete-time systems with a continuous (in the sense of discrete-time systems) control input.

13.2.3 Sliding Mode–Based Observers

In most control systems, the design information on the control plant is not complete and should be filled in by estimating the missing data from the known structure of the system and the available measurements. Estimation of the system state variables and estimation of the system disturbances are often part of the overall control system design. The situation in control systems with sliding mode is no different: the realization of the control law requires at least the information needed to evaluate the distance from the sliding mode manifold and, in some realizations, the equivalent control or at least some of its components. In general, observers are constructed based on the available measurements and the nominal—known—structure of the system. Sliding mode–based state observers are well detailed in the available literature [VI92,ED98]. They are mostly based on the idea of Luenberger observers, with a control enforcing sliding mode and thus achieving finite-time convergence. The idea of applying sliding mode in observer design rests on designing a control that forces some output of the nominal plant to track the corresponding output of the real plant in sliding mode. The equivalent control can then be used to extract the information reflected in the difference between the nominal and the real plant.

In order to illustrate some further ideas related to sliding mode–based observers, let us look at a first-order system with scalar control input

ẋ = f(x, a) + u (13.29)

where f(x, a) is an unknown continuous scalar function of the state and the parameter a. Assume that the control input u and the output x are measured. Let the nominal plant be modeled as an integrator whose input consists of the known part of the system structure fo(x̂, ao) and the additional observer control input uo, which is to be determined:

x̂̇ = fo(x̂, ao) + u + uo (13.30)

Let the observer control input uo be selected such that sliding mode is enforced in

σ = x − x̂ (13.31)

In sliding mode for the system (13.29), (13.30) in manifold (13.31), the equivalent control follows from σ̇ = ẋ − x̂̇ = 0 as

uo^eq = f(x, a) − fo(x̂, ao) (13.32)
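The estimation (13.32) can be demonstrated in simulation (ours, with an illustrative plant): the relay keeps x̂ on top of x, and a low-pass filter of the switching signal recovers the equivalent control, i.e., the unknown function f.

```python
# Sliding-mode observer sketch: plant x' = f(x) + u with f unknown to the
# observer; model x_hat' = u + uo (f_o = 0) with uo = M*sign(x - x_hat).
# A first-order filter of uo approximates the equivalent control uo_eq = f(x).

def run(M=2.0, tau=0.02, dt=1e-4, t_end=3.0):
    x, x_hat, z = 0.0, 0.0, 0.0          # z: filtered switching signal
    for _ in range(int(t_end / dt)):
        f = 1.0 - x                      # the "unknown" plant dynamics
        u = 0.0
        uo = M if (x - x_hat) > 0 else -M
        x += dt * (f + u)
        x_hat += dt * (u + uo)
        z += dt * (uo - z) / tau
    return x, x_hat, z

x, x_hat, z = run()
print(abs(x - x_hat), abs(z - (1.0 - x)))  # tracking and estimation errors, both small
```

The filter time constant tau trades chattering ripple against lag in following f(x(t)); this is the standard practical route to the equivalent value of a discontinuous signal.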

In sliding mode, for x = x̂, the equivalent control is equal to the difference between the real plant f(x, a) and the assumed plant structure fo(x̂, ao). That allows the unknown structure of the real plant to be estimated as

f(x, a) = fo(x̂, ao) + uo^eq (13.33)

The estimation (13.33) is valid after sliding mode in manifold (13.31) is established. It allows estimation of the function f(x, a), but not of its components separately.

Let system (13.29) be linear with respect to the unknown parameter a, so that it may be expressed as f(x, a) = fx(x) + fa(x)a, with fx(x) and fa(x) known continuous functions. Let us construct a model as

x̂̇ = fx(x̂) + fa(x̂)uo + u (13.34)

It is obvious that (13.34) is linear with respect to the control uo; thus, sliding mode can be established in σ = x − x̂ = 0. In sliding mode for the system (13.29), (13.34) in manifold (13.31), the condition ẋ − x̂̇ = 0 leads to an algebraic equation linear with respect to the unknown parameter, and consequently, for x = x̂, the unknown parameter can be determined:

ẋ − x̂̇ = fa(x)a − fa(x̂)uo^eq = 0,  x = x̂ ⇒ fa(x) = fa(x̂) ⇒ uo^eq = a (13.35)

Extension of this result to MIMO systems is possible under the assumption that the system dynamics can be linearly parametrized with some set of parameters. The selection of these parameters is neither simple nor unique. If the number of such parameters is lower than or equal to the order of the system, then the application is straightforward; in the case that the number of parameters is higher than the order of the system, additional constraint equations should be found. An interesting application of this idea to vision-based motion estimation is presented in [UN08].

In the case of MIMO systems, the procedure follows the same steps. The difference comes from the dimensionality of the system and the unequal dimensions of the system and of the sliding mode manifold. Assume the MIMO system

ẋ = f(x, a) + Bu + Dh(x, t),  x ∈ ℜⁿ, u ∈ ℜᵐ, h ∈ ℜᵖ, rank(B) = m, m < n (13.36)

with the pair (x, u) measured, the vector-valued function f(x, a) unknown, and a matched, unknown, and unmeasured disturbance Dh(x, t) = Bλ. Let the "nominal" plant of system (13.36) be given by

x̂̇ = fo(x̂, ao) + Bu + Buo,  x̂ ∈ ℜⁿ, uo ∈ ℜᵐ (13.37)

Let the control uo be selected to enforce sliding mode in the manifold

σ = G(x − x̂) = 0,  σ ∈ ℜᵐ, rank(G) = m (13.38)

Assuming $\det(GB) \neq 0$, the equivalent control for system (13.37) in manifold (13.38) is determined as the solution of $\dot\sigma\,|_{G(x - \hat x) = 0} = 0$:

$u_{o\,eq} = \lambda - (GB)^{-1}G\big(f(x, a) - f_o(\hat x, a_o)\big)$   (13.39)

Equation 13.39 allows the development of different schemes of sliding mode–based observers. For example, if the plant structure f(x, a) is known, then from (13.39) the projection of the disturbance onto the range space of the input matrix can be fully estimated by the structure (13.36) through (13.38). By feeding the disturbance estimate back into the system input, one can obtain unknown-input cancellation. For example, by partitioning the system state as in (13.13), modeling the block describing the dynamics of $x_2$ by a simple integrator $\dot{\hat x}_2 = B_2(u + u_o)$, and selecting $u_o$ to enforce sliding mode in the manifold $\sigma = x_2 - \hat x_2$, the equivalent control becomes $B_2u_{o\,eq} = f_2(x_1, x_2) + B_2\lambda$. Feeding $-B_2u_{o\,eq}$ to the input of the second block in (13.13), the dynamics of the overall system reduce to

$\dot x_1 = f_1(x_1, x_2), \quad \dot x_2 = B_2u$   (13.40)

The proposed structure effectively linearizes the second block in system (13.13) by compensating the matched disturbance and part of the system dynamics. This result can be predicted by analyzing (13.36) through (13.40). It also offers a possibility to develop disturbance estimation guaranteeing finite-time convergence.

13.3  Some Applications of VSS

As shown, sliding mode–based design can be applied to systems with and without discontinuities in the control, which offers a wide scope for the application of sliding mode–based design to different dynamic systems. On one side are systems—like power electronics converters—in which switching is a natural consequence of the operational requirements and is not introduced due to control system preferences. In this category one can include power converters as electrical sources and electrical drives as an example of power conversion control. The application of sliding mode control in such systems is a consequence of the system's nature and, at the same time, illustrates very clearly the peculiarities of the sliding mode design. On the other side, complex mechanical systems like robots represent a challenge in control system design due to their complexity and nonlinear nature. Here the robustness of the sliding mode with respect to matched disturbances and system parameter variations is a very attractive feature; at the same time, discontinuity of the control may not be easy to implement, or it may cause excitation of unmodeled dynamics or excessive wear of actuators. Despite the fact that in many such systems electrical machines are the primary source of actuation—and consequently power converters are used in their control structure—mechanical motion systems are mostly modeled with forces/torques as control input variables. These systems are therefore very good examples of the application of sliding modes to systems with continuous inputs.

13.3.1  Control in Power Converters

The role of a power converter is to modulate the electrical power flow between power sources. In such systems, control of the power flow is accomplished by varying the lengths of the time intervals for which one or more energy storage elements are connected to or disconnected from the energy sources. A converter should enable interaction of any input source with any output source of the power systems interconnected by the converter (Figure 13.1). That allows a converter to be analyzed as a switching matrix enabling interconnections and, in that way, creating a system with variable structure.
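The disturbance estimation and cancellation idea of (13.36)–(13.40) can be sketched for a double integrator with a matched disturbance. The $x_2$ block is modeled as a pure integrator; sliding mode in $\sigma = x_2 - \hat x_2$ makes the filtered observer input an estimate of the disturbance, which is then fed back with opposite sign. The plant, gains, and disturbance signal are illustrative assumptions.

```python
# Disturbance estimation/cancellation per (13.36)-(13.40): plant
# x1' = x2, x2' = u + d(t); integrator model x2_hat' = u + u_o; sliding mode
# in sigma = x2 - x2_hat gives filtered u_o -> u_oeq = d.
import math

dt, T = 1e-4, 6.0
x1, x2, x2_hat, d_hat = 0.5, 0.0, 0.0, 0.0
M, tau_f = 3.0, 0.02              # observer gain and filter constant

t = 0.0
while t < T:
    d = 0.8 + 0.5 * math.sin(5.0 * t)      # unknown matched disturbance
    u_c = -x1 - 2.0 * x2                   # nominal PD stabilizer
    u = u_c - d_hat                        # cancellation of estimated d
    sigma = x2 - x2_hat
    u_o = M if sigma > 0 else -M           # discontinuous observer input
    x1 += dt * x2
    x2 += dt * (u + d)
    x2_hat += dt * (u + u_o)               # integrator model of x2 block
    d_hat += dt * (u_o - d_hat) / tau_f    # filtered u_o -> estimate of d
    t += dt

final_d = 0.8 + 0.5 * math.sin(5.0 * T)
```

With the estimate fed back, the closed loop behaves close to the nominal double integrator, and `x1` converges despite the time-varying disturbance.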

Figure 13.1  Converter as interconnecting element.

For the purposes of mathematical modeling, the operation of a switch may be described by a two-valued parameter $u_{ij}(t)$, $(i = 1, \ldots, n;\; j = 1, \ldots, m)$, having value 0 when the switch is open and value 1 when the switch is closed. The laws governing the operation of electrical circuits restrict the operation of the switches in the sense that a voltage source cannot be short-circuited and a current source cannot be left in an open loop. These restrictions limit the set of permissible combinations of the switches' states.

13.3.1.1  DC-to-DC Converters

In the control of DC-to-DC converters, only the "canonical structures"—buck, boost, and buck–boost (Figure 13.2)—operating in continuous current mode will be analyzed. All three of these structures use an inductance to transfer energy from the source to the sink, so the basic structures are represented by two storage elements and a switch realizing the switching of the current path. The capacitances are used as storage elements to smooth the output voltage. The peculiarities of the structures are reflected in the mathematical description of the converters.

Figure 13.2  Topology of DC-to-DC converters: (a) buck, (b) boost, and (c) buck–boost.

The scalar parameter u, describing the operation of the switch, is treated as the discontinuous control input and is defined as

$u = \begin{cases} 1 & \text{if switch } S_1 = \text{ON and switch } S_2 = \text{OFF} \\ 0 & \text{if switch } S_1 = \text{OFF and switch } S_2 = \text{ON} \end{cases}$   (13.41)

The operation of the buck converter (Figure 13.2a) is described as

$\dfrac{dv_o}{dt} = \dfrac{i_L}{C} - \dfrac{v_o}{RC} = f_v(i_L, v_o), \qquad \dfrac{di_L}{dt} = \dfrac{V_g}{L}u - \dfrac{v_o}{L} = f_i(v_o) + b(V_g)u$   (13.42)

For the boost converter (Figure 13.2b),

$\dfrac{dv_o}{dt} = (1 - u)\dfrac{i_L}{C} - \dfrac{v_o}{RC} = f_v(v_o) + b_1(i_L)(1 - u), \qquad \dfrac{di_L}{dt} = \dfrac{V_g}{L} - (1 - u)\dfrac{v_o}{L} = f_i(V_g) + b(v_o)(1 - u)$   (13.43)

The buck–boost converter (Figure 13.2c) is a combination of the buck and boost structures and can be mathematically modeled as

$\dfrac{dv_o}{dt} = (1 - u)\dfrac{i_L}{C} - \dfrac{v_o}{RC} = f_v(v_o) + b_1(i_L)(1 - u), \qquad \dfrac{di_L}{dt} = \dfrac{V_g}{L}u - (1 - u)\dfrac{v_o}{L} = f_i(V_g)u + b(v_o)(1 - u)$   (13.44)

Despite their simple structure, with only two energy-storing elements, DC-to-DC converters represent complex dynamical systems due to the presence of switches and the resulting variable structure. The peculiarities of the converters can be better understood from their dynamic structure. The switching element in the buck converter is placed in front of the overall dynamics and directly influences only one storage element (the inductance); the second element is not directly influenced by the control, as can be seen from model (13.42). In the boost structure (13.43), the switching element is placed in the direct and feedback paths and acts on both storage elements—thus both blocks depend explicitly on the control input. The same holds for the buck–boost structure (13.44), a clear combination of the buck and boost structures.

The dynamics of the buck converter are written in regular form, and the design procedure may be done in two steps. First, the current reference should be selected from $\dot v_o = (i_L - v_o/R)/C$ to ensure the specification of the voltage control. For example, by selecting $i_L^{ref} = v_o/R + C(\dot v_o^{ref} - k\Delta v_o)$, $k > 0$, the error in the voltage control loop $\Delta v_o = v_o - v_o^{ref}$ will be governed by $\Delta\dot v_o + k\Delta v_o = 0$. In the second step, the control u should be selected to enforce sliding mode in $\sigma = i_L^{ref} - i_L = 0$. If $|L\,di_L^{ref}/dt + v_o| < |V_g|$ is satisfied and the control is selected as $u = (1 + \operatorname{sign}(\sigma))/2$, then, from $di_L/dt = (V_gu - v_o)/L$, the positive definite function $V = \sigma^2/2$ has a negative definite derivative, and consequently sliding mode is enforced in $\sigma = i_L^{ref} - i_L = 0$.
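The two-step buck design just described can be sketched as a simple Euler simulation: the outer loop produces the continuous current reference, and the inner loop enforces sliding mode with the switch position $u = (1 + \operatorname{sign}\sigma)/2$. Component values and the gain k are illustrative assumptions.

```python
# Two-step buck-converter design per (13.42): outer voltage loop picks
# i_ref = vo/R + C*(dv_ref/dt - k*dvo); inner loop enforces sliding mode in
# sigma = i_ref - iL with the discontinuous switch command.
Vg, L, C, R = 24.0, 100e-6, 100e-6, 10.0   # assumed component values
v_ref, k = 12.0, 5000.0
dt, T = 1e-7, 5e-3

vo, iL, t = 0.0, 0.0, 0.0
while t < T:
    i_ref = vo / R + C * (-k * (vo - v_ref))    # dv_ref/dt = 0
    sigma = i_ref - iL
    u = 1.0 if sigma > 0 else 0.0               # u = (1 + sign(sigma))/2
    vo += dt * (iL - vo / R) / C
    iL += dt * (Vg * u - vo) / L
    t += dt
```

The output voltage settles at the reference, and the inductor current at the load current $v_o/R$, with only switching-level ripple.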

Figure 13.3  Dynamic structure of generic DC-to-DC converters: (a) dynamic structure of buck converters, (b) dynamic structure of boost converters, and (c) dynamic structure of buck–boost converters.

The boost and buck–boost topologies are also written in block form, but the control input enters both blocks, so the separation of motions is not as obvious as in the buck converter. In real converters, however, the rate of change of the current is much faster than the rate of change of the output voltage. As shown in [BI83,VR85,VR86,MA04], that allows the method of separation of motions to be applied to these converters and, consequently, the design to be split into two steps. For the boost and buck–boost converters, in the first step the discontinuous control is selected to enforce sliding mode in the current loop $\sigma = i_L^{ref} - i_L = 0$, assuming that the current reference is a continuous function of the system state and the converter parameters. In the second step, the equivalent control $u_{eq}$ derived in the first step is plugged into the equation describing the output voltage, and the current reference, acting as the control of the obtained system, is determined to obtain the desired transients in the voltage loop.

Let $i_L^{ref}$ for a boost converter be a continuous current reference to be tracked by the inductor current. The discontinuous control enforcing sliding mode in $\sigma = i_L^{ref} - i_L = 0$ can be derived from

$\dot\sigma = \dfrac{di_L^{ref}}{dt} - \dfrac{di_L}{dt} = \dfrac{di_L^{ref}}{dt} - \dfrac{V_g - v_o}{L} - \dfrac{v_o}{L}u$   (13.45)

If $|V_g| < |v_o|$ is satisfied and the control is selected as $u = (1 + \operatorname{sign}(\sigma))/2$, the positive definite function $V = \sigma^2/2$ has a negative definite derivative, and consequently sliding mode is enforced in $\sigma = i_L^{ref} - i_L = 0$; the transient in the current loop then tracks the dynamics dictated by $i_L = i_L^{ref}$. The equivalent control obtained from (13.45) can be expressed as

$\dot\sigma = 0 \;\Rightarrow\; 1 - u_{eq} = \dfrac{V_g}{v_o} - \dfrac{L}{v_o}\dfrac{di_L^{ref}}{dt}$   (13.46)
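The relations (13.45)–(13.46) can be checked numerically: with a constant current reference selected from the power balance, the boost output settles at the target voltage, and the average switch position in sliding mode approaches the equivalent control $u_{eq} = 1 - V_g/v_o$. Component values are illustrative assumptions.

```python
# Boost converter with sliding-mode current loop per (13.45)-(13.46):
# constant reference i_ref = v_ref**2/(R*Vg) from the power balance; in
# sliding mode the average of u should approach u_eq = 1 - Vg/vo.
Vg, L, C, R = 12.0, 100e-6, 470e-6, 10.0   # assumed component values
v_ref = 24.0
i_ref = v_ref**2 / (R * Vg)                # 4.8 A
dt, T = 2e-7, 20e-3
n = int(T / dt)

vo, iL, duty_acc, duty_n = Vg, 0.0, 0.0, 0  # vo precharged to Vg
for step in range(n):
    sigma = i_ref - iL
    u = 1.0 if sigma > 0 else 0.0           # u = (1 + sign(sigma))/2
    vo += dt * ((1.0 - u) * iL / C - vo / (R * C))
    iL += dt * (Vg - (1.0 - u) * vo) / L
    if step >= 3 * n // 4:                  # average duty, last quarter
        duty_acc += u
        duty_n += 1
mean_duty = duty_acc / duty_n
```

At steady state $v_o \approx 24\,$V, the measured mean duty matches $u_{eq} = 1 - 12/24 = 0.5$.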

By inserting the value (13.46) of the equivalent control into the first equation of (13.43), and taking into account the equality $i_L = i_L^{ref}$, one obtains

$\dfrac{C}{2}\dfrac{dv_o^2}{dt} + \dfrac{v_o^2}{R} = V_gi_L^{ref} - \dfrac{L}{2}\dfrac{d(i_L^{ref})^2}{dt}$   (13.47)

Dynamics (13.47) describes the power balance in the boost converter. If the reference current is constant or slowly varying, (13.47) can be simplified by discarding the current-derivative term; the resulting dynamics are linear in $v_o^2$. By doing that, and selecting the reference current as $i_L^{ref} = (v_d^{ref})^2/(RV_g)$, direct selection of the reference current based on (13.47) will ensure a stable transient to the steady-state value $v_o = v_d^{ref}$. That shows a peculiarity in the dynamical behavior of the boost converter.

For the buck–boost converter, the same approach, with sliding mode in $\sigma = i_L^{ref} - i_L = 0$, gives the equivalent control $u_{eq} = (L\,di_L^{ref}/dt + v_o)/(V_g + v_o)$. Inserting this value of the equivalent control into the voltage dynamics yields

$C\dfrac{dv_o}{dt} + \dfrac{v_o}{R} = \left(i_L^{ref} - \dfrac{L}{2V_g}\dfrac{d(i_L^{ref})^2}{dt}\right)\dfrac{V_g}{V_g + v_o}$   (13.48)

The dynamics (13.48) describes a complex nonlinear behavior, but linearization around a steady-state point (constant, due to the fact that a DC-to-DC converter is analyzed) may easily be applied. Based on linearization around $V_o^{ref}$, the reference current should be selected as $i_L^{ref} = V_o^{ref}(V_g + V_o^{ref})/(RV_g)$.

The enforcement of sliding mode in the current control loop thus allows calculation of the equivalent control and effective derivation of the averaged—continuous—dynamics of the voltage loop once sliding mode is established in the current loop. That allows the application of continuous-system design methods in the voltage control loop. The deliberate introduction of sliding mode in DC-to-DC converter control shows the potential of such a design in systems in which discontinuity of the control is already present.

13.3.1.2  Three-Phase Converters

The topology of switching converters that may have DC or three-phase AC sources on the input and/or output side is shown in Figure 13.4a. The DC-to-AC (inverter) and AC-to-DC (rectifier) three-phase converters have the same switching matrix. The state of the switching matrix in Figure 13.4a is described by the vector with elements being the parameters $u_{ij}$, $(i = 1, 2, \ldots, n;\; j = 1, 2, \ldots, m)$, which define the ON–OFF states of the switches, $S = [u_{11}\;\ldots\;u_{ik}\;\ldots\;u_{nm}]$ [SI81,SOS92]. In the following section, three-phase converters will be analyzed and control in the sliding mode framework will be developed for these systems. We will limit the analysis to voltage-source converters and will discuss both inverters and rectifiers; in order to show some peculiarities of the sliding mode design, the buck inverter (Figure 13.4b) and the boost rectifier (Figure 13.4c) will be analyzed in more detail. The design follows the well-established separation into subsystems of lower order. By selecting voltages and currents as state variables in the three-phase (a, b, c) frame of references, we are able to describe the three-phase converter dynamics. Assuming that the converters are supplied by, or are sources of, three-phase balanced voltages or currents, the zero-component is zero and can be neglected in the analysis. A simplification of the system description can be obtained by mapping the corresponding equations into the orthogonal stationary (α, β) frame of references. A further simplification may be obtained by mapping from the (α, β) frame of references into a rotating frame of references (d, q).
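The steady-state reference-current selections derived from (13.47) and (13.48) can be verified with a short numerical check; the voltage and load values are illustrative assumptions.

```python
# Steady-state check of the averaged voltage-loop models (13.47)-(13.48):
# with d(i_ref)/dt = 0, the boost balance is Vg*i_ref = vo^2/R and the
# buck-boost balance is vo/R = i_ref*Vg/(Vg + vo).
Vg, R = 12.0, 10.0               # assumed source voltage and load

# boost: i_ref = vd^2/(R*Vg)
vd = 24.0
i_ref_boost = vd**2 / (R * Vg)
res_boost = Vg * i_ref_boost - vd**2 / R          # residual of (13.47)

# buck-boost: i_ref = Vo*(Vg + Vo)/(R*Vg)
Vo = 18.0
i_ref_bb = Vo * (Vg + Vo) / (R * Vg)
res_bb = Vo / R - i_ref_bb * Vg / (Vg + Vo)       # residual of (13.48)
```

Both residuals vanish, confirming that the proposed references place the averaged dynamics at the desired operating point.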

Figure 13.4  (a) Basic topology of three-phase converters, (b) buck inverter, and (c) boost rectifier.

The mapping between these frames of reference is defined by the matrix $A_{abc}^{\alpha\beta}$ for the (a, b, c) to (α, β) transformation and $A_{\alpha\beta}^{dq}$ for the (α, β) to (d, q) transformation. The matrix $F(\theta_r) = A_{\alpha\beta}^{dq}A_{abc}^{\alpha\beta}$ describes the transformation from the three-phase (a, b, c) frame to the synchronous orthogonal (d, q) frame, with the d-axis at position $\theta_r$ with respect to the α-axis of the stationary frame of references [SOS92]:

$u_{dq} = F(\theta_r)\,u_{abc} = A_{\alpha\beta}^{dq}A_{abc}^{\alpha\beta}u_{abc}, \qquad A_{\alpha\beta}^{dq} = \begin{bmatrix} \cos\theta_r & \sin\theta_r \\ -\sin\theta_r & \cos\theta_r \end{bmatrix}, \qquad A_{abc}^{\alpha\beta} = \begin{bmatrix} 1 & -1/2 & -1/2 \\ 0 & \sqrt{3}/2 & -\sqrt{3}/2 \end{bmatrix}$   (13.49)

The notation used is as follows: $u_{dq}^T = [u_d\;u_q]$ is the control vector in the (d, q) frame, $u_{\alpha\beta}^T = [u_\alpha\;u_\beta]$ in the (α, β) frame, and $u_{abc}^T = [u_a\;u_b\;u_c]$ in the (a, b, c) frame of references.

In the analysis, the (d, q) frame of reference is selected to be synchronous with the voltage vector on the three-phase side of the converter (the input side for rectifiers and the output side for inverters). Since $\operatorname{rank}(F(\theta_r)) = \operatorname{rank}(A_{\alpha\beta}^{dq}A_{abc}^{\alpha\beta}) = 2$, the mapping from the (a, b, c) to the (d, q) frame of references is not unique.

The buck inverter and boost rectifier have eight permissible connections $S = \{S_1, S_2, \ldots, S_8\}$ of the switches in the switching matrix. These eight permissible connections generate six voltage vectors different from zero and two zero vectors, as depicted in Figure 13.5.
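The transformations of (13.49) can be coded directly. Note that amplitude-scaling conventions (e.g., a 2/3 factor) vary between texts; the matrices below follow (13.49) as printed, so a balanced unit set maps to a d-component of 3/2.

```python
# Mappings of (13.49): A_abc (abc -> alpha-beta) and A_dq (alpha-beta -> dq);
# F(theta) = A_dq @ A_abc has rank 2, so the inverse mapping to the phase
# quantities is not unique.
import numpy as np

A_abc = np.array([[1.0, -0.5, -0.5],
                  [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])

def A_dq(theta):
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

def F(theta):
    return A_dq(theta) @ A_abc

theta = 0.7
u_abc = np.array([np.cos(theta),
                  np.cos(theta - 2 * np.pi / 3),
                  np.cos(theta + 2 * np.pi / 3)])   # balanced three-phase set
u_dq = F(theta) @ u_abc                             # aligns with the d-axis
rank_F = int(np.linalg.matrix_rank(F(theta)))
```

With the d-axis synchronous with the voltage vector, the q-component vanishes, illustrating the frame selection described above.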

Figure 13.5  Control vectors for permissible states of the switching matrix (six active vectors and two zero vectors corresponding to the permissible states $S_1, \ldots, S_8$).

13.3.1.2.1  Dynamics of Three-Phase Converters

The dynamics of the three-phase buck inverter, in the (d, q) frame of references with the d-axis collinear with the reference output voltage vector, may be described as

$\begin{bmatrix} \dot v_{od} \\ \dot v_{oq} \end{bmatrix} = \begin{bmatrix} -\dfrac{v_{od}}{RC} + \omega_rv_{oq} \\ -\omega_rv_{od} - \dfrac{v_{oq}}{RC} \end{bmatrix} + \dfrac{1}{C}\begin{bmatrix} i_{Ld} \\ i_{Lq} \end{bmatrix}, \qquad \dfrac{dv_o}{dt} = f_v(v_o, \omega_r) + \dfrac{1}{C}i_L$   (13.50)

$\begin{bmatrix} \dot i_{Ld} \\ \dot i_{Lq} \end{bmatrix} = \begin{bmatrix} -\dfrac{v_{od}}{L} + \omega_ri_{Lq} \\ -\dfrac{v_{oq}}{L} - \omega_ri_{Ld} \end{bmatrix} + \dfrac{V_g}{L}\begin{bmatrix} u_d \\ u_q \end{bmatrix}, \qquad \dfrac{di_L}{dt} = f_i(v_o, i_L, \omega_r) + \dfrac{V_g}{L}u_{dq}$   (13.51)
i = (1.Variable Structure Control Techniques 13 -17 where T vo = [vod voq ] stands for the capacitance output voltage vector T iL = [iLd iLq ] stands for inductor current vector Vg is amplitude of input voltage source ωr = ˙ θ r stands for angular velocity of the frame of references R. 8) dt (13. implicitly applied in most of the so-called space vector ( ) ( ) dq αβ PWM algorithms. In real applications. Boost rectifier in (d. Controls ud. c) frame of references in order to decide the corresponding switches ON and OFF pattern. mapping from (d.2. Components id are continuous functions to be determined later. σq = iq (t ). Another dq αβ problem of this algorithm is in the fact that transformation uabc = Aαβ Aabc udq can be realized with the accuracy to the sign of the control. so it has to be discussed separately. uq should be mapped back to the (a. the sliding mode conj j eq ditions will be satisfied if sign u j − u j = − sign(σ j ). iL . design of control in (d. This solution has disadvantage that zero vector is not used and consequently switching is not optimized. b ≠ 0.…. iq (t ) of reference current vector σT = [σd  σq] with σd = id (t ) − iq. q) with d-axis collinear with the voltage supply vector Vg may be described as iLd ud + iLquq dvo v v 1 T =− o + = − o + iL udq dt RC C RC C  diLd   dt  ω r iLq + Vg  vo 1 − =  L  L 0  diLq    − ω i r Ld        dt  diL v = fi (Vg . c) frame of references does not have unique solution. sliding mode current control will be designed for the current control loop and in the second step voltage control loop will be designed.1. based on using transformation uabc = Aαβ Aabc udq has been proposed. In [SI81] solution. q). In the first step.2  Current Control in Three-Phase Converters Current control is based on the sliding mode existence in the manifold σT = [iref (t) − i] = 0. uq may be determined from σd σ ˙ d 〈0 and σq σ ˙ q 〈0. Due to the fact that rank(F(θr)) = 2. 
where $v_o^T = [v_{od}\;v_{oq}]$ stands for the capacitor output voltage vector, $i_L^T = [i_{Ld}\;i_{Lq}]$ stands for the inductor current vector, $V_g$ is the amplitude of the input voltage source, $\omega_r = \dot\theta_r$ stands for the angular velocity of the frame of references, and R, L, C stand for the converter parameters. The structure is described in the regular form, where the current can be treated as a virtual control in Equation 13.50.

The boost rectifier, in the (d, q) frame of references with the d-axis collinear with the supply voltage vector $V_g$, may be described as

$\dfrac{dv_o}{dt} = -\dfrac{v_o}{RC} + \dfrac{i_{Ld}u_d + i_{Lq}u_q}{C} = -\dfrac{v_o}{RC} + \dfrac{1}{C}i_L^Tu_{dq}$   (13.52)

$\begin{bmatrix} \dot i_{Ld} \\ \dot i_{Lq} \end{bmatrix} = \begin{bmatrix} \omega_ri_{Lq} + \dfrac{V_g}{L} \\ -\omega_ri_{Ld} \end{bmatrix} - \dfrac{v_o}{L}\begin{bmatrix} u_d \\ u_q \end{bmatrix}, \qquad \dfrac{di_L}{dt} = f_i(V_g, i_L, \omega_r) - \dfrac{v_o}{L}u_{dq}$   (13.53)

The design may follow the same procedure as shown for the DC-to-DC converters: in the first step, sliding mode current control is designed for the current loop; in the second step, the voltage control loop is designed. An application of sliding mode control to a boost rectifier used as a reactive power compensator can be found in [RS93].

13.3.1.2.2  Current Control in Three-Phase Converters

Current control is based on sliding mode existence in the manifold $\sigma^T = [i^{ref}(t) - i] = 0$, where the reference current vector has components $i_d^{ref}(t)$, $i_q^{ref}(t)$; thus $\sigma^T = [\sigma_d\;\sigma_q]$ with $\sigma_d = i_d^{ref}(t) - i_d$ and $\sigma_q = i_q^{ref}(t) - i_q$. From (13.51) and (13.53) it follows that a component-wise analysis may be applied, and the controls $u_d$, $u_q$ may be determined from $\sigma_d\dot\sigma_d < 0$ and $\sigma_q\dot\sigma_q < 0$. By representing the rate of change of the current errors as $d\sigma_j/dt = b_j(u_j - u_j^{eq})$, $b_j \neq 0$, $(j = d, q)$, the sliding mode conditions are satisfied if $\operatorname{sign}(u_j - u_j^{eq}) = -\operatorname{sign}(\sigma_j)$.

The design of the control in the (d, q) frame of references requires mapping of the control vector from (d, q) back to the (a, b, c) frame of references in order to decide the corresponding ON–OFF pattern of the switches. This mapping does not have a unique solution. In [SI81] a solution based on the transformation $u_{abc} = (A_{\alpha\beta}^{dq}A_{abc}^{\alpha\beta})^Tu_{dq}$ was proposed; it is implicitly applied in most of the so-called space-vector PWM algorithms. This solution has the disadvantage that the zero vectors are not used, and consequently the switching is not optimized. Another problem of this algorithm lies in the fact that the transformation can be realized only with an accuracy up to the sign of the control: the amplitude of the control cannot be mapped correctly, since the number of available control vectors is limited, as shown in Figure 13.5.

Another solution [SOS92,SOS93] can be found by expressing the time derivative of $\sigma^T = [\sigma_d\;\sigma_q]$ for systems (13.51) and (13.53) in the following form:

$\dfrac{d\sigma_{dq}}{dt} = B_{udq}\big[u_{dq}(S_i) - u_{dq}^{eq}\big]$   (13.54)
55) (13. control vectors could take values from the discrete set S = {S1. − 1 ≤ ueqj ≤ 1 −1 BN fn < 1 (13.13 -18 S2 β q ueq d S4 S7. Ambiguity in selection of the control vector based on selected ud and uq allows a number of different PWM algorithms that satisfy sliding mode conditions.57) This line of reasoning with some variations has been the most popular in designing the sliding mode– based switching pattern [RM86. there is more than one permissible vector (Figure 13. The simplest solution is for ϑ [SI81. S2. That leads to an ambiguous selection of the control and consequently existence of more than one solution for the selection of switching pattern. S3. q) can be found from sign udq (Si ) − udq sliding mode existence conditions σi σ some combinations of errors. S5. For  ˙i 〈0(i = d. S8}.6). Matrix Budq for both converters under consideration is diagonal positive definite and equivalent control may be easily found from (13.VGS99].51) or (13.5.RM86.54) and (13.53) depending on the converters to be analyzed. so that (13. l ) that satisfy eq = − sign(σdq ). ….57) enforces balanced condition for average © 2011 by Taylor and Francis Group. S4. In early works related to electrical machine control. As shown in Figure 13.54) is augmented to have the form ( )  dσdq   ref  dt   di − fidq   −  Budq F (θr ) u (S )  =  dt   abc i T   bϑ    dϑ     f ϑ      dt   dσ N T = f N − BN uabc (Si ) uabc (Si ) =  ua dt ub uc   (13. S8 α Control and Mechatronics S3 S1 S5 S6 Figure 13.56) ˙ (t) = ua + ub + uc vector bϑ should be selected so that rank(BN) = 3. LLC . For a particular combination of errors σd and σq all permissible vectors Si(i = 1. then matrix BN will have full rank. the following solution was proposed: add an additional requirement ϑ(t) = 0 to the control system specification. S6. 
Figure 13.6  Effective rate of change of the control errors for the permissible control vectors.

The matrix $B_{udq}$ for both converters under consideration is diagonal and positive definite, and the equivalent control may easily be found from (13.51) or (13.53), depending on the converter analyzed. In sliding mode, the equivalent control must remain within the range of the available control,

$-1 \le u_j^{eq} \le 1, \quad (j = d, q)$   (13.55)

To determine the switching pattern, all permissible vectors $S_i$, $(i = 1, \ldots, 8)$, that satisfy the sliding mode existence conditions $\sigma_i\dot\sigma_i < 0$, $(i = d, q)$—that is, $\operatorname{sign}\big(u_{dq}(S_i) - u_{dq}^{eq}\big) = -\operatorname{sign}(\sigma_{dq})$—can be found from (13.54). For some combinations of the errors $\sigma_d$ and $\sigma_q$, there is more than one permissible vector (Figure 13.6). That leads to an ambiguous selection of the control and, consequently, to the existence of more than one solution for the selection of the switching pattern. In this case, the control vectors can take values from the discrete set $S = \{S_1, S_2, \ldots, S_8\}$. The ambiguity in the selection of the control vector based on the selected $u_d$ and $u_q$ allows a number of different PWM algorithms that satisfy the sliding mode conditions.

In early works related to electrical machine control, the following solution was proposed: add the additional requirement $\vartheta(t) = 0$, with $\dot\vartheta(t) = u_a + u_b + u_c$, to the control system specification. Then (13.54) is augmented to have the form

$\begin{bmatrix} \dfrac{d\sigma_{dq}}{dt} \\ \dfrac{d\vartheta}{dt} \end{bmatrix} = \begin{bmatrix} \dfrac{di^{ref}}{dt} - f_{idq} \\ f_\vartheta \end{bmatrix} - \begin{bmatrix} B_{udq}F(\theta_r) \\ b_\vartheta^T \end{bmatrix}u_{abc}(S_i), \quad \text{i.e., } \dfrac{d\sigma_N}{dt} = f_N - B_Nu_{abc}(S_i), \quad u_{abc}(S_i) = [u_a\;u_b\;u_c]^T$   (13.56)

The vector $b_\vartheta$ should be selected so that $\operatorname{rank}(B_N) = 3$. If the matrix $B_N$ has full rank, the simplest way is to use the nonlinear transformation $\sigma_s = B_N^{-1}\sigma_N$. The sliding mode conditions are then satisfied, provided $|B_N^{-1}f_N| < 1$, if the control is selected as

$\operatorname{sign}\big(u_j(S_i)\big) = -\operatorname{sign}(\sigma_{sj})$   (13.57)

This line of reasoning, with some variations, has been the most popular in the design of sliding mode–based switching patterns [SI81,RM86,VGS99]. Solution (13.57) enforces the balanced condition for the average values of the phase controls, $\int(u_a + u_b + u_c)\,d\tau = 0$.
In this case. It may be designed. the reference currents id ref ref (t ) − id = 0 and σq = iq ing mode dynamics of current.Variable Structure Control Techniques 13 -19 values of phase controls ∫(ua + ub + uc) dτ = 0.60) 2 Dynamics (13. control loop is reduced to σd = id (t ) − iq = 0.4.60). optimization of switching can be formulated as one of the requirement of the overall system design. the dynamics of output voltage with sliding mode in current control loop becomes  ref 2 C dvo v2 L d id ref + o = Vg id −  2 dt 2  dt R  ( ) 2 + ref d iq ( )  2 dt   (13.52).SA02. The equivalent ref ref − id = 0 and σq = iq control in σd = id − iq = 0 can be expressed as  di ref  1 eq ud =  Vg + Lω r iq − L d  dt  vo  ref   1 diq u = −  Lω rid + L v dt   o eq q (13. dvo/dt = f v(vo. it ref can be treated as a disturbance. A new class of switching algorithms based on the simple requirement that control should be selected to give the minimum rate of change of control error. All of the above algorithms operate in so-called overmodulation as a normal operational mode without change of the algorithm. iq (t ) shall be determined. That generates unnecessary rise of switching frequency in the system real application. This algorithm can be easily extended by requiring the non-zero sum ∫(ua + ub + uc) dτ = z (actually standing for the average of the neutral point vector in star connection for there phase system). LLC . Then id can be selected the same way as in DC-to-DC boost converter.59) into (13. The q-component of the input current represents the reactive power flow in the system and in (13. Buck inverter dynamical model (50). 8) This algorithm is universal in a sense that it works for all converters that have switching matrix as shown in Figure 13. ref ref (t ).2. For boost rectifier approach established for boost DC-to-DC converter can be applied.PO07].59) By inserting (13. so the same current control error will be achieved with less switching effort [SOS92. 
This algorithm can easily be extended by requiring a nonzero sum $\int(u_a + u_b + u_c)\,d\tau = z$ (actually standing for the average of the neutral-point voltage in a star connection of the three-phase system). Algorithms based only on the sign conditions, however, may generate an unnecessary rise of the switching frequency in real applications. A new class of switching algorithms is therefore based on the simple requirement that the control should be selected to give the minimum rate of change of the control error, so that the same current-control error is achieved with less switching effort [SOS92,SA02,PO07]. The algorithm can be formulated in the following form:

$S_i: \quad \operatorname{sign}\big(u_d(S_i) - u_d^{eq}\big)\cdot\sigma_d(t) = -1 \;\text{ AND }\; \operatorname{sign}\big(u_q(S_i) - u_q^{eq}\big)\cdot\sigma_q(t) = -1 \;\text{ AND }\; \min_i\big\|u(S_i) - u^{eq}(t)\big\|, \quad i = (1, \ldots, 8)$   (13.58)

This algorithm is universal in the sense that it works for all converters that have a switching matrix as shown in Figure 13.4. A modification introducing optimization of the total harmonic distortion (THD) has been successfully applied [CE97]. All of the above algorithms operate in so-called overmodulation as a normal operational mode, without any change of the algorithm. In real applications, the optimization of switching can thus be formulated as one of the requirements of the overall system design.

In order to complete the design, the reference currents $i_d^{ref}(t)$, $i_q^{ref}(t)$ shall be determined. In the sliding mode current control loop $\sigma_d = i_d^{ref}(t) - i_d = 0$, $\sigma_q = i_q^{ref}(t) - i_q = 0$, the dynamics of the currents are dictated by the references, which can be treated as virtual controls of the voltage loop. The buck inverter dynamical model (13.50), $dv_o/dt = f_v(v_o, \omega_r) + (1/C)i_L$, describes a two-dimensional system affine in the control, and the design of the currents may follow the same procedure as for the buck DC-to-DC converter.

For the boost rectifier, the approach established for the boost DC-to-DC converter can be applied. The equivalent control in $\sigma_d = i_d^{ref} - i_d = 0$ and $\sigma_q = i_q^{ref} - i_q = 0$ can be expressed as

$u_d^{eq} = \dfrac{1}{v_o}\left(V_g + L\omega_ri_q - L\dfrac{di_d^{ref}}{dt}\right), \qquad u_q^{eq} = -\dfrac{1}{v_o}\left(L\omega_ri_d + L\dfrac{di_q^{ref}}{dt}\right)$   (13.59)

By inserting (13.59) into (13.52), the dynamics of the output voltage with sliding mode in the current control loop become

$\dfrac{C}{2}\dfrac{dv_o^2}{dt} + \dfrac{v_o^2}{R} = V_gi_d^{ref} - \dfrac{L}{2}\left(\dfrac{d(i_d^{ref})^2}{dt} + \dfrac{d(i_q^{ref})^2}{dt}\right)$   (13.60)

Dynamics (13.60) is a first-order system in $v_o^2$—the same form as the dynamics of the DC-to-DC boost converter (13.47)—so $i_d^{ref}$ can be selected in the same way as for the DC-to-DC boost converter. The q-component of the input current represents the reactive power flow in the system and, in (13.60), can be treated as a disturbance.
For three phase machines. Lm. in comparison with buck inverter.SA02]. is like three-phase buck inverters in which output filter is replaced by machine.13 -20 v ref o ref iL Control and Mechatronics + – Voltage control Current loop iL dvo = fv(iL . The structure of the power converters control system can be presented as in Figure 13.2. With such selection sliding mode algorithms for current control developed for three-phase converters can be directly applied for electrical machines as well [SI81. σ stand for machine parameters © 2011 by Taylor and Francis Group. The topology of interconnection of the switching matrix and machines. vo) dt vo ref iL + – Vector selection iL diL = f v .3. LLC . lies in the selection of the (d. The selected structure is only one of the several possible solutions. q) frame of references.61)  dis RL  1  Lm  = − Br ψ r + r m is  − Rsis + us    dt σLs  Lr  Lr   (13.62) where ψT ψ rβ ] stands for rotor flux vector r = [ψ r α T is = [isα isβ ] stands vector of stator current uT usβ ] stands for stator voltage s = [usα ω is velocity Rs.1  Induction Machine Flux Observer Description of the induction machine in the (α . Lr. Another field of application of sliding modes in electrical drives is estimation of states and parameters of electrical machines [VII93.3. 13.SG95. Br =  Lr dt Lr ω    −ω   Rr  Lr   (13.7  Structure of the sliding mode control for switching power converters. for standard three-phase machines. As an example. β) frame of reference can be written as  Rr  dψ r Rr Lm = − Br ψ r + is . V + b (vo .7 where the current control loop operates in sliding mode with discontinuous control.VGS99].2  SMC in Electrical Drives Control of electrical drives requires usage of a switching matrix as interconnection between electrical machine and the power source. Ls. flux and velocity observer for Induction Machine (IM) will be shown. Vg) u dt i( o g) Figure 13. 
Figure 13.7  Structure of the sliding mode control for switching power converters.

The structure of the power converter control system can be presented as in Figure 13.7, where the current control loop operates in sliding mode with discontinuous control, while the voltage controller supplies the continuous current reference. The selected structure is only one of several possible solutions.

13.3.2  SMC in Electrical Drives

Control of electrical drives requires the usage of a switching matrix as the interconnection between the electrical machine and the power source. For standard three-phase machines, the topology of the interconnection of the switching matrix and the machine is like that of the three-phase buck inverter, with the output filter replaced by the machine. The specifics of the sliding mode current controller design for three-phase electrical machines, in comparison with the buck inverter, lie in the selection of the (d, q) frame of references: for three-phase machines, the d-axis should be selected collinear with the flux vector (rotor or stator). With such a selection, the sliding mode current control algorithms developed for three-phase converters can be directly applied to electrical machines as well [SI81,SG95,VGS99]. Another field of application of sliding modes in electrical drives is the estimation of the states and parameters of electrical machines [VII93,SA02]. As an example, a flux and velocity observer for the induction machine (IM) will be shown.

13.3.2.1  Induction Machine Flux Observer

The description of the induction machine in the (α, β) frame of reference can be written as

$\dfrac{d\psi_r}{dt} = -B_r\psi_r + \dfrac{R_rL_m}{L_r}i_s, \qquad B_r = \begin{bmatrix} \dfrac{R_r}{L_r} & \omega \\ -\omega & \dfrac{R_r}{L_r} \end{bmatrix}$   (13.61)

$\dfrac{di_s}{dt} = \dfrac{1}{\sigma L_s}\left[\dfrac{L_m}{L_r}\left(B_r\psi_r - \dfrac{R_rL_m}{L_r}i_s\right) - R_si_s + u_s\right]$   (13.62)

where $\psi_r^T = [\psi_{r\alpha}\;\psi_{r\beta}]$ stands for the rotor flux vector, $i_s^T = [i_{s\alpha}\;i_{s\beta}]$ stands for the stator current vector, $u_s^T = [u_{s\alpha}\;u_{s\beta}]$ stands for the stator voltage vector, ω is the angular velocity, and $R_s$, $R_r$, $L_s$, $L_r$, $L_m$, σ stand for the machine parameters.

Formally, it is possible to design a stator current observer based on voltage and current measurements, with the rotor flux vector derivative as the control input:

$\dfrac{d\hat i_s}{dt} = \dfrac{1}{\sigma L_s}\left[\dfrac{L_m}{L_r}u_\psi - R_s\hat i_s + u_s\right]$   (13.63)

This selection of the observer control input is different from the usually used current-error feedback, or from selecting the unknown velocity as the control input [DI83,SA02]. The estimation error $\varepsilon_i = i_s - \hat i_s$ is governed by

$\dfrac{d\varepsilon_i}{dt} = \dfrac{d(i_s - \hat i_s)}{dt} = \dfrac{1}{\sigma L_s}\left[\dfrac{L_m}{L_r}\left(B_r\psi_r - \dfrac{R_rL_m}{L_r}i_s - u_\psi\right) - R_s\varepsilon_i\right]$   (13.64)
Variable Structure Control Techniques 13 -21 Formally. q) + g (q) = n(q.64) is straight forward. q) ∈ℜn ×1 represents vector of coupling forces including gravity and friction τ ∈ℜn ×1 is vector of generalized input forces Vector n(q.3  Sliding Modes in Motion Control Systems Control of multibody systems is generally analyzed in joint space (often called configuration space) and in task space (often called operational space). it is possible to design a stator current observer based on voltage and current measurements and with a rotor flux vector derivative as the control input: ˆs di 1 = dt σ Ls  Lm ˆ s + us  − uψ − Rsi    Lr  (13.65) Extension of this idea to estimation of the flux and velocity is discussed in details in [SG95. 13.ZU02].67) © 2011 by Taylor and Francis Group. In this section. q) − g (q) + τ (13. Model (13. application of SMC will be shown for both joint space and task space.66) can be rewritten in regular form q=v A(q)v = −b(q.66) where q ∈ℜn stands for vector of generalized positions q ∈ℜn stands for vector of generalized velocities A(q) ∈ℜn×n is the generalized positive definite inertia matrix with bounded parameters n(q. q) (13.3. The estimation error is determined as d εi d( is − iˆ)s 1 = = dt dt σ Ls  Lm     RL  − Br ψ r + r m is  + uψ  − Rs εi   L  Lr    r   (13.3.64) one can find  eq Rr Lm  ˆ r = Br−1  uψ ψ − is . or selecting the unknown velocity as the control input [DI83.64) uψβ ] to enforce sliding mode in (13. LLC . q) + g (q) = τ b(q.SA02]. 𝛆i = 0 and under the assumption that the angular velocity is known. After sliding Selection of control uT ψ = [uψα mode is enforced from d𝛆i/dt = 0.1  Joint Space Control Mathematical model of fully actuated mechanical system (number of actuators equal to number of the primary masses) in configuration space may be found in the following form A(q)q + b(q. 13.3. application of sliding mode control based on current feedback. 
The selection of the control $u_\psi^T = [u_{\psi\alpha}\;u_{\psi\beta}]$ to enforce sliding mode in $\varepsilon_i = 0$ from (13.64) is straightforward. After sliding mode is enforced, $\varepsilon_i = 0$ and $d\varepsilon_i/dt = 0$, and, under the assumption that the angular velocity ω is known, from (13.64) one can find

$u_\psi^{eq} = B_r\psi_r - \dfrac{R_rL_m}{L_r}i_s \;\Rightarrow\; \hat\psi_r = B_r^{-1}\left(u_\psi^{eq} + \dfrac{R_rL_m}{L_r}i_s\right)$   (13.65)

The extension of this idea to the estimation of both flux and velocity is discussed in detail in [SG95,ZU02].

13.3.3  Sliding Modes in Motion Control Systems

Control of multibody systems is generally analyzed in joint space (often called configuration space) and in task space (often called operational space). In this section, the application of SMC will be shown for both joint space and task space.

13.3.3.1  Joint Space Control

The mathematical model of a fully actuated mechanical system (number of actuators equal to the number of the primary masses) in configuration space may be found in the following form:

$A(q)\ddot q + b(q, \dot q) + g(q) = \tau, \qquad b(q, \dot q) + g(q) = n(q, \dot q)$   (13.66)

where $q \in \Re^n$ stands for the vector of generalized positions, $\dot q \in \Re^n$ stands for the vector of generalized velocities, $A(q) \in \Re^{n\times n}$ is the generalized positive definite inertia matrix with bounded parameters, $n(q, \dot q) \in \Re^{n\times 1}$ represents the vector of coupling forces including gravity and friction, and $\tau \in \Re^{n\times 1}$ is the vector of generalized input forces. The vector $n(q, \dot q)$ satisfies the matching conditions. Model (13.66) can be rewritten in regular form:

$\dot q = v, \qquad A(q)\dot v = -b(q, \dot q) - g(q) + \tau$   (13.67)

In the first step, the velocity vector v, taken as virtual control in (13.67), can be selected as $\Delta v = -G\Delta q$, with $\Delta v = v - v^{ref}$, $\Delta q = q - q^{ref}$, and $q^{ref}$, $v^{ref}$ continuous differentiable functions. In the second step, the control τ should be selected to enforce sliding mode in the manifold

$\sigma = \Delta v + G\Delta q = 0$   (13.68)

The equivalent control for system (13.66) in manifold (13.68) is readily available, and the motion of (13.68) takes the form

$\dot\sigma = A^{-1}(\tau - \tau^{eq}), \qquad \tau^{eq} = b(q, \dot q) + g(q) + A(\dot v^{ref} - G\Delta v)$   (13.69)

The simplest solution to enforce sliding mode is to select $\tau = -M_o\operatorname{sign}(\sigma)$ with $M_o$ large enough [YK78]. This solution will have a large amplitude and in most cases will cause chattering. Then the control enforcing sliding mode in manifold (13.68) may instead be selected as

$\tau = \hat b(q, \dot q) + \hat g(q) + \hat A(\dot v^{ref} - G\Delta v) - \tau_0\operatorname{sign}(\sigma), \qquad \tau_0 > 0$   (13.70)

where $\tau_0 > 0$ is determined to compensate for the disturbance estimation error.
Then control enforcing sliding mode in manifold (13. Physical interpretation of control is now apparent. Very similar result may be obtained for motion control in operation space.66) is readily available. b. if one assumes that in motion control systems the torque input is generated by electrical machines. In the second step. In general. then application of the (13. It consists of the disturbance rejection term (ˆ  b(q. velocity vector v. q) + g ˆ (q) ) − τ sign(σ).70) is to interpret the term (v ˙ ref − GΔv) = v ˙ des = q acceleration in the closed loop system (which is equal to the solution for acceleration of the σ ˙   = 0 and in that sense stands for “equivalent acceleration”).2  Operation Space Control In the analysis of the motion system.70).68) may be guaranteed and strictly speaking no sliding mode motion will be realized in the system and the motion will be in the vicinity of manifold (13. Another solution can be developed by merging disturbance observer method [OK96] and sliding mode technique. can be selected as Δv = −GΔq with Δv = v − vref. In that sense. taken as virtual control in (13.70) can be rewritten as ˆ (q. In this case.71) may cause problem and it may be that better way is to use just proportional term τ0σ instead of discontinuous one τ0sign(σ) as a convergence term [SA08]. τ 0 > 0 τ = Av ˆ q des τ=A 0 ( ) ˆ (q. control τ should be selected to enforce sliding mode in σ = ∆v + G∆q = 0 (13.3. q) + g (q) + A(v ref − G∆v) (13. LLC .3. This solution will have large amplitude and in most cases will cause chattering. In real systems.69) The simplest solution to enforce sliding mode is to select τ = −Mo sign(σ) with Mo large enough [YK78]. g. This situation is already encountered in the power converters and electrical drives control with sliding mode in inner current loop.67). x ∈ℜ p ×1 . Δq = q − qref and qref. The dynamics of the mechanical system associated with performing a certain task defined by a minimal set of coordinated x = x(q). 
The physical interpretation of control (13.70) is now apparent: it consists of the disturbance rejection term $(\hat b(q, \dot q) + \hat g(q))$, the weighted desired (equivalent) acceleration $\hat A(\dot v^{ref} - G\Delta v)$, and the convergence term $\tau_0\operatorname{sign}(\sigma)$ enforcing finite-time convergence to manifold (13.68). Another way of looking at control (13.70) is to interpret the term $(\dot v^{ref} - G\Delta v) = \dot v^{des} = \ddot q^{des}$ as the desired acceleration of the closed-loop system (it is equal to the solution for the acceleration from $\dot\sigma = 0$ and in that sense stands for the "equivalent acceleration"). Then (13.70) can be rewritten as

$\tau = \hat A\ddot q^{des} + \big(\hat b(q, \dot q) + \hat g(q)\big) - \tau_0\operatorname{sign}(\sigma), \qquad \tau_0 > 0$   (13.71)

In real systems, the discontinuous term in (13.71) may cause problems, and it may be better to use just the proportional term $\tau_0\sigma$ instead of the discontinuous $\tau_0\operatorname{sign}(\sigma)$ as the convergence term [SA08]. In that case, only exponential convergence to manifold (13.68) can be guaranteed and, strictly speaking, no sliding mode motion will be realized in the system; the motion will remain in the vicinity of manifold (13.68). Another solution can be developed by merging the disturbance observer method [OK96] and the sliding mode technique. Since the dimensions of the vectors τ and $\lambda = A(\dot v^{ref} - G\Delta v)$ are the same, component-wise disturbance observers may be used. Moreover, if one assumes that in motion control systems the torque input is generated by electrical machines, then the situation already encountered in power converter and electrical drive control, with sliding mode in the inner current loop, applies: the sliding mode manifold (the current error) is reached in finite time, but since the reference current is a continuous function of the system state, the dynamics of the outer loop (voltage, velocity, or position) are at best exponentially stable.

13.3.3.2  Operation Space Control

In the analysis of motion systems, a joint space description (13.66) is readily available. A very similar result may be obtained for motion control in operation space. The dynamics of the mechanical system associated with performing a certain task, defined by a minimal set of coordinates $x = x(q)$, $x \in \Re^{p\times 1}$, $p < n$, involve mapping the configuration space dynamics (13.66) into a new set of coordinates. In general, operation space coordinates may represent any set of coordinates defining the kinematic mapping between the joint space and the task space, but they are usually selected to represent a description of the motion control task. For a given task vector $x = x(q)$, the following relationship is obtained: $\dot x = (\partial x(q)/\partial q)\dot q = J(q)\dot q$, where $J(q) \in \Re^{p\times n}$ is the task Jacobian matrix [OK87].
For a given task vector x = x(q), x ∈ ℜ^(p×1), p < n, the following relationship is obtained: ẋ = (∂x(q)/∂q)q̇ = J(q)q̇, where J(q) ∈ ℜ^(p×n) is the task Jacobian matrix. Let the internal configuration–posture consistent with the given task be defined by a minimal set of independent coordinates x_P ∈ ℜ^((n−p)×1), with posture Jacobian J_P = (∂x_P/∂q) ∈ ℜ^((n−p)×n). The posture velocity can then be expressed as ẋ_P = Γ₁q̇, where Γ₁ ∈ ℜ^(n×n) stands for a matrix to be selected in such a way that the task and posture dynamics are decoupled. The task–posture velocity vector can be written as

[ẋ; ẋ_P] = [J; Γ]q̇ = J_JP q̇,   with Γ = J_P Γ₁ ∈ ℜ^((n−p)×n)    (13.72)

The task and posture dynamics can be written in the following form:

[ẍ; ẍ_P] = [JA⁻¹Jᵀ  JA⁻¹Γᵀ; ΓA⁻¹Jᵀ  ΓA⁻¹Γᵀ][f_x; f_P] − [JA⁻¹(b + g); ΓA⁻¹(b + g)] + [J̇q̇; Γ̇q̇]    (13.73)

If Γ₁ is selected as the task-consistent null space projection matrix Γ₁ = (I − J#J), with J# = A⁻¹Jᵀ(JA⁻¹Jᵀ)⁻¹, then JA⁻¹Γ₁ᵀJ_Pᵀ = 0 holds, and (13.73) describes two decoupled subsystems, which can be written as

Λ_x ẍ = f_x − Λ_x JA⁻¹(b + g) + Λ_x J̇q̇,   Λ_x = (JA⁻¹Jᵀ)⁻¹
Λ_Γ ẍ_P = f_P − Λ_Γ ΓA⁻¹(b + g) + Λ_Γ Γ̇q̇,   Λ_Γ = (ΓA⁻¹Γᵀ)⁻¹    (13.74)

The fundamental relation linking the forces f_x ∈ ℜ^(p×1) and f_P ∈ ℜ^((n−p)×1) with the joint space forces τ ∈ ℜ^(n×1) is defined as τ = Jᵀf_x + Γᵀf_P [OK87]. The dynamical models (13.74) are in the same form as the joint space dynamics (13.66). That opens a way of applying the same procedure in the sliding mode design: the design of the control input for both the task and the internal configuration–posture is now straightforward, having established consistent equations describing the dynamical behavior of the system. By selecting the task space sliding mode manifold

σ_x = Δẋ + GΔx = 0,   Δx = x − x_ref    (13.75)

the task control enforcing sliding mode in manifold (13.75) can be expressed in the same form as (13.71):

f_x = Λ_x ẍ^des + Λ_x JA⁻¹(b̂ + ĝ) − Λ_x J̇q̇ − f₀ sign(σ_x),   ẍ^des = ẍ_ref − GΔẋ,   f₀ > 0    (13.76)

Similarly, by selecting the sliding mode manifold

σ_P = Δẋ_P + GΔx_P = 0,   Δx_P = x_P − x_P^ref    (13.77)

the control enforcing the desired internal configuration–posture can be expressed as

f_P = Λ_Γ ẍ_P^des + Λ_Γ ΓA⁻¹(b̂ + ĝ) − Λ_Γ Γ̇q̇ − f₀P sign(σ_P),   ẍ_P^des = ẍ_P^ref − GΔẋ_P,   f₀P > 0    (13.78)
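The decoupling in (13.73) and (13.74) hinges on the identity JA⁻¹Γ₁ᵀ = 0 for the inertia-weighted projector Γ₁ = I − J#J. A minimal numerical check, with a diagonal inertia matrix and a 1-by-3 task Jacobian chosen purely for illustration (so that all inverses are scalar or trivial):

```python
# Verify J A^{-1} Gamma1^T = 0 for Gamma1 = I - J# J, J# = A^{-1} J^T (J A^{-1} J^T)^{-1}.
A_inv = [0.5, 1.0 / 3.0, 0.25]           # inverse of A = diag(2, 3, 4) (illustrative)
J = [1.0, 2.0, 0.0]                      # task Jacobian, p = 1 (illustrative)

JAinv = [J[k] * A_inv[k] for k in range(3)]           # row vector J A^{-1}
lam = sum(JAinv[k] * J[k] for k in range(3))          # scalar J A^{-1} J^T
J_sharp = [A_inv[k] * J[k] / lam for k in range(3)]   # dynamically consistent pseudoinverse

# Gamma1 = I - J# J (3x3), then residual = J A^{-1} Gamma1^T (1x3)
Gamma1 = [[(1.0 if i == j else 0.0) - J_sharp[i] * J[j] for j in range(3)] for i in range(3)]
residual = [sum(JAinv[i] * Gamma1[j][i] for i in range(3)) for j in range(3)]
print(residual)  # ~ [0, 0, 0]
```

Because JA⁻¹Γ₁ᵀ vanishes identically, the off-diagonal blocks of (13.73) vanish for any posture Jacobian J_P, which is exactly what makes the task and posture subsystems independent.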

13.4  Conclusion

The aim of the chapter has been to outline the main mathematical methods and sliding mode control design techniques and to demonstrate their applicability in power electronics and motion control systems. Many interesting results from the wide range of results accumulated in the variable structure systems field remain beyond the scope of this chapter.

References

[BI83] Bilalovic, F., Music, O., and Sabanovic, A., Buck converter regulator operating in the sliding mode, Proceedings of PCI'83, April 1983, pp. 146–152.
[CE97] Chen, X., Fujikawa, K., and Masada, E., Sliding mode control strategy of the three phase reactive power compensator, Transactions of IEE of Japan, 117-D(7), July 1997 (in Japanese).
[DB69] Drazenovic, B., The invariance conditions in variable structure systems, Automatica, 5, 287–295, 1969.
[DI83] Izosimov, D., Multivariable nonlinear induction motor state identifier using sliding modes, in Problems of Multivariable Systems Control, Moscow, 1983 (in Russian).
[DR89] Drakunov, S.V. and Utkin, V.I., On discrete-time sliding modes, Proceedings of the Nonlinear Control System Design Conference, Capri, Italy, 1989, pp. 273–278.
[ED98] Edwards, C. and Spurgeon, S., Sliding Mode Control: Theory and Applications, 1st edn., CRC Press, Boca Raton, FL, 1998.
[EM70] Emelyanov, S.V., Theory of Variable Structure Systems, Nauka, Moscow, 1970 (in Russian).
[FL53] Flugge-Lotz, I., Discontinuous Automatic Control, Princeton University Press, Princeton, NJ, 1953.
[KD96] Young, K.D., Utkin, V.I., and Özgüner, Ü., A control engineer's guide to sliding mode control, Proceedings of the 1996 IEEE International Workshop on Variable Structure Systems (VSS-96), Tokyo, Japan, December 5–6, 1996, pp. 1–14.
[LU66] Luenberger, D.G., Observers for multivariable systems, IEEE Transactions on Automatic Control, AC-11, 190–197, 1966.
[LU81] Luk'yanov, A.G. and Utkin, V.I., Methods of reducing equations of dynamic systems to regular form, Automation and Remote Control, 42(4, Part 1), 1981.
[MA04] Ahmed, M., Sliding mode control for switched mode power supplies, PhD thesis, Lappeenranta University of Technology, Lappeenranta, Finland, December 2004.
[OK87] Khatib, O., A unified approach to motion and force control of robot manipulators: The operational space formulation, IEEE Journal of Robotics and Automation, 3(1), 43–53, 1987.
[OK96] Ohnishi, K., Shibata, M., and Murakami, T., Motion control for advanced mechatronics, IEEE/ASME Transactions on Mechatronics, 1(1), 56–67, March 1996.
[PO07] Polic, A. and Jezernik, K., Event-driven current control structure for a three phase inverter, International Review of Electrical Engineering, 2(1), January–February 2007.
[RD94] De Carlo, R. and Drakunov, S., Sliding mode control design via Lyapunov approach, Proceedings of the 33rd Conference on Decision and Control, Lake Buena Vista, FL, December 1994.
[RM86] Mahadevan, R., Problems in analysis, control and design of switching inverters and rectifiers, PhD thesis, CALTECH, Pasadena, CA, 1986.
[RS93] Radulovic, Z. and Sabanovic, A., Direct instantaneous distortion minimization control for three phase converters, Conference Record of the Power Conversion Conference (PCC'93), Yokohama, Japan, April 19–21, 1993.
[SA02] Sabanovic, A., Sliding modes applications in power electronics and electrical drives, in Variable Structure Systems: Towards the 21st Century, Yu, X. and Xu, J.-X., Eds., Lecture Notes in Control 274, Springer-Verlag, Berlin, Germany, 2002.
[SA08] Sabanovic, A., Elitas, M., and Ohnishi, K., Sliding modes in constrained systems control, IEEE Transactions on Industrial Electronics, 55(9), 4055–4064, September 2008.

[SG95] Sahin, C., Sabanovic, A., and Gokasan, M., Robust position control based on chattering free sliding modes for induction motors, Proceedings of the Industrial Electronics Conference (IECON'95), Orlando, FL, 1995, pp. 512–517.
[SI81] Sabanovic, A. and Izosimov, D., Application of sliding mode to induction motor control, IEEE Transactions on Industry Applications, IA-17(1), 41–49, 1981.
[SOS92] Sabanovic, A., Sabanovic, N., and Ohnishi, K., Sliding modes control of three phase switching converters, Proceedings of the IECON'92 Conference, San Diego, CA, 1992, pp. 319–325.
[SOS93] Sabanovic, A., Sabanovic, N., and Ohnishi, K., Control of PWM three phase converters: A sliding mode approach, Conference Record of the Power Conversion Conference (PCC'93), Yokohama, Japan, 1993, pp. 188–193.
[TS55] Tsypkin, Ya.Z., Theory of Relay Control Systems, Gostekhizdat, Moscow, 1955 (in Russian).
[UN08] Unel, M., Sabanovic, A., Yilmaz, B., and Dogan, M., Visual motion and structure estimation using sliding mode observers, International Journal of Systems Science, 39(2), 149–161, February 2008.
[VGS99] Utkin, V., Guldner, J., and Shi, J., Sliding Modes in Electromechanical Systems, Taylor & Francis, London, U.K., 1999.
[VI92] Utkin, V.I., Sliding Modes in Control and Optimization, Springer-Verlag, Berlin, Germany, 1992.
[VI93] Utkin, V.I., Sliding mode control design principles and applications to electric drives, IEEE Transactions on Industrial Electronics, 40(1), 23–36, 1993.
[VII93] Utkin, V.I., Sliding mode control in discrete time and difference systems, in Variable Structure and Lyapunov Control, Zinober, A., Ed., Springer-Verlag, London, U.K., 1993.
[VR85] Venkataramanan, R., Sabanovic, A., and Cuk, S., Sliding mode control of DC-to-DC converters, Proceedings of the IECON'85, San Francisco, CA, 1985, pp. 210–218.
[VR86] Venkataramanan, R., Sliding mode control of switching converters, PhD thesis, CALTECH, Pasadena, CA, 1986.
[YK78] Young, K.-K.D., Controller design for a manipulator using theory of variable structure systems, IEEE Transactions on Systems, Man and Cybernetics, 8, 101–109, 1978.
[ZU02] Yan, Z. and Utkin, V., Sliding mode observers for electric machines—An overview, Proceedings of the IECON 2002, Sevilla, Spain, 2002, pp. 1842–1847.

14
Digital Control

Timothy N. Chang
New Jersey Institute of Technology

John Y. Hung
Auburn University

14.1  Introduction ............................................. 14-1
14.2  Discretization of Continuous-Time Plant .................. 14-2
14.3  Discretization of Continuous-Time System with Delay ...... 14-4
14.4  Digital Control Design ................................... 14-6
      Digital PID Design  •  Input Shaping Design  •  State Feedback Design
14.5  Multirate Controllers .................................... 14-14
14.6  Conclusions .............................................. 14-15
References ..................................................... 14-15

14.1  Introduction

Enabled by low-cost, high-speed signal acquisition and processing hardware, digital control has become an important tool for the analysis, design, and implementation of control systems [1,3,4]. A typical digital control system is shown in Figure 14.1, where the continuous-time signals are low-pass filtered by an anti-aliasing filter and sampled by an analog–digital converter (ADC) into discrete-time signals. The discrete-time signals are then processed by a microprocessor to generate the discrete-time control outputs. The digital–analog converter (DAC) in turn converts the discrete-time sequence into a continuous-time signal, generally by means of a zeroth-order hold circuit [2]. A smoothing filter is sometimes added to remove harmonics present in the passband. Other variations of this scheme are also possible [8], but the fundamental analytic and design methods presented here are generally applicable. Readers interested in reviewing sampling and reconstruction are referred to [1,2] for details.

Shown in Figure 14.2 are the two paradigms for digital control design. On the left side is the discrete emulation approach, where a continuous-time controller is first designed based upon the continuous plant and then discretized (digitized). For example, the emulation method [4] has historically been applied to obtain digital proportional–integral–derivative (PID) control. A more direct approach, however, is to first discretize the plant model and then design the corresponding digital control; this approach is shown on the right-hand-side path in Figure 14.2. An advantage of the direct approach is that a number of additional tools, not available to continuous-time design, become applicable: for example, direct discrete-time design includes the ability to embed delay into the state-space model, achieve deadbeat behavior, etc. Controllers designed directly in discrete time usually can tolerate a lower sampling frequency as well.

In this chapter, conversion of a continuous-time model into discrete time is first discussed in Section 14.2. Embedding delay into the discrete-time system is discussed in Section 14.3. In Section 14.4, the popular digital PID design is presented, followed by a discussion of "additional discrete-time controllers" that are unique to the direct discrete-time design path. Finally, multirate controllers and conclusions are presented in Sections 14.5 and 14.6. A DC servo motor system serves as a running example throughout this chapter.

FIGURE 14.1  Typical digital control system.

FIGURE 14.2  Digital control design paradigms.

14.2  Discretization of Continuous-Time Plant

Let the M-input, R-output, Nth-order continuous-time plant be represented by the following linear, time-invariant state-space model:

x̄̇(t) = Āx̄(t) + B̄ū(t)
ȳ(t) = C̄x̄(t) + D̄ū(t)    (14.1)

where x̄(t), ū(t), ȳ(t) are the continuous-time state, input, and output vectors,

x̄(t) = [x̄₁(t) ⋯ x̄_N(t)]ᵀ,   ū(t) = [ū₁(t) ⋯ ū_M(t)]ᵀ,   ȳ(t) = [ȳ₁(t) ⋯ ȳ_R(t)]ᵀ    (14.3)

and Ā, B̄, C̄, D̄ are, respectively, N-by-N, N-by-M, R-by-N, and R-by-M matrices. (A line drawn over a variable denotes a continuous-time signal.)

Given a sampling period Ts and a DAC based on a zeroth-order hold device, the continuous-time plant (14.1) can be readily discretized so that the resultant discrete-time plant is given as

x(n + 1) = Ax(n) + Bu(n)
y(n) = Cx(n) + Du(n)    (14.2)

with

A = e^(ĀTs),   B = ∫₀^Ts e^(Āτ)B̄ dτ,   C = C̄,   D = D̄    (14.4)

where it is understood that x(n) ≜ x̄(nTs), n = 0, 1, 2, …. Dimensions of the matrices in the discrete model are the same as for the continuous model.

Calculation of A and B can be done in a number of ways. For low-order systems (typically N ≤ 3) or those with a sparse Ā matrix, the following inverse Laplace transform formula can be handy:

e^(ĀTs) = L⁻¹{(sI − Ā)⁻¹}|_(t=Ts)    (14.5)

The Cayley–Hamilton method can also be applied to determine e^(ĀTs). On the other hand, for a diagonalizable Ā,

e^(ĀTs) = V diag(e^(λ̄₁Ts), …, e^(λ̄_N Ts)) W    (14.6)

where V, W are the right and left modal matrices and λ̄₁, …, λ̄_N are the continuous-time eigenvalues [7].

It should be noted that a discrete-time model can be sensitive to numerical resolution, especially as the sampling frequency becomes large relative to the plant dynamics; under those circumstances, coefficient rounding can result in significant errors. Modern computer software tools such as MATLAB®, which perform continuous-to-discrete conversions, are widely available. Some tools offer multiple approaches to discretization, each yielding different numerical characteristics.

Example 14.1  Discretization of DC Servomotor

A DC motor [4] has the following simplified transfer function model:

T̄(s) = 36/(s(s + 3))    (14.7)

It can be represented in state-space format as

x̄̇(t) = [0 1; 0 −3]x̄(t) + [0; 36]ū(t)
ȳ(t) = [1 0]x̄(t)    (14.8)

For a 100 Hz sampling rate (Ts = 0.01 s), the discrete-time model is

x(n + 1) = [1.0000 0.0099; 0 0.9704]x(n) + [0.0018; 0.3547]u(n)
y(n) = [1 0]x(n)    (14.9)

Alternatively, the discretization can also be carried out on the transfer function corresponding to (14.1):

T̄(s) = D̄ + C̄(sI − Ā)⁻¹B̄ = [T̄₁,₁(s) ⋯ T̄₁,M(s); ⋮ ; T̄_R,₁(s) ⋯ T̄_R,M(s)]    (14.10)

so that the corresponding discrete transfer function matrix is

T(z) = D + C(zI − A)⁻¹B = [T₁,₁(z) ⋯ T₁,M(z); ⋮ ; T_R,₁(z) ⋯ T_R,M(z)]    (14.11)

Elements of the discrete-time transfer function are given by

T_(i,j)(z) = (1 − z⁻¹) Z{T̄_(i,j)(s)/s}    (14.12)

The expression Z{T̄_(i,j)(s)/s} literally means (1) take the inverse Laplace transform of T̄_(i,j)(s)/s, (2) sample the corresponding impulse response at t = nTs, and (3) take the Z transform. In practice, it is more customary to use table lookup [2] or computer software. When the continuous plant is preceded by a zeroth-order hold device (D/A converter), the resulting discrete model is also called the "pulse" transfer function, owing to the notion that the hold device produces rectangular pulses of duration Ts.

Example 14.2  Discretization of Transfer Function

Applying (14.12) to the DC motor transfer function (14.7), the discrete-time transfer function is

T(z) = 4[(3Ts − 1 + e^(−3Ts))z + (1 − e^(−3Ts) − 3Ts e^(−3Ts))] / [(z − 1)(z − e^(−3Ts))]    (14.13)

For a 100 Hz sampling frequency, Ts = 0.01 s and

T(z) = (0.0018z + 0.0018) / [(z − 1)(z − 0.9704)]    (14.14)
14.3  Discretization of Continuous-Time System with Delay

It is quite common that a delay τd is present in various design problems:

x̄̇(t) = Āx̄(t) + B̄ū(t − τd)
ȳ(t) = C̄x̄(t) + D̄ū(t − τd)    (14.15)

One approach to handling time delay in continuous-time design employs the Pade approximation, a transfer function approximation of the transcendental form e^(−sτd). A criticism of the Pade approximation is that a high-order transfer function may be required to reasonably model the time delay. In contrast, the design of controllers directly in discrete time permits directly absorbing the delay in an equivalent discrete-time state-space model [1,2]:

ξ(n + 1) = Ãξ(n) + B̃u(n)
y(n) = C̃ξ(n) + Du(n)    (14.16)

where ξ(n) is the augmented state vector. Consider the delay τd = (d − 1)Ts + δ, where d ≥ 1 is an integer representing the integral delay factor and δ is the fractional delay, τd ≥ δ > 0. The model (14.16) takes on two forms, depending on the magnitude of the delay.

Case 1: Ts > τd > 0, i.e., the delay is within one sampling period. Then

ξ(n) = [x(n); u(n − 1)],   Ã = [A B₁; 0 0],   B̃ = [B₀; I_M],   C̃ = (C̄ 0),   D = D̄    (14.17)

Case 2: τd > Ts, so that d > 1. Then

ξ(n) = [x(n); u(n − d + 1); … ; u(n − 2); u(n − 1)]
Ã = [A B₁ B₀ 0 ⋯ 0; 0 0 I_M 0 ⋯ 0; ⋮ ; 0 ⋯ 0 I_M; 0 ⋯ 0],   B̃ = [0; ⋮ ; 0; I_M],   C̃ = (C̄ 0 ⋯ 0)    (14.18)

Here I_M represents the M-by-M identity matrix and

B₀ = ∫₀^(Ts−δ) e^(Āt)B̄ dt,   B₁ = e^(Ā(Ts−δ)) ∫₀^δ e^(Āt)B̄ dt    (14.19)

Example 14.3  DC Servomotor with Delay

The DC motor model (14.7) is augmented with a delay at the input:

T̄(s) = [36/(s(s + 3))] e^(−sτd)    (14.20)

Consider the same sampling rate of 100 Hz. Two cases are presented.

Case 1: Ts > τd = 0.005. The delay-embedded discrete-time model is given by (14.17), where

Ã = [1 0.0099 0.0013; 0 0.9704 0.1760; 0 0 0],   B̃ = [0.0004; 0.1787; 1],   C̃ = (1 0 0),   D = 0

with ξ(n) = [x(n); u(n − 1)].

Case 2: τd = 0.015 > Ts, so that d = 2 and δ = 0.005. Applying (14.18), the delay-embedded system is

Ã = [1 0.0099 0.0013 0.0004; 0 0.9704 0.1760 0.1787; 0 0 0 1; 0 0 0 0],   B̃ = [0; 0; 0; 1],   C̃ = (1 0 0 0),   D = 0

with ξ(n) = [x(n); u(n − 2); u(n − 1)].

A simulation of the continuous-time plant with delay, the discrete-time plant without embedded delay, and the delay-embedded discrete-time plant is shown in Figure 14.3. The delay-embedded discrete-time system correctly tracks the continuous-time response.

24):  T u(n) = K e(n) + S TI   n −1 ∑ k =0 e(k) +  TD (e(n) − e(n − 1)) TS   TD S (14.22) –(nT ).Digital Control 14 -7 where K is the controller gain TI is the integration time TD is the derivative time –(t) = yref(t) − y –(t) is the process error e yref(t) is the reference command A common variation is that the derivative action is applied only on the process variable y(t). etc. It is derived by subtracting (14.21) above: a.23 are non-recursive and are referred to as “position algorithms. the notation here is that e(n) = e S b.” More commonly.24)  T u (n − 1) = K e(n − 1) + S TI   ∑ e(k) + T (e(n − 1) − e(n − 2)) k =0 n−2  (14. Using rectangular rule to approximate integration and one-step finite difference for derivative  T u (n) = K e(n) + S TI   n −1 ∑ e(k) + T k =0 TD S   e(n) − e(n − 1)    (14.25) from (14.26) © 2011 by Taylor and Francis Group. Digital PID is obtained by discretizing (14.22 and 14. Again.23) Both Equations 14. Using trapezoidal (bilinear or Tustin) rule instead of rectangular rule results in    T u (n) = K e (n) + S TI        e(0) + e(n)    TD e ( k ) + ( ) ( 1 ) e n e n + − − { }   2 T S     k =1   better fit  ∑ n −1 (14. the velocity algorithm (14.26) is used. LLC .25) Resulting in the following recursive equation:   T T u(n) − u(n − 1) = K e(n) − e(n − 1) + S e(n − 1) + D {e(n) − 2e(n − 1) + e(n − 2)} T T I S   (14. so that changes in the reference command will not introduce excess control action—this is further explained below.

27): Use position algorithm (14. the parameters are  T T  q0 = K  1 + S + D   2TI TS   T T  q1 = K  −1 + S − 2 D   2TI TS  Some implementation considerations: a.22) to calculate u(0):   T u(0) = K e(0) + D  e(0) − e(−1)    TS   where e(0) = yref(0) − y(0) e(−1) = yref(−1) − y(−1) b. yref e – I u + PD Plant y (14. so that changes in reference yref will not induce a sharp PD reaction. u(n) = u(n − 1) + q0e(n) + q1e(n − 1) + q2e(n − 2)  T  q0 = K  1 + D   TS   T 2T  q1 = K  −1 + S − D  TI TS   T  q2 = K  D   TS  (14. For trapezoidal rule integration.28) q2 = K TD TS = q0e(0) − q2e(−1) ≠ 0 in gener ral FIGURE 14. Similarly. Initialization for (14. LLC . PID implementation: For the case that the reference command yref(t) contains sharp jumps/steps.4  Set-point-on-I PID control. This is achieved by replacing the error e(n)’s with the output y(n)’s in the P and D terms. © 2011 by Taylor and Francis Group.14 -8 Control and Mechatronics or more conveniently. the set-point-on-I configuration shown in Figure 14.27) Velocity algorithms are simpler to implement.4 [1] is recommended.26) or (14.

These FIGURE 14.  Tuning Rules for PID controller: The Ziegler–Nichols tunPlant ing rules are applicable to digital PID tuning: the “transient response” method for well-damped systems and the Unit step y(n) “ultimate sensitivity” method for oscillatory systems. •  Ultimate Sensitivity Method: A proportional feedback is applied to the plant as shown in Figure 14.1  Approximate PID Parameters Obtained from the Transient Response Method Controller Type P PI PID K 1 RL 0.1.5 and the plant response is recorded.7. In both cases. the sampling rate should be 5X–20X of the unity gain cross over frequency of the open-loop plant (i. For systems with high complexity. the estimated PID parameters are tabulated in Table 14. Adjust the proportional control until a sustained oscillation appears at the output. the tuning procedures may fail. as shown in Figure 14. y R L t FIGURE 14. Define critical gain as the value of the proportional gain at which the oscillation occurs and Tp as the period of oscillation as depicted in Figure 14. and TD. LLC . Table 14.2.e. for most digital PID.6.8.6  Measurement of steepest slope (R) and dead-time (L) for the “transient response” tuning method. it is necessary to fine tune K. TI. 9 RL 1. © 2011 by Taylor and Francis Group. to minimize phase lag) [2].5L TI TD + – Plant y K FIGURE 14. 2 RL 3L 2L 0.Digital Control 14 -9 c.7  Set-up for the ultimate sensitivity method for tuning digital PID controller.5  Set-up for the methods are summarized as follows: • T  ransient Response Method: A unit step is applied to the plant “transient response” method for tuning PID controller. Upon measuring the steepest slope (L) and dead-time (L) from the unit step response shown in Figure 14. Furthermore.. The corresponding PID parameters are listed in Table 14.

8  Measurement of period of oscillation for the “ultimate sensitivity” tuning method. then the total response at a preselected settling time Let y i i i t = tN. By eliminating sin (ωdtN) and cos(ωdtN) terms.6KC Tp 1.29) – (t) be the response to an impulse A δ(t − t ). 2 Tp 2 Tp 8 TI TD Ziegler–Nichols tuning is also employed in some modern autotuning controllers. One common method employs relay control to force the feedback system into an oscillation.30) where Bi = ( Ai ωn ) 1 − ς 2 and ω d = ωn 1 − ς 2 . the residual vibration V can therefore be expressed as © 2011 by Taylor and Francis Group. The ultimate gain can be computed from the amplitudes of the process variable and relay output.6] here based on a second-order system subjected to two-impulse excitation: G(s) = ( 2 ωn 2 s 2 + 2ςωn s + ωn ) (14. The oscillation period is measured as described above.2  Input Shaping Design Input shaping is a feedforward technique to suppress command-induced vibrations [6]. LLC . tN−1 can be written as − ζωnt N y (t N ) = e       ∑ i =1 m   Bie ζω nt i cos(ω dt i ) sin(ω dt N )−      ∑ i =1 m    Bie ζωnti sin(ω dt i ) cos(ω dt N )     (14. The design is generally initiated in continuous time while leaving the implementation in discrete time due the requirement for relatively high amplitude and timing precisions.14 -10 y Tp Control and Mechatronics t FIGURE 14. t2. ….4. A brief overview of input shaping is presented [5.2  Approximate PID Parameters Obtained from the Ultimate Sensitivity Method Controller Type P PI PID K 0.5KC 0.45KC 0. 14. tN > t1. Table 14.

For the unit step response of the second-order system (14.29),

OS = e^(−πζ/√(1−ζ²)),   t_p = π/(ωn√(1 − ζ²))    (14.32)

are the overshoot and the peak time, respectively. In the case of m = 2, the zero vibration (ZV) shaper is derived as [6]

A₁ = 1/(1 + OS),   A₂ = OS/(1 + OS)    (14.33)

ΔT = t₂ − t₁ = t_p    (14.34)

so that, to zero out the residual vibration,

u(t) = A₁δ(t − t₁) + A₂δ(t − t₂)    (14.35)

That is, the ZV-shaped input command is obtained by convolving the reference command with a sequence of two impulses. For step inputs, the result is a staircase command. The method is also known as Posicast control, which was originally studied in the 1960s [10].

Example 14.4  Input Shaping of DC Servomotor

For this example, consider closing a simple unity-gain proportional loop on the DC motor (14.7), as shown in Figure 14.9. The closed-loop pole polynomial is s² + 3s + 36. It can be readily shown that

OS = e^(−πζ/√(1−ζ²)) = 0.4443,   t_p = π/(ωn√(1 − ζ²)) = 0.5408

Therefore, the ZV shaper parameters are computed as shown in Table 14.3. The unit step response of the DC motor with unity-gain feedback, with and without input shaping, is shown in Figure 14.10. It is observed that the input shaper produces fast and nonoscillatory transients compared to the 44.43% overshoot in the regular step response. However, it should be noted that the ZV shaper is sensitive to parametric variations, and other shaper designs should be considered for plants with a higher degree of uncertainty [9].

FIGURE 14.9  DC motor with simple proportional loop.

TABLE 14.3  ZV Shaper Parameters for Example 14.4

t₁ | t₂     | A₁     | A₂
0  | 0.5408 | 0.6924 | 0.3076

FIGURE 14.10  DC motor step response with input shaping (solid line) and without input shaping (dotted line).

Slowing down the system allows the energy to be dissipated in a more reasonable manner.2152 0.12 reveals that the velocity transient is unacceptably large. a closer examination of the velocity response in Figure 14.39) Here. continuous time. discrete time). the state feedback gain is given as K s = (α N −1 − aN −1 α N − 2 − aN − 2 … α 0 − a0 )ϒ −1M c−1 (14. This is achieved by assigning all of the discrete-time poles to the center of the z-plane z = 0.1391] x (n) 1 Response to an initial condition of x (0) =   is shown in Figure 14.2 1 0.2 0 –0.13 and 14. that the system response can be driven to the set point levels in finite time.05 0. deadbeat control must be carried out with consideration of a reasonable speed of convergence.04 0.5  Deadbeat Control of DC Motor Using the pole placement algorithm (14. The primary design parameter in deadbeat control is the sample period.35).01 0.09 0.11  DC motor position response to 100 Hz deadbeat control (solid line. Reducing the sampling rate to 10 Hz. The next example illustrates this effect.Digital Control 14 -13 then.6 0. However. the deadbeat feedback is given by u(n) = −[281. and even instability [1]. Overly aggressive designs may lead to saturation.02 0.11.07 0. dotted line. However.8 0. The desired characteristic equation is given by zN = 0. Mc is the controllability matrix. the corresponding control is u (n) = − 3. © 2011 by Taylor and Francis Group.3909  x (n) The responses are shown in Figures 14.03 0. Example 14.14.1 FIGURE 14. M c = [B AB A2 B … A N −1B] A unique property of the discrete-time domain is the so-called deadbeat behavior.08 0.2 0 0.06 0.4 0. LLC . where it is observed that the motor 1 position converges to zero within two sampling periods (0.9653 4.02 s). inter-sample oscillation. 1.

04 0. discrete time). continuous time. continuous time.05 0. In general. 1.13  DC motor position response to 10 Hz deadbeat control (solid line.2 0. dotted line.1 FIGURE 14. More importantly.2 0 –0.4 0.02 0.3 0.12  DC motor velocity response to 100 Hz deadbeat control (solid line. multirate sampling can achieve state feedback performance without the use of an observer.2 1 0.07 0.8 0.08 0.6 0.4 0. Different sampling periods may be justified due to hardware limitations of the plant or digital controller.2 0 0.01 0. Excellent tracking performance and disturbance rejection can be achieved by multirate sampling control.8 0. and the plant input zeroth-order hold is changed with period Tu. three different sample periods could be employed in a digital controller: sampling of the output or process variable occurs at period Ty.7 0. discrete time).06 0.09 0. The interested reader can explore some of the references [12–14].5 0.5  Multirate Controllers Another approach to state feedback employs multirate sampling. heavy line.9 1 FIGURE 14. the reference is sampled at period Tr. 14.1 0.6 0.03 0. LLC .14 -14 20 0 –20 –40 –60 –80 –100 –120 0 Control and Mechatronics 0. © 2011 by Taylor and Francis Group. Multirate sampling means that different sample periods are employed within the system.

FIGURE 14.14  DC motor velocity response to 10 Hz deadbeat control (solid line, continuous time; dotted line, discrete time).

14.6  Conclusions

Digital control is considered in this chapter. Discretization of a continuous-time process is first discussed; the process can either be a plant or a controller. Emulating continuous controllers in discrete time is very popular and requires minimal knowledge of discrete system theory, but a more flexible approach is to carry out direct control design in discrete time. The ability to embed delay, to implement control with great precision, and to carry out deadbeat response is unique to the direct approach and can be advantageous in many applications. Finally, the popular digital PID control is also presented in this chapter. Fundamentals of discrete control theory have been in existence for approximately 50 years, but advances such as multirate sampling continue to develop as electronic technologies evolve.

References

1. Astrom, K. J. and Wittenmark, B., Computer Control Systems: Design and Theory, Prentice-Hall, Englewood Cliffs, NJ, 1997.
2. Isermann, R., Digital Control Systems, Vols. 1 and 2, Springer-Verlag, Berlin, Germany, 1989.
3. Kailath, T., Linear Systems (Prentice-Hall Information and System Science Series), Prentice-Hall, Englewood Cliffs, NJ, 1979.
4. Hung, J. Y., Servo control systems, Chapter 37, IEEE Industrial Electronics Society handbook, 1997.
5. Hung, J. Y., Digital control, Encyclopedia of Life Support Systems, United Nations Educational, Scientific, and Cultural Organization, 2004.
6. Singer, N. C. and Seering, W. P., Preshaping command inputs to reduce system vibration, Journal of Dynamic Systems, Measurement, and Control, 112, 76–82, March 1990.
7. Chang, T. N., Hou, E., and Godbole, K., Optimal input shaper design for high speed robotic workcells, Journal of Vibration and Control, 9(12), 1359–1376, December 2003.
8. Chang, T. N., Cheng, B., and Sriwilaijaroen, P., Motion control firmware for high speed robotic systems, IEEE Transactions on Industrial Electronics, 53(5), 1713–1722, 2006.
9. Wittenmark, B., Nilsson, J., and Törngren, M., Timing problems in real-time systems, Proceedings of the American Control Conference, Seattle, WA, June 1995, pp. 2000–2004.

10. Hung, J. Y., Feedback control with Posicast, IEEE Transactions on Industrial Electronics, 50(1), 94–99, February 2003.
11. Feng, Q., Nelms, R. M., and Hung, J. Y., Posicast-based digital control of the buck converter, IEEE Transactions on Industrial Electronics, 53(3), 759–767, June 2006.
12. Hagiwara, T. and Araki, M., Design of a stable state feedback controller based on the multirate sampling of the plant output, IEEE Transactions on Automatic Control, 33(9), 812–819, September 1988.
13. Fujimoto, H., Hori, Y., and Kawamura, A., Perfect tracking control based on multirate feedforward control with generalized sampling periods, IEEE Transactions on Industrial Electronics, 48(3), 636–644, June 2001.
14. Fujimoto, H., Hori, Y., and Kondo, S., Perfect tracking control based on multirate feedforward control and applications to motion control and power electronics—A simple solution via transfer function approach, Proceedings of the 2002 Power Conversion Conference, Osaka, Japan, April 2–5, 2002, pp. 196–201.

15
Phase-Lock-Loop-Based Control

Guan-Chyun Hsieh
Chung Yuan Christian University

15.1  Introduction ................................................................................... 15-1
15.2  Basic Concept of PLL ..................................................................... 15-3
         Phase Detector  •  Voltage-Controlled Oscillator  •  Loop Filter and Other Subsystems
15.3  Applications of PLL-Based Control ............................................. 15-9
         Phase-Locked Servo System  •  Power Electronics  •  Active Power Filters  •  Power Quality  •  Lighting  •  Battery Charger
15.4  Analog, Digital, and Hybrid PLLs .............................................. 15-11
References ................................................................................................ 15-11

15.1  Introduction

An early description of the phase-locked loop (PLL) appeared in the papers by Appleton [1] in 1923 and de Bellescize [2] in 1932, but the PLL did not achieve widespread use until much later because of the difficulty in realization. With the rapid development of integrated circuits in the 1970s, the theoretical description of the PLL was well established [3–5], and applications of the PLL came to be widely used in modern communication systems. The advent of the PLL has contributed to coherent communication systems without the Doppler shift effect. Since then, the PLL has made much progress and has moved from its earlier professional use in high-precision apparatuses to its current use in consumer electronic products. It has enabled modern electronic systems to improve performance and reliability, especially in the common electronic appliances used daily.

In the 1970s, researchers in the control field first turned their attention to the realization of the PLL for a synchronous motor [14]. In the late 1970s, the phase-locked servo system (PLS) was rapidly developed for use in AC and DC motors' servomechanisms with the analog PLL ICs [14–17]. Over the past two decades, high-performance digital ICs and versatile microprocessors have resulted in a strong motivation for the implementation of PLLs in the digital domain. After 2000, new topologies for increasing the performance of the PLL, such as the enhanced PLL or the fast PLL, were developed in succession, and new types of controllers to increase the features of the PLL have been developed continuously so as to accomplish an easy-use and easy-control strategy for system applications.

Presently, owing to the fast development in IC design, such as microprocessors, digital signal processors, PIC controllers, mixed-mode integrated circuits, and so on, PLLs have become essential devices in many systems, including communication, control, power electronics, power systems, servomechanism, optics, and renewable energy systems, because they can contribute a stable control function that increases system reliability in versatile applications. Nowadays, the PLL can be classified as the linear PLL (LPLL), the digital PLL (DPLL), the all-digital PLL (ADPLL), and the software PLL (SPLL), as shown in Figure 15.1 and Table 15.1 [6].

FIGURE 15.1  Configuration of PLLs: (a) linear PLL (LPLL), (b) digital PLL (DPLL), (c) all-digital PLL (ADPLL), and (d) software PLL (SPLL).

TABLE 15.1  Classification of PLLs

  Type                       Phase Detector        Loop Filter             Oscillator
  Linear PLL (LPLL)          Analog multiplier     RC passive or active    Voltage-controlled
  Digital PLL (DPLL)         Digital detector      RC passive or active    Voltage-controlled
  All-digital PLL (ADPLL)    Digital detector      Digital filter          Digital-controlled
  Software PLL (SPLL)        Software multiplier   Software filter         Software-controlled

15.2  Basic Concept of PLL

A phase-locked loop or phase lock loop (PLL) is a negative feedback control system that generates a signal that has a fixed relation to the phase of a reference signal. The PLL can cause a system to track with another one by keeping its output signal synchronized with a reference input signal in both frequency and phase, locking onto the reference by adaptively reducing their phase error to a minimum. The functional block diagram of a typical PLL is shown in Figure 15.2; it consists of a phase detector (PD), a loop filter (LF), and a voltage-controlled oscillator (VCO).

Generally, a PLL initially compares the frequencies of two signals and produces an error signal that is proportional to the difference between the input frequencies. The error signal is then low-pass filtered through an RC or charge pump scheme to drive a VCO that creates an output frequency [9–11,13]. The output frequency is fed through a frequency divider back to the input of the system, producing a negative feedback loop. If the output frequency drifts, the error signal will increase, driving the frequency in the opposite direction so as to reduce the error. Thus, the output is locked to the frequency at the other input. This input is called the reference and is often derived from a crystal oscillator, which is very stable in frequency. More precisely, the PLL simply controls the phase of the output signal.

FIGURE 15.2  The functional block diagram of a typical PLL.

We presume that xi and xo are, respectively, the input and the VCO signals, which can be expressed as [3,4]

    xi(t) = A cos(ωit + θi)                                              (15.1)
    xo(t) = B cos(ωot + ϕo)                                              (15.2)

The angular frequency of the input signal is ωi, ωo is the VCO central angular frequency, and θi and ϕo are the phase constants. If the loop is initially unlocked and the PD has a sinusoidal characteristic, the significant output signal ve(t) at the output of the PD is given by

    ve(t) = Kd cos[(ωi − ωo)t + θi − ϕo]                                 (15.3)

where Kd is the gain of the PD; the higher frequency term ωi + ωo is negligible here due to the rejection by the LF. After a period of time sufficiently long for transient phenomena, it will be observed that the VCO output signal xo has become synchronous with the input signal xi: the quantity ϕo in (15.2) has become a linear function of time, expressed as

    ϕo = (ωi − ωo)t + φo                                                 (15.4)

where φo is a constant. Signal xo can then be expressed as

    xo(t) = B sin(ωit + φo)                                              (15.5)
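The locking action described above can be illustrated with the standard phase-error model of a first-order loop (loop filter F(s) = 1): dθe/dt = Δω − K·sin θe. The gain and frequency-offset values below are assumptions, not taken from the chapter:

```python
import math

# First-order PLL phase-error model: d(theta_e)/dt = dw - K*sin(theta_e).
# K = Kd*Kv and dw are illustrative; lock requires |dw| <= K.
K, dw = 200.0, 50.0
dt, T = 1e-5, 0.05
theta_e = 1.0                 # initial phase error (rad)
for _ in range(int(T / dt)):
    theta_e += dt * (dw - K * math.sin(theta_e))
print(theta_e)                # settles near asin(dw / K)
```

The error settles at the constant value sin⁻¹(Δω/K), at which point the VCO runs exactly at the input frequency — the DC error voltage is what holds the VCO away from its center frequency.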

and the PD output signal ve(t) in (15.3) becomes a DC signal given by

    ve(t) = Kd cos(θi − φo)                                              (15.6)

The LF is of the low-pass type, so that the controlled signal vc(t) is given by

    vc(t) = ve(t) = Kd cos(θi − φo)                                      (15.7)

The VCO is a frequency-modulated oscillator whose instantaneous angular frequency ωinst is a linear function of the controlled signal vc(t) around the central angular frequency ωo, i.e.,

    ωinst = d(ωot + ϕo)/dt = ωo + Kv vc(t)                               (15.8)

where Kv is the VCO sensitivity. From (15.8), we have

    dϕo/dt = Kv vc(t)                                                    (15.9)

When the loop is in lock, the VCO runs at the input frequency, i.e.,

    ωinst = ωo + Kv vc = ωi                                              (15.10)

from which we have

    vc = (ωi − ωo)/Kv                                                    (15.11)

Substituting (15.11) into (15.7), we obtain

    ωi − ωo = Kd Kv cos(θi − φo)                                         (15.12)

Equation 15.12 clearly shows that it is the DC signal vc = ve that changes the VCO frequency from its central value ωo to the input signal angular frequency ωi. From (15.12), we have

    φo = θi − cos⁻¹[(ωi − ωo)/(Kd Kv)]                                   (15.13)

If the angular frequency difference ωi − ωo is much lower than the product KdKv, (15.13) becomes θi − φo ≈ cos⁻¹ 0 = π/2. It indicates that the VCO signal is actually in phase quadrature with the input signal while the loop is exactly in lock. Strictly speaking, the phase quadrature actually corresponds to ωi = ωo. For this reason, we substitute the phase constant φo for the constant θo, so that θo = φo − π/2. Then

    ve = Kd cos(θi − φo) = Kd sin(θi − θo)                               (15.14)

The difference θi − θo is called the phase error between the two signals.
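The steady-state relations (15.10) through (15.13) can be checked numerically; the PD gain Kd, VCO sensitivity Kv, and frequency offset below are illustrative assumptions:

```python
import math

# Hypothetical loop values: PD gain Kd (V/rad), VCO sensitivity Kv (rad/s/V)
Kd, Kv = 0.8, 500.0
wi_minus_wo = 100.0                        # input offset from VCO center (rad/s)

# Steady-state phase from (15.13): theta_i - phi_o = acos[(wi - wo)/(Kd*Kv)]
phase = math.acos(wi_minus_wo / (Kd * Kv))
vc = Kd * math.cos(phase)                  # (15.7): DC control voltage
print(Kv * vc)                             # (15.10): Kv*vc supplies wi - wo
```

With the offset (100 rad/s) much smaller than KdKv (400 rad/s), the phase difference is already close to π/2, consistent with the quadrature observation above.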

When the difference θi − θo is sufficiently small, the following approximation is used:

    ve ≈ Kd(θi − θo)                                                     (15.15)

Another interpretation for the signals in (15.1) and (15.2) can also be represented by

    xi = A sin(ωit + θi)                                                 (15.16)
    xo = B cos(ωot + φo)                                                 (15.17)

with

    φo = θi − cos⁻¹[(ωi − ωo)/(KdKv)]                                    (15.18)

The product K = KdKv is referred to as the loop gain. The PD output can be represented by ve(t) = Kd sin[(ωi − ωo)t + θi − φo] when the loop is out of lock, and by ve = Kd sin(θi − θo) when it is working. When the difference |ωi − ωo| exceeds the loop gain K in a sinusoidal-characteristic PD, a proper θo for lock can no longer be found by means of (15.18); the synchronization is no longer maintained and the loop falls out of lock.

15.2.1  Phase Detector

The PD in the PLL can be described by two categories: sinusoidal and square signal PDs. If the input signal is sinusoidal, the PD characteristic is also sinusoidal, with a maximum DC output equal to the peak signal amplitude. The sinusoidal PD inherently has a phase-detected interval (−π/2, +π/2).

The square signal PDs are implemented by sequential logic circuits. In fact, the shape of the PD characteristic depends on the applied waveforms and not necessarily on the circuit. The digital-circuit equivalence of a multiplier PD is an exclusive-OR circuit, which is a zero-memory device; it operates as a multiplier. If rectangular waveforms are applied at both inputs of any multiplier or switch PD, the output characteristic becomes triangular. A sawtooth characteristic results from sampling a sawtooth input waveform, but that characteristic is more readily obtained from sequential PDs. Sequential PDs contain memory of past crossing events. They are usually built up from digital circuits and operate with binary, rectangular input waveforms. They can generate PD characteristics that are difficult or impossible to obtain with multiplier circuits, and an increased PD output capability provides a larger tracking range and a larger lock limit than those obtainable from a sinusoidal PD. A widely used sequential PD of greater complexity consists of four flip-flops plus additional logic and is available in several versions as single-chip ICs. It is called a phase/frequency detector (PFD) because it also provides an indication of frequency error when the loop is out of lock [4].

The characteristics of the square signal PDs are of the linear type over the phase-detected interval: (−π/2, +π/2) for a triangular PD, (−π, +π) for a sawtooth PD, (−π, +π) for a rectangular PD, and (−2π, +2π) for a sequential phase/frequency detector (PFD). All aspects of the PD characteristics are depicted in Figure 15.3. All curves of Figure 15.3, except the rectangular, are shown with the same slope at the phase error θe = θi − θo = 0, which means that the different PDs all have the same factor Kd.
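The triangular characteristic of the exclusive-OR PD mentioned above can be reproduced numerically. The averaging function below is a sketch (an idealized sampling of two 50% duty square waves), not a circuit model:

```python
import math

def xor_pd_avg(dphi, n=100000):
    """Mean XOR output (0..1) for two 50% duty square waves offset by dphi rad."""
    total = 0
    for k in range(n):
        t = 2 * math.pi * k / n
        a = math.sin(t) >= 0
        b = math.sin(t - dphi) >= 0
        total += (a != b)          # XOR of the two logic levels
    return total / n

# Triangular characteristic: the average rises linearly from 0 at dphi = 0
# to 1 at dphi = pi, then falls back symmetrically.
print(xor_pd_avg(0.0), xor_pd_avg(math.pi / 2), xor_pd_avg(math.pi))
```

In a real loop, an XOR PD locks with the inputs in quadrature, where the average output sits at the midpoint of its range; the linear region around that point spans the (−π/2, +π/2) interval noted above.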

FIGURE 15.3  Characteristics of the PD: (a) sinusoidal, (b) triangular, (c) sawtooth, (d) rectangular, and (e) phase/frequency.

15.2.2  Voltage-Controlled Oscillator

The VCOs used in the PLL are basically not different from those employed for other applications, such as the modulation and the automatic frequency control. The main requirements for the VCO are as follows: (1) phase stability, (2) large frequency deviation, (3) high modulation sensitivity Kv, (4) linearity of frequency versus control voltage, and (5) capability for accepting wideband modulation. The phase stability is directly in opposition to all other four requirements. The four types of VCO commonly used, given in order of decreasing stability, are crystal oscillators (VCXO), resonator oscillators, RC multivibrators, and YIG-tuned oscillators [3,4]. The phase stability can be enhanced by a number of factors: (1) high Q in the crystal and the circuit, (2) low noise in the amplifier portion, (3) temperature stability, and (4) mechanical stability. Remarkably, much of the phase jitter of an oscillator arises from noise in the associated amplifier. If a wider frequency range is required, an oscillator formed by an inductor (L) and a capacitor (C) must be used. In this application, the standard Hartley, Colpitts, and Clapp circuits make

their appearance. For microwave frequencies, YIG-tuned Gunn oscillators have become popular; tuning of the YIG-tuned oscillator is accomplished by altering a magnetic field. At high frequencies, the tuning is accomplished by means of a varactor.

15.2.3  Loop Filter and Other Subsystems

The LF in the PLL is of a low-pass type. It is used to suppress the noise and high-frequency signal components from the phase error θe and provides a DC-controlled signal for the VCO. The widely used passive and active filters for the PLL are shown in Figure 15.4a and b:

    F1(s) = (sCR2 + 1)/[sC(R1 + R2) + 1] = (sτ2 + 1)/(sτ1 + 1),  where τ1 = (R1 + R2)C and τ2 = R2C

    F2(s) = −A(sCR2 + 1)/[sCR2 + 1 + (1 + A)(sCR1)];  for large A,  F2(s) ≈ −(sCR2 + 1)/(sCR1) = −(sτ2 + 1)/(sτ1),  where τ1 = R1C and τ2 = R2C

FIGURE 15.4  Filters used in the second-order loop: (a) passive filter and (b) active filter.

In servomechanism, a servo motor replaces the VCO and can be controlled to maintain constant speed by detecting its phase with an encoder. The servo scheme of a typical PLL (or PLS) under the locking state is shown in Figure 15.5, in which F(s) is the loop transfer function. We assume that the loop is in lock and that the PD is linear, so the PD output voltage is proportional to the phase error:

    vd ≈ Kd(θi − θo)                                                     (15.15a)

The phase error voltage is filtered by the LF. In servo terminology, the type of loop is determined by the number of perfect integrators within the loop. Any PLL is at least a type one loop because of the perfect integrator inherent in the VCO. If the LF contains one perfect integrator, then the loop is type two. Accordingly, a PLL with a passive filter is type one, whereas a second-order PLL with a high-gain active filter can be approximately a type two loop.

For the passive filter, the closed-loop transfer function H′(s) of the PLS (or PLL) is

    H′(s) = [KoKd(sτ2 + 1)/τ1] / [s² + s(1 + KoKdτ2)/τ1 + KoKd/τ1]       (15.19)

For the active filter, the closed-loop transfer function H″(s) is found to be

    H″(s) = [KoKd(sτ2 + 1)/τ1] / [s² + s(KoKdτ2/τ1) + KoKd/τ1]           (15.20)


The lock-in range ΔωL can be approximately estimated as

    ΔωL ≈ ±KdKv F(∞)                                                     (15.26)

At high frequencies, the loop is indistinguishable from a first-order loop with gain K = KoKd F(∞), and the lock-in limit of a first-order loop is equal to the loop gain. We argue here that a higher order loop has nearly the same lock limit; as a fair approximation, we can say that the higher order loop has the same lock-in range as the equivalent-gain first-order loop. Remarkably, to ensure stable tracking, it is common practice to build LFs with equal numbers of poles and zeros.

The acquisition of frequency in the PLL is more difficult, is slower, and requires more design attention than does the phase acquisition. Besides, there is a frequency interval, smaller than the hold-in interval and larger than the lock-in interval, over which the loop will acquire lock after slipping cycles for a while. This interval is called the pull-in range. The self-acquisition of the phase is known as the phase lock-in, or simply the lock-in; the self-acquisition of the frequency is known as the frequency pull-in, or simply the pull-in.

15.3  Applications of PLL-Based Control

15.3.1  Phase-Locked Servo System

Integrated PLLs, developed since the 1970s, are capable of versatile functions. The usual configurations of the PLL in communications are well developed and widely used for FM, AM, video, telecommunication systems, commercial apparatus, etc., for signal conditioning, signal processing, or in frequency synthesis applications [14–34]. The use of the PLL in servomechanisms, with the analog PLL IC NE565 first developed by Signetics for synchronous and DC motors, began in the 1970s [14,15]. Basically, the PLS configuration is composed of a PD, an LF, and a servo motor, in which the servo motor operates like the VCO in the PLL, as shown in Figure 15.5.
A PLS is certainly a frequency feedback control configuration that continually maintains the motor speed or the motor position by tracking the phase and the frequency of the incoming reference signal corresponding to the input command, such as speed or position. From 1985 to 2009, a variety of PLL controllers with DSPs or intelligent strategies were developed in succession to increase the servo performance of a versatile servomechanism. They are the phase-controlled oscillator (PCO) for stepping up the motor speed servo control, the frequency-pumped controller (FPC), the microcomputer-based variable slope pulse pump controller (VSPPC) for stepping up the motor position control, the voltage pump controller (VPC) and the adaptive digital pump controller (ADPC) for the DC motor speed servo control, the two-phase-type detection, the fuzzy control for the DC motor position servo control, and the sensorless position control for the induction motor.

FIGURE 15.5  Typical phase-locked servo system (PLS): a phase detector (Kd) compares the input signal (ωi, θi) with the output (ωo, θo); the error signal ve (θe) passes through the loop filter F(s), and the controlled signal vc drives the servo motor Km/(sτm + 1).
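The tracking behavior of Figure 15.5 can be sketched numerically. The simulation below assumes F(s) = 1 and illustrative values for Kd, Km, and τm; with one integrator (the encoder turning speed into phase), this is a type one loop, so the motor speed locks to the reference frequency while a constant phase error remains:

```python
# Sketch of the PLS of Figure 15.5 with F(s) = 1 and a first-order motor
# Km/(s*tau_m + 1); all numeric values are illustrative assumptions.
Km, Kd, tau_m = 2.0, 5.0, 0.05
wi = 20.0                              # reference frequency (rad/s)
dt, T = 1e-4, 2.0
theta_i = theta_o = 0.0
w = 0.0                                # motor speed (rad/s)
for _ in range(int(T / dt)):
    ve = Kd * (theta_i - theta_o)      # phase detector
    vc = ve                            # loop filter F(s) = 1
    w += dt * (Km * vc - w) / tau_m    # motor lag Km/(s*tau_m + 1)
    theta_o += dt * w                  # encoder integrates speed to phase
    theta_i += dt * wi                 # reference phase ramp
print(w, theta_i - theta_o)
```

At steady state the speed equals ωi exactly, and the residual phase error is ωi/(KmKd) — the classic constant ramp-following error of a type one loop; adding an integrator in F(s) (a type two loop) would drive it to zero.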

15.3.2  Power Electronics

The high-power inverter used in grid connections, in particular for renewable energy resources including fuel cell and solar photovoltaic cell systems coupled with a power utility, is being widely explored nowadays. The PLL system is quite often recommended for the making or the tuning of the phase angle of the utility voltage. The high-precision control of a single- or a three-phase PWM inverter for constant voltage and constant frequency applications, such as the uninterruptible power supply, is often achieved by employing a capacitor current feedback with a PLL compensator that minimizes the steady-state error of the output voltage. Furthermore, the power supply system with a synchronous rectifier frequently uses a DPLL technique to synchronize the secondary driver signal with the switching frequency so as to raise the power efficiency [35–55].

15.3.3  Active Power Filters

An approach using the PLL technique to realize the active power filter (APF) is widely applied to achieve high accuracy in tracking highly distorted current waveforms and to minimize the ripple in multiphase systems. The methodologies for the aforementioned achievement are realized with the aid of a current-loop proportional–integral regulator, the hysteresis current-control method, a fully digital control algorithm with a field-programmable gate array (FPGA), a software implementation of a robust synchronizing circuit, and so on [56–61].

15.3.4  Power Quality

The phase angle of the utility voltage is a critical piece of information for the operation of most apparatuses, such as AC-to-DC converters, static VAR compensators, cycloconverters, active harmonic filters, and other energy storage systems coupled with the electric utility. The angle information is typically extracted using some form of a PLL. This information may be used to synchronize the turning on/off of power devices, calculate and control the flow of active/reactive power, or transform the feedback variables to a reference frame suitable for control purposes. The quality of the lock directly affects the performance of the control loops in the aforementioned applications. Thus, the operation of a PLL system is widely adopted under distorted utility conditions, especially for operation under common utility distortions, such as line notching, voltage unbalance/loss, and frequency variations [62–68].

15.3.5  Lighting

The electronic ballast, formed by a series-resonant topology, a class-D inversion, or a piezoelectric transformer with a current-equalization network, has been stabilized by means of a PLL, which is also used to track the resonant frequency. The primary objective is to increase ballast stabilization and efficiency, and to improve the temperature effect [69–72].

15.3.6  Battery Charger

Nowadays, battery chargers for versatile apparatuses have already come to play important roles in our daily lives. The PLL applied in a charger is based on the charge trajectory, which is programmed in frequency to achieve the auto-tracking and the auto-locking of the charging status, so as to maintain high accuracy in the charging process [73–77]. In the charger system, the more important charging processes include the bulk current charging process, the current charging process, and the float charging process. A lot of detections exist during the charging process, including charging current detection, voltage detection, and the instant charging state, which also includes the detection of the phase signals of both the resonant-tank input voltage and the two leakage-inductor terminals in the transformer.

15.4  Analog, Digital, and Hybrid PLLs

Due to the rapid progress in digital ICs in the 1970s, the integrated PLL was developed for filtering, frequency synthesis, motor-speed control, frequency modulation, demodulation, signal detection, and a variety of other applications. The first analog PLL ICs, the NE565 and the CD4046, were developed by Signetics and RCA in the 1970s, and PLL ICs have become one of the most important communication circuits. Thereafter, versatile PLL components for raising the locking performance were developed in succession. The analog PD is a multiplier, whose phase-lock range is from −π/2 to +π/2. In order to extend the lock-in range of the PLL, the Motorola digital phase/frequency comparator (PFD) MC4044, capable of a phase-detection range from −2π to +2π, was proposed in 1972. A hybrid PLL, realized by combining discrete analog and digital components, was then achieved. PLLs can be analog or digital, but most of them are composed of both analog and digital components, integrated on a chip with a PD, a prescaler, a programmable counter, an LF, and a VCO. Nowadays, it is the trend to combine a PD, an LF, and a VCO into a modular device to obtain a DPLL.

Programmable interface controllers (PICs) are popular with developers and hobbyists alike due to their low cost and wide availability, an extensive accessibility to application notes, the availability of low-cost or free development tools, a large user base, and the serial programming (and reprogramming with flash memory) capability. Easy-programming PICs are easily available for PLL applications to help increase system functions.

References

General

1. Appleton, E. V., Automatic synchronization of triode oscillators, Proc. Camb. Philos. Soc., 21(pt. III), 231, 1922–1923.
2. de Bellescize, H., La reception synchrone, Onde Electron., 11, 230–240, June 1932.
3. Blanchard, A., Phase-Locked Loops: Application to Coherent Receiver Design, New York: John Wiley & Sons, 1976.
4. Gardner, F. M., Phaselock Techniques, 2nd edn., New York: Wiley, 1979.
5. Lindsey, W. C. and Chie, C. M., A survey of digital phase-locked loops, Proc. IEEE, 69(4), 410–431, April 1981.
6. Razavi, B. (ed.), Monolithic Phase-Locked Loops and Clock Recovery Circuits, New York: IEEE Press, 1996.
7. Karimi-Ghartemani, M. and Iravani, M. R., A new phase-locked loop (PLL) system, Proc. IEEE Midwest Symp. Circuits Syst., 421–424, August 2001.
8. Jeltema, T. and Gokcek, C., Adaptively enhanced phase locked loops, Proceedings of the IEEE Conference on Control Applications, Toronto, Canada, August 2005, pp. 1140–1145.
9. Cheng, K. H. and Yang, W. B., A dual-slope phase frequency detector and charge pump architecture to achieve fast locking of phase-locked loop, IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process., 50(11), 892–896, November 2003.
10. Hanumolu, P. K., Brownlee, M., Mayaram, K., and Moon, U.-K., Analysis of charge-pump phase-locked loops, IEEE Trans. Circuits Syst. I: Regular Papers, 51(9), 1665–1674, September 2004.
11. Fouzar, Y., Sawan, M., and Savaria, Y., CMOS wide-swing differential VCO for fully integrated fast PLL, Proc. IEEE MWSCAS 2007, Montreal, Canada, 2007.
12. Yao, C. Y., An application of the second-order passive lead–lag loop filter for analog PLLs to the third-order charge-pump PLLs, IEEE Trans. Ind. Electron., 55(2), 948–950, February 2008.
13. Wang, Z., An analysis of charge-pump phase-locked loops, IEEE Trans. Circuits Syst. I: Regular Papers, 52(10), 2128–2138, October 2005.


16
Optimal Control

Victor M. Becerra
University of Reading

16.1 Introduction
    Optimal Control • Origins • Applications of Optimal Control
16.2 Formulation of Optimal Control Problems
16.3 Continuous-Time Optimal Control Using the Variational Approach
    Preliminaries • Case with Fixed Initial and Final Times and No Terminal or Path Constraints • Case with Terminal Constraints • Example: Minimum Energy Point-to-Point Control of a Double Integrator • Case with Input Constraints: Minimum Principle • Minimum Time Problems • Problems with Path- or Interior-Point Constraints • Singular Arcs • The Linear Quadratic Regulator
16.4 Discrete-Time Optimal Control
16.5 Dynamic Programming
16.6 Computational Optimal Control
16.7 Examples
    Obstacle Avoidance Problem • Missile Terminal Burn Maneuver
References

16.1  Introduction

16.1.1  Optimal Control

Optimal control is the process of determining control and state trajectories for a dynamic system over a period of time to minimize a performance index. The performance index might include, for example, a measure of the control effort, a measure of the tracking error, a measure of energy consumption, a measure of the amount of time taken in achieving an objective, or any other quantity of importance to the designer of the control system.

16.1.2  Origins

Optimal control is closely related in its origins to the theory of calculus of variations. The theory of finding maxima or minima of functions can be traced to the isoperimetric problems formulated by Greek mathematicians such as Zenodorus (495–435 BC). In 1699, Johan Bernoulli (1667–1748) posed as a challenge to the scientific community the brachistochrone problem—the problem of finding a path of quickest descent between two points that are not on the same horizontal or vertical line. The problem was solved by Johan Bernoulli himself, his brother Jacob Bernoulli (1654–1705), Gottfried Leibniz (1646–1716), and (anonymously) Isaac Newton (1642–1727). Leonhard Euler (1707–1793) worked with Johan Bernoulli and made significant contributions, including a set of necessary conditions for an extremum of a functional, which influenced Ludovico Lagrange (1736–1813), who provided a way of solving such problems using the method of first variations, which in turn led Euler to coin the term calculus of variations. Adrien Legendre (1752–1833) and Carl Jacobi (1804–1851) developed sufficient conditions for the extremum of a functional, Karl Weierstrass (1815–1897) distinguished between strong and weak extrema, while Adolph Mayer (1839–1907) and Oskar Bolza (1857–1942) worked on generalizations of calculus of variations. William Hamilton (1805–1865) did some remarkable work in mechanics, which, together with later work by Jacobi, resulted in the Hamilton–Jacobi equations, which had a strong influence in calculus of variations. Readers interested in further details may wish to consult [GF03,Lei81,Wan95].

Some important milestones in the development of optimal control in the twentieth century include the formulation of dynamic programming by Richard Bellman (1920–1984) in the 1950s, the development of the minimum principle by Lev Pontryagin (1908–1988) and coworkers also in the 1950s, and the formulation of the linear quadratic regulator (LQR) and the Kalman filter by Rudolf Kalman (b. 1930) in the 1960s. Before the arrival of the digital computer in the 1950s, only fairly simple optimal control problems could be solved. The arrival of the digital computer has enabled the application of optimal control theory and methods to many complex problems. Modern computational optimal control also has roots in nonlinear programming (parameter optimization with inequality and equality constraints), which was developed soon after the Second World War [BMC93]. See the review papers [SW97] and [Bry96] for further historical details.

16.1.3  Applications of Optimal Control

Optimal control and its ramifications (such as model predictive control) have found applications in many different fields, including aerospace, process control, robotics, bioengineering, economics, finance, and management science, and it continues to be an active research area within control theory.

16.2  Formulation of Optimal Control Problems

There are various types of optimal control problems, depending on the performance index, the type of time domain (continuous, discrete), the presence of different types of constraints, and what variables are free to be chosen. The formulation of an optimal control problem requires the following:

• A mathematical model of the system to be controlled
• A specification of the performance index
• A specification of all boundary conditions on states, and constraints to be satisfied by states and controls
• A statement of what variables are free

16.3  Continuous-Time Optimal Control Using the Variational Approach

16.3.1  Preliminaries

This section introduces some fundamental concepts in optimization theory and calculus of variations. Consider the following optimization problem:

minimize f(x)
subject to x ∈ Ω

where f : R^n → R is a real-valued objective function, and x = [x1, x2, …, xn]^T ∈ R^n is called the decision vector. The set Ω, which is a subset of R^n, is called the admissible set.
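The ingredients listed above can be collected into a small data structure. The following Python sketch is entirely illustrative (the chapter prescribes no such interface); the class name and fields are my own choices:

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

# One way to package the ingredients of an optimal control problem:
# model, cost terms, boundary data, and the (fixed) time interval.
@dataclass
class OptimalControlProblem:
    f: Callable      # dynamics: f(x, u, t) -> xdot
    L: Callable      # intermediate (running) cost L(x, u, t)
    phi: Callable    # terminal cost phi(xf, tf)
    x0: np.ndarray   # fixed initial state
    t0: float
    tf: float        # fixed final time (could be made a free variable)

# Example instance: a double integrator with a minimum-energy running cost.
problem = OptimalControlProblem(
    f=lambda x, u, t: np.array([x[1], u[0]]),
    L=lambda x, u, t: 0.5 * u[0]**2,
    phi=lambda xf, tf: 0.0,
    x0=np.array([1.0, 1.0]), t0=0.0, tf=1.0)

print(problem.L(problem.x0, np.array([2.0]), 0.0))
```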

Definition 16.1  (Local minimizer)

Suppose that f : R^n → R is a real-valued function defined on a set Ω ⊂ R^n. A point x* ∈ Ω is a local minimizer of f over Ω if there exists ε > 0 such that f(x) ≥ f(x*) for all x ∈ Ω, x ≠ x*, and ||x − x*|| ≤ ε.

Consider now the case when the admissible set is defined by m equality constraints, such that Ω = {x ∈ R^n : h(x) = 0}, where h : R^n → R^m, m ≤ n.

Definition 16.2  (Regular point)

A point x* satisfying the constraints h(x*) = 0 is said to be a regular point of the constraints if rank[∂h/∂x] = m, that is, the Jacobian matrix of h has full rank.

The following well-known theorem establishes the first-order necessary condition for a local minimum (or maximum) of a continuously differentiable function f(x) subject to equality constraints h(x) = 0.

Theorem 16.1  (Lagrange multiplier theorem)

Let x* ∈ Ω be a local minimizer of f : R^n → R subject to h(x) = 0, where h : R^n → R^m, m ≤ n. Assume that h is continuously differentiable and that x* is a regular point. Then there exists λ* ∈ R^m such that

∇f(x*) + [∂h(x)/∂x]^T|_{x=x*} λ* = 0    (16.1)

where ∇f(x*) = [∂f(x)/∂x]^T|_{x=x*} is the gradient vector of f, and λ* is the vector of Lagrange multipliers. See [CZ96] for the proof.

Note that, by augmenting the objective function f(x) with the inner product of the vector of Lagrange multipliers λ and the constraint vector function h(x),

f̄(x, λ) = f(x) + λ^T h(x)

it is possible to express Equation 16.1 as follows:

∇f̄(x*, λ*) = 0    (16.2)

where ∇f̄(x*, λ*) = [∂f̄(x, λ)/∂x]^T|_{x=x*, λ=λ*}.

Definition 16.3  (Functional)

A functional J is a rule of correspondence that assigns a unique real number to each vector function x(t) in a certain class Ω* defined over the real interval t ∈ [t0, tf].

Definition 16.4  (Increment of a functional)

The increment of a functional J is defined as follows:

ΔJ = J(x + δx) − J(x)    (16.3)

where δx is called the variation of x; δx(t) is an incremental change in the function x(t) when the variable t is held fixed.
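Theorem 16.1 can be checked numerically on a small problem. The sketch below uses scipy with illustrative problem data of my own choosing (minimize f(x) = x1² + x2² subject to h(x) = x1 + x2 − 1 = 0, whose minimizer is x* = (0.5, 0.5) with λ* = −1):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize f(x) = x1^2 + x2^2
# subject to the single equality constraint h(x) = x1 + x2 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1.0

res = minimize(f, x0=[2.0, -1.0], method="SLSQP",
               constraints=[{"type": "eq", "fun": h}])
x_star = res.x  # analytic minimizer is (0.5, 0.5)

# Check the first-order condition (16.1): grad f(x*) + (dh/dx)^T lambda* = 0.
grad_f = 2.0 * x_star             # gradient of f at x*
jac_h = np.array([1.0, 1.0])      # Jacobian of h (full rank, so x* is regular)
lam_star = -grad_f[0] / jac_h[0]  # solve the first component for lambda*
residual = grad_f + jac_h * lam_star

print(x_star, lam_star, np.linalg.norm(residual))
```

The near-zero residual confirms that the numerically found minimizer satisfies the Lagrange condition.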

Definition 16.5  (Norm of a function)

The norm of a function is a rule of correspondence that assigns to each function x ∈ Ω*, defined for t ∈ [t0, tf], a real number. The norm of x, denoted as ||x||, satisfies the following properties:

• ||x|| > 0 if x ≠ 0, and ||x|| = 0 if and only if x = 0 for all t ∈ [t0, tf]
• ||αx|| = |α| ||x|| for all α ∈ R
• ||x(1) + x(2)|| ≤ ||x(1)|| + ||x(2)||

Definition 16.6

The variation of a differentiable functional J is defined as follows:

δJ = lim_{||δx||→0} ΔJ = lim_{||δx||→0} [J(x + δx) − J(x)]    (16.4)

In general, the variation of a differentiable functional of the form

J = ∫_{t0}^{tf} G(x) dt

can be calculated as follows:

δJ = ∫_{t0}^{tf} (∂G(x)/∂x) δx dt    (16.5)

Definition 16.7  (Local minimum of a functional)

A functional J has a local minimum at x* if for all functions x in the vicinity of x* we have

ΔJ = J(x) − J(x*) ≥ 0    (16.6)

The fundamental theorem of calculus of variations states that if a functional J has a local minimum or maximum at x*, then the variation of J must vanish on x*:

δJ(x*, δx) = 0    (16.7)

Suppose now that we wish to find conditions for the minimum of a functional

J(x) = ∫_{t0}^{tf} W(x, ẋ, t) dt

where W : R^n × R^n × R → R and ẋ = dx/dt, subject to the constraints

g(x, ẋ, t) = 0

where g : R^n × R^n × R → R^m is a vector function. The necessary conditions for a minimum of J are often found using Lagrange multipliers. Define an augmented functional

Ja(x, λ) = ∫_{t0}^{tf} {W(x, ẋ, t) + λ(t)^T g(x, ẋ, t)} dt
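These variational ideas can be made concrete by discretizing a functional and minimizing over the sampled trajectory. The toy functional below is my own illustrative choice, not from the text: J[x] = ∫₀¹ ẋ² dt with fixed endpoints x(0) = 0 and x(1) = 1, whose extremal is the straight line x(t) = t:

```python
import numpy as np
from scipy.optimize import minimize

# Discretize J[x] = integral of xdot^2 over [0, 1] with x(0) = 0, x(1) = 1.
n = 51
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

def J(interior):
    x = np.concatenate(([0.0], interior, [1.0]))  # enforce boundary values
    xdot = np.diff(x) / dt                        # finite-difference derivative
    return np.sum(xdot**2) * dt                   # rectangle-rule quadrature

res = minimize(J, x0=np.zeros(n - 2), method="BFGS")
x_opt = np.concatenate(([0.0], res.x, [1.0]))
print(np.max(np.abs(x_opt - t)))  # deviation from the straight line x(t) = t
```

The minimizer of the discretized functional coincides with the analytic extremal, and the minimum value approaches J = 1.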

Note that if the constraints are satisfied, g = 0, so that the Lagrange multipliers can be chosen arbitrarily; then J(x) = Ja(x, λ). The necessary conditions for a minimum of Ja(x, λ) can be analyzed using the fundamental theorem of calculus of variations with x and λ as variables (see [Kir70]).

A fairly general continuous-time optimal control problem can be defined as follows.

Problem 16.1

Find the control vector trajectory u : [t0, tf] ⊂ R → R^nu to minimize the performance index

J = φ(x(tf), tf) + ∫_{t0}^{tf} L(x(t), u(t), t) dt

subject to

ẋ(t) = f(x(t), u(t), t),  x(t0) = x0    (16.8)

where [t0, tf] is the time interval of interest, x : [t0, tf] → R^nx is the state vector, φ : R^nx × R → R is a terminal cost function, L : R^nx × R^nu × R → R is an intermediate cost function, and f : R^nx × R^nu × R → R^nx is a vector field. Note that Equation 16.8 represents the dynamics of the system and its initial state condition, and that the performance index J = J(u) is a functional. Problem 16.1 as defined above is known as the Bolza problem. If L(x, u, t) = 0, then the problem is known as the Mayer problem; if φ(x(tf), tf) = 0, it is known as the Lagrange problem. In this section, calculus of variations is used to derive necessary optimality conditions for the minimization of J(u).

16.3.2  Case with Fixed Initial and Final Times and No Terminal or Path Constraints

If there are no path constraints on the states or the control variables, if the initial state conditions are fixed, and if the initial and final times are fixed, adjoin the constraints to the performance index with a time-varying Lagrange multiplier vector function λ : [t0, tf] → R^nx (also known as the costate), to define an augmented performance index J̄:

J̄ = φ(x(tf), tf) + ∫_{t0}^{tf} {L(x(t), u(t), t) + λ(t)^T [f(x(t), u(t), t) − ẋ(t)]} dt

Define the Hamiltonian function H as follows:

H(x(t), u(t), λ(t), t) = L(x(t), u(t), t) + λ(t)^T f(x(t), u(t), t)

such that J̄ can be written as

J̄ = φ(x(tf), tf) + ∫_{t0}^{tf} {H(x(t), u(t), λ(t), t) − λ(t)^T ẋ(t)} dt
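Forming the Hamiltonian and its partial derivatives is mechanical and can be scripted with a computer algebra system. The sympy sketch below (my own illustration) does this for a double-integrator plant with running cost L = u²/2; the partial derivatives ∂H/∂x and ∂H/∂u play a central role in the optimality conditions:

```python
import sympy as sp

# Form H = L + lambda^T f for a double integrator with L = u^2/2, and take
# the partial derivatives needed by the costate and stationarity conditions.
x1, x2, u, lam1, lam2 = sp.symbols("x1 x2 u lam1 lam2")

L = u**2 / 2
f = sp.Matrix([x2, u])            # dynamics: x1' = x2, x2' = u
lam = sp.Matrix([lam1, lam2])

H = L + (lam.T * f)[0]

# Costate right-hand side: lam' = -dH/dx (componentwise)
costate_rhs = sp.Matrix([-sp.diff(H, x1), -sp.diff(H, x2)])
# Stationarity quantity: dH/du
stationarity = sp.diff(H, u)

print(H)
print(costate_rhs.T)
print(stationarity)
```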

Now consider an infinitesimal variation in u(t), which is denoted as δu(t). Such a variation will produce variations in the state history δx(t), and a variation in the performance index δJ̄, which can be expressed as follows:

δJ̄ = [(∂φ/∂x − λ^T) δx]_{t=tf} + [λ^T δx]_{t=t0} + ∫_{t0}^{tf} {(∂H/∂x + λ̇^T) δx + (∂H/∂u) δu} dt

Since the Lagrange multipliers are arbitrary, they can be selected to make the coefficients of δx(t) and δx(tf) equal to zero, such that

λ̇(t)^T = −∂H/∂x    (16.9)

λ(tf)^T = ∂φ/∂x|_{t=tf}    (16.10)

Equation 16.9 is known as the costate (or adjoint) equation. Equation 16.10 and the initial state condition represent the boundary (or transversality) conditions. This choice of λ(t) results in the following expression for δJ̄, assuming that the initial state is fixed, so that δx(t0) = 0:

δJ̄ = ∫_{t0}^{tf} (∂H/∂u) δu dt

For a minimum, it is necessary that δJ̄ = 0. This gives the stationarity condition

∂H/∂u = 0    (16.11)

Equations 16.8 through 16.11 are the first-order necessary conditions for a minimum of the performance index J, which define a two-point boundary-value problem. These necessary optimality conditions are very useful, as they allow one to find analytical solutions to special types of optimal control problems, and to define numerical algorithms to search for solutions in general cases. Moreover, they are useful to check the extremality of solutions found by computational methods. A key point in the mathematical theory of optimal control is the existence of the Lagrange multiplier function λ(t); see the book [Lue97] for details on this issue.

The theory presented in this section does not deal with the existence of an optimal control that minimizes the performance index J, nor with the type of minimum that is achieved. See [Ces83] for theoretical issues on the existence of optimal controls. Sufficient conditions are useful to check if an extremal solution satisfying the necessary optimality conditions actually yields a minimum. Distinctions are made between sufficient conditions for weak local, strong local, and strong global minima. Sufficient conditions for general nonlinear problems have also been established; see [GF03,Lei81,Wan95] for further details.

16.3.3  Case with Terminal Constraints

In case Problem 16.1 is also subject to a set of terminal constraints of the form

ψ(x(tf), tf) = 0    (16.12)

where ψ : R^nx × R → R^nψ is a vector function, it is possible to show by means of variational analysis [LS95] that the necessary conditions for a minimum of J are Equations 16.8, 16.9, 16.11, and the following terminal condition:

[(∂φ/∂x)^T + (∂ψ/∂x)^T ν − λ]^T|_{tf} δx(tf) + [∂φ/∂t + ν^T (∂ψ/∂t) + H]|_{tf} δtf = 0    (16.13)

where ν ∈ R^nψ is the Lagrange multiplier associated with the terminal constraint, δtf is the variation of the final time, and δx(tf) is the variation of the final state. Note that if the final time is fixed, then δtf = 0 and the second term vanishes. Also, if the terminal constraint is such that element j of x is fixed at the final time, then element j of δx(tf) vanishes.

16.3.4  Example: Minimum Energy Point-to-Point Control of a Double Integrator

Consider the following optimal control problem:

min over u(t) of J = (1/2) ∫_0^1 u(t)^2 dt    (16.14)

subject to the differential constraints

ẋ1(t) = x2(t)    (16.15)
ẋ2(t) = u(t)    (16.16)

and the boundary conditions

x1(0) = 1,  x2(0) = 1,  x1(1) = 0,  x2(1) = 0

The first step to solve the problem is to define the Hamiltonian function, which is given by

H = (1/2)u^2 + λ1 x2 + λ2 u

The stationarity condition (16.11) gives

u + λ2 = 0  ⇒  u = −λ2

The costate Equation 16.9 gives

λ̇1 = 0
λ̇2 = −λ1

so that the costates can be expressed as follows:

λ1(t) = a
λ2(t) = −at + b    (16.17)

where a and b are integration constants to be found. Replacing u = −λ2 from (16.17) in (16.16) and integrating on both sides gives

x1(t) = (1/6)a t^3 − (1/2)b t^2 + c t + d
x2(t) = (1/2)a t^2 − b t + c    (16.18)

The terminal constraint function in this problem is ψ(x(1)) = [x1(1), x2(1)]^T = [0, 0]^T. Since the terminal time is fixed, tf = 1, then δtf = 0. Noting that the final value of the state vector is fixed, it follows that δx(tf) = 0, so that the terminal condition (16.13) is satisfied. Evaluating (16.18) at the boundary points t = 0 and t = 1 results in the following values for the integration constants: a = 18, b = 10, c = 1, and d = 1, so that the optimal control is given by

u(t) = 18t − 10,  t ∈ [0, 1]

16.3.5  Case with Input Constraints: Minimum Principle

Realistic optimal control problems often have inequality constraints associated with the input variables, so that the input variable u is restricted to be within an admissible compact region Ω, such that u(t) ∈ Ω. It was shown by Pontryagin and coworkers [Pon87] that, in this case, the necessary conditions (16.8), (16.9), and (16.10) still hold, but the stationarity condition (16.11) has to be replaced by

H(x*(t), λ*(t), u*(t), t) ≤ H(x*(t), λ*(t), u(t), t) for all admissible u,  t ∈ [t0, tf]    (16.19)

where * denotes optimal functions. According to this principle, the Hamiltonian must be minimized over all admissible u for optimal state and costate functions. This condition is known as Pontryagin's minimum principle. In their work, Pontryagin and coworkers also showed additional necessary conditions that are often useful when solving problems:

• If the final time tf is fixed and the Hamiltonian function does not depend explicitly on time t, then the Hamiltonian must be constant when evaluated along the optimal trajectory:

H(x*(t), u*(t), λ*(t)) = constant,  t ∈ [t0, tf]    (16.20)

• If the final time tf is free, and the Hamiltonian does not depend explicitly on time t, then the Hamiltonian must be zero when evaluated along the optimal trajectory:

H(x*(t), u*(t), λ*(t)) = 0,  t ∈ [t0, tf]
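Returning to the double-integrator example, the two-point boundary-value problem formed by the state and costate equations can also be solved numerically. A sketch using scipy's solve_bvp, which recovers the analytic optimal control u(t) = 18t − 10:

```python
import numpy as np
from scipy.integrate import solve_bvp

# State-costate TPBVP for the double-integrator example: with u = -lam2,
# x1' = x2, x2' = -lam2, lam1' = 0, lam2' = -lam1,
# and boundary conditions x(0) = (1, 1), x(1) = (0, 0).
def odes(t, y):
    x1, x2, lam1, lam2 = y
    return np.vstack([x2, -lam2, np.zeros_like(lam1), -lam1])

def bc(ya, yb):
    return np.array([ya[0] - 1.0, ya[1] - 1.0, yb[0], yb[1]])

t_mesh = np.linspace(0.0, 1.0, 21)
sol = solve_bvp(odes, bc, t_mesh, np.zeros((4, t_mesh.size)))

u_num = -sol.sol(t_mesh)[3]          # optimal control u = -lam2
u_exact = 18.0 * t_mesh - 10.0       # analytic optimal control
print(np.max(np.abs(u_num - u_exact)))
```

Because the problem is linear, the collocation solver converges from a zero initial guess and matches the analytic solution essentially to machine precision.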

t f]: c(x(t ). t1 ) = 0 n where q : R nx × R R q. These are called singular arcs.3. This kind of problem is defined as follows. 16. In such cases. In the case of a single control u. t ). in some problems it may be required that the state satisfies equality constraints at some intermediate point in time t1. See [BH75] for a detailed treatment of optimal control problems with pathand interior-point constraints.or Interior-Point Constraints Sometimes it is necessary to restrict state and control trajectories such that a set of constraints is satisfied along the interval of interest [t0. These are known as interiorpoint constraints and can be expressed as follows: q(x(t1 ).t f )= 0 u(t ) ∈ Ω See [LS95] and [Nai03] for further details on minimum time problems. must be checked: (−1)k ∂  d(2k ) ∂H    ≥ 0. A particular case of practical relevance occurs when the Hamiltonian function is linear in at least one of the control variables. u(t ).3. LLC .11) occur where the matrix ∂2H/∂u2 is singular.3. the control is not determined in terms of the state and costate by the stationarity condition (16. once the control is obtained by setting the time derivative of ∂H/∂u to zero.2 Find t f and u(t). t f ] R nc. then additional necessary conditions.… ∂u  dt (2k ) ∂u  © 2011 by Taylor and Francis Group.8  Singular Arcs In some optimal control problems.6  Minimum Time Problems A special class of optimal control problem involves finding the optimal input u(t) to satisfy a terminal constraint in minimum time.7  Problems with Path. the control is determined by the condition that the time derivatives of ∂H/∂u must be zero along the singular arc. 2. t0 ≤ t1 ≤ t f. Additional tests are required to verify if a singular arc is optimizing. t ) ≤ 0 where c : R nx × R nu × [t 0 . extremal arcs satisfying (16. Problem 16. Moreover. to minimize tf J = 1 dt = t f − t 0 subject to t0 ∫ x(t ) = f (x(t ). 16. Instead. u(t ). known as the generalized Legendre–Clebsch conditions. 
x(t 0 ) = x o ψ(x(t f ).Optimal Control 16 -9 16. t ∈ [t0.11). t f]. k = 0.1.
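Minimum-time problems of the kind in Problem 16.2 typically yield bang-bang controls. As a worked instance of my own choosing: for the double integrator with |u| ≤ 1, steering (x1, x2) = (1, 0) to the origin, the well-known solution applies u = −1 on the first half and u = +1 on the second, with minimum time tf = 2. A quick numerical check:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Bang-bang minimum-time control of the double integrator with |u| <= 1:
# full braking u = -1 on [0, 1), then full acceleration u = +1 on [1, 2].
phase1 = solve_ivp(lambda t, x: [x[1], -1.0], (0.0, 1.0), [1.0, 0.0],
                   rtol=1e-10, atol=1e-12)
phase2 = solve_ivp(lambda t, x: [x[1], 1.0], (1.0, 2.0), phase1.y[:, -1],
                   rtol=1e-10, atol=1e-12)
xf = phase2.y[:, -1]
print(xf)  # final state reaches the origin at tf = 2
```

Integrating each constant-control phase separately avoids stepping the solver across the control discontinuity at the switching time.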

The presence of singular arcs may cause difficulties to computational optimal control methods in finding accurate solutions if the appropriate conditions are not enforced a priori [Bet01]. See [BH75] and [ST00] for further details on the handling of singular arcs.

16.3.9  The Linear Quadratic Regulator

A special case of Problem 16.1 that is of particular practical importance arises when the objective function is a quadratic function of x and u, and the dynamic equations are linear. The performance index is given by

J = (1/2) x(tf)ᵀ Sf x(tf) + (1/2) ∫_{t0}^{tf} (x(t)ᵀ Q x(t) + u(t)ᵀ R u(t)) dt   (16.21)

where Sf ∈ R^(nx×nx) and Q ∈ R^(nx×nx) are positive semidefinite matrices, and R ∈ R^(nu×nu) is a positive definite matrix, while the system dynamics obey

ẋ(t) = A x(t) + B u(t),  x(t0) = x0   (16.22)

where A ∈ R^(nx×nx) is the system matrix and B ∈ R^(nx×nu) is the input matrix. In this case, using the optimality conditions (16.8), (16.9), (16.10), and (16.11), it is possible to find that the optimal control law can be expressed as a linear state feedback

u(t) = −K(t) x(t)   (16.23)

where the state feedback gain is given by

K(t) = R⁻¹ Bᵀ S(t)

and S: [t0, tf] → R^(n×n) is the solution to the differential Riccati equation

−Ṡ = AᵀS + SA − S B R⁻¹ Bᵀ S + Q,  S(tf) = Sf.

The resulting feedback law in this case is known as the linear quadratic regulator (LQR).

In the particular case where tf → ∞, and provided the pair (A, B) is stabilizable,* the Riccati differential equation converges to a limiting solution S, and it is possible to express the optimal control law as a state feedback as in (16.23) but with a constant gain matrix K. This is known in the literature as the infinite horizon or steady-state LQR solution. The steady-state gain K is given by

K = R⁻¹ Bᵀ S

where S is the positive definite solution to the algebraic Riccati equation:

AᵀS + SA − S B R⁻¹ Bᵀ S + Q = 0   (16.24)

* A linear dynamic system ẋ = Ax + Bu is stabilizable if the eigenvalues of the matrix (A − BK) have (strict) negative real part for some matrix K.
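For a scalar plant the algebraic Riccati equation (16.24) reduces to a quadratic and can be solved in closed form, which makes the steady-state LQR computation easy to illustrate. The following sketch (plain Python; the plant and weight values are illustrative, not taken from the chapter) picks the positive root and forms the gain K = R⁻¹BᵀS:

```python
import math

def steady_state_lqr_scalar(a, b, q, r):
    """Solve A'S + SA - S B R^-1 B' S + Q = 0 for scalar a, b, q, r,
    i.e. 2*a*s - (b**2/r)*s**2 + q = 0, and return (S, K) with K = b*S/r."""
    # positive (stabilizing) root of (b^2/r) s^2 - 2 a s - q = 0
    s = (a + math.sqrt(a * a + (b * b / r) * q)) * r / (b * b)
    k = b * s / r
    return s, k

# Illustrative unstable plant x' = x + u with Q = 1, R = 1
s, k = steady_state_lqr_scalar(1.0, 1.0, 1.0, 1.0)
print(s, k)           # S = 1 + sqrt(2), K = 1 + sqrt(2)
print(1.0 - 1.0 * k)  # closed-loop pole a - b*K is negative
```

The closed-loop pole a − bK = −√2 confirms the stabilizing property of the steady-state gain even though the open-loop plant is unstable.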

Moreover, if the pair (A, C) is observable,* where CᵀC = Q, then the closed-loop system ẋ(t) = (A − BK)x(t) is asymptotically stable. This is an important result, as the LQR provides a way of stabilizing any linear system that is stabilizable. It is worth pointing out that there are well-established methods and software for solving the algebraic Riccati equation (16.24). This facilitates the design of LQRs. In addition to its desirable closed-loop stability property, the steady-state LQR has certain guaranteed robustness properties that make it even more useful: an infinite gain margin and a phase margin of at least 60° [LS95].

A useful extension of the LQR ideas involves modifying the performance index (16.21) to allow for a reference signal that the output of the system should track. See [LS95] for further details. Moreover, an extension of the LQR concept to systems with Gaussian additive noise, which is known as the linear quadratic Gaussian (LQG) controller, has been widely applied. The LQG controller involves coupling the LQR with the Kalman filter using the separation principle.

Example: Stabilization of a Rotary Inverted Pendulum

The rotary inverted pendulum is an underactuated mechanical system that is often used in control education (Figure 16.1). The servo motor drives an independent output gear whose angular position is measured by an optical encoder. The rotary pendulum arm is mounted to an output gear. At the end of the pendulum arm is a hinge instrumented with another encoder. The pendulum attaches to the hinge, and the second encoder measures the angular position of the pendulum. Notice that the pendulum in open loop has two equilibrium points: a stable equilibrium at the point where the rod is vertical and pointing down, and an unstable equilibrium at the point where the rod is vertical and pointing up.

Figure 16.1  Illustration of the rotary inverted pendulum (pendulum rod of length 2lp at angle α, pendulum arm of radius r at angle θ).

* A linear dynamical system ẋ = Ax + Bu with output observations y = Cx + Du is said to be observable if, for any unknown initial state x(0), there exists a finite t1 such that knowledge of the input u and the output y over [0, t1] suffices to determine uniquely the initial state x(0).

Steady-state LQR controllers are often used in practice to stabilize the operation of systems around unstable equilibria. This example illustrates how a steady-state LQR controller can be designed to balance the pendulum around the unstable equilibrium point.

A linearized dynamic model of a practical rotary inverted pendulum is given by ẋ = Ax + Bu with state x = [x1, x2, x3, x4]ᵀ, where

x1 = θ is the arm angle (rad)
x2 = α is the pendulum angle (rad)
x3 = θ̇ is the angular velocity of the arm (rad/s)
x4 = α̇ is the angular velocity of the pendulum (rad/s)
u is the voltage (V) applied to the servo motor

The first two rows of A express ẋ1 = x3 and ẋ2 = x4, while the last two rows of A and the input matrix B carry the numerical parameters of the device. Note that the eigenvalues of the linearized model of the pendulum in open loop include one at the origin and one with positive real part, so the local instability is apparent.

To stabilize the pendulum around the upper equilibrium point (α = 0), an LQR is designed using the following state and control weights, respectively:

Q = diag(5, 5, 0, 0),  R = 5.

The Riccati solution, which can be found easily using widely available software (including MATLAB®, Octave, and SciLab), results in the following state feedback gain:

K = [−1  −12.3967  −1.4207  −1.4682]

Using this feedback gain K, the closed-loop eigenvalues eig(A − BK) all have negative real parts, so the closed loop is stable. The stabilizing controller can be implemented as follows:

u(t) = −Kx(t) = θ(t) + 12.3967α(t) + 1.4207θ̇(t) + 1.4682α̇(t)

The implementation requires measurements (or good estimates) of the angular positions and velocities. Figure 16.2 shows an ideal simulation of the pendulum under closed-loop control, where the dynamics are simulated using the linearized dynamic model, and the initial state is given by x(0) = [0, 0.2, 0, 0]ᵀ.
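An ideal closed-loop simulation such as the one behind Figure 16.2 amounts to integrating ẋ = (A − BK)x from the given initial state. The sketch below uses forward Euler and a hypothetical double-integrator plant with an assumed stabilizing gain (the pendulum's numerical A, B, and K matrices are not reproduced here):

```python
def simulate_closed_loop(A, B, K, x0, dt=1e-3, t_end=5.0):
    """Euler simulation of x' = A x + B u with u = -K x, mirroring the
    ideal linearized closed-loop simulation used for Figure 16.2."""
    x = list(x0)
    n = len(x0)
    for _ in range(int(t_end / dt)):
        u = -sum(K[j] * x[j] for j in range(n))
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return x

# Hypothetical unstable double integrator x1'' = u, stabilized by K = [1, 2]
A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0]
K = [1.0, 2.0]
x_final = simulate_closed_loop(A, B, K, [0.0, 0.2])
print(x_final)  # both states decay toward zero
```

With this gain the closed-loop poles are both at −1, so starting from a small initial offset the state converges to the origin, qualitatively matching the decaying responses of Figure 16.2.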

Figure 16.2  Simulation of the rotary inverted pendulum under LQR control (arm angle θ (rad), pendulum angle α (rad), and control voltage u (V) versus time, over 0–5 s).

16.4  Discrete-Time Optimal Control

Most of the problems defined above have discrete-time counterparts. These formulations are useful when the dynamics are discrete (for example, a multistage system) or when dealing with computer-controlled systems. In discrete-time, the dynamics can be expressed as a difference equation

x(k + 1) = f(x(k), u(k), k),  x(N0) = x0

where

k is an integer index in the interval [N0, Nf − 1]
x: [N0, Nf] → R^nx is the state vector
u: [N0, Nf − 1] → R^nu is the control vector
f: R^nx × R^nu × [N0, Nf − 1] → R^nx is a vector function

The objective is to find a control sequence u(k), k = N0, …, Nf − 1, to minimize a performance index of the form

J = ϕ(x(Nf), Nf) + Σ_{k=N0}^{Nf−1} L(x(k), u(k), k)

where

ϕ: R^nx × Z → R is a terminal cost function
L: R^nx × R^nu × [N0, Nf − 1] → R is an intermediate cost function

See, for example, [LS95] or [Bry99] for further details on discrete-time optimal control.
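For the linear-quadratic special case of this discrete-time problem, the optimal control sequence follows from a backward Riccati sweep, which is also a simple instance of working backward from the final time in the dynamic-programming sense. A scalar sketch with illustrative numbers (not from the chapter):

```python
def discrete_lqr_scalar(a, b, q, r, s_f, N):
    """Backward Riccati recursion for x(k+1) = a x(k) + b u(k) and
    J = s_f x(N)^2 / 2 + sum_k (q x(k)^2 + r u(k)^2) / 2.
    Returns the stage gains K(k), where u(k) = -K(k) x(k)."""
    s = s_f
    gains = []
    for _ in range(N):
        k_gain = (b * s * a) / (r + b * s * b)  # stage feedback gain
        s = q + a * s * (a - b * k_gain)        # Riccati update (backward)
        gains.append(k_gain)
    gains.reverse()  # gains[k] now corresponds to stage k
    return gains

# Illustrative unstable scalar plant a = 1.1
gains = discrete_lqr_scalar(a=1.1, b=1.0, q=1.0, r=1.0, s_f=0.0, N=20)
print(gains[0], gains[-1])
```

Far from the final time the gains settle to the steady-state value, while the last-stage gain is zero here because the terminal weight s_f was chosen as zero.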

16.5  Dynamic Programming

Dynamic programming is an alternative to the variational approach to optimal control. It was proposed by Bellman in the 1950s and is an extension of Hamilton–Jacobi theory. Bellman's principle of optimality can be stated as follows: "An optimal policy has the property that regardless of what the previous decisions have been, the remaining decisions must be optimal with regard to the state resulting from those previous decisions." This principle serves to limit the number of potentially optimal control strategies that must be investigated. It also shows that the optimal strategy must be determined by working backward from the final time.

Dynamic programming includes formulations for discrete-time systems as well as combinatorial systems, which are discrete systems with quantized states and controls. The original discrete dynamic programming method, however, suffers from the "curse of dimensionality," which causes the computations and memory requirements to grow dramatically with the problem size. This problem has been addressed in various ways; see, for example, the work by Luus [Luu02]. The books [BH75,Kir70,LS95] and [Luu02] can be consulted for further details on dynamic programming.

16.6  Computational Optimal Control

The solutions to most practical optimal control problems cannot be found by analytical means. Over the years, many numerical procedures have been developed to solve general optimal control problems. Indirect methods involve iterating on the necessary optimality conditions to seek their satisfaction. This usually involves attempting to solve nonlinear two-point boundary-value problems, for example, through the forward integration of the plant equations and the backward integration of the costate equations. Examples of indirect methods include the gradient method and the multiple shooting method, both of which are described in detail in the book [Bry99]. With direct methods, optimal control problems are discretized and converted into nonlinear programming problems of the form:

Problem 16.3

Find a decision vector y ∈ R^ny to minimize

F(y)

subject to

Gl ≤ G(y) ≤ Gu
yl ≤ y ≤ yu

where F: R^ny → R is a twice continuously differentiable scalar function and G: R^ny → R^ng is a twice continuously differentiable vector function.

Many direct methods involve the approximation of the control and/or states using basis functions, such as splines or Lagrange polynomials. Direct collocation methods involve the discretization of the differential equations using, for example, trapezoidal, Hermite–Simpson [Bet01], or pseudospectral approximations [EKR95], by defining a grid of N points covering the time interval [t0, tf], t0 = t1 < t2 < ∙∙∙ < tN = tf. In this way, the differential equations become a finite set of equality constraints of the nonlinear programming problem. For example, in the case of trapezoidal collocation, the decision vector y contains the control and state variables at the grid points and possibly the initial and final times. The nonlinear optimization problems that arise from direct collocation methods may be very large, having
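The trapezoidal collocation idea can be made concrete by writing out the "defect" equality constraints that replace the differential equations in the nonlinear programming problem. A minimal sketch, assuming scalar dynamics and a fixed grid:

```python
def trapezoidal_defects(f, xs, us, ts):
    """Defect (equality) constraints of trapezoidal collocation:
    x[k+1] - x[k] - (t[k+1]-t[k])/2 * (f(x[k],u[k],t[k]) + f(x[k+1],u[k+1],t[k+1]))
    must all equal zero at a feasible point of the NLP."""
    defects = []
    for k in range(len(ts) - 1):
        hk = ts[k + 1] - ts[k]
        fk = f(xs[k], us[k], ts[k])
        fk1 = f(xs[k + 1], us[k + 1], ts[k + 1])
        defects.append(xs[k + 1] - xs[k] - 0.5 * hk * (fk + fk1))
    return defects

# Sketch: scalar dynamics x' = u; the trajectory x(t) = t with u = 1
# satisfies every defect, so all residuals are zero.
ts = [0.0, 0.25, 0.5, 0.75, 1.0]
xs = list(ts)            # x(t) = t sampled at the grid points
us = [1.0] * len(ts)
print(trapezoidal_defects(lambda x, u, t: u, xs, us, ts))
```

In a real direct collocation solver these residuals form part of the constraint vector G(y) of Problem 16.3, with xs and us packed into the decision vector y.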

possibly hundreds to tens of thousands of variables and constraints. It is, however, interesting that such large nonlinear programming problems are easier to solve than boundary-value problems. The reason for the relative ease of computation of direct methods (particularly direct collocation methods) is that the nonlinear programming problem is sparse, and well-known methods and software exist for its solution [Bet01,WB06]. Moreover, the region of convergence of indirect methods tends to be quite narrow, so that these methods require good initial guesses. Also, direct methods using nonlinear programming are known to deal more efficiently with problems involving path constraints. As a result, the range of problems that can be solved with direct methods is larger than the range of problems that can be solved via indirect methods. See [Bet01] for more details on computational optimal control using sparse nonlinear programming.

Some complex optimal control problems can be conveniently formulated as having multiple phases. Phases may be inherent to the problem (for example, a spacecraft drops a mass and enters a new phase). Phases may also be introduced by the analyst to allow for peculiarities in the solution of the problem such as discontinuities in the control variables or singular arcs. Additional constraints are added to the problem to define the linkages between interconnected phases. See [Bet01,RF04] for further details on multiphase optimal control problems.

16.7  Examples

16.7.1  Obstacle Avoidance Problem

Consider the following optimal control problem, which involves finding an optimal trajectory that minimizes energy expenditure for a particle to travel between two points on a plane, while avoiding two forbidden regions or obstacles [RE09]. Find θ(t), t ∈ [0, tf], to minimize the cost functional

J = ∫_0^{tf} [V² sin²(θ(t)) + V² cos²(θ(t))] dt   (16.25)

subject to the dynamic constraints

ẋ = V cos(θ)   (16.26)
ẏ = V sin(θ)   (16.27)

the path constraints

(x(t) − 0.4)² + (y(t) − 0.5)² ≥ 0.1
(x(t) − 0.8)² + (y(t) − 1.5)² ≥ 0.1   (16.28)

and the boundary conditions

x(0) = 0,  y(0) = 0,  x(tf) = 1.2,  y(tf) = 1.6

where tf = 1.0 and V = 2.138.
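Since sin²θ + cos²θ = 1, the integrand in (16.25) equals V² along any trajectory, so every admissible solution has cost J = V²tf = 2.138² × 1.0 = 4.571044; the optimization is really about satisfying the boundary conditions and (16.28). The following sketch forward-integrates (16.26) and (16.27) for a candidate heading profile and screens the path constraints (the constant heading used here is illustrative, not the optimal control):

```python
import math

V, TF = 2.138, 1.0
OBSTACLES = [(0.4, 0.5), (0.8, 1.5)]  # centers; squared radius 0.1 in (16.28)

def rollout(theta_fn, n=1000):
    """Euler-integrate (16.26)-(16.27) under heading theta_fn(t), accumulate
    the cost (16.25), and flag violations of the path constraints (16.28)."""
    dt = TF / n
    x = y = cost = 0.0
    feasible = True
    for i in range(n):
        th = theta_fn(i * dt)
        cost += ((V * math.sin(th)) ** 2 + (V * math.cos(th)) ** 2) * dt
        x += dt * V * math.cos(th)
        y += dt * V * math.sin(th)
        if any((x - cx) ** 2 + (y - cy) ** 2 < 0.1 for cx, cy in OBSTACLES):
            feasible = False
    return x, y, cost, feasible

# Constant heading aimed at (1.2, 1.6): it clips the obstacles, illustrating
# why the optimal path of Figure 16.3 must curve around them.
x, y, cost, feasible = rollout(lambda t: math.atan2(1.6, 1.2))
print(round(cost, 6), feasible)
```

Whatever heading profile is supplied, the accumulated cost comes out as V²tf, confirming that the objective alone cannot distinguish admissible trajectories.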

Figure 16.3  Optimal (x, y) trajectory for the obstacle avoidance problem (path from (0, 0) to (1.2, 1.6), skirting the obstacles obs1 and obs2).

The problem was solved using PSOPT [Bec09], a computational optimal control solver that employs pseudospectral discretization methods, large-scale nonlinear programming, and automatic differentiation. The problem was solved initially by using 40 grid points; then the mesh was refined to 80 grid points, and an interpolation of the previous solution was employed as an initial guess for the new solution. Figure 16.3 shows the optimal (x, y) trajectory of the particle. Table 16.1 presents a summary of the solution to the problem. CPU time refers to a PC with an Intel Core 2 Quad Q6700 processor running at 2.66 GHz with 8 GB of RAM, under the Ubuntu Linux 9.04 operating system.

Table 16.1  Solution Summary for the Obstacle Avoidance Problem

No. of grid points: {40, 80}
No. of NLP variables: {122, 242}
No. of NLP constraints: {165, 325}
No. of NLP iterations: {44, 15}
No. of objective evaluations: {294, 16}
No. of constraint evaluations: {46, 22}
No. of Jacobian evaluations: {295, 15}
Objective function: 4.571044
CPU time: 2.01 s

16.7.2  Missile Terminal Burn Maneuver

This example illustrates the computational design of a missile trajectory to strike a specified target from given initial conditions in minimum time [SZ09], so that the problem is to find tf and u(t) = [α(t), T(t)]ᵀ, t ∈ [0, tf], to minimize

J = tf

Figure 16.4 shows the variables associated with the dynamic model of the missile employed in this example, where γ is the flight path angle, V is the missile speed, α is the angle of attack, h is the altitude, x is the longitudinal position, L is the normal aerodynamic force, D is the axial aerodynamic force, and T is the thrust.

Figure 16.4  Illustration of the variables associated with the missile model: normal aerodynamic force L, thrust T, angle of attack α, speed V, axial aerodynamic force D, weight mg, and flight path angle γ.

The equations of motion of the missile are given by

γ̇ = ((T − D)/(mV)) sin α + (L/(mV)) cos α − (g cos γ)/V
V̇ = ((T − D)/m) cos α − (L/m) sin α − g sin γ
ẋ = V cos γ
ḣ = V sin γ

where

D = (1/2) Cd ρ V² Sref,  Cd = A1α² + A2α + A3
L = (1/2) Cl ρ V² Sref,  Cl = B1α + B2
ρ = C1h² + C2h + C3

Table 16.2  Parameter Values of the Missile Model

Parameter   Value          Units
m           1005           kg
g           9.81           m/s²
Sref        0.3376         m²
A1          −1.9431
A2          −0.1499
A3          0.2359
B1          21.9
B2          0
C1          3.312 × 10⁻⁹   kg/m⁵
C2          −1.142 × 10⁻⁴  kg/m⁴
C3          1.224          kg/m³

All the model parameters are given in Table 16.2. The initial conditions for the state variables are

γ(0) = 0,  V(0) = 272 m/s,  x(0) = 0 m,  h(0) = 30 m.

The terminal conditions on the states are

γ(tf) = −π/2,  V(tf) = 310 m/s,  x(tf) = 10,000 m,  h(tf) = 0 m.

The problem constraints are given by

200 ≤ V ≤ 310
1000 ≤ T ≤ 6000
−0.3 ≤ α ≤ 0.3
−4 ≤ L/(mg) ≤ 4
h ≥ 30 (for x ≤ 7500 m)
h ≥ 0 (for x > 7500 m)

Note that the path constraints on the altitude are non-smooth. Given that non-smoothness causes problems with nonlinear programming, the constraints on the altitude were approximated by a single smooth constraint
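The force and density polynomials above are straightforward to evaluate; the sketch below codes the Table 16.2 parameters and computes ρ, D, and L (the angle of attack in the sample call is an illustrative value within the bound |α| ≤ 0.3):

```python
M, G, SREF = 1005.0, 9.81, 0.3376
A1, A2, A3 = -1.9431, -0.1499, 0.2359
B1, B2 = 21.9, 0.0
C1, C2, C3 = 3.312e-9, -1.142e-4, 1.224

def rho(h):
    """Air density model rho = C1 h^2 + C2 h + C3 (kg/m^3)."""
    return C1 * h * h + C2 * h + C3

def axial_force(V, h, alpha):
    """D = 0.5 * Cd * rho * V^2 * Sref with Cd = A1 a^2 + A2 a + A3."""
    cd = A1 * alpha ** 2 + A2 * alpha + A3
    return 0.5 * cd * rho(h) * V * V * SREF

def normal_force(V, h, alpha):
    """L = 0.5 * Cl * rho * V^2 * Sref with Cl = B1 a + B2."""
    cl = B1 * alpha + B2
    return 0.5 * cl * rho(h) * V * V * SREF

print(rho(0.0))                        # sea-level density: 1.224 kg/m^3
print(normal_force(272.0, 30.0, 0.05)) # illustrative alpha = 0.05 rad
```

These evaluations are exactly what an NLP-based solver performs repeatedly inside the dynamic constraints of the discretized problem.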

Hε(x − 7500) h(t) + [1 − Hε(x − 7500)] [h(t) − 30] ≥ 0

where Hε(z) is a smooth version of the Heaviside function, which is computed as follows:

Hε(z) = 0.5 [1 + tanh(z/ε)]

where ε > 0 is a small number.

The problem was solved using PSOPT [Bec09] with 40 grid points initially; then the mesh was refined to 80 grid points, and an interpolation of the previous solution was employed as an initial guess for the new solution. Table 16.3 gives a summary of the solution. Figure 16.5 shows the missile altitude as a function of the longitudinal position. Figures 16.6 and 16.7 show, respectively, the missile speed and angle of attack as functions of time.

Figure 16.5  Missile altitude as a function of the longitudinal position.

Figure 16.6  Missile speed as a function of time.
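The smooth Heaviside approximation and the combined altitude constraint can be written directly; in this sketch ε is an assumed illustrative value:

```python
import math

def heaviside_smooth(z, eps=1e-2):
    """H_eps(z) = 0.5 * (1 + tanh(z / eps)), a smooth Heaviside surrogate."""
    return 0.5 * (1.0 + math.tanh(z / eps))

def altitude_constraint(x, h, eps=1e-2):
    """Single smooth surrogate for h >= 30 (x <= 7500) and h >= 0 (x > 7500):
    the returned value must be >= 0 along the trajectory."""
    hx = heaviside_smooth(x - 7500.0, eps)
    return hx * h + (1.0 - hx) * (h - 30.0)

print(altitude_constraint(0.0, 30.0))     # on the h = 30 boundary: ~0
print(altitude_constraint(10000.0, 5.0))  # past 7500 m, h >= 0 suffices: ~5
```

Well away from x = 7500 the surrogate reproduces the two original constraints; near the switch point it blends them smoothly, which is what keeps the NLP derivatives well defined.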

Figure 16.7  Missile angle of attack (rad) as a function of time.

Table 16.3  Solution Summary for the Missile Problem

No. of grid points: {40, 80}
No. of NLP variables: {242, 482}
No. of NLP constraints: {249, 489}
No. of NLP iterations: {22, 13}
No. of objective evaluations: {543, 23}
No. of constraint evaluations: {57, 22}
No. of Jacobian evaluations: {542, 22}
Objective function: 40.9121 s
CPU time: 6.83 s

References

[Bec09] V.M. Becerra. PSOPT: Optimal Control Solver User Manual. University of Reading, Reading, U.K., 2009.
[Bet01] J.T. Betts. Practical Methods for Optimal Control Using Nonlinear Programming. SIAM, Philadelphia, PA, 2001.
[BH75] A.E. Bryson and Y.C. Ho. Applied Optimal Control. Halsted Press, New York, 1975.
[BMC93] M.S. Bazaraa, H.D. Sherali, and C.M. Shetty. Nonlinear Programming. Wiley, New York, 1993.
[Bry96] A.E. Bryson. Optimal control—1950 to 1985. IEEE Control Systems Magazine, 16:23–36, 1996.
[Bry99] A.E. Bryson. Dynamic Optimization. Addison-Wesley, Menlo Park, CA, 1999.
[Ces83] L. Cesari. Optimization—Theory and Applications: Problems with Ordinary Differential Equations. Springer, Berlin, Germany, 1983.
[CZ96] E.K.P. Chong and S.H. Zak. An Introduction to Optimization. Wiley-Interscience, Hoboken, NJ, 1996.

[EKR95] G. Elnagar, M.A. Kazemi, and M. Razzaghi. The pseudospectral Legendre method for discretizing optimal control problems. IEEE Transactions on Automatic Control, 40:1793–1796, 1995.
[GF03] I.M. Gelfand and S.V. Fomin. Calculus of Variations. Dover Publications, New York, 2003.
[Kir70] D.E. Kirk. Optimal Control Theory. Prentice-Hall, Englewood Cliffs, NJ, 1970.
[Lei81] G. Leitmann. The Calculus of Variations and Optimal Control. Plenum Press, New York, 1981.
[LS95] F.L. Lewis and V.L. Syrmos. Optimal Control. Wiley, New York, 1995.
[Lue97] D.G. Luenberger. Optimization by Vector Space Methods. Wiley, New York, 1997.
[Luu02] R. Luus. Iterative Dynamic Programming. Chapman & Hall/CRC, Boca Raton, FL, 2002.
[Nai03] D.S. Naidu. Optimal Control Systems. CRC Press, Boca Raton, FL, 2003.
[Pon87] L.S. Pontryagin. The Mathematical Theory of Optimal Processes. Classics of Soviet Mathematics. Gordon and Breach, New York, 1987.
[RE09] P. Rutquist and M.M. Edvall. PROPT Matlab Optimal Control Software. TOMLAB Optimization, Pullman, WA, 2009.
[RF04] I.M. Ross and F. Fahroo. Pseudospectral knotting methods for solving nonsmooth optimal control problems. Journal of Guidance, Control, and Dynamics, 27:397–405, 2004.
[ST00] S.P. Sethi and G.L. Thompson. Optimal Control Theory: Applications to Management Science and Economics. Kluwer, Dordrecht, the Netherlands, 2000.
[SW97] H.J. Sussmann and J.C. Willems. 300 years of optimal control: From the brachystochrone to the maximum principle. IEEE Control Systems Magazine, 17:32–44, 1997.
[SZ09] S. Subchan and R. Zbikowski. Computational Optimal Control: Tools and Practice. Wiley, New York, 2009.
[Wan95] F.Y. Wan. Introduction to the Calculus of Variations and Its Applications. Chapman & Hall, New York, 1995.
[WB06] A. Wächter and L.T. Biegler. On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming. Mathematical Programming, 106:25–57, 2006.

17
Time-Delay Systems

Emilia Fridman
Tel Aviv University

17.1 Models with Time-Delay ........................................... 17-1
17.2 Solution Concept and the Step Method ........................ 17-2
17.3 Linear Time-Invariant Systems and Characteristic Equation ... 17-3
17.4 General TDS and the Direct Lyapunov Method ................ 17-4
17.5 LMI Approach to the Stability of TDS ......................... 17-6
      Delay-Independent Conditions  •  Delay-Dependent Conditions  •  Exponential Bounds and L2-Gain Analysis
17.6 Control Design for TDS .......................................... 17-14
      Predictor-Based Design  •  LMI-Based Design
17.7 On Discrete-Time TDS ........................................... 17-15
References .............................................................. 17-17

17.1  Models with Time-Delay

Time-delay systems (TDS) are also called systems with aftereffect or dead-time, hereditary systems, equations with deviating argument, or differential-difference equations. They belong to the class of functional differential equations that are infinite-dimensional, as opposed to ordinary differential equations (ODEs). The simplest example of such a system is ẋ(t) = −x(t − h), x(t) ∈ R, where h > 0 is the time-delay.

Time-delay often appears in many control systems (such as aircraft, chemical, or process control systems, communication networks) either in the state, the control input, or the measurements. There can be transport, communication, or measurement delays. Actuators, sensors, and field networks that are involved in feedback loops usually introduce delays. Thus, delays are strongly involved in challenging areas of communication and information technologies: stability of networked control systems or high-speed communication networks [23]. We consider two examples of models with delay.

Model 1 [11]. Imagine a showering person wishing to achieve the desired value Td of water temperature by rotating the mixer handle for cold and hot water. Let T(t) denote the water temperature in the mixer output and let h be the constant time needed by the water to go from the mixer output to the tip of the person's head. At time t the person feels the water temperature leaving the mixer at time t − h, whereas the rate of rotation of the handle is proportional to T(t) − Td. Assume that the change of the temperature is proportional to the angle of the rotation of the handle, which results in the following equation with constant delay h:

Ṫ(t) = −k[T(t − h) − Td],  k ∈ R.   (17.1)

Model 2. Consider a sampled-data control system

ẋ(t) = Ax(t) + BKx(tk),  t ∈ [tk, tk+1),  k = 0, 1, 2, …   (17.2)

where x(t) ∈ Rn, A, B, K are constant matrices, and lim_{k→∞} tk = ∞. This system can be represented as a continuous system with time-varying delay τ(t) = t − tk [16]:

ẋ(t) = Ax(t) + BKx(t − τ(t)),  t ∈ [tk, tk+1),   (17.3)

where the delay is piecewise continuous with τ̇ = 1 for t ≠ tk.

In spite of their complexity, TDS often appear as simple infinite-dimensional models of more complicated partial differential equations (PDEs) [6]. Conversely, time-delay equations can be represented by a classical transport PDE: denoting in (17.1) z(s, t) = T(t − hs), s ∈ [0, 1], we arrive to the following boundary value problem for the transport equation:

∂z(s, t)/∂s + h ∂z(s, t)/∂t = 0,  s ∈ [0, 1],
∂z(0, t)/∂t = −k[z(1, t) − Td].   (17.4)

17.2  Solution Concept and the Step Method

Consider the simple delay equation

ẋ(t) = −x(t − h),  t ≥ 0.   (17.5)

In order to define its solution for t ∈ [0, h], we have to define the right-hand side x(t − h) for t ∈ [0, h], which results in the initial value function

x(s) = φ(s),  s ∈ [−h, 0],   (17.6)

instead of the initial value x(0) for ODE with h = 0. In order to find a solution to this problem, we shall use the step method initiated by Bellman. First, we find a solution on t ∈ [0, h] by solving

ẋ(t) = −φ(t − h),  x(0) = φ(0),  t ∈ [0, h].

Then we continue this procedure for t ∈ [h, 2h], t ∈ [2h, 3h], and so on. The resulting solutions for h = 1 and for the initial functions φ ≡ 1 and φ ≡ 0.5t are given in Figure 17.1. We note that the step method can be applied for solving the initial value problem for general TDS with constant delay.

As it is seen from Figure 17.1, (17.5) has several solutions that achieve the same value x(t*) at some instants t*. This is different from ODEs, where through each x(t*) only one solution passes. Therefore, in TDS, a proper state is a function xt(θ) = x(t + θ), θ ∈ [−h, 0], corresponding to the past time-interval [t − h, t] (on Figure 17.1, there is only one solution passing through xt* for all t* ≥ 0). The vector x(t) is the solution at time t. This is an infinite-dimensional system.

First equations with delay were studied by Bernoulli, Euler, and Condorcet in the eighteenth century. Systematical study started in the 1940s with A. Myshkis and R. Bellman. Since 1960 there have appeared more than 50 monographs on the subject (see, e.g., [2,9,11,19] and the references therein).
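The step method can also be carried out numerically: on each interval [mh, (m + 1)h] the delayed term is already known from the previous interval, so (17.5) reduces to an ODE there. A minimal Euler-based sketch for ẋ(t) = −x(t − 1) (for φ ≡ 1 the first interval gives x(t) = 1 − t, hence x(1) = 0):

```python
def solve_delay_ode(phi, h=1.0, intervals=3, n=1000):
    """Step method for x'(t) = -x(t - h): integrate interval by interval,
    reusing the already-computed segment as the delayed term (forward Euler)."""
    dt = h / n
    # history on [-h, 0], sampled on the same grid
    hist = [phi(-h + i * dt) for i in range(n + 1)]
    x = phi(0.0)
    traj = []
    for _ in range(intervals):
        seg = [x]
        for i in range(n):
            x += dt * (-hist[i])  # x'(t) = -x(t - h), known from history
            seg.append(x)
        traj.append(seg)
        hist = seg                # this segment becomes the next history
    return traj

traj = solve_delay_ode(lambda s: 1.0)  # phi ≡ 1
print(traj[0][-1], traj[1][-1])        # x(1) and x(2)
```

On successive intervals the computed solution tracks the constant, linear, parabolic, and cubic pieces produced analytically by the step method (for instance, the exact value x(2) = −1/2 is matched up to the Euler discretization error).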

Figure 17.1  System (17.5) with h = 1, φ ≡ 1 (plain) or φ ≡ 0.5t (dotted); on successive delay intervals the solution is constant, linear, parabolic, and cubic.

17.3  Linear Time-Invariant Systems and Characteristic Equation

A linear time-invariant (LTI) system with K discrete delays and with a distributed delay has the form

ẋ(t) = Σ_{k=0}^{K} Ak x(t − hk) + ∫_{−hd}^{0} A(θ) x(t + θ) dθ   (17.7)

where 0 = h0 < h1 < … < hK, hd ≥ 0, x(t) ∈ Rn, the Ak are constant matrices, and A(θ) is a continuous matrix function. The LTI system has exponential solutions of the form e^{λt}c. The characteristic equation of this system is given by

det( λI − Σ_{k=0}^{K} Ak e^{−λhk} − ∫_{−hd}^{0} A(s) e^{λs} ds ) = 0   (17.8)

where λ is the characteristic root and c ∈ Rn is an eigenvector of the matrix inside the det in (17.8). The latter can be verified by substitution of e^{λt}c into (17.7). Equation 17.8 is transcendental, having an infinite number of roots. This also reflects the infinite-dimensional nature of TDS. The location of the characteristic roots has a nice property (see Figure 17.2): there is a finite number of roots to the right of any vertical line.

Figure 17.2  Location of the characteristic roots.

Similar to linear ODEs, the stability of (17.7) is determined by the characteristic roots: (17.7) is asymptotically stable iff all the characteristic roots have negative real parts, i.e., iff Re λ < −α for some scalar α > 0. Moreover, the stability of (17.7) is robust with respect to small delays: if (17.7) is asymptotically stable for hk = hd = 0, then it is asymptotically stable for all small enough values of delays. For different root location tests see, e.g., [9,19].

Example 17.1

Consider a scalar TDS

ẋ(t) = −ax(t) − a1x(t − h).   (17.9)

The system without delay is asymptotically stable provided a + a1 > 0. For a ≥ a1, (17.9) is asymptotically stable for all delay, i.e., delay-independently stable. If a1 > |a|, then the system is asymptotically stable for h < h* and becomes unstable for h > h*, where

h* = arccos(−a/a1) / √(a1² − a²).

Thus, if a = 0, we have h* = π/(2a1).

Delays are known to have complex effects on stability: they may be a source of instability. However,
[9. t ≥ t 0 .7) can be represented in the form ˙ x (t) = Lxt. The initial condition φ must be prescribed as φ: [−h. ϕ2 ∈ C close to ϕ and for t close to t0).19]. Then. iff Reλ < −α for some scalar α > 0.7) and denote by h = max{hK.1 Consider a scalar TDS x (t ) = −ax (t ) − a1x (t − h). hd}. a arccos − a 1 a − a2 2 1 ( ). Similar to linear ODEs.11) if there exists a scalar a > 0 such that x(t) is continuous on [t0 − h.g.10 indicates that the derivative of the state variable x at time t depends on t and on x(ξ) for t − h ≤ ξ ≤ t. linear TDS (17. i..7) is robust with respect to small delays: if (17. (17. k = 0. (17.e. The scalar linear system ˙ x (t) = −x(t − τ(t)) with time-varying delay τ(t) ∈ [0.e. t0 + a)..9) is asymptotically stable for all delay. we have h* = π / 2a1.11). if a = 0. 0].9) The system without delay is asymptotically stable.7) is asymptotically stable for hk = hd = 0. Equation 17. Example 17. 17.. h] can be represented as ˙ x (t) = L(t)xt with time-varying linear functional L(t): C → Rn defined by the right-hand side of the system. a + a1 > 0.e. then the solution is unique © 2011 by Taylor and Francis Group. If f is continuous and is locally Lipschitz in its second argument (i.0] |x(t + θ)|. Rn) be the space of continuous functions mapping the interval [−h. The approximation ˙ y (t) ≃ [y(t) − y(t − h)]h−1 explains the damping effect.e. The stability and the performance of systems with delay are. (17. For a ≥ a1 (17. ϕ) > 0 such that | f (t. therefore. then it is asymptotically stable for all small enough values of delays. the stability of (17..7) is asymptotically stable iff all the characteristic roots have negative real parts. there exists l = l(t0. it is initialized by (17.4  General TDS and the Direct Lyapunov Method Consider the LTI system (17. (17. and due to robustness it is stable for small enough h. ϕ2)| < l ∙ϕ1 − ϕ2∙C for ϕ1. K.7). 
they may have also a stabilizing effect: a well-known example ÿ(t) + y(t) − y(t − h) = 0 is unstable for h = 0 but is asymptotically stable (i.e., y(t) → 0 for t → ∞) for h = 1. The approximation ẏ(t) ≃ [y(t) − y(t − h)]h⁻¹ explains the damping effect. The stability and the performance of systems with delay are, therefore, of theoretical and practical importance.

17.4  General TDS and the Direct Lyapunov Method

Let C = C([−h, 0], Rn) be the space of continuous functions mapping the interval [−h, 0] into Rn with the norm ∥xt∥C = max_{θ∈[−h,0]} |x(t + θ)|. Consider the LTI system (17.7) and denote h = max{hK, hd}. Then linear TDS (17.7) can be represented in the form ẋ(t) = Lxt, where L: C → Rn is a linear bounded functional defined by the right-hand side of (17.7). The scalar linear system ẋ(t) = −x(t − τ(t)) with time-varying delay τ(t) ∈ [0, h] can be represented as ẋ(t) = L(t)xt with a time-varying linear functional L(t): C → Rn defined by the right-hand side of the system.

The general form of a retarded functional differential equation is

ẋ(t) = f(t, xt),  t ≥ t0,   (17.10)

where x(t) ∈ Rn and f: R × C → Rn. Equation 17.10 indicates that the derivative of the state variable x at time t depends on t and on x(ξ) for t − h ≤ ξ ≤ t. The initial condition must be prescribed as φ: [−h, 0] → Rn (φ ∈ C, or piecewise continuous):

x(t0 + θ) = xt0(θ) = φ(θ),  θ ∈ [−h, 0].   (17.11)

The function x: R → Rn is a solution of (17.10) with the initial condition (17.11) if there exists a scalar a > 0 such that x(t) is continuous on [t0 − h, t0 + a), it satisfies (17.10) for t ∈ (t0, t0 + a), and it is initialized by (17.11). If f is continuous and is locally Lipschitz in its second argument (i.e., there exists l = l(t0, φ) > 0 such that |f(t, φ1) − f(t, φ2)| < l ∥φ1 − φ2∥C for φ1, φ2 ∈ C close to φ and for t close to t0), then the solution is unique

and it continuously depends on the initial data (t0, φ). The existence, uniqueness, and continuous dependence of the solutions are similar to ODEs, except that the solution is considered in the forward time direction.

Consider (17.10), where f is continuous in both arguments and is locally Lipschitz continuous in the second argument. We assume that f(t, 0) = 0, which guarantees that (17.10) possesses a trivial solution x(t) ≡ 0.

Definition. The trivial solution of (17.10) is uniformly (in t0) asymptotically stable if

(i) for any ε > 0 and any t0, there exists δ(ε) > 0 such that ∥xt0∥C < δ(ε) implies |x(t)| < ε for t ≥ t0;
(ii) there exists a δa > 0 such that for any η > 0 there exists a T(δa, η) such that ∥xt0∥C < δa implies |x(t)| < η for t ≥ t0 + T(η) and t0 ∈ R.

The trivial solution is globally uniformly asymptotically stable if in (ii) δa can be an arbitrarily large, finite number. The system is uniformly asymptotically stable if its trivial solution is uniformly asymptotically stable.

As in systems without delay, an efficient method for stability analysis of TDS is the direct Lyapunov method. For TDS, there are two main direct Lyapunov methods: the Krasovskii method of Lyapunov functionals [13] and the Razumikhin method of Lyapunov functions [22]. The use of Lyapunov–Krasovskii functionals is a natural generalization of the direct Lyapunov method for ODEs, since a proper state for TDS is xt. We start with the Krasovskii method. If V: R × C → R is a continuous functional, we define for xt satisfying (17.10)

V̇(t, xt) = lim sup_{s→0+} (1/s) [V(t + s, xt+s) − V(t, xt)].   (17.12)

Lyapunov–Krasovskii Theorem. Suppose f: R × C → Rn maps R × (bounded sets in C) into bounded sets of Rn, and u, v, w: R+ → R+ are continuous nondecreasing functions, positive for s > 0, with u(0) = v(0) = 0. The zero solution of ẋ(t) = f(t, xt) is uniformly asymptotically stable if there exists a continuous functional V: R × C → R+, which is positive definite,

u(|φ(0)|) ≤ V(t, φ) ≤ v(∥φ∥C),   (17.13)

such that its derivative along (17.10) is negative,

V̇(t, xt) ≤ −w(|x(t)|).   (17.14)

If, in addition, lim_{s→∞} u(s) = ∞, then it is globally uniformly asymptotically stable.
xt) is uniformly asymptotically stable if there exists a continuous functional V: R × C → R+. x(t)) are much simpler to use. xt . xt ) = lim sup  V (t + s. 0] (the space of square integrable functions) with the norm || φ ||W = | φ(0) |2 + results corresponding to continuous initial functions and to absolutely continuous initial functions are equivalent [2]. where f is continuous in both arguments and is locally Lipschitz continuous in the second argument.Time-Delay Systems 17-5 and it continuously depends on the initial data (t0. (ii) There exists a δa > 0 such that for any η > 0 there exists a T(δa. there exists δ(ε) > 0 such that || xt0 ||C < δ(ε) implies |x(t)| < ε for t ≥ t0. where inequalities (17. xt + s ) − V (t . then it is globally uniformly asymptotically stable.10) possesses a trivial solution x(t) ≡ 0. finite number. uniqueness.15) ∫ 0 −h | φ(s) |2 ds . φ) ≤ v (|| φ ||C ) . (17. The use of Lyapunov– Krasovskii functionals is a natural generalization of the direct Lyapunov method for ODEs. lim s→∞u(s) = ∞. xt )  .

x(t ) ∈ Rn (17.5.19) with time-varying bounded delay τ(t) ∈ [0. LLC .1  Delay-Independent Conditions A simple Lyapunov–Krasovskii functional (initiated by N. Lyapunov–Razumikhin Theorem. (17. the general form of Lyapunov–Krasovskii functional V (t . h]. Krasovskii) has the form V (t . w: R+ → R+ are continuous nondecreasing.18) u (| x |) ≤ V (t . 0]. which is positive definite such that V (t . ∀θ ∈[−h. N. x(t )). P > 0. Consider now a continuous function V: R × C → R and define V where xt+s = x(t + s) and xt = x(t). (17. x(t)) by the right-hand side of (17. then it is globally uniformly asymptotically stable. x t ) ≤ −w (| x(t ) |).17-6 Control and Mechatronics and V (t . 17.21) © 2011 by Taylor and Francis Group.20) (that corresponds to necessary and sufficient conditions for stability) leads to a complicated system of PDEs with respect to P. v. lim s → ∞u(s) = ∞.19) with constant delay τ ≡ h. 17.16) ˙ (t. Q. u(0) = v(0) = 0.17) If. xt .5  LMI Approach to the Stability of TDS We consider a simple linear TDS x(t ) = Ax(t ) + A1x(t − τ(t )). Suppose f : R × C → Rn maps R× (bounded sets in C ) into bounded sets of Rn and p. R. in addition.12). Q > 0. x(t )) ≤ −w (| x(t ) |) if V (t + θ. x(t + θ)) < p(V (t . xt) is uniformly asymptotically stable if there exists a continuous function V: R × Rn → R+. positive for s > 0 functions and p(s) > s for s > 0. u. (17. xt ) = x T (t )Px(t ) + t − τ (t ) ∫ t x T (s)Qx(s)ds. ξ)x(t + ξ)dsdξ T 0 0 (17. Special forms of the functional lead to simpler delay-independent and delay-dependent conditions. The choice of Lyapunov–Krasovskii functional is crucial for deriving stability criteria. For stability analysis of (17. xt ) = x(t )T Px(t ) + 2 x T (t ) Q(ξ)x(t + ξ)dξ + −h ∫ 0 −h −h ∫ ∫ x (t + s)R(s. The zero solution of ˙ x (t) = f (t. x ) ≤ v (| x |) (17.

where P and Q are n × n positive definite matrices. It is clear that V given by (17.21) satisfies the conditions of the Lyapunov–Krasovskii theorem. We further assume that the delay τ is a differentiable function with τ̇ ≤ d < 1 (this is the case of slowly varying delays). Differentiating V along (17.19), we find

	V̇(t, x_t) = 2xᵀ(t)Pẋ(t) + xᵀ(t)Qx(t) − (1 − τ̇)xᵀ(t − τ)Qx(t − τ).

We further substitute for ẋ(t) the right-hand side of (17.19) and arrive to

	V̇(t, x_t) ≤ [xᵀ(t)  xᵀ(t − τ)] W [x(t); x(t − τ)] < −α|x(t)|²

for some α > 0 if

	W = [ AᵀP + PA + Q    PA1
	      A1ᵀP            −(1 − d)Q ] < 0.	(17.22)

LMI (17.22) does not depend on h and it is, therefore, delay-independent (but delay-derivative dependent). The feasibility of LMI (17.22) is a sufficient condition for delay-independent uniform asymptotic stability for systems with slowly varying delays.

We will next derive stability conditions by using Razumikhin's method and the Lyapunov function V(x(t)) = xᵀ(t)Px(t) with P > 0. Consider the derivative of V along (17.19):

	V̇(x(t)) = 2xᵀ(t)P[Ax(t) + A1x(t − τ(t))].

Whenever Razumikhin's condition holds, V(x(t + θ)) < pV(x(t)) for some p > 1, we can conclude that, for any q > 0,

	V̇(x(t)) ≤ 2xᵀ(t)P[Ax(t) + A1x(t − τ(t))] + q[pxᵀ(t)Px(t) − xᵀ(t − τ(t))Px(t − τ(t))]
	        = [xᵀ(t)  xᵀ(t − τ(t))] W_R [x(t); x(t − τ(t))] < −α|x(t)|²

for some α > 0 if

	W_R = [ AᵀP + PA + qpP    PA1
	        A1ᵀP              −qP ] < 0.	(17.23)

The latter matrix inequality does not depend on h. The feasibility of LMI (17.23) is sufficient for delay-independent uniform asymptotic stability for systems with fast-varying delays (without any constraints on the delay derivatives). We note that the Krasovskii-based LMI (17.22) for small enough d is less restrictive than the Razumikhin-based condition (17.23): the feasibility of (17.23) implies the feasibility of (17.22) with the same P and with Q = qpP, where d < 1 − 1/p. However, till now only the Razumikhin method provides delay-independent conditions for systems with fast-varying delays.
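Once candidate matrices are fixed, the feasibility of a delay-independent LMI such as (17.22) can be checked numerically. A minimal sketch (assuming NumPy is available; the scalar system and the candidate values P, Q are illustrative choices, not taken from the text) verifies W < 0 for ẋ(t) = −2x(t) − x(t − τ(t)) with d = 0:

```python
import numpy as np

# Scalar system dx/dt = a*x(t) + a1*x(t - tau(t)); illustrative data
a, a1 = -2.0, -1.0
d = 0.0                      # bound on the delay derivative, d < 1
P, Q = 1.0, 1.5              # candidate Lyapunov "matrices" (scalars here)

# Krasovskii-based LMI (17.22): W = [[2aP + Q, P*a1], [a1*P, -(1-d)Q]]
W = np.array([[2*a*P + Q, P*a1],
              [a1*P,      -(1 - d)*Q]])

eigs = np.linalg.eigvalsh(W)
feasible = eigs.max() < 0    # W < 0 certifies delay-independent stability
print(bool(feasible))        # True for this choice of P, Q
```

Since W is symmetric, negative definiteness is equivalent to all eigenvalues being negative; a proper LMI solver would search for P and Q instead of checking a guessed pair.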

Example 17.2: Consider [9]

	ẋ(t) = [ −2   0
	          0  −0.9 ] x(t) + β [ −1   0
	                               −1  −1 ] x(t − τ(t)),  β ≥ 0.	(17.24)

In this example (because of the triangular structure of the system matrices), the stability analysis is reduced to the analysis of two scalar systems: ẋ1(t) = −2x1(t) − βx1(t − τ(t)) and ẋ2(t) = −0.9x2(t) − βx2(t − τ(t)). By using the LMI toolbox of MATLAB®, it was found that, for β ≤ 0.9, LMI (17.22) is feasible and, thus, (17.24) is uniformly asymptotically stable for slowly varying delays; for β > 0.9, LMI (17.22) is unfeasible and, therefore, no conclusion can be made about the delay-independent stability of (17.24) via Krasovskii's approach. In the case of fast-varying delay (where (17.22) is not applicable), Razumikhin's approach leads to a less conservative result: matrix inequality (17.23) is feasible for β ≤ 0.9, which guarantees the delay-independent stability of (17.24) for all time-varying delays. Indeed, the second scalar equation with constant delay is not delay-independently stable for β > 0.9. Thus, in Example 17.2 both approaches lead to the analytical results.

Feasibility of the delay-independent conditions (17.22) and (17.23) implies that A is Hurwitz. It means that delay-independent conditions cannot be applied for stabilization of unstable plants by the delayed feedback. For such systems, delay-dependent (h-dependent) conditions are needed.

Consider, e.g., stabilization of the scalar system ẋ(t) = u(t − τ(t)) with the delayed control input via linear state feedback u(t) = −x(t). The resulting closed-loop system ẋ(t) = −x(t − τ(t)) is asymptotically stable for τ ≡ h < π/2 and unstable for h > π/2 in the case of constant delay (see Example 17.1). In the case of time-varying delay, it is uniformly asymptotically stable for τ(t) < 1.5 and unstable for τ(t) which may (for some values of time) become greater than 1.5 [11].

17.5.2  Delay-Dependent Conditions

The first delay-dependent (both, Krasovskii and Razumikhin-based) conditions were derived by using the relation

	x(t − τ(t)) = x(t) − ∫_{t−τ(t)}^{t} ẋ(s)ds

via different model transformations and by bounding the cross terms [3,20]. Usually the Krasovskii method leads to less conservative delay-dependent results than the Razumikhin method. Most of the recent Krasovskii-based results do not use model transformations and cross terms bounding. They are based on the application of Jensen's inequality [9]

	∫_{−h}^{0} φᵀ(s)Rφ(s)ds ≥ (1/h) ∫_{−h}^{0} φᵀ(s)ds R ∫_{−h}^{0} φ(s)ds,  R > 0, ∀φ ∈ L2[−h, 0],	(17.25)

and of the free weighting matrices technique [7]. Most Lyapunov–Krasovskii functionals treated only the slowly varying delays, whereas the fast-varying delay was analyzed via Lyapunov–Razumikhin functions. For the first time, systems with fast-varying delays were analyzed by using the Krasovskii method in [4] via the descriptor approach introduced in [3]. The widely used by now Lyapunov–Krasovskii functional for delay-dependent stability is a state-derivative dependent one of the form [4,7]

	V(t, x_t, ẋ_t) = xᵀ(t)Px(t) + ∫_{t−h}^{t} xᵀ(s)Sx(s)ds + h∫_{−h}^{0}∫_{t+θ}^{t} ẋᵀ(s)Rẋ(s)ds dθ + ∫_{t−τ(t)}^{t} xᵀ(s)Qx(s)ds,	(17.26)

where R ≥ 0, S ≥ 0, Q ≥ 0.
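Jensen's inequality (17.25) can be sanity-checked by discretizing the integrals; the discrete sums satisfy the same bound by the underlying Cauchy–Schwartz argument. A small sketch (NumPy assumed; the test function and the weight R are random, illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
h, m, n = 2.0, 400, 3                 # horizon, grid points, state dimension
ds = h / m
phi = rng.standard_normal((m, n))     # samples of phi(s) on [-h, 0]
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)           # a positive definite weight R > 0

lhs = sum(p @ R @ p for p in phi) * ds   # approximates  int phi^T R phi ds
v = phi.sum(axis=0) * ds                 # approximates  int phi ds
rhs = (v @ R @ v) / h                    # (1/h)(int phi)^T R (int phi)
print(lhs >= rhs)                        # True: Jensen's inequality holds
```

Replacing 1/h by the sharper 1/τ on the right-hand side (as noted later in the text) only increases the bound, which is the source of conservatism of the h-based estimate.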

This functional with Q = 0 leads to delay-dependent conditions for systems with fast-varying delays, whereas for R = S = 0 it leads to delay-independent conditions (for systems with slowly varying delays) and coincides with (17.21). Differentiating V, we find

	V̇ ≤ 2xᵀ(t)Pẋ(t) + h²ẋᵀ(t)Rẋ(t) − h∫_{t−h}^{t} ẋᵀ(s)Rẋ(s)ds + xᵀ(t)[S + Q]x(t) − xᵀ(t − h)Sx(t − h) − (1 − τ̇(t))xᵀ(t − τ(t))Qx(t − τ(t))	(17.27)

and employ the representation

	−h∫_{t−h}^{t} ẋᵀ(s)Rẋ(s)ds = −h∫_{t−h}^{t−τ(t)} ẋᵀ(s)Rẋ(s)ds − h∫_{t−τ(t)}^{t} ẋᵀ(s)Rẋ(s)ds.	(17.28)

Applying further Jensen's inequality (17.25) to each of the two terms,

	∫_{t−h}^{t−τ(t)} ẋᵀ(s)Rẋ(s)ds ≥ (1/h)∫_{t−h}^{t−τ(t)} ẋᵀ(s)ds R ∫_{t−h}^{t−τ(t)} ẋ(s)ds,
	∫_{t−τ(t)}^{t} ẋᵀ(s)Rẋ(s)ds ≥ (1/h)∫_{t−τ(t)}^{t} ẋᵀ(s)ds R ∫_{t−τ(t)}^{t} ẋ(s)ds,	(17.29)

we obtain

	V̇ ≤ 2xᵀ(t)Pẋ(t) + h²ẋᵀ(t)Rẋ(t) − [x(t) − x(t − τ(t))]ᵀR[x(t) − x(t − τ(t))]
	    − [x(t − τ(t)) − x(t − h)]ᵀR[x(t − τ(t)) − x(t − h)] + xᵀ(t)[S + Q]x(t)
	    − xᵀ(t − h)Sx(t − h) − (1 − d)xᵀ(t − τ(t))Qx(t − τ(t)).	(17.30)

We shall derive stability conditions in two forms. The first form is derived by substituting for ẋ(t) in (17.30) the right-hand side of (17.19):

	V̇ ≤ 2xᵀ(t)P[Ax(t) + A1x(t − τ(t))] + h²[Ax(t) + A1x(t − τ(t))]ᵀR[Ax(t) + A1x(t − τ(t))]
	    − [x(t) − x(t − τ(t))]ᵀR[x(t) − x(t − τ(t))] − [x(t − τ(t)) − x(t − h)]ᵀR[x(t − τ(t)) − x(t − h)]
	    + xᵀ(t)[S + Q]x(t) − xᵀ(t − h)Sx(t − h) − (1 − d)xᵀ(t − τ(t))Qx(t − τ(t))
	  = ηᵀΦη + h²[Ax(t) + A1x(t − τ(t))]ᵀR[Ax(t) + A1x(t − τ(t))],	(17.31)

where η(t) = col{x(t), x(t − h), x(t − τ(t))} and

	Φ = [ AᵀP + PA + S + Q − R    0        PA1 + R
	      *                       −S − R   R
	      *                       *        −(1 − d)Q − 2R ].	(17.32)

Applying the Schur complements formula to the last term in (17.31), we find that (17.16) holds if the following LMI

	[ AᵀP + PA + S + Q − R    0        PA1 + R           hAᵀR
	  *                       −S − R   R                 0
	  *                       *        −(1 − d)Q − 2R    hA1ᵀR
	  *                       *        *                 −R    ] < 0	(17.33)

is feasible.

The second form is derived via the descriptor method, where the right-hand side of the expression

	0 = 2[xᵀ(t)P2ᵀ + ẋᵀ(t)P3ᵀ][Ax(t) + A1x(t − τ(t)) − ẋ(t)],	(17.34)

with some P2, P3 ∈ Rⁿˣⁿ, is added to the right-hand side of (17.30). Setting ηd(t) = col{x(t), ẋ(t), x(t − h), x(t − τ(t))}, we obtain that

	V̇ ≤ ηdᵀ(t)Φd ηd(t) ≤ 0	(17.35)

is satisfied if the LMI

	Φd = [ Φ11   Φ12   0          P2ᵀA1 + R
	       *     Φ22   0          P3ᵀA1
	       *     *     −(S + R)   R
	       *     *     *          −(1 − d)Q − 2R ] < 0	(17.36)

holds, where

	Φ11 = AᵀP2 + P2ᵀA + S + Q − R,  Φ12 = P − P2ᵀ + AᵀP3,  Φ22 = −P3 − P3ᵀ + h²R.	(17.37)

We note that the feasibility of the (delay-dependent) LMI (17.33) or (17.36) with Q = 0 guarantees uniform asymptotic stability of (17.19) for all fast-varying delays τ(t) ∈ [0, h]. For R → 0, S → 0, the above LMIs guarantee the delay-independent stability: (17.33) coincides for R → 0, S → 0 with LMI (17.22).
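As a concrete illustration of (17.33), the sketch below (NumPy assumed) checks the 4 × 4 LMI for the scalar system ẋ(t) = −x(t − τ(t)) with Q = 0 (fast-varying delay) and h = 0.8; the candidate scalars P, S, R were found by trial and are illustrative, not taken from the text:

```python
import numpy as np

A, A1 = 0.0, -1.0                          # scalar system dx/dt = -x(t - tau(t))
h = 0.8                                    # delay bound: tau(t) in [0, h]
P, S, R, Q, d = 1.0, 0.4, 0.8, 0.0, 0.0    # Q = 0 handles fast-varying delays

# LMI (17.33) written out for scalar data (same block layout as in the text)
Phi = np.array([
    [A*P + P*A + S + Q - R, 0.0,    P*A1 + R,          h*A*R],
    [0.0,                   -S - R, R,                 0.0],
    [P*A1 + R,              R,      -(1 - d)*Q - 2*R,  h*A1*R],
    [h*A*R,                 0.0,    h*A1*R,            -R]])

lam_max = np.linalg.eigvalsh(Phi).max()
print(lam_max < 0)   # True: stability certified for all tau(t) in [0, 0.8]
```

A negative maximum eigenvalue of the symmetric block matrix certifies Φ < 0, hence uniform asymptotic stability over the whole delay interval; pushing h toward the true stability margin makes finding feasible P, S, R progressively harder.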

Since LMIs (17.33) and (17.36) are affine in the system matrices, they can be applied to the case where these matrices are uncertain. We denote Ω = [A A1] and assume that Ω ∈ Co{Ωj, j = 1,…,N}, that is,

	Ω = Σ_{j=1}^{N} f_j Ω_j  for some 0 ≤ f_j ≤ 1,  Σ_{j=1}^{N} f_j = 1,	(17.38)

where the N vertices of the polytope are described by Ω_j = [A^(j) A1^(j)]. In the case of time-invariant uncertainty, one has to solve the LMI (17.33) or (17.36) simultaneously for all the N vertices, applying the same decision variables for all vertices. In the case of time-varying uncertainty with f_j = f_j(t), the conditions via the descriptor approach have advantages: one has to solve the descriptor-based LMI (17.36) simultaneously for all the N vertices, applying the same decision variables P2, P3 (since these matrices are multiplied by the system matrices) and different matrices P^(j), Q^(j), S^(j), R^(j), j = 1,…,N; in (17.33), P and R should be common and only S^(j) and Q^(j) may be different in the vertices.

Comparing the above two forms of delay-dependent conditions, we see that the first LMI (17.33) has less decision variables (which is an advantage). However, slack variables P2 and P3 of the descriptor method may lead to less conservative results in the analysis of uncertain systems and in design problems. The above conditions can be easily extended to systems with multiple delays.

Example 17.3: Consider the simple scalar equation

	ẋ(t) = −x(t − τ(t)).	(17.39)

By the first form of conditions (17.33), applying these conditions with d = 0, we find that the system is stable for (constant) τ ∈ [0, 1.41] (compared with the analytical result τ < 1.57). Applying the above LMIs with Q = 0, we conclude that the system is stable for all fast-varying delays τ(t) ∈ [0, 1.22] (compared with the analytical result τ(t) < 1.5). By verifying the feasibility of (17.36), we arrive at the same results for uniform asymptotic stability of (17.39).

We note that all delay-dependent conditions in terms of LMIs are sufficient only. However, these conditions can be improved: the presented method conservatively applies Jensen's inequality (17.29), where in the right-hand side there could be written 1/τ instead of its lower bound 1/h. Recent convex analysis of [21] has significantly improved the stability analysis of systems with time-varying delays (achieving in Example 17.3 the bound τ(t) ∈ [0, 1.33] instead of τ(t) ∈ [0, 1.22] with fast-varying delay). The improvements are usually achieved at the price of the LMI's complexity. Recent results analyze also the stability for interval (or non-small) delay τ(t) ∈ [h1, h2] with h1 > 0 (see, e.g., [7]).

A necessary condition for the application of simple Lyapunov–Krasovskii functionals is the asymptotic stability of (17.19) with τ = 0. Consider, e.g., the following system with constant delay:

	ẋ(t) = [ 0    1
	         −2   0.1 ] x(t) + [ 0  0
	                             1  0 ] x(t − h).	(17.40)

This system is unstable for h = 0 and is asymptotically stable for the constant delay h ∈ (0.1002, 1.7178) [9]. For analysis of such systems, simple Lyapunov functionals are not suitable. One can use a discretized method of Gu [8], which is based on the general Lyapunov–Krasovskii functional (17.20).
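The stability bounds discussed for the scalar equation ẋ(t) = −x(t − h) can be illustrated by direct simulation. A minimal fixed-step Euler sketch (the step size and horizon are ad hoc choices):

```python
import numpy as np

def simulate(h, T=60.0, dt=0.01):
    """Euler simulation of dx/dt = -x(t - h) with history x(t) = 1 for t <= 0."""
    m = int(round(h / dt))           # delay expressed in steps
    x = np.ones(m + 1)               # buffer holding x on [-h, 0]
    out = []
    for _ in range(int(T / dt)):
        x = np.append(x, x[-1] + dt * (-x[-(m + 1)]))  # x(t - h) is m steps back
        out.append(x[-1])
    return np.array(out)

stable = simulate(h=1.0)             # h < pi/2 ~ 1.5708: solution decays
unstable = simulate(h=2.0)           # h > pi/2: oscillation with growing amplitude
print(np.abs(stable[-500:]).max() < 0.1, np.abs(unstable[-500:]).max() > 5)
```

For h = 1 the rightmost characteristic roots have negative real part and the response dies out; for h = 2 a pair of roots has crossed into the right half-plane, so the simulated oscillation grows, consistent with the π/2 boundary for constant delay.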

17.5.3  Exponential Bounds and L2-Gain Analysis

System (17.19) is said to be exponentially stable with a decay rate δ > 0 if there exists a constant β ≥ 1 such that the exponential estimate

	|x(t)|² ≤ βe^{−2δ(t−t0)}||φ||_C²,  ∀t ≥ t0,	(17.41)

holds for the solution of (17.19) initialized with x_t0 = φ ∈ C. The (uniform) asymptotic stability of the linear system with (time-varying) uncertain coefficients or delays does not imply exponential stability. If an LTI system with known matrices and constant delay is asymptotically stable, then it is also exponentially stable. In any case, the Lyapunov–Krasovskii method gives sufficient conditions for finding the decay rate and the constant β in (17.41). Consider, e.g., the following functional [17]:

	V(t, x_t) = xᵀ(t)Px(t) + ∫_{t−τ(t)}^{t} e^{2δ(s−t)}xᵀ(s)Qx(s)ds,  P > 0, Q > 0.	(17.42)

For δ = 0, this functional coincides with (17.21). If along (17.19)

	V̇(t, x_t) + 2δV(t, x_t) ≤ 0,	(17.43)

then, by the comparison principle [10], it follows that V(t, x_t) ≤ V(t0, φ)e^{−2δ(t−t0)} ≤ be^{−2δ(t−t0)}||φ||_C², and (17.41) holds. Differentiating V we find

	V̇(t, x_t) + 2δV(t, x_t) ≤ 2xᵀ(t)Pẋ(t) + 2δxᵀ(t)Px(t) + xᵀ(t)Qx(t) − (1 − d)e^{−2δh}xᵀ(t − τ(t))Qx(t − τ(t)).	(17.44)

Substituting further the right-hand side of (17.19) for ẋ(t), we arrive to the following LMI, which guarantees (17.43):

	[ AᵀP + PA + 2δP + Q    PA1
	  A1ᵀP                  −(1 − d)e^{−2δh}Q ] < 0.	(17.45)

The latter LMI is delay independent when δ = 0. Modifying the delay-dependent Lyapunov–Krasovskii functional (17.26) by inserting e^{2δ(s−t)} into the integral terms, one can arrive at delay-dependent-based conditions for (17.41) with φ ∈ W and with ||φ||_W in the right-hand side instead of ||φ||_C.

Example 17.4: Consider [17]

	ẋ(t) = [ −4   1
	          0   0.1 ] x(t) + [ −4  0
	                              4  0 ] x(t − 0.5).	(17.46)

For d = 0 and δ = 1.15, it was found that LMI (17.45) is feasible and, thus, the system (17.46) is exponentially stable with the decay rate δ = 1.15. It was found in [17] that the rightmost characteristic root corresponding to (17.46) is −1.15. The latter means that the derived LMI condition leads to the analytic result in this example.

Consider next a linear perturbed system

	ẋ(t) = Ax(t) + A1x(t − τ(t)) + B1w(t),	(17.47)

where x(t) ∈ Rⁿ is the state vector and w(t) ∈ R^{nw} is the disturbance, with the controlled output

	z(t) = C0x(t) + C1x(t − τ(t)),  z(t) ∈ R^{nz}.	(17.48)

The matrices A, A1, B1, C0, and C1 are constant. For a prescribed γ > 0, we introduce the following performance index:

	J = ∫_{t0}^{∞} [zᵀ(t)z(t) − γ²wᵀ(t)w(t)]dt.	(17.49)

We say that (17.47) and (17.48) has L2-gain less than γ if J < 0 for all x(t) satisfying (17.47) and (17.48) with zero initial condition and for all 0 ≠ w(t) ∈ L2[t0, ∞). L2-gain analysis for linear TDS can be performed by using the Lyapunov–Krasovskii method (Razumikhin's approach seems to be not applicable to this problem). For a prechosen γ > 0, we seek conditions that will lead to J < 0: if along (17.47) and (17.48)

	V̇(t, x_t, ẋ_t) + zᵀ(t)z(t) − γ²wᵀ(t)w(t) < 0,  ∀w ≠ 0,	(17.50)

then (17.47) and (17.48) has L2-gain less than γ. Really, integration of (17.50) in t from t0 to ∞ and taking into account that V|_{t=t0} = 0 and V|_{t=∞} ≥ 0 leads to J < 0.

We shall formulate sufficient conditions for L2-gain analysis via the delay-dependent Lyapunov functional (17.26). Similar to the stability analysis via the descriptor method, we add to V̇ the right-hand side of the following expression:

	0 = 2[xᵀ(t)P2ᵀ + ẋᵀ(t)P3ᵀ][Ax(t) + A1x(t − τ(t)) + B1w(t) − ẋ(t)]	(17.51)

(with the additional term B1w(t) comparatively to (17.34)). Applying further the Schur complements formula to the term zᵀ(t)z(t) = [C0x(t) + C1x(t − τ(t))]ᵀ[C0x(t) + C1x(t − τ(t))], we find that (17.50) holds if

	[ Φd   col{P2ᵀB1, P3ᵀB1, 0, 0}   col{C0ᵀ, 0, 0, C1ᵀ}
	  *    −γ²I                      0
	  *    *                         −I                   ] < 0,	(17.52)

where Φd is given by (17.36). We note that if LMI (17.52) is feasible, then Φd < 0 and, thus, (17.47) with w = 0 is uniformly asymptotically stable.

17.6  Control Design for TDS

17.6.1  Predictor-Based Design

Consider the LTI system with the delayed input

	ẋ(t) = Ax(t) + Bu(t − h),	(17.53)

where the constant h > 0 and the constant system matrices are supposed to be known. We seek a stabilizing state-feedback controller. Assume that there exists a gain K such that A + BK is Hurwitz. Formally it means that u(t) = Kx(t + h) stabilizes (17.53). Then, integrating (17.53) with the initial condition x(t), we find

	x(t + h) = e^{Ah}x(t) + ∫_{t}^{t+h} e^{A(t+h−s)}Bu(s − h)ds.	(17.54)

Thus, the right-hand side of (17.54) predicts the value of x(t + h). After the change of variable ξ = s − t − h in the integral, we arrive to the following stabilizing controller:

	u(t) = K[ e^{Ah}x(t) + ∫_{−h}^{0} e^{−Aξ}Bu(t + ξ)dξ ].	(17.55)

This is the idea of the predictor-based controller introduced in [15], which reduces stabilization of TDS to that of its delay-free counterpart. The predictor-based design has been extended to the linear quadratic regulator for LTI systems [11] and to other control problems with known constant or even known time-varying delay [25]. However, this approach faces difficulties in the case of uncertain systems and delays. For uncertain systems, the LMI approach leads to efficient design algorithms.
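The predictor-based controller (17.55) is straightforward to implement by discretizing the integral over the stored control history. A scalar sketch (plain Python; the unstable plant ẋ(t) = x(t) + u(t − h) and the gain K = −2 are illustrative choices, not from the text):

```python
import math

a, b, h, K = 1.0, 1.0, 0.5, -2.0   # plant dx/dt = a*x + b*u(t - h); a + b*K = -1
dt, T = 0.002, 8.0
m = int(h / dt)                    # delay expressed in steps
u_hist = [0.0] * m                 # u(s) for s in [t - h, t), oldest first
x = 1.0                            # initial state

for _ in range(int(T / dt)):
    # predictor (17.55): x(t+h) = e^{ah} x(t) + int_{t-h}^{t} e^{a(t-s)} b u(s) ds
    integral = sum(math.exp(a * (m - j) * dt) * b * u_hist[j] * dt
                   for j in range(m))
    u = K * (math.exp(a * h) * x + integral)
    x += dt * (a * x + b * u_hist[0])   # the plant sees the delayed input u(t - h)
    u_hist = u_hist[1:] + [u]

print(abs(x) < 0.05)   # True: the delay-free design stabilizes the delayed plant
```

After an initial transient of length h (during which the stored inputs are zero and the open-loop plant grows), the closed loop behaves like ẋ = (a + bK)x = −x, so the state decays despite the input delay.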
17.6.2  LMI-Based Design

Consider the system

	ẋ(t) = Ax(t) + Bu(t − τ(t)),	(17.56)

where x(t) ∈ Rⁿ is the state vector, u(t) ∈ R^{nu} is the control input, and τ(t) is a time-varying delay, τ(t) ∈ [0, h]. We seek a stabilizing state feedback u(t) = Kx(t) that leads to a uniformly asymptotically stable closed-loop system (17.19) with A1 = BK. To find the unknown gain K, consider the inequalities (17.33) or (17.36) for delay-dependent stability with A1 = BK. We see that both of these inequalities are nonlinear: (17.33) because of the terms PBK, RBK, and (17.36) because of P2ᵀBK, P3ᵀBK. Then, either an iterative procedure can be used (see, e.g., [18]) or some transformation of the nonlinear matrix inequality, which linearizes it. To linearize (17.36), denote P3 = εP2, where ε is a scalar.

Further denote P̄2 = P2^{−1}, P̄ = P̄2ᵀPP̄2, Q̄ = P̄2ᵀQP̄2, S̄ = P̄2ᵀSP̄2, R̄ = P̄2ᵀRP̄2. We multiply (17.36) by diag{P̄2ᵀ, P̄2ᵀ, P̄2ᵀ, P̄2ᵀ} and its transpose, from the right and the left, respectively. Denoting Y = KP̄2, the following is obtained:

	[ Φ̄11   Φ̄12   0           BY + R̄
	  *     Φ̄22   0           εBY
	  *     *     −(S̄ + R̄)   R̄
	  *     *     *           −(1 − d)Q̄ − 2R̄ ] < 0,	(17.57)

where

	Φ̄11 = AP̄2 + P̄2ᵀAᵀ + S̄ + Q̄ − R̄,  Φ̄12 = P̄ − P̄2 + εP̄2ᵀAᵀ,  Φ̄22 = −εP̄2 − εP̄2ᵀ + h²R̄.	(17.58)

Given ε, (17.57) is an LMI with respect to P̄ > 0, S̄ ≥ 0, Q̄ ≥ 0, R̄ ≥ 0, P̄2, and Y; the state-feedback gain is then recovered as K = YP̄2^{−1}. Various design problems for systems with state and input/output delays (including observer-based design) can be solved by using LMIs (see, e.g., [24]).

17.7  On Discrete-Time TDS

Consider a linear discrete-time system

	x(k + 1) = Ax(k) + A1x(k − h),  k = 0, 1, 2,…,  x(k) ∈ Rⁿ, h ∈ N,	(17.59)

with constant matrices and constant delay h. If we apply the Lyapunov method to (17.59), x(k) can no longer be regarded as its state vector. It should be augmented by its finite history

	xaug(k) = col{x(k − h), x(k − h + 1),…, x(k)},	(17.60)

resulting in the following augmented system without delay:

	xaug(k + 1) = Aaug xaug(k),  xaug(k) ∈ R^{(h+1)n},	(17.61)

where

	Aaug = [ 0    In   …   0
	         …    …    …   …
	         0    0    …   In
	         A1   0    …   A  ].

Then a necessary and sufficient condition for asymptotic stability of (17.59) is the existence of a Lyapunov function Vaug(xaug(k)) = xaugᵀ(k)Paug xaug(k) with Paug > 0 such that along (17.61) the following holds: Vaug(xaug(k + 1)) − Vaug(xaug(k)) ≤ −α|xaug(k)|², α > 0. Thus, the stability analysis of (17.59) can be reduced to the analysis of the higher-order system without delay (17.61). As in the continuous-time case, the general Lyapunov function Vaug can be easily found by augmentation of the discrete-time LTI system.
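The augmented matrix Aaug of (17.61) is easy to form, and its spectral radius decides stability. A sketch (NumPy assumed; the matrices A, A1 below are illustrative stable data, not taken from the text):

```python
import numpy as np

def augmented(A, A1, h):
    """Build A_aug of (17.61) for x(k+1) = A x(k) + A1 x(k-h)."""
    n = A.shape[0]
    M = np.zeros(((h + 1) * n, (h + 1) * n))
    M[:-n, n:] = np.eye(h * n)      # shift register: x(k-j) <- x(k-j+1)
    M[-n:, :n] = A1                 # last block row: x(k+1) = A1 x(k-h) + A x(k)
    M[-n:, -n:] = A
    return M

A = np.array([[0.5, 0.1], [0.0, 0.6]])
A1 = np.array([[0.1, 0.0], [0.05, 0.1]])
rho = max(abs(np.linalg.eigvals(augmented(A, A1, 5))))
print(rho < 1)                      # True: asymptotically stable for h = 5
```

For these data the norms satisfy ||A|| + ||A1|| < 1, so stability holds for every h; the augmented matrix, however, grows with h, which is the "curse of dimensionality" mentioned in the text.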

Unlike the continuous-time case, such augmentation suffers from the curse of dimensionality if h is high. Application of the augmentation becomes complicated if the delay is unknown or/and time varying. Thus, e.g., for the linear system with time-varying delay

	x(k + 1) = Ax(k) + A1x(k − τk),  k = 0, 1, 2,…,  τk ∈ N, 0 ≤ τk ≤ h,	(17.62)

discrete-time counterparts of simple Lyapunov–Razumikhin functions or Lyapunov–Krasovskii functionals can be applied [5]. The technique for derivation of LMI conditions is then similar to the continuous-time case (see, e.g., [1]). To derive simple sufficient stability conditions, the discrete-time counterpart of (17.26) for delay-dependent analysis has the form

	V(x(k)) = xᵀ(k)Px(k) + h Σ_{m=−h}^{−1} Σ_{j=k+m}^{k−1} yᵀ(j)Ry(j) + Σ_{j=k−h}^{k−1} xᵀ(j)Sx(j),	(17.63)

where y(k) = x(k + 1) − x(k), P > 0, S ≥ 0, R ≥ 0. We have

	V(x(k + 1)) − V(x(k)) = 2xᵀ(k)Py(k) + yᵀ(k)Py(k) + xᵀ(k)Sx(k) − xᵀ(k − h)Sx(k − h) + h²yᵀ(k)Ry(k) − h Σ_{j=k−h}^{k−1} yᵀ(j)Ry(j).	(17.64)

We employ the representation

	−h Σ_{j=k−h}^{k−1} yᵀ(j)Ry(j) = −h Σ_{j=k−h}^{k−τk−1} yᵀ(j)Ry(j) − h Σ_{j=k−τk}^{k−1} yᵀ(j)Ry(j)

and apply further Jensen's (in fact, this is a particular case of the Cauchy–Schwartz) inequality:

	h Σ_{j=k−h}^{k−τk−1} yᵀ(j)Ry(j) ≥ [Σ_{j=k−h}^{k−τk−1} y(j)]ᵀ R [Σ_{j=k−h}^{k−τk−1} y(j)],
	h Σ_{j=k−τk}^{k−1} yᵀ(j)Ry(j) ≥ [Σ_{j=k−τk}^{k−1} y(j)]ᵀ R [Σ_{j=k−τk}^{k−1} y(j)].

Then, by the descriptor method, the right-hand side of the expression

	0 = 2[xᵀ(k)P2ᵀ + yᵀ(k)P3ᵀ][(A − I)x(k) + A1x(k − τk) − y(k)],	(17.65)

with some P2, P3 ∈ Rⁿˣⁿ, is added into the right-hand side of (17.64). Setting ηdis(k) = col{x(k), y(k), x(k − h), x(k − τk)}, we obtain that

	V(x(k + 1)) − V(x(k)) ≤ ηdisᵀ(k)Φdis ηdis(k) ≤ 0	(17.66)

is satisfied if the following LMI

	Φdis = [ Φ11   Φ12   0          P2ᵀA1 + R
	         *     Φ22   0          P3ᵀA1
	         *     *     −(S + R)   R
	         *     *     *          −2R      ] < 0	(17.67)

is feasible, where

	Φ11 = (A − I)ᵀP2 + P2ᵀ(A − I) + S − R,  Φ12 = P − P2ᵀ + (A − I)ᵀP3,  Φ22 = −P3 − P3ᵀ + P + h²R.	(17.68)

Example 17.5: We consider

	x(k + 1) = [ 0.8   0
	             0     0.97 ] x(k) + [ −0.1   0
	                                   −0.1  −0.1 ] x(k − τk).	(17.69)

For the constant τk ≡ h, by using the augmentation, it is found that the system is asymptotically stable for all h ≤ 18 (since the eigenvalues of the resulting Aaug are inside the unit circle). For time-varying τk, by verifying the feasibility of (17.67), we find that the system is asymptotically stable for all 0 ≤ τk ≤ 12.

References

1. Chen, W.-H., Guan, Z.-H., and Lu, X., Delay-dependent guaranteed cost control for uncertain discrete-time systems with delay, IEE Proceedings Control Theory and Applications, 150, 412–416, 2003.
2. El'sgol'ts, L.E. and Norkin, S.B., Introduction to the Theory and Applications of Differential Equations with Deviating Arguments, Mathematics in Science and Engineering, Vol. 105, Academic Press, New York, 1973.
3. Fridman, E., New Lyapunov-Krasovskii functionals for stability of linear retarded and neutral type systems, Systems & Control Letters, 43, 309–319, 2001.
4. Fridman, E. and Shaked, U., Delay dependent stability and H∞ control: Constant and time-varying delays, International Journal of Control, 76(1), 48–60, 2003.
5. Fridman, E. and Shaked, U., Stability and guaranteed cost control of uncertain discrete delay systems, International Journal of Control, 78(4), 235–246, 2005.
6. Hale, J.K. and Lunel, S.M.V., Introduction to Functional Differential Equations, Springer-Verlag, New York, 1993.
7. He, Y., Wang, Q.G., Lin, C., and Wu, M., Delay-range-dependent stability for systems with time-varying delay, Automatica, 43, 371–376, 2007.
8. Gu, K., Discretized LMI set in the stability problem of linear time delay systems, International Journal of Control, 68, 923–934, 1997.

9. Gu, K., Kharitonov, V.L., and Chen, J., Stability of Time-Delay Systems, Birkhauser, Boston, MA, 2003.
10. Khalil, H.K., Nonlinear Systems, 3rd edn., Prentice Hall, Englewood Cliffs, NJ, 2002.
11. Kolmanovskii, V. and Myshkis, A., Applied Theory of Functional Differential Equations, Kluwer, Dordrecht, the Netherlands, 1999.
12. Kolmanovskii, V. and Richard, J.-P., Stability of some linear systems with delays, IEEE Transactions on Automatic Control, 44, 984–989, 1999.
13. Krasovskii, N.N., Stability of Motion, Stanford University Press, Stanford, CA, 1963 (translation of the Russian edition, Moscow, 1959 [Russian]).
14. Li, X. and de Souza, C., Criteria for robust stability and stabilization of uncertain linear systems with state delay, Automatica, 33, 1657–1662, 1997.
15. Manitius, A. and Olbrot, A.W., Finite spectrum assignment problem for systems with delay, IEEE Transactions on Automatic Control, 24, 541–553, 1979.
16. Mikheev, Y., Sobolev, V., and Fridman, E., Asymptotic analysis of digital control systems, Automatic and Remote Control, 49, 1175–1180, 1988.
17. Mondie, S. and Kharitonov, V., Exponential estimates for retarded time-delay systems, IEEE Transactions on Automatic Control, 50(5), 268–273, 2005.
18. Moon, Y.S., Park, P., Kwon, W.H., and Lee, Y.S., Delay-dependent robust stabilization of uncertain state-delayed systems, International Journal of Control, 74, 1447–1455, 2001.
19. Niculescu, S.-I., Delay Effects on Stability: A Robust Control Approach, Lecture Notes in Control and Information Sciences, Vol. 269, Springer-Verlag, London, U.K., 2001.
20. Park, P., A delay-dependent stability criterion for systems with uncertain time-invariant delays, IEEE Transactions on Automatic Control, 44, 876–877, 1999.
21. Park, P. and Ko, J.W., Stability and robust stability for systems with a time-varying delay, Automatica, 43, 1855–1858, 2007.
22. Razumikhin, B.S., On the stability of systems with a delay, Prikladnaya Mathematika i Mechanika, 20, 500–512, 1956 [Russian].
23. Richard, J.-P., Time-delay systems: An overview of some recent advances and open problems, Automatica, 39, 1667–1694, 2003.
24. Suplin, V., Fridman, E., and Shaked, U., Sampled-data H∞ control and filtering: Nonuniform uncertain sampling, Automatica, 43, 1072–1083, 2007.
25. Witrant, E., Canudas-de-Wit, C., Georges, D., and Alamir, M., On the use of state predictors in networked control systems, in Applications of Time-Delay Systems, Chiasson, J. and Loiseau, J.J., Eds., Springer, Berlin, Germany, 2007.

18  AC Servo Systems

Yong Feng, RMIT University
Liuping Wang, RMIT University
Xinghuo Yu, RMIT University

18.1  Introduction .................................................................................. 18-1
18.2  Control System of PMSM ............................................................ 18-1
      System Structure  •  Model of PSMS  •  PID Control of PSMS  •  High-Order TSM Control of PMSM  •  Simulations
18.3  Observer for Rotor Position and Speed of PMSMs .................. 18-7
      Conventional Sliding-Mode Observer  •  Hybrid TSM Observer
18.4  Control of Induction Motors ..................................................... 18-10
      Scalar Control  •  Direct Torque Control  •  Field-Oriented Control
18.5  Conclusions ................................................................................. 18-15
References .............................................................................................. 18-15

18.1  Introduction

AC motors and AC servo systems have been playing more and more important roles in factory automation, CNC machines, robots, computers, household electrical appliances, high-speed aerospace drives, and high-technology tools used for outer space in the past decades because of their good performances, such as high power density, high efficiency, fast dynamics, and good compatibility [ZAATS06,XVT03]. Recently, due to the rapid improvement in power electronics and microprocessors, the performances of AC servo systems have reached and even exceeded those of traditional DC servo systems. AC motors include mainly permanent magnet synchronous motors (PMSMs) and induction motors (IMs). Both are multivariable, strongly coupled, and uncertain nonlinear systems, in which uncertainties including internal parameter perturbations and external load disturbances exist. So far, many methods can improve the robustness and the dynamic performance of the system, such as the adaptive control, the fuzzy control, artificial neural networks, and the active disturbance rejection control [JL09,RH98,KT07,SZD05,JSHIS03]. Among them, the sliding-mode control method does not request a highly accurate mathematical model and is strongly robust with respect to system parameter perturbations and external disturbances. This chapter will introduce the control of AC motors, including PMSMs and IMs.

18.2  Control System of PMSM

18.2.1  System Structure

A typical control system of a PMSM is shown in Figure 18.1. Generally, there are three controllers in the system: the position controller, the velocity controller, and the current controller, with a filter in the velocity loop. The task of the control system design for a PMSM is to design the three controllers, which are used in the three closed loops, respectively. The design process of the three controllers is usually from the inner loop to the outer loop, that is, from the current controller to the position controller.
Figure 18.1  A typical control system of a PMSM.

The parameters of the current controller mainly depend on the system performance requirements and the physical parameters of the motor, such as the inductance and resistors, which can be readily measured. The design of the velocity controller is more challenging, mainly since mechanical resonances, backlash, and friction influence the performance and the stability of the velocity loop. The filter in the velocity loop is used to suppress mechanical resonance in the servo systems [ZF08]. Compared to the velocity controller, the design tasks for the current controller and the position controller are relatively easy. After completing the current controller and the velocity controller, the position controller can be designed in a relatively easy manner because the controlled plant has been well regulated to a second-order system.

18.2.2  Model of PSMS

Assume that the magnetic flux is not saturated, the magnetic field is sinusoidal, and the influence of the magnetic hysteresis is negligible. Taking the rotor coordinates (d-q axes) of the motor as the reference coordinates, a PMSM system can be described as follows [ED00,SZD05]:

	di_d/dt = −(Rs/L)i_d + pωi_q + u_d/L,
	di_q/dt = −pωi_d − (Rs/L)i_q − (pψf/L)ω + u_q/L,
	dω/dt = (1.5pψf/J)i_q − (B/J)ω − T_L/J,
	dθ/dt = ω,	(18.1)

where u_d and u_q are the d-q axes stator voltages, i_d and i_q are the d-q axes stator currents, L is the d-q axes inductance, Rs is the stator winding resistance, ψf is the flux linkage of the permanent magnets, p is the number of pole pairs, T_L is the load torque, J is the moment of inertia, B is the viscous friction coefficient, and θ and ω are the angular position and the velocity of the motor, respectively.

18.2.3  PID Control of PSMS

The vector-control method and PI controllers are widely used in PMSM speed-control systems. According to the vector-control principle, the three-phase currents of the motor stator are transformed into the two rotor coordinates (d-q axes) of the motor. In order to realize the maximization of the driving torque, the phase angle between the stator flux and the rotor flux should be maintained at 90 electrical degrees, which can be realized by setting the d-axis current reference at zero. The structure of the PMSM vector-control system is shown in Figure 18.2. In practical applications, most PMSM systems utilize PID controllers till now. In order to overcome the integral saturation of the PID controller, the anti-reset windup method can be used [S98]. The speed controller is shown in Figure 18.3 and expressed by the following equation:

	i_q* = k_p e_ω + (k_p/(τ_i s)) [ e_ω − (1/k_c)(i_q* − i_q^r) ],	(18.2)

where e_ω = ω* − ω is the speed error, i_q^r is the limited (saturated) current command, k_p and τ_i are the proportional gain and the integral time constant, and k_c is the anti-windup feedback gain.

Figure 18.2  A PMSM vector control system based on PI controllers.

Figure 18.3  A PI controller using the anti-reset windup method.
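The back-calculation anti-reset windup scheme of Figure 18.3 can be sketched as a few lines of discrete-time code (plain Python; the gains, the limits, and the helper name make_pi are illustrative assumptions, not from the text):

```python
def make_pi(kp, ti, kc, dt, u_min, u_max):
    """Discrete PI controller with back-calculation anti-reset windup:
    the integrator input is e - (1/kc)*(u - u_sat), so the integral
    state bleeds off while the command is saturated (cf. Figure 18.3)."""
    state = {"i": 0.0}
    def step(e):
        u = kp * e + state["i"]                       # raw PI output
        u_sat = min(max(u, u_min), u_max)             # current limit
        state["i"] += dt * (kp / ti) * (e - (u - u_sat) / kc)
        return u_sat
    return step

pi = make_pi(kp=2.0, ti=0.05, kc=1.0, dt=1e-4, u_min=-10.0, u_max=10.0)
u = pi(6.0)        # large speed error: kp*e = 12 exceeds the limit
print(u == 10.0)   # True: output clamps, while the integrator stays bounded
```

Without the (1/k_c)(u − u_sat) correction, a long saturation episode would wind the integrator far past the limit and cause a large overshoot when the error changes sign.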

18.2.4  High-Order TSM Control of PMSM

Besides PID control, many other control methods have been proposed for the control of the PMSM motor [BKY00,ES98]. Here, the high-order TSM control of the PMSM is introduced. The structure of the control system is shown in Figure 18.4.

Figure 18.4  High-order TSM control system of a PMSM.

1. Velocity controller design [ZFY08]

  1. An NTSM manifold is chosen as follows [FYM02]:

    l_ω = e_ω + γ_1 (de_ω/dt)^(p1/q1)      (18.3)

  where e_ω = ω* - ω is the motor speed error, ω* is the desired motor speed, γ_1 > 0, p_1 and q_1 are odd numbers, and 1 < p_1/q_1 < 2.

  2. The control strategies are chosen as follows:

    i_q* = i_qeq + i_qn      (18.4)
    i_qeq = (J/(1.5 p ψ_f)) ( dω*/dt + (B/J) ω )      (18.5)
    i_qn = (J/(1.5 p ψ_f)) ∫_0^t [ (q_1/(γ_1 p_1)) (de_ω/dτ)^(2-p1/q1) + (k_1 + η_10) sgn(l_ω) + η_11 l_ω ] dτ      (18.6)

  where k_1 > |dT_L/dt|/J, and η_10 > 0 and η_11 > 0 are two design parameters. The speed error of the motor under the control (18.6) will converge to zero in finite time.

In order to overcome the integral saturation in the control (18.6), the anti-reset windup method can be used, which is shown in Figure 18.5 and expressed by the following equation:

    i_qn = (J/(1.5 p ψ_f)) ∫_0^t [ (q_1/(γ_1 p_1)) (de_ω/dτ)^(2-p1/q1) + (k_1 + η_10) sgn(l_ω) + η_11 l_ω - k_ωm (i_q* - i_q^r) ] dτ      (18.7)

where k_ωm > 0 is the compensation coefficient.
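Because p_1 and q_1 in (18.3) through (18.7) are odd integers, the fractional powers are sign-preserving: a negative error raised to p_1/q_1 stays negative rather than becoming complex. A small helper makes this explicit; p_1 = 7 and q_1 = 5 follow the chapter's simulation section, while γ_1 here is an illustrative value.

```python
# Sign-preserving odd-ratio power and the NTSM manifold of Eq. (18.3).
# p1 = 7, q1 = 5 are the chapter's values; gamma is illustrative.
def frac_pow(x, p, q):
    """x^(p/q) with p, q odd integers: sgn(x) * |x|^(p/q)."""
    return (abs(x) ** (p / q)) * (1.0 if x >= 0 else -1.0)

def ntsm_manifold(e, e_dot, gamma=0.01, p1=7, q1=5):
    """NTSM manifold l_w = e + gamma * (de/dt)^(p1/q1)."""
    return e + gamma * frac_pow(e_dot, p1, q1)

print(frac_pow(-8.0, 1, 3))        # approximately -2.0 (real-valued cube root)
print(ntsm_manifold(0.0, 0.0))     # 0.0 at the origin of the (e, de/dt) plane
```

Writing the power this way is what allows the reaching law to drive the error to the manifold l_ω = 0 from either sign of the error.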

Figure 18.5  High-order TSM speed controller using the anti-reset windup method.

2. q-Axis current controller design [ZFY08]

  1. An NTSM manifold is chosen as follows:

    s_q = e_q + γ_2 (de_q/dt)^(p2/q2)      (18.8)

  where γ_2 > 0, p_2 and q_2 are odd numbers, 1 < p_2/q_2 < 2, e_q = i_q* - i_q is the q-axis current error, and i_q* is the desired q-axis current.

  2. The control law is designed as follows:

    u_q = u_qeq + u_qn      (18.9)
    u_qeq = L di_q*/dt + L p ω i_d + R_s i_q + p ψ_f ω      (18.10)
    u_qn = L ∫_0^t [ (q_2/(γ_2 p_2)) (de_q/dτ)^(2-p2/q2) + k_20 sgn(s_q) + k_21 s_q ] dτ      (18.11)

  where k_20 > 0 and k_21 > 0 are design parameters. The q-axis current controller can guarantee that the q-axis current converges to its desired reference in finite time.

3. d-Axis current controller design [ZFY08]

  1. An NTSM manifold is chosen as follows:

    s_d = e_d + γ_3 (de_d/dt)^(p3/q3)      (18.12)

  where γ_3 > 0, p_3 and q_3 are odd numbers, 1 < p_3/q_3 < 2, and e_d = i_d* - i_d = -i_d is the d-axis current error, with i_d* = 0 the desired current.

  2. The control law is designed as follows:

    u_d = u_deq + u_dn      (18.13)
    u_deq = -L p ω i_q + R_s i_d      (18.14)
    u_dn = L ∫_0^t [ (q_3/(γ_3 p_3)) (de_d/dτ)^(2-p3/q3) + k_3 sgn(s_d) ] dτ      (18.15)

  where k_3 > 0 is a design parameter. The d-axis current controller can guarantee that the d-axis current converges to zero in finite time.

18.2.5  Simulations

The parameters of the PMSM are P_N = 1.5 kW, U_N = 380 V, I_N = 3.5 A, n_N = 1000 rpm, R_s = 2.875 Ω, L = 33 mH, J = 0.011 kg m², B = 0.002 N m s, p = 3, together with the magnet flux linkage ψ_f (Wb). The parameters of the high-order TSM controller are designed as follows: p_1 = 7, q_1 = 5, p_2 = 5, q_2 = 3, p_3 = 5, q_3 = 3, k_1 = 910, η_10 = 90, η_11 = 5000, k_20 = 200, and k_ωm = 500, with small positive values for γ_1, γ_2, γ_3, k_21, k_3, and τ_0. The q-axis current limit is I_qm = 4 A. The simulation results of the two methods are shown in Figure 18.6.

Figure 18.6  Simulation results when i_q is saturated: (a) stator currents, (b) control voltages, (c) phase diagram of e_ω and de_ω/dt, and (d) motor speeds.

18.3  Observer for Rotor Position and Speed of PMSMs

In order to achieve high performance in PMSMs, a vector-control strategy and a rotor position sensor are typically required. Usually, both the position and the speed of the motor are measured by mechanical sensors, such as encoders, resolvers, and so on. However, these high-precision sensors increase the cost and the size of the PMSM system, reduce the reliability, and limit the applications under certain conditions. The sensorless technique enables control of the PMSM without mechanical sensors, which can reduce the cost of the hardware and improve the reliability of the motor system. The position sensor in the motor system is replaced by a software algorithm, with the rotor position and the speed estimated from the stator voltage and the current of the motor.

18.3.1  Conventional Sliding-Mode Observer

A typical structure of a sliding-mode-observer-based PMSM vector-control system is shown in Figure 18.7 [UGS99,PK06,KKHK04]. In the system, a sliding-mode observer is utilized instead of the mechanical position sensor of the motor. The stator voltage and the current measurements of the motor are used to estimate the rotor position and the speed using the observer based on the PMSM model. Then, the estimated position and speed can be used for the motor position or speed control and for the coordinate transformation between the stationary reference frame and the synchronous rotating frame.

Figure 18.7  Sliding mode observer based on a PMSM vector-control system.

In the α-β stationary reference frame, the electrical dynamics of the PMSM can be described using the following equations [UGS99,LA06,BTZ03]:

  di_α/dt = (-R_s i_α - e_α + u_α)/L
  di_β/dt = (-R_s i_β - e_β + u_β)/L      (18.16)

where u_α and u_β are the stator voltages, i_α and i_β are the stator currents, L is the α-β axes inductance, R_s is the stator winding resistance, and e_α and e_β are back-EMFs that satisfy the following equations:

  e_α = -ψ_f ω sin θ
  e_β = ψ_f ω cos θ      (18.17)

where ψ_f is the magnet flux linkage of the motor, ω is the rotor angular speed, and θ is the rotor angular position. It can be seen from Equation 18.17 that the back-EMFs contain the information of both the rotor position and the speed of the motor. Therefore, if the back-EMFs can be obtained accurately, the rotor position and the speed can be calculated from Equation 18.17.

The conventional sliding-mode observer can be designed as follows [LE02]:

  dî_α/dt = (-R̂_s î_α + u_α + u_1)/L̂
  dî_β/dt = (-R̂_s î_β + u_β + u_2)/L̂      (18.18)

where î_α and î_β are the two estimated stator current signals, R̂_s and L̂ are the estimated values of R_s and L, and u_1 and u_2 are the two control inputs of the observer. Assume that the parameters satisfy R̂_s = R_s and L̂ = L. Then, the error equations of the stator current can be obtained by subtracting Equation 18.16 from Equation 18.18:

  dī_α/dt = (-R̂_s ī_α + e_α + u_1)/L̂
  dī_β/dt = (-R̂_s ī_β + e_β + u_2)/L̂      (18.19)

where ī_α = î_α - i_α and ī_β = î_β - i_β are the two stator current errors. The conventional sliding-mode observer for the rotor position and the speed is shown in Figure 18.8.

18.3.2  Hybrid TSM Observer

Besides the conventional sliding-mode observer, several other observers for estimating the motor rotor position and speed have been proposed. Here, a hybrid TSM observer is introduced [FZYT09]. In this method, the low-pass filter is omitted, so the phase lag in the back-EMF signals can be avoided. Moreover, the chattering in the control signals can be eliminated using the high-order sliding-mode technique [L03].
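Whichever observer provides the back-EMF estimates, Equation 18.17 lets the rotor position and speed be recovered algebraically. The sketch below inverts (18.17) on a synthetic pair of back-EMFs; the flux-linkage value is an illustrative assumption.

```python
import math

# Recovering rotor position and speed from back-EMFs, per Eq. (18.17):
# e_a = -psi_f*w*sin(theta), e_b = psi_f*w*cos(theta).
psi_f = 0.3                       # magnet flux linkage [Wb] -- assumed value

def rotor_angle_speed(e_a, e_b):
    theta = math.atan2(-e_a, e_b)            # arctan(-e_a / e_b), quadrant-safe
    omega = math.hypot(e_a, e_b) / psi_f     # |omega| from the EMF magnitude
    return theta, omega

# Synthetic back-EMFs for theta = 0.4 rad, omega = 120 rad/s:
th, w = 0.4, 120.0
e_a = -psi_f * w * math.sin(th)
e_b = psi_f * w * math.cos(th)
print(rotor_angle_speed(e_a, e_b))   # recovers theta = 0.4 and omega = 120
```

Using atan2 rather than a plain arctangent keeps the angle estimate valid in all four quadrants, which matters once the motor reverses.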

Figure 18.8  Conventional sliding mode observer for rotor position and speed.

The stator current error Equation 18.19 can be rewritten in vector form as follows:

  dī_s/dt = (-R̂_s ī_s + ξ + u)/L̂      (18.20)

where ī_s = [ī_α, ī_β]^T is the error vector of the stator currents, ξ = [e_α, e_β]^T is the back-EMF vector, and u = [u_1, u_2]^T is the control of the observer. The observer controller can be designed as follows:

1. An NTSM manifold is chosen as follows:

    s = ī_s + γ (dī_s/dt)^(p/q)      (18.21)

  where s = [s_α, s_β]^T, γ = diag(γ_α, γ_β) with γ_α > 0 and γ_β > 0, p and q are odd numbers with 1 < p/q < 2, and the fractional power is taken elementwise:

    (dī_s/dt)^(p/q) = [ (dī_α/dt)^(p/q), (dī_β/dt)^(p/q) ]^T      (18.22)

2. The control law is designed as

    u = u_eq + u_n      (18.23)
    u_eq = R̂_s ī_s      (18.24)
    u_n = - ∫_0^t [ (L̂ q/p) γ^(-1) (dī_s/dτ)^(2-p/q) + (k′ + η) sgn(s) + μ s ] dτ      (18.25)

  where sgn(s) = [sgn(s_α), sgn(s_β)]^T, k′ > ||dξ/dt||, and η > 0, μ > 0 are the design parameters.

According to the equivalent control method [PK06], when the system (18.20) stays on the sliding mode, the dynamics of the system satisfy ī_s = dī_s/dt = 0, and it can be obtained from Equation 18.20 that

  ξ̂ = [ê_α, ê_β]^T = -u      (18.26)

which can be used to estimate the back-EMFs directly. Because the high-order sliding-mode (HOSM) control law (18.23) through (18.25) is used, the control u is continuous and smooth. The estimates of ω and θ can then be obtained from Equation 18.17 as follows:

  θ̂ = arctan( -ê_α / ê_β )      (18.27)
  ω̂ = (1/ψ_f) sqrt( ê_α² + ê_β² )      (18.28)

with the sign of ω̂ recovered from sgn(ê_β cos θ̂ - ê_α sin θ̂). The structure of the proposed observer is described in Figure 18.9. The phase lag caused by the low-pass filter can be avoided effectively by omitting the low-pass filter in Figure 18.9.

Figure 18.9  Hybrid TSM observer.

18.4  Control of Induction Motors

An induction motor is a type of asynchronous AC motor that is designed to operate from a three-phase source of alternating voltage. The induction motor has many advantages, such as an intrinsically simple and rugged structure, high reliability, low manufacturing cost, high efficiency, robustness, and a wide speed range. Recently, due to the rapid improvements in power electronics and microprocessors, advanced control methods have made the induction motor suitable for high-performance applications, and more and more researchers have been attracted to induction motor control in the field of electric motor drives. Many control methods for IMs have been proposed, among which the three important ones are the scalar control, the direct torque control (DTC), and the field-oriented control.

18.4.1  Scalar Control

The simplest scalar control of the induction motor is the constant Volts/Hertz (CVH) method, whose principle is to keep the stator flux constant by controlling both the amplitude and the frequency of the stator voltage, according to the following equation [T01]:

  Λ_s ≈ V_s/ω = V_s/(2πf)      (18.29)

where Λ_s denotes the stator flux, V_s is the stator voltage, f is the frequency of the stator voltage, and ω is the angular velocity of the motor. The CVH method is generally called the variable-voltage variable-frequency (VVVF) control, or simply the VF control. The VF control can be used as an open-loop or a closed-loop control of IMs and is suitable for applications without position-control requirements or the need for high accuracy of the speed control, such as in pumps, fans, and blowers. The main advantages of the VF control are that it is very simple and of low cost; there are many VVVF products on the market. The main drawbacks are that field orientation is not used, the motor status is ignored, and the motor torque is not controlled.

An example of the VF-based closed-loop speed control of an induction motor is shown in Figure 18.10. The inverter in Figure 18.10 is a typical VVVF. The control principle can be described using the following equation:

  ω_1 = ω + ω_s      (18.30)

where ω is the actual velocity of the induction motor, ω_1 is the setpoint of the output frequency of the inverter, and ω_s is the slip velocity, which can be expressed by

  ω_s = ω* - ω      (18.31)

where ω* is the desired motor velocity.

Figure 18.10  VF-based closed-loop speed control of an induction motor. (From Feng, Y. et al., Asynchronous motor closed-loop speed control using frequency converter (in Chinese), Basic Automation, No. 2, 1994.)
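The constant-V/Hz idea of (18.29) reduces to a simple command law: hold V_s/f fixed up to the rated point, usually with a small boost at low frequency to cover the resistive drop. The rated values and boost below are illustrative assumptions, not values from the chapter.

```python
# Minimal constant-V/Hz command generator in the spirit of Eq. (18.29).
# Rated values and the low-frequency boost are illustrative assumptions.
V_RATED, F_RATED = 380.0, 50.0     # rated stator voltage [V] and frequency [Hz]
V_BOOST = 15.0                     # boost to cover the Rs*Is drop at low frequency

def vf_command(f):
    """Stator voltage amplitude for a commanded frequency f."""
    f = max(0.0, min(f, F_RATED))                 # clamp to the rated range
    return min(V_RATED, V_BOOST + (V_RATED - V_BOOST) * f / F_RATED)

print(vf_command(25.0))   # roughly half of rated voltage
print(vf_command(50.0))   # clamped at rated voltage, 380 V
```

Above rated frequency the voltage cannot rise further, so the flux, and with it the available torque, falls; this is the field-weakening region of a VF drive.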

The amplifier U1 in Figure 18.10 is used as a slip controller, which is a conventional PI controller described by

  -ω_sL = (1 + 1/(0.18 s)) (ω - ω*)      (18.32)

Another amplifier, U2, in Figure 18.10 is used to perform the addition

  -ω_1 = ω_sL - ω      (18.33)

18.4.2  Direct Torque Control

In recent years, the application fields of flux and torque decoupling control of induction machines have greatly increased in the areas of traction, the paper and steel industries, and so on. The DTC is one of the actively researched control schemes and is based on the decoupled control of flux and torque. DTC provides a very quick and precise torque response without the complex field-orientation block and the inner-current regulation loop [HP92,T96,KK95]. The closed-loop direct torque speed-control system is shown in Figure 18.11. The torque and stator flux calculator is used to estimate the torque and the stator flux from the stator voltages and currents in the α-β coordinates. The stator flux in α-β coordinates can be expressed as follows [T01]:

  ψ_sα = ∫ (u_sα - R_s i_sα) dt
  ψ_sβ = ∫ (u_sβ - R_s i_sβ) dt      (18.34)

and the amplitude and the phase angle are

  |ψ_s| = sqrt( ψ_sα² + ψ_sβ² )
  θ_s = arctan( ψ_sβ / ψ_sα )      (18.35)

Figure 18.11  Closed-loop direct torque speed-control system.

where R_s is the stator resistance and u_sα, u_sβ and i_sα, i_sβ are the stator voltages and currents in the α-β coordinates. The motor torque can be calculated by using the following equations:

  T_e = n_p ( i_sβ ψ_sα - i_sα ψ_sβ )      (18.36)

or

  T_e = n_p L_m ( i_sβ i_rα - i_sα i_rβ )      (18.37)

where n_p is the number of pole pairs and L_m is the magnetizing inductance.

The magnetization block in Figure 18.11 is used to create the stator flux. When the power is on, the magnetization blocks the motor speed reference until the stator flux reaches its desired value; before that, the switching table outputs v_1 and v_7 alternatively at a constant frequency. The switching table utilizes the eight possible stator voltage vectors, v_0, v_1, …, v_7, to control the stator flux and the torque so that they follow the reference values within the hysteresis bands. The block flux sector seeker is used to find the sector in which the stator flux lies. The block switching control limits the switching frequency of the six IGBTs so that it does not exceed the maximum frequency.

18.4.3  Field-Oriented Control

In 1972, Blaschke first developed the field-orientation control (FOC) method, which is a torque-flux decoupling technique applied to induction machine control [PTM99,T94,PW91]. FOC is a well-established and widely applied method, which can realize high-performance induction motor control. The control variables are i_ds and i_qs, the components of the stator currents represented by a vector in a rotating reference frame aligned to the rotor flux space vector (with a d-q coordinate system). In this frame, the rotor speed is asymptotically decoupled from the rotor flux, like in a DC motor, and the speed depends linearly on the torque current [KK95,MPR02]. However, the performance will be degraded by motor parameter variations or unknown external disturbances [R96].

FOC can be divided into two kinds of methods: direct field-oriented control (DFOC) and indirect field-oriented control (IFOC). In the former, the rotor flux is measured using sensors; in the latter, the rotor flux is estimated using algorithms. A typical block diagram of the FOC-based induction motor control system is shown in Figure 18.12. For the FOC technique, the information on the rotor flux is required. For the DFOC in Figure 18.12a, sensors such as Hall-effect sensors are used to measure the air-gap flux components λ_dm, λ_qm in the d-q coordinates, from which the rotor flux can be calculated. For the IFOC in Figure 18.12b, the flux calculators are used to estimate the amplitude of the rotor flux for the flux closed-loop control and the position (angle) of the rotor flux for the DQ/dq transformation. In Figure 18.12, the rotor flux can be calculated via the following steps:

Step 1: Stator flux vector [T01]

  ψ_s = ∫ (u_s - R_s i_s) dt      (18.38)
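Equations (18.34) and (18.38) are the same integral, so a single discrete flux estimator serves both the DTC scheme and the FOC flux calculation; the DTC switching table additionally needs the 60-degree sector of the flux angle, and the FOC path continues with the air-gap and rotor-flux steps given below. All motor constants in this sketch are illustrative placeholders, not the chapter's values.

```python
import math

# Discrete sketch of the flux computations of (18.34)/(18.35)/(18.38) and
# the following Steps 2-3. Motor constants are illustrative placeholders.
Rs, L1s, Lm, L1r = 2.0, 5e-3, 0.2, 5e-3
Lr = L1r + Lm
dt = 1e-4

def flux_integrate(psi, u, i):
    """One Euler step of psi = integral of (u - Rs*i) dt, per axis."""
    return psi + (u - Rs * i) * dt

def flux_polar(psi_a, psi_b):
    """Amplitude and angle of the stator flux, Eq. (18.35)."""
    return math.hypot(psi_a, psi_b), math.atan2(psi_b, psi_a)

def sector(theta_s):
    """Index 0..5 of the 60-degree sector containing the flux angle."""
    return int((theta_s % (2 * math.pi)) // (math.pi / 3))

def rotor_flux(psi_a, psi_b, i_a, i_b):
    """Steps 2-3: air-gap flux, then rotor flux (per axis)."""
    pm = (psi_a - L1s * i_a, psi_b - L1s * i_b)
    pr = ((Lr / Lm) * (pm[0] - L1s * i_a), (Lr / Lm) * (pm[1] - L1s * i_b))
    return pm, pr

psi_a = flux_integrate(0.0, 100.0, 5.0)   # one alpha-axis step: (100 - 10)*dt
amp, ang = flux_polar(0.5, 0.5)           # 45-degree flux angle
print(psi_a, sector(ang))                 # 0.009 and sector index 0
```

In a drive, the pure integrator in Step 1 is usually replaced by a filtered integrator to limit DC-offset drift, which is the issue addressed in [LA06].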

b.b. LLC .b ia.c Induction motor Encoder Magnetization θr ω* i* r QS i*΄ DS 1 s θ* θ0 ψr Flux calculator Control and Mechatronics (b) Converter Figure 18.b ab abc ia.12  Closed-loop motor-speed control system with FOC: (a) DFOC and (b) IFOC.c A B C Induction motor λdm ab abc Encoder/ tachometer λqm Flux calculator (a) ω* ω + – PI controller TM * TM k T λR * * iQS i* d.q Flux function * Flux + – Flux controller dq abc ia.c * Switch Switching control + – θr i* D Zero speed vector Magnetization ψr Torque calculator θr reephase B inverter C Current sensor A ia.b.q DQ dq λ* r τr i*΄ DS 1 s+ τ r i* DS * Converter Flux function LM dq abc i* a. © 2011 by Taylor and Francis Group.18 -14 ω* ω + – Torque* PI + – controller Torque controller i* Q DQ dq id *.b.c + – Switching control Zero speed vector Switch Current sensor A reephase B inverter C A B C ia.

Step 2: Air-gap flux vector

  ψ_m = ψ_s - L_1s i_s      (18.39)

Step 3: Rotor flux vector

  ψ_r = (L_r/L_m)(ψ_m - L_1s i_s)      (18.40)

where R_s is the stator resistance, u_s and i_s are the stator voltage and current, respectively, L_m is the magnetizing inductance, L_r = L_1r + L_m is the sum of the rotor leakage inductance and the magnetizing inductance, and L_1s, L_1r are the stator and the rotor leakage inductances, respectively.

The main advantages of the FOC are a good torque response, accurate speed control, full torque at zero speed, and performance approaching that of a DC drive. The main drawbacks are that flux feedback is needed, it is costly, and a modulator is needed.

18.5  Conclusions

This chapter has described AC servo systems: mainly the control of PMSMs (conventional PID control and high-order TSM control), observers for the rotor position and speed of PMSMs (the conventional sliding-mode observer and the hybrid TSM observer), and the control of IMs (scalar control, DTC, and FOC). They are of substantial significance to both theoretical research and practical applications.

References

[BKY00] I.C. Baik, K.H. Kim, and M.J. Youn, Robust nonlinear speed control of PM synchronous motor using boundary layer integral sliding mode control technique, IEEE Trans. Control Syst. Technol., 8(1), 47–54, 2000.
[BTZ03] S. Bolognani, L. Tubiana, and M. Zigliotto, Extended Kalman filter tuning in sensorless PMSM drives, IEEE Trans. Ind. Appl., 39(6), 1741–1747, 2003.
[ED00] G. Ellis and R.D. Lorenz, Resonant load control methods for industrial servo drives, in Proceedings of the Industry Applications Conference, Rome, Italy, 2000, pp. 1438–1445.
[ES98] C. Edwards and S.K. Spurgeon, Sliding Mode Control: Theory and Applications, Taylor & Francis Ltd., London, U.K., 1998.
[FWX94] Y. Feng et al., Asynchronous motor closed-loop speed control using frequency converter (in Chinese), Basic Automation, no. 2, 1994.
[FYM02] Y. Feng, X. Yu, and Z. Man, Non-singular adaptive terminal sliding mode control of rigid manipulators, Automatica, 38(12), 2159–2167, 2002.
[FZYT09] Y. Feng, J. Zheng, X. Yu, and N.V. Truong, Hybrid terminal sliding mode observer design method for a permanent magnet synchronous motor control system, IEEE Trans. Ind. Electron., 56(9), 3424–3431, 2009.
[HP92] T.G. Habetler and F. Profumo, Direct torque control of induction machines using space vector modulation, IEEE Trans. Ind. Appl., 28, 1992.
[JL09] H. Jin and J. Lee, An RMRAC current regulator for permanent-magnet synchronous motor based on statistical model interpretation, IEEE Trans. Ind. Electron., 56(1), 169–177, 2009.

[JSHIS03] J.H. Jang, S.K. Sul, J.I. Ha, K. Ide, and M. Sawamura, Sensorless drive of surface-mounted permanent-magnet motor by high-frequency signal injection based on magnetic saliency, IEEE Trans. Ind. Appl., 39(4), 1031–1039, 2003.
[KK95] M.P. Kazmierkowski and A.B. Kasprowicz, Improved direct torque and flux vector control of PWM inverter-fed induction motor drives, IEEE Trans. Ind. Electron., 42, 344–350, 1995.
[KKHK04] K. Kim et al., Sensorless control of PMSM in high speed range with iterative sliding mode observer, in Proceedings of the IEEE APEC, Anaheim, CA, 2004, pp. 1111–1116.
[KT07] Y.S. Kung and M.H. Tsai, FPGA-based speed control IC for PMSM drive with adaptive fuzzy control, IEEE Trans. Power Electron., 22(6), 2476–2486, 2007.
[L03] A. Levant, Higher-order sliding modes, differentiation and output-feedback control, Int. J. Control, 76(9), 924–941, 2003.
[LA06] C. Lascu and G. Andreescu, Sliding-mode observer and improved integrator with DC-offset compensation for flux estimation in sensorless-controlled induction motors, IEEE Trans. Ind. Electron., 53(3), 785–794, 2006.
[LE02] C. Li and M. Elbuluk, A robust sliding mode observer for permanent magnet synchronous motor drives, in Proceedings of the IEEE IECON, Sevilla, Spain, 2002.
[MPR02] Time optimal control for induction motor servo system, in Proceedings of SICE 2002, Osaka, Japan, 2002, pp. 1014–1019.
[PK06] K. Paponpen and M. Konghirun, An improved sliding mode observer for speed sensorless vector control drive of PMSM, in Proceedings of the IEEE IPEMC, Shanghai, China, August 14–16, 2006, pp. 1–5.
[PTM99] S. Peresada, A. Tonielli, and R. Morici, High-performance indirect field-oriented output-feedback control of induction motors, Automatica, 35, 1033–1047, 1999.
[PW91] A special approach for the direct torque control of an induction motor, in 1st European Technical Scientific Report, Ansaldo-CRIS, Naples, Italy, 1991.
[R96] X. Roboam, Performance improvements to the vector control-based speed of induction motors using sliding mode control, 1996.
[RH98] M.A. Rahman and M.A. Hoque, On-line adaptive artificial neural network based vector control of permanent magnet synchronous motors, IEEE Trans. Energy Conversion, 13(4), 311–318, 1998.
[S98] H. Shin, New antiwindup PI controller for variable-speed motor drives, IEEE Trans. Power Electron., 13(4), 1998.
[T01] A.M. Trzynadlowski, Control of Induction Motors, Academic Press, San Diego, CA, 2001.
[T94] A.M. Trzynadlowski, The Field Orientation Principle in Control of Induction Motors, Kluwer, Dordrecht, the Netherlands, 1994.
[T96] P. Tiitinen, The next motor control method—DTC direct torque control, in Proceedings of the International Conference on Power Electronics, Drives and Energy Systems for Industrial Growth, Delhi, India, February 1996, pp. 37–43.
[UGS99] V. Utkin, J. Guldner, and J. Shi, Sliding Mode Control in Electromechanical Systems, Taylor & Francis, London, U.K., 1999.
[XVT03] Observer-based robust adaptive control of PMSM with initial rotor position uncertainty, IEEE Trans. Ind. Appl., 39(3), 645–656, 2003.
[ZAATS06] Y. Zhang, C.M. Akujuobi, W.H. Ali, C.L. Tolliver, and L.S. Shieh, Load disturbance resistance speed controller design for PMSM, IEEE Trans. Ind. Electron., 53(4), 1198–1208, 2006.

[ZF08] J. Zheng and Y. Feng, High-order terminal sliding mode based mechanical resonance suppressing method in servo system, in Proceedings of the Second International Symposium on Systems and Control in Aerospace and Astronautics, Shenzhen, China, December 10–12, 2008, pp. 1–6.
[ZFY08] J. Zheng, Y. Feng, and X. Yu, Hybrid terminal sliding mode observer design method for permanent magnet synchronous motor control system, in VSS'08, IEEE 10th International Workshop on Variable Structure Systems, Antalya, Turkey, June 8–10, 2008, pp. 106–111.

19  Predictive Repetitive Control with Constraints

Liuping Wang, RMIT University
Shan Chai, RMIT University
Eric Rogers, University of Southampton

19.1 Introduction ................................................... 19-1
19.2 Frequency Decomposition of Reference Signal ................... 19-2
19.3 Augmented Design Model ....................................... 19-3
19.4 Discrete-Time Predictive Repetitive Control ................... 19-5
19.5 Application to a Gantry Robot Model ........................... 19-7
     Process Description • Frequency Decomposition of the Reference Signal • Closed-Loop Simulation Results
19.6 Conclusions ................................................... 19-10
References ........................................................ 19-10

19.1  Introduction

Control system applications in, for example, mechanical systems, manufacturing systems, and aerospace systems often require set-point following of a periodic trajectory. In this situation,
the design of a control system that has the capability to produce zero steady-state error is paramount. Repetitive control systems follow a periodic reference signal and use the internal model control principle [2–5]. It is well known from the internal model control principle that, in order to follow a periodic reference signal with zero steady-state error, the generator for the reference must be included in the stable closed-loop control system [1]. In the design of periodic repetitive control systems, the control signal is often generated by a controller that is explicitly described by a transfer function with appropriate coefficients [2,5]. If there are a number of frequencies contained in the exogenous signal, the repetitive control system will contain all the periodic modes, and the number of modes is proportional to the period and inversely proportional to the sampling interval. As a result, if fast sampling is to be used, a very high-order control system may be obtained, which could lead to numerical sensitivity and other undesirable problems in a practical application. Instead of retaining all the periodic modes in the periodic control system, an alternative is to embed fewer periodic modes at a given time and, when the frequency of the external signal changes, to change the coefficients of the controller accordingly. This will effectively result in a lower order periodic control system through the use of switched linear controllers. In addition to knowing which periodic controller should be used, a bumpless transfer, with no bump occurring in the control signal when switching from one controller to another, is paramount from an implementation point of view.

This chapter develops a repetitive control algorithm using the framework of discrete-time model predictive control, and is organized as follows. First, in Section 19.2, with a given reference signal, the frequency components of this signal are analyzed and its reconstruction is performed using the frequency sampling filter model [6], from which the dominant frequencies are identified and error analysis is used to justify the selections. Second, in Section 19.3, the input disturbance model that contains all the dominant frequency components is formed and embedded into an augmented state-space model. Third, in Section 19.4, with the augmented state-space model, the discrete-time model predictive control framework based on the Laguerre functions is used to develop the repetitive control law; the receding horizon control principle is used with an online optimization scheme [7] to generate the repetitive control law under which any plant operational constraints are imposed. The augmented state-space model input signal is inversely filtered with the disturbance model, and consequently the current control signal is computed from past controls to ensure a bumpless transfer when the periodic reference signal changes. Finally, in Section 19.5, simulation results from application to a gantry robot are given.

19.2  Frequency Decomposition of Reference Signal

We assume that the periodic reference signal of period T is uniformly sampled at a rate Δt, and within one period, the corresponding discrete sequence is denoted by r(k), where k = 0, 1, 2, …, M − 1, with M = T/Δt assumed to be an odd integer for the reason explained below. For notational simplicity, we express the fundamental frequency as ω = 2π/M.
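The decomposition idea of Section 19.2 can be sketched numerically before it is formalized: sample one period of the reference, obtain its frequency components via the discrete Fourier transform, keep only the dominant lines, and measure the reconstruction error caused by the dropped components. The test reference below (a clipped sinusoid) and the number of retained lines are illustrative choices.

```python
import numpy as np

# Dominant-frequency selection and reconstruction-error check for a
# periodic reference sampled over one period of M (odd) points.
M = 51
k = np.arange(M)
r = np.clip(1.5 * np.sin(2 * np.pi * k / M), -1.0, 1.0)   # illustrative reference

R = np.fft.fft(r) / M                    # frequency components R(e^{j 2 pi i / M})
dominant = np.argsort(np.abs(R))[::-1][:10]               # keep the 10 largest lines
Rd = np.zeros_like(R)
Rd[dominant] = R[dominant]
r_hat = np.real(np.fft.ifft(Rd) * M)     # reconstruction from the dominant lines

err = np.max(np.abs(r - r_hat))
print(err)                               # small residual from the dropped lines
```

Keeping conjugate line pairs together keeps the reconstruction real-valued, and the residual quantifies exactly the error analysis used to justify dropping the insignificant components.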

Finally. 2. M − 1) are the frequency components contained in the periodic signal.2) and interchanging the summation. 1. The z-transform of the signal r(k) is given by R(z ) = ∑ r(k)z k =0 M −1 −k (19.1) where R(e j(2πi/M)) (i = 0. M = T/Δt. the corresponding discrete sequence is denoted by r(k). Note that the discrete frequencies are 0. in Section 19. M − 1. the disturbance model is formulated and embedded into a state-space model.5. (M − 1)2π/M for this periodic signal. with the augmented state-space model. Third. The augmented state-space model input signal is inversely filtered with the disturbance model. we express the fundamental frequency as ω = 2π/M. …. the discrete-time model predictive control framework based on the Laguerre functions is used to develop the repetitive control law.4. In Section 19. 19. where k = 0. This chapter is organized as follows. In Section 19. and within one period. …. we obtain the z-transform representation for the periodic signal r(k) as ( M −1)/ 2 R(z ) = l =−( M −1)/ 2 ∑ R(e jlω )H l (z ) (19.2.3. 2π/M. 1.2) By substituting (19. the frequency sampling filter model is used to construct the reference signal. and consequently the current control signal is computed from past controls to ensure a bumpless transfer when the periodic reference signal changes. …. For notational simplicity. with which constraints are imposed.19-2 Control and Mechatronics the input disturbance model that contains all the dominant frequency components is formed and is embedded into an augmented state-space model. which forms the basis for analysis of dominant frequency components and the quantification of errors when insignificant components are neglected. and is assumed to be an odd integer for the reason explained below.2  Frequency Decomposition of Reference Signal We assume that the periodic reference signal of period T is uniformly sampled at a rate Δt. 
this discrete periodic signal can be uniquely represented by the inverse Fourier transform 1 M r (k) = ∑ (e i =0 M −1 j (2 πi /M ) )e j (2 πik /M ) (19. LLC .1) into (19. Here. From Fourier analysis [8]. simulation results from application to a gantry robot are given.3) where Hl(z) is termed the lth frequency sampling filter [6] and has the form H l (z ) = = 1 1 − z −M M 1 − e jlωz −1 1 1 + e jlωz −1 + M ( + e j( M −1)lωz −( M −1) ) © 2011 by Taylor and Francis Group. In Section 19. the receding horizon control principle is used with an online optimization scheme [7] to generate the repetitive control law where any plant operational constraints are imposed. 4π/M.

Here, we used the assumption that M is an odd number so as to include the zero frequency. Note that the discrete frequencies for this periodic signal are 0, 2π/M, 4π/M, …, (M − 1)2π/M. Equation 19.3 can also be written in terms of the real and imaginary parts of the frequency components R(e^{jlω}) [9] as

  R(z) = (1/M) R(e^{j0}) (1 − z^{−M})/(1 − z^{−1}) + Σ_{l=1}^{(M−1)/2} [ Re(R(e^{jlω})) F_R^l(z) + Im(R(e^{jlω})) F_I^l(z) ]      (19.4)

where F_R^l(z) and F_I^l(z) are the lth second-order filters given by

  F_R^l(z) = (1/M) · 2 (1 − cos(lω) z^{−1})(1 − z^{−M}) / (1 − 2 cos(lω) z^{−1} + z^{−2})
  F_I^l(z) = (1/M) · 2 sin(lω) z^{−1} (1 − z^{−M}) / (1 − 2 cos(lω) z^{−1} + z^{−2})

The following points are now relevant:

• The number of frequency components is M, which is determined by the period T and the sampling interval Δt. As Δt reduces, the number of frequencies increases.
• Given the z-transform representation of the reference signal, the time-domain signal r(k) can be constructed using the frequency components contained in the periodic signal, with the input taken as a unit impulse function. The frequency sampling filters are band-limited and centered at lω; as a consequence, the number of dominant frequencies may not change with respect to a faster sampling rate.
• The contribution of each frequency component can be considered in terms of the error arising when that component is dropped, which is similar to the application of a frequency sampling filter in the modeling of dynamics. This approach is used in determining the dominant components of the periodic reference signal, and once the dominant frequency components are selected, they are embedded in the design of the predictive repetitive control system.

19.3  Augmented Design Model

Suppose that the plant to be controlled is described by the state-space model

  x_m(k + 1) = A_m x_m(k) + B_m u(k) + Ω_m μ(k)      (19.5)
  y(k) = C_m x_m(k)      (19.6)

where x_m(k) is the n_1 × 1 state vector, u(k) is the input signal, y(k) is the output signal, and μ(k) represents the input disturbance. By the internal model principle, in order to follow a periodic reference signal with zero steady-state error, the generator for the reference must be included in the stable closed-loop control system [1]. From the identification of dominant frequency components in Section 19.2, we assume that the dominant frequencies are 0 and some lω, and that the input disturbance μ(k) is generated through the denominator of the disturbance model in the form

  D(z) = (1 − z^{−1}) Π_l (1 − 2 cos(lω) z^{−1} + z^{−2}) = 1 + d_1 z^{−1} + d_2 z^{−2} + d_3 z^{−3} + ⋯ + d_γ z^{−γ}      (19.7)

frequencies are 0 and some lω and that the input disturbance μ(k) is generated through the denominator of the disturbance model. More specifically, the input disturbance μ(k) is described by the difference equation with the backward shift operator q^{−1},

D(q^{−1}) μ(k) = 0    (19.7)

in which D(z) has the form

D(z) = (1 − z^{−1}) Π_l (1 − 2 cos(lω) z^{−1} + z^{−2}) = 1 + d1 z^{−1} + d2 z^{−2} + d3 z^{−3} + ⋯ + dγ z^{−γ}    (19.8)

In the time domain, this corresponds to

μ(k) = −d1 μ(k − 1) − d2 μ(k − 2) − ⋯ − dγ μ(k − γ)    (19.9)

Define the following auxiliary variables using the disturbance model:

z(k) = D(q^{−1}) x_m(k)    (19.10)

u_s(k) = D(q^{−1}) u(k)    (19.11)

where these are the filtered state vector and control signals, respectively. Applying the operator D(q^{−1}) to both sides of the state equation in the system model (19.5) gives

D(q^{−1}) x_m(k + 1) = A_m D(q^{−1}) x_m(k) + B_m D(q^{−1}) u(k)    (19.12)

where the relation D(q^{−1}) μ(k) = 0 has been used, or

z(k + 1) = A_m z(k) + B_m u_s(k)

Similarly, application of the operator D(q^{−1}) to both sides of the output equation in (19.6) gives

D(q^{−1}) y(k + 1) = C_m z(k + 1) = C_m A_m z(k) + C_m B_m u_s(k)    (19.13)

or, on expanding the right-hand side,

y(k + 1) = −d1 y(k) − d2 y(k − 1) − ⋯ − d_{γ−1} y(k − γ + 1) − dγ y(k − γ) + C_m A_m z(k) + C_m B_m u_s(k)    (19.14)

To obtain a state-space model that embeds the disturbance model, choose the new state vector as

x(k) = [z(k)^T  y(k)  y(k − 1) ⋯ y(k − γ)]^T

leading to the augmented state-space model

x(k + 1) = A x(k) + B u_s(k)
y(k) = C x(k)    (19.15)
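The coefficients d1, …, dγ in (19.8) follow from multiplying out the first- and second-order factors. A small sketch (the selected harmonics and the value of M are hypothetical choices for illustration):

```python
import numpy as np

def disturbance_poly(dominant_l, M):
    """Coefficients [1, d1, ..., d_gamma] of D(z) in (19.8):
       D(z) = (1 - z^-1) * prod_l (1 - 2 cos(l*w) z^-1 + z^-2),  w = 2*pi/M."""
    w = 2 * np.pi / M
    D = np.array([1.0, -1.0])                       # zero-frequency factor (1 - z^-1)
    for l in dominant_l:
        D = np.polymul(D, [1.0, -2.0 * np.cos(l * w), 1.0])
    return D

D = disturbance_poly([1, 2], M=199)                 # harmonics 0, 1, 2 dominant -> gamma = 5
print(D)

# Annihilation property D(q^-1) mu(k) = 0 for a disturbance composed of
# exactly those frequency components:
k = np.arange(600)
mu = 1.0 + 0.5 * np.sin(2 * np.pi * k / 199) + 0.2 * np.cos(4 * np.pi * k / 199)
residual = np.convolve(mu, D)[len(D) - 1 : len(mu)]   # interior samples of the filtered sequence
assert np.max(np.abs(residual)) < 1e-9
```

The annihilation check is exactly the property used above: applying D(q^{−1}) removes any signal built from the embedded frequencies, which is why the filtered variables z(k) and u_s(k) are free of the disturbance.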

where

A = [ A_m, O, O, …, O, O;
      C_m A_m, −d1, −d2, …, −d_{γ−1}, −dγ;
      O^T, 1, 0, …, 0, 0;
      O^T, 0, 1, …, 0, 0;
      ⋮
      O^T, 0, 0, …, 1, 0 ]

B = [ B_m; C_m B_m; 0; 0; …; 0 ],  C = [ O^T, 1, 0, …, 0 ]

and O denotes the n1 × 1 zero vector. The structure of the augmented model remains unchanged when the plant is multiple-input and multiple-output (or multivariable), except that the O vector is replaced by a zero matrix with appropriate dimensions and the coefficients −d1, −d2, … become −d1 I, −d2 I, … matrices with appropriate dimensions, where I denotes the identity matrix with compatible dimensions.

19.4 Discrete-Time Predictive Repetitive Control

Having obtained the augmented state-space model, the next task is to optimize the filtered control signal u_s(k) using the receding horizon control principle, where the auxiliary control signal u_s(k) is modeled using a set of discrete-time Laguerre functions. A summary of the main points is now given, with complete details in Ref. [7]. Assuming that the current sampling instant is k_i, the control trajectory at the future time m is taken to be

u_s(k_i + m | k_i) = Σ_{i=1}^{N} l_i(m) c_i = L(m)^T η    (19.16)

where η^T = [c1 c2 … cN] is the coefficient vector that contains the Laguerre coefficients and L(·) is the vector that contains the Laguerre functions l1(·), l2(·), …, lN(·). Then the prediction of the future state variable, x(k_i + m | k_i), becomes

x(k_i + m | k_i) = A^m x(k_i) + Σ_{i=0}^{m−1} A^{m−i−1} B L(i)^T η    (19.17)

and the objective for the predictive repetitive control is to find the Laguerre coefficient vector η such that the cost function

J = Σ_{m=1}^{Np} x(k_i + m | k_i)^T Q x(k_i + m | k_i) + η^T R_L η    (19.18)

is minimized, where Q ≥ 0 and R_L > 0 are symmetric positive semi-definite and positive definite matrices, respectively. Substituting (19.17) into (19.18) gives the optimal solution as

η = −( Σ_{m=1}^{Np} φ(m) Q φ(m)^T + R_L )^{−1} ( Σ_{m=1}^{Np} φ(m) Q A^m x(k_i) )    (19.19)

© 2011 by Taylor and Francis Group, LLC
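The pieces of (19.16) through (19.19) can be prototyped directly. The sketch below uses a hypothetical two-state augmented model in place of (19.15), generates the discrete Laguerre functions through their all-pass network, and builds φ(m) recursively; the pole a, the number of functions N, and the horizon Np are tuning choices:

```python
import numpy as np

def laguerre(N, a, Np):
    """Discrete Laguerre functions l_1..l_N sampled at k = 0..Np-1.
       l_1(k) = sqrt(1-a^2) a^k; each further function passes the previous one
       through the all-pass (z^-1 - a)/(1 - a z^-1)."""
    L = np.zeros((N, Np))
    L[0] = np.sqrt(1.0 - a**2) * a ** np.arange(Np)
    for i in range(1, N):
        for k in range(Np):
            L[i, k] = (a * (L[i, k - 1] if k > 0 else 0.0)
                       + (L[i - 1, k - 1] if k > 0 else 0.0) - a * L[i - 1, k])
    return L                                  # column L[:, m] is the vector L(m)

# Hypothetical augmented model standing in for (19.15):
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N, a, Np = 4, 0.5, 60
Lag = laguerre(N, a, Np)
Q, RL = np.eye(2), 0.1 * np.eye(N)

# Optimal Laguerre coefficients (19.19) for the current state x(k_i):
x = np.array([1.0, 0.0])
Omega, Psi = RL.copy(), np.zeros((N, 2))
S, Am = np.zeros((2, N)), np.eye(2)
for m in range(1, Np + 1):
    S = A @ S + B @ Lag[:, m - 1][None, :]    # phi(m)^T = sum_i A^{m-i-1} B L(i)^T
    Am = A @ Am                               # A^m
    Omega += S.T @ Q @ S
    Psi += S.T @ Q @ Am
eta = -np.linalg.solve(Omega, Psi @ x)
us = Lag[:, 0] @ eta                          # receding horizon move u_s(k_i) = L(0)^T eta
print("u_s(k_i) =", us)
```

A useful sanity check is that the Laguerre functions generated this way are orthonormal, Σ_k l_i(k) l_j(k) ≈ δ_ij, so the coefficient vector η is well conditioned for modest N.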

where

φ(m)^T = Σ_{i=0}^{m−1} A^{m−i−1} B L(i)^T

Given the optimal parameter vector η, the filtered control signal using receding horizon control is realized as

u_s(k_i) = L(0)^T η    (19.20)

and the actual control signal to the plant is constructed using the relation D(q^{−1}) u(k) = u_s(k), or, using the state-space realization,

U(k) = A_s U(k − 1) + B_s u_s(k) = A_s U(k − 1) + B_s L(0)^T η    (19.21)

where U(k) = [u(k) u(k − 1) … u(k − γ)]^T and

A_s = [ −d1, −d2, …, −d_{γ−1}, −dγ;
        1, 0, …, 0, 0;
        0, 1, …, 0, 0;
        ⋮
        0, 0, …, 1, 0 ],  B_s = [1 0 ⋯ 0]^T

The key strength of predictive repetitive control lies in its ability to systematically impose constraints on the plant input and output variables. The constrained control system performs the minimization of the objective function J (19.18) in real time subject to constraints. For instance, control amplitude constraints are translated into a set of linear inequalities as

U_min ≤ A_s U(k − 1) + B_s L(0)^T η ≤ U_max    (19.22)

where U_min and U_max are of dimension γ and contain the required lower and upper limits of the control amplitude, respectively.

When the predictive repetitive controller is used for disturbance rejection, the control objective is to maintain the plant in steady-state operation with, in the steady state, a zero filtered state vector x_s(k) and constant plant output. Note that the state vector x(k) contains the filtered state vector x_s(k) and the output y(k), y(k − 1), …, y(k − γ). If the predictive repetitive controller is to be used to track a periodic input signal, the reference signal will enter the computation through the augmented output variables, and hence at sampling instant k_i, the feedback errors are captured as

[y(k_i) − r(k_i)  y(k_i − 1) − r(k_i − 1) … y(k_i − γ) − r(k_i − γ)]^T = [e(k_i) e(k_i − 1) … e(k_i − γ)]^T

These error signals then replace the original output elements in x(k) to form state feedback in the computation of the filtered control signal u_s(k) using (19.19) and (19.20).
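Minimizing J subject to (19.22) is a small quadratic program at each sample. One simple real-time scheme is the dual coordinate-ascent (Hildreth-type) procedure described in [7]; the sketch below uses a toy two-variable problem rather than the actual gantry matrices, so E, F, M, and gamma here are placeholders:

```python
import numpy as np

def hildreth_qp(E, F, M, gamma, iters=500):
    """Minimize 0.5*eta' E eta + eta' F  subject to  M eta <= gamma,
       by coordinate ascent on the dual multipliers."""
    Einv = np.linalg.inv(E)
    H = M @ Einv @ M.T
    K = gamma + M @ Einv @ F
    lam = np.zeros(len(gamma))
    for _ in range(iters):
        for i in range(len(lam)):
            w = K[i] + H[i] @ lam - H[i, i] * lam[i]
            lam[i] = max(0.0, -w / H[i, i])
    return -Einv @ (F + M.T @ lam)

# Toy example: unconstrained minimum at (1, 2); constraint eta_1 + eta_2 <= 2.
E = np.diag([2.0, 2.0])
F = np.array([-2.0, -4.0])
M = np.array([[1.0, 1.0]])
gamma = np.array([2.0])
eta = hildreth_qp(E, F, M, gamma)
print(eta)                    # constrained optimum (0.5, 1.5)
```

In the predictive repetitive setting, E and F come from expanding (19.18) in η, and the rows of M with the entries of gamma encode the two-sided bounds of (19.22) through ±B_s L(0)^T and U_max, U_min, A_s U(k − 1).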

19.5 Application to a Gantry Robot Model

In this section, the design algorithm developed in this chapter is applied to the approximate model of one axis of a gantry robot undertaking a pick-and-place operation that mimics many industrial tasks where control is required.

19.5.1 Process Description

The gantry robot, shown in Figure 19.1, is a commercially available system found in a variety of industrial applications whose task is to place a sequence of objects onto a moving conveyor under synchronization. The sequence of operations is that the robot collects the object from a specified location, moves until it is synchronized (in terms of both position and speed) with the conveyor, places the object on the conveyor, and then returns to the same starting location to collect the next object and so on. This experimental facility has been extensively used in the benchmarking of repetitive and iterative learning control algorithms [10,11].

FIGURE 19.1 The gantry robot.

The gantry robot can be treated as three single-input single-output (SISO) systems (one for each axis) that can operate simultaneously to locate the end effector anywhere within a cuboid work envelope. The lowest axis, X, moves in the horizontal plane, parallel to the conveyor beneath. The Y-axis is mounted on the X-axis and moves in the horizontal plane, but perpendicular to the conveyor. The Z-axis is the shorter vertical axis mounted on the Y-axis. The X- and Y-axes consist of linear brushless dc motors, while the Z-axis is a linear ball-screw stage powered by a rotary brushless dc motor. All motors are energized by performance-matched dc amplifiers. Axis position is measured by means of linear or rotary optical incremental encoders as appropriate.

To obtain a model for the plant on which to base controller design, each axis of the gantry was modeled independently by means of sinusoidal frequency response tests. From this data, it was possible to construct Bode plots for each axis and hence determine approximate transfer functions. These were then refined, by means of a least mean squares optimization technique, to minimize the difference between the frequency response of the real plant and that of the model. Here, we only consider the X-axis, where a seventh-order transfer function was used in the design, and the Laplace transfer function is given by

G(s) = [4.90 × 10^5 (s + 500.19)(s + 10.62 ± j141.99)(s + 12.86 ± j106.00)] / [s(s + 69.74 ± j459.75)(s + 10.69 ± j169.29)(s + 5.93 ± j79.93)]    (19.23)

19.5.2 Frequency Decomposition of the Reference Signal

The reference signal over one period has 199 samples and is shown in Figure 19.2. With a sampling interval of 0.01 s, the period of this signal is T = 2 s. Figure 19.2 also compares the reconstructed signals using the 0–1st, 0–3rd, and 0–6th frequency components, respectively. Figure 19.3 shows the magnitude of the coefficients for all the frequency sampling filters, which clearly decay very fast as the frequency increases. Thus, it is reasonable to reconstruct the reference signal with just a few low-frequency components. Table 19.1 shows the sum of squared errors between the actual reference signal and the reconstructed signal using a limited number of frequencies.

TABLE 19.1 Sum of the Squared Errors of the Reconstructed Signal

Frequency   0        0–1      0–2      0–3      0–4      0–5      0–6
Error       0.0778   0.0325   0.0094   0.0017   0.0005   0.0001   0

FIGURE 19.2 Reference signal and reconstructed signals with a limited number of frequencies (0–1st, 0–3rd, and 0–6th components).

FIGURE 19.3 Magnitude of the coefficients in the frequency sampling filter.

It is obvious that the reference signal can be closely

approximated using the 0–6th frequency components with only a small error. Thus, the 0–6th frequency components are identified as the dominant frequencies for this example.

19.5.3 Closed-Loop Simulation Results

In this section, results from a simulation study on the predictive repetitive controller applied to the gantry robot model represented by the transfer function (19.23) are given. The denominator of the frequency sampling filter model with the 0–6th frequency sampling filter embedded has been used to design the controller. Figure 19.4a shows the perfect tracking performance of the design, and the repetitive nature of the input signal shown in Figure 19.4b is as expected.

In practice, there are circumstances where the input of the gantry robot could be saturated due to limitations of either its electrical or mechanical components. If saturation occurs, the result could be a deterioration of the tracking performance of the periodic reference signal. For example, given that umax = 0.6 and umin = −0.8, Figure 19.5a demonstrates that the system is unstable if the input signal is directly saturated as described by umin ≤ u(k) ≤ umax; the mismatch in tracking happens when the input signal hits the limits and is saturated. Conversely, Figure 19.6a shows that the proposed constraint algorithm, as given by (19.22), can maintain the tracking performance under the constraints on the input signal.

FIGURE 19.4 System simulation without input constraints: (a) system output and (b) input signal.

FIGURE 19.5 System simulation under input constraints: (a) system output under input constraints and (b) input signal under constraints.

FIGURE 19.6 System simulation under input constraints with proposed method: (a) system output under input constraints and (b) input signal under constraints.

19.6 Conclusions

In this chapter, a predictive repetitive control algorithm using frequency decomposition has been developed. The augmented model is constructed by incorporating the dominant frequency components from the frequency sampling filter model. A model predictive control algorithm based on the receding horizon principle is employed to optimize the filtered input to the plant. The simulation results based on the gantry robot model show that the algorithm can achieve (1) perfect tracking of the periodic reference signal and (2) superior tracking performance when the input is subject to constraints.

References

1. B. A. Francis and W. M. Wonham, The internal model principle of control theory, Automatica, 12(5), 457–465, May 1976.
2. S. Hara, Y. Yamamoto, T. Omata, and M. Nakano, Repetitive control system: A new type servo system for periodic exogenous signals, IEEE Transactions on Automatic Control, 33(7), 659–668, July 1988.
3. T. Manayathara, T.-C. Tsao, J. Bentsman, and D. Ross, Rejection of unknown periodic load disturbances in continuous steel casting process using learning repetitive control approach, IEEE Transactions on Control Systems Technology, 4(3), 259–265, May 1996.
4. D. H. Owens, L. M. Li, and S. P. Banks, Multi-periodic repetitive control system: A Lyapunov stability analysis for MIMO systems, International Journal of Control, 77(5), 504–515, 2004.
5. M. Bai and T. Wu, Simulation of an internal model based active noise control system for suppressing periodic disturbances, Journal of Vibration and Acoustics—Transactions of the ASME, 120, 111–116, 1998.
6. R. R. Bitmead and B. D. O. Anderson, Adaptive frequency sampling filters, in 19th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes, December 1980, pp. 939–944.
7. L. Wang, Model Predictive Control System Design and Implementation Using MATLAB, 1st edn., Springer, London, U.K., 2009.
8. A. V. Oppenheim, R. W. Schafer, and J. R. Buck, Discrete-Time Signal Processing, 2nd edn., Prentice Hall, Englewood Cliffs, NJ, February 1999.
9. L. Wang and W. R. Cluett, From Plant Data to Process Control: Ideas for Process Identification and PID Design, 1st edn., Taylor & Francis, London, U.K., 2000.

[Online]. Repetitive control of synchronized operations for process applications. J. and D. Hatonen. 22(6). [Online]. Rogers.soton. L. J. D. 300–325. J. 21(4). and D. H. International Journal of Adaptive Control and Signal Processing. J. Available: http://eprints. H. Ratcliffe.ecs. Available: http://eprints. E. P. Rogers.Predictive Repetitive Control with Constraints 19-11 10. December 2006.soton. L. May D. © 2011 by Taylor and Francis 11. Hatonen. LLC . P.ecs. IEEE Transactions on Robotics. Lewin. J. Lewin. Owens. J. Norm-optimal iterative learning control applied to gantry robots for automation applications. 1303–1307. E.

20
Backstepping Control

Jing Zhou
International Research Institute of Stavanger

Changyun Wen
Nanyang Technological University

20.1 Introduction
20.2 Backstepping
20.3 Adaptive Backstepping
20.4 Adaptive Backstepping with Tuning Functions
20.5 State Feedback Control
20.6 Backstepping Control of Unstable Oil Wells
     Unstable Multiphase Flow • Dynamic Model • Backstepping Control Design • Simulation Results
References

20.1 Introduction

In the beginning of the 1990s, a new approach called "backstepping" was proposed for the design of adaptive controllers. The technique was comprehensively addressed by Krstic, Kanellakopoulos, and Kokotovic in [1], and a number of results using this approach have been obtained [2–9]. Backstepping is a recursive Lyapunov-based scheme for the class of "strict-feedback" systems. An important advantage of the backstepping design method is that it provides a systematic procedure to design stabilizing controllers, following a step-by-step algorithm: the recursive design steps back toward the control input, starting with a scalar equation, and involves the systematic construction of feedback control laws and Lyapunov functions. With this method, when the controlled plant belongs to the class of systems transformable into the parametric-strict-feedback form, the approach guarantees global or regional regulation and tracking properties. Backstepping achieves the goals of stabilization and tracking, and the proof of these properties is a direct consequence of the recursive procedure, because a Lyapunov function is constructed for the entire system including the parameter estimates. In fact, the construction of feedback control laws and Lyapunov functions is systematic. Another advantage of backstepping is that it has the flexibility to avoid cancellations of useful nonlinearities and achieve stabilization and tracking.

This chapter reviews basic backstepping tools for state feedback control. To easily grasp the design and analysis procedures, we start with simple low-order systems; then, the ideas are generalized to arbitrary n-order systems. Application to the control of unstable oil wells is presented to illustrate the theory, and the comparative simulation results illustrate the effectiveness of the proposed backstepping control algorithm.

20.2 Backstepping

The idea of backstepping is to design a controller recursively by considering some of the state variables as "virtual controls" and designing for them intermediate control laws.

To give a clear idea of such development, we consider the following third-order strict-feedback system:

ẋ1 = x2 + x1²
ẋ2 = x3 + x2²
ẋ3 = u    (20.1)

where x1, x2, and x3 are system states and u is the control input. The control objective is to design a state feedback control to asymptotically stabilize the origin.

Step 1. Starting with the first equation of (20.1), we define z1 = x1 and derive the dynamics of the new coordinate

ż1 = x2 + x1²    (20.2)

We view x2 as a control variable and define a virtual control law for (20.2), say α1, which would make the first-order system stabilizable; that is, our objective is to design a virtual control law α1 which makes z1 → 0. Let z2 be an error variable representing the difference between the actual and virtual controls of (20.2), i.e.,

z2 = x2 − α1    (20.3)

Then, in terms of the new state variable, we can rewrite (20.2) as

ż1 = α1 + x1² + z2    (20.4)

Consider a control Lyapunov function

V1 = (1/2) z1²    (20.5)

the time derivative of which becomes

V̇1 = z1(α1 + x1²) + z1 z2    (20.6)

We can now select an appropriate virtual control

α1 = −c1 z1 − x1²    (20.7)

where c1 is a positive constant. Then, the time derivative of V1 becomes

V̇1 = −c1 z1² + z1 z2    (20.8)

Clearly, if z2 = 0, then V̇1 = −c1 z1² and z1 is guaranteed to converge to zero asymptotically. Note for later use that

α̇1 = −(c1 + 2x1)(x2 + x1²)    (20.9)

© 2011 by Taylor and Francis Group, LLC

Step 2. We derive the error dynamics for z2 = x2 − α1 as

ż2 = ẋ2 − α̇1 = x3 + x2² + (c1 + 2x1)(x2 + x1²)    (20.10)

in which x3 is viewed as a virtual control input. Define a virtual control law α2, and let z3 be an error variable representing the difference between the actual and virtual controls,

z3 = x3 − α2    (20.11)

Then (20.10) becomes

ż2 = z3 + α2 + x2² + (c1 + 2x1)(x2 + x1²)    (20.12)

The control objective is to make z2 → 0. Choose a control Lyapunov function

V2 = V1 + (1/2) z2²    (20.13)

Taking the time derivative gives

V̇2 = −c1 z1² + z1 z2 + z2 (z3 + α2 + x2² + (c1 + 2x1)(x2 + x1²))    (20.14)

We can now select an appropriate virtual control α2 to cancel the terms related to z1, x1, and x2, while the term involving z3 cannot be removed:

α2 = −z1 − c2 z2 − x2² − (c1 + 2x1)(x2 + x1²)    (20.15)

where c2 is a positive constant. With this choice, we have

V̇2 = −c1 z1² − c2 z2² + z2 z3 = −Σ_{i=1}^{2} ci zi² + z2 z3    (20.16)

Clearly, if z3 = 0, both z1 and z2 are guaranteed to converge to zero asymptotically.

Step 3. Proceeding to the last equation in (20.1), we derive the error dynamics for z3 = x3 − α2,

ż3 = u − (∂α2/∂x1)(x2 + x1²) − (∂α2/∂x2)(x3 + x2²)    (20.17)

In this equation, the actual control input u appears and is at our disposal. Our objective is to design the actual control input u such that z1, z2, and z3 converge to zero. Choose a control Lyapunov function

V3 = V2 + (1/2) z3²    (20.18)
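Collecting α1, α2, and the final control law u derived in Step 3, the closed loop can be checked numerically. A minimal forward-Euler sketch with hypothetical gains c1 = c2 = c3 = 2 and initial state (0.5, −0.5, 0.5):

```python
import numpy as np

c1 = c2 = c3 = 2.0
x1, x2, x3 = 0.5, -0.5, 0.5
dt, steps = 1e-3, 10000                  # 10 s of simulated time

for _ in range(steps):
    z1 = x1
    a1 = -c1 * z1 - x1**2                                     # virtual control alpha_1
    z2 = x2 - a1
    a2 = -z1 - c2 * z2 - x2**2 - (c1 + 2*x1) * (x2 + x1**2)   # virtual control alpha_2
    z3 = x3 - a2
    # Partial derivatives of alpha_2 (alpha_2 does not depend on x3):
    da2_dx1 = -1 - c2*(c1 + 2*x1) - 2*(x2 + x1**2) - (c1 + 2*x1)*2*x1
    da2_dx2 = -c2 - 2*x2 - (c1 + 2*x1)
    u = -z2 - c3 * z3 + da2_dx1 * (x2 + x1**2) + da2_dx2 * (x3 + x2**2)
    # Plant dynamics, forward Euler:
    x1, x2, x3 = (x1 + dt * (x2 + x1**2),
                  x2 + dt * (x3 + x2**2),
                  x3 + dt * u)

print("final state:", x1, x2, x3)        # all three states converge to the origin
```

In the z-coordinates the closed loop is exactly the stable linear chain ż1 = −c1 z1 + z2, ż2 = −z1 − c2 z2 + z3, ż3 = −z2 − c3 z3, which is why the state decays cleanly despite the quadratic nonlinearities.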

Its time derivative is given by

V̇3 = −Σ_{i=1}^{2} ci zi² + z3 [u + z2 − (∂α2/∂x1)(x2 + x1²) − (∂α2/∂x2)(x3 + x2²)]    (20.19)

We are finally in a position to design the control u by making V̇3 ≤ 0, as follows:

u = −z2 − c3 z3 + (∂α2/∂x1)(x2 + x1²) + (∂α2/∂x2)(x3 + x2²)    (20.20)

where c3 is a positive constant. Then, the derivative of the Lyapunov function V3 is

V̇3 = −Σ_{i=1}^{3} ci zi²    (20.21)

The Lasalle theorem in [10] then guarantees the global uniform boundedness of z1, z2, and z3, and z1, z2, z3 → 0 as t → ∞. Since x1 = z1, x1 is also bounded and limt→∞ x1 = 0. The boundedness of x2 follows from the boundedness of α1 in (20.7) and the fact that x2 = z2 + α1; similarly, the boundedness of x3 then follows from the boundedness of α2 in (20.15) and the fact that x3 = z3 + α2. Combining this with (20.20), we conclude that the control u(t) is also bounded.

With the above example, the idea of backstepping has been illustrated. In the following, we will consider unknown parameters which appear linearly in the system equations.

20.3 Adaptive Backstepping

In this section, we will consider parametric uncertainties and achieve both boundedness of the closed-loop states and asymptotic tracking. In the presence of such parametric uncertainties, an adaptive controller is designed by combining a parameter estimator, which provides estimates of the unknown parameters, with a control law; the parameters of the controller are adjusted during the operation of the plant, and the adaptive controller is able to ensure the boundedness of the closed-loop states and asymptotic tracking. To illustrate the idea of adaptive backstepping, let us first consider the following second-order system:

ẋ1 = x2 + φ1^T(x1) θ
ẋ2 = u + φ2^T(x1, x2) θ    (20.22)

where θ ∈ R^r is an unknown constant vector, and φ1 ∈ R^r and φ2 ∈ R^r are known nonlinear functions. Our problem is to globally stabilize the system and also to achieve the asymptotic tracking of xr by x1. The reference signal xr and its first and second-order derivatives are piecewise continuous and bounded.

Introduce the change of coordinates

z1 = x1 − xr    (20.23)

z2 = x2 − α1 − ẋr    (20.24)

where α1 is called the virtual control and will be determined in the later discussion. The design procedure is elaborated in the following.

Step 1. The derivative of the tracking error z1 is given as

ż1 = z2 + α1 + φ1^T θ    (20.25)

To stabilize (20.25) by considering x2 as the control variable, we need to select a control Lyapunov function and design the control to make its derivative nonpositive. Since θ is unknown, this task is fulfilled with an adaptive controller consisting of a control law and an update law to obtain an estimate of θ. Design the first stabilizing function α1 and the parameter updating law as

α1 = −c1 z1 − φ1^T θ̂1    (20.26)

θ̂̇1 = Γ φ1 z1    (20.27)

where θ̂1 is an estimate of θ, c1 is a positive constant, and Γ is a positive definite matrix. Consider the Lyapunov function

V1 = (1/2) z1² + (1/2) θ̃1^T Γ^{−1} θ̃1    (20.28)

where θ̃1 = θ − θ̂1. Then the derivative of V1 is given by

V̇1 = z1 ż1 − θ̃1^T Γ^{−1} θ̂̇1 = −c1 z1² + z1 z2 − θ̃1^T Γ^{−1} (θ̂̇1 − Γ φ1 z1) = −c1 z1² + z1 z2    (20.29)

Step 2. The derivative of z2 with (20.26) and (20.27) is now expressed as

ż2 = ẋ2 − α̇1 − ẍr = u − (∂α1/∂x1) x2 + (φ2 − (∂α1/∂x1) φ1)^T θ − (∂α1/∂θ̂1) Γ φ1 z1 − (∂α1/∂xr) ẋr − ẍr    (20.30)

In this equation, the actual control input u appears and is at our disposal. We use a Lyapunov function

V2 = V1 + (1/2) z2²    (20.31)

whose derivative is

V̇2 = −c1 z1² + z2 [u + z1 − (∂α1/∂x1) x2 + (φ2 − (∂α1/∂x1) φ1)^T θ − (∂α1/∂θ̂1) Γ φ1 z1 − (∂α1/∂xr) ẋr − ẍr]    (20.32)

The control u should cancel the terms in (20.32); however, θ is unknown. To deal with the term containing the unknown parameter θ, we first try to use the estimate θ̂1 designed in the first step:

u = −z1 − c2 z2 + (∂α1/∂x1) x2 − (φ2 − (∂α1/∂x1) φ1)^T θ̂1 + (∂α1/∂θ̂1) Γ φ1 z1 + (∂α1/∂xr) ẋr + ẍr    (20.33)

where c2 is a positive constant. The resulting derivative of V2 is given as

V̇2 = −c1 z1² − c2 z2² + (φ2 − (∂α1/∂x1) φ1)^T (θ − θ̂1) z2    (20.34)

It can be observed that the term (φ2 − (∂α1/∂x1) φ1)^T (θ − θ̂1) cannot be canceled. To eliminate it, we need to treat θ in (20.30) as a new parameter vector and assign to it a new estimate θ̂2, by selecting u as

u = −z1 − c2 z2 + (∂α1/∂x1) x2 − (φ2 − (∂α1/∂x1) φ1)^T θ̂2 + (∂α1/∂θ̂1) Γ φ1 z1 + (∂α1/∂xr) ẋr + ẍr    (20.35)

With this choice, (20.30) becomes

ż2 = −z1 − c2 z2 + (φ2 − (∂α1/∂x1) φ1)^T (θ − θ̂2)    (20.36)

Our task in this step is to stabilize the (z1, z2) system. The presence of the new parameter estimate θ̂2 suggests the following form of the Lyapunov function:

V2 = V1 + (1/2) z2² + (1/2) θ̃2^T Γ^{−1} θ̃2    (20.37)

where θ̃2 = θ − θ̂2. Then the derivative of the Lyapunov function V2 is

V̇2 = V̇1 + z2 ż2 − θ̃2^T Γ^{−1} θ̂̇2 = −c1 z1² − c2 z2² − θ̃2^T Γ^{−1} [θ̂̇2 − Γ (φ2 − (∂α1/∂x1) φ1) z2]    (20.38)

We choose the update law

θ̂̇2 = Γ (φ2 − (∂α1/∂x1) φ1) z2    (20.39)

Then

V̇2 = −c1 z1² − c2 z2²    (20.40)

© 2011 by Taylor and Francis Group, LLC

By using the Lasalle theorem, the Lyapunov function (20.40) guarantees the global uniform boundedness of z1, z2, θ̂1, and θ̂2, and z1, z2 → 0 as t → ∞. Since z1 and xr are bounded, x1 is also bounded from x1 = z1 + xr, and limt→∞ (x1 − xr) = 0; that is, asymptotic tracking is achieved. The boundedness of x2 follows from the boundedness of α1 in (20.26) and the fact that x2 = z2 + α1 + ẋr. Combining this with (20.35), we conclude that the control u(t) is also bounded.

In conclusion, the above adaptive backstepping employs overparametrized estimation, i.e., two estimates for the same parameter vector θ in this case. This means that the dynamic order of the controller is not of minimal order. In the next section, a new backstepping design is presented to avoid such a case from happening.

20.4 Adaptive Backstepping with Tuning Functions

To give a clear idea of the tuning function design, which employs the minimal number of parameter estimates, we consider the same system in (20.22) with the same control objective, namely, global stabilization and also asymptotic tracking of xr by x1. We use the same change of coordinates

z1 = x1 − xr    (20.41)

z2 = x2 − α1 − ẋr    (20.42)

where α1 is the virtual control. The design procedure is elaborated as follows.

Step 1. The derivative of the tracking error z1 is given as

ż1 = z2 + α1 + φ1^T θ    (20.43)

Our task in this step is to stabilize (20.43) by considering x2 as the control variable; if x2 were the actual control and z2 = 0, we would choose α1 to make V̇1 ≤ 0. Choose the control Lyapunov function

V1 = (1/2) z1² + (1/2) θ̃^T Γ^{−1} θ̃    (20.44)

where Γ is a positive definite matrix and θ̃ = θ − θ̂, with θ̂ an estimate of θ. The derivative of V1 is

V̇1 = z1 (z2 + α1 + φ1^T θ) − θ̃^T Γ^{−1} θ̂̇    (20.45)

We may eliminate θ̃ by choosing θ̂̇ = Γ φ1 z1, as shown in the previous subsection. To overcome the overparametrization problem caused by the appearance of θ̂ in later steps, we instead define a function τ1, named the tuning function, as

τ1 = φ1 z1    (20.46)

and design the first stabilizing function as

α1 = −c1 z1 − φ1^T θ̂    (20.47)

where c1 is a positive constant. The resulting derivative of V1 is

V̇1 = −c1 z1² + z1 z2 − θ̃^T Γ^{−1} (θ̂̇ − Γ τ1)    (20.48)

Step 2. We derive the second tracking error dynamics for z2,

ż2 = u − (∂α1/∂x1) x2 + (φ2 − (∂α1/∂x1) φ1)^T θ − (∂α1/∂θ̂) θ̂̇ − (∂α1/∂xr) ẋr − ẍr    (20.49)

In this equation, the actual control input u appears and is at our disposal. The control Lyapunov function is selected as

V2 = V1 + (1/2) z2² = (1/2) z1² + (1/2) z2² + (1/2) θ̃^T Γ^{−1} θ̃    (20.50)

Our task is to make V̇2 ≤ 0. We have

V̇2 = −c1 z1² + z2 [u + z1 − (∂α1/∂x1) x2 + θ̂^T (φ2 − (∂α1/∂x1) φ1) − (∂α1/∂θ̂) θ̂̇ − (∂α1/∂xr) ẋr − ẍr] + θ̃^T [τ1 + (φ2 − (∂α1/∂x1) φ1) z2 − Γ^{−1} θ̂̇]    (20.51)

where the bracketed θ̃ term motivates the second tuning function τ2, selected as

τ2 = τ1 + (φ2 − (∂α1/∂x1) φ1) z2    (20.52)

Then, we can eliminate the θ̃ term from (20.51) by designing the update law as

θ̂̇ = Γ τ2    (20.53)

Substituting (20.53) into (20.51) yields

V̇2 = −c1 z1² + z2 [u + z1 − (∂α1/∂x1) x2 + θ̂^T (φ2 − (∂α1/∂x1) φ1) − (∂α1/∂θ̂) Γ τ2 − (∂α1/∂xr) ẋr − ẍr]    (20.54)

Finally, the actual control input is selected to remove the residual term and make V̇2 ≤ 0:

u = −z1 − c2 z2 + (∂α1/∂x1) x2 − θ̂^T (φ2 − (∂α1/∂x1) φ1) + (∂α1/∂θ̂) Γ τ2 + (∂α1/∂xr) ẋr + ẍr    (20.55)
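The two-step tuning-function design above can be exercised in simulation. The sketch below uses a hypothetical scalar instance of the second-order system — regressors φ1 = x1², φ2 = x1 x2, true parameter θ = 1 (unknown to the controller), reference xr = sin t, and the gains c1, c2, Γ are arbitrary tuning choices:

```python
import numpy as np

theta = 1.0                       # true parameter, unknown to the controller
c1 = c2 = 2.0                     # design constants
Gamma = 2.0                       # adaptation gain
x1, x2, th = 0.5, 0.0, 0.0        # plant states and estimate theta_hat
dt, steps = 1e-3, 40000           # 40 s of simulated time

t = 0.0
for _ in range(steps):
    xr, xrd, xrdd = np.sin(t), np.cos(t), -np.sin(t)
    phi1, phi2 = x1**2, x1 * x2
    z1 = x1 - xr
    a1 = -c1 * z1 - phi1 * th                      # stabilizing function alpha_1
    z2 = x2 - a1 - xrd
    da1_dx1 = -c1 - 2 * x1 * th                    # partial derivatives of alpha_1
    da1_dth = -phi1
    da1_dxr = c1
    tau1 = phi1 * z1                               # first tuning function
    tau2 = tau1 + (phi2 - da1_dx1 * phi1) * z2     # second tuning function
    thd = Gamma * tau2                             # update law
    u = (-z1 - c2 * z2 + da1_dx1 * x2
         - th * (phi2 - da1_dx1 * phi1)
         + da1_dth * Gamma * tau2
         + da1_dxr * xrd + xrdd)                   # final control law
    # Plant, forward Euler:
    x1 += dt * (x2 + phi1 * theta)
    x2 += dt * (u + phi2 * theta)
    th += dt * thd
    t += dt

print("tracking error:", x1 - np.sin(t), " theta_hat:", th)
```

A single estimate θ̂ serves both steps, which is the point of the tuning functions; the tracking error z1 shrinks toward zero while all signals remain bounded, consistent with V̇2 = −c1 z1² − c2 z2².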

where c2 is a positive constant. The resulting derivative of V2 is

V̇2 = −c1 z1² − c2 z2²    (20.56)

This Lyapunov function provides the proof of uniform stability and the proof of asymptotic tracking, x1(t) − xr(t) → 0. By using tuning functions, only one update law is used to estimate the unknown parameter θ; the overparametrization problem is overcome, the dynamic order of the controller is reduced to its minimum, and the number of parameter estimates is equal to the number of unknown parameters.

20.5 State Feedback Control

The adaptive backstepping design with tuning functions is now generalized to a class of nonlinear systems in the following parametric strict-feedback form:

ẋ1 = x2 + φ1^T(x1) θ + ψ1(x1)
ẋ2 = x3 + φ2^T(x1, x2) θ + ψ2(x1, x2)
⋮
ẋ_{n−1} = x_n + φ_{n−1}^T(x1, …, x_{n−1}) θ + ψ_{n−1}(x1, …, x_{n−1})
ẋ_n = b u + φ_n^T(x) θ + ψ_n(x)    (20.57)

where x = [x1, …, xn]^T ∈ R^n, the vector θ ∈ R^r is constant and unknown, the high-frequency gain b is an unknown constant, and ψi ∈ R, φi ∈ R^r, i = 1, …, n, are known nonlinear functions. The control objective is to force the output x1 to asymptotically track the reference signal xr with the following assumptions.

Assumption 20.1: The parameters θ and b are unknown and the sign of b is known.

Assumption 20.2: The reference signal xr and its n-order derivatives are piecewise continuous and bounded.

The controller designed in this section achieves the goals of stabilization and tracking; the proof of these properties is a direct consequence of the recursive procedure, because a Lyapunov function is constructed for the entire system including the parameter estimates. For system (20.57), the number of design steps required is equal to n. At each step, an error variable zi, a stabilizing function αi, and a tuning function τi are generated; finally, the control u and the parameter update laws for θ̂ and p̂ are developed, where θ̂ and p̂ are estimates of θ and 1/b, respectively. The scheme is now concisely summarized in Table 20.1.

Table 20.1  Adaptive Backstepping Control

Change of coordinates:
	z1 = x1 − xr	(20.58)
	zi = xi − αi−1 − xr^(i−1),  i = 2, …, n	(20.59)

Adaptive control laws:
	α1 = −c1z1 − θ̂ᵀφ1 − ψ1	(20.60)
	α2 = −z1 − c2z2 − ψ2 + (∂α1/∂x1)(x2 + ψ1) − θ̂ᵀ(φ2 − (∂α1/∂x1)φ1) + (∂α1/∂θ̂)Γτ2 + (∂α1/∂xr)ẋr	(20.61)
	αi = −zi−1 − cizi − ψi + Σ_{j=1}^{i−1}(∂αi−1/∂xj)(xj+1 + ψj) − θ̂ᵀ(φi − Σ_{j=1}^{i−1}(∂αi−1/∂xj)φj) + (∂αi−1/∂θ̂)Γτi + Σ_{j=2}^{i−1} zj(∂αj−1/∂θ̂)Γ(φi − Σ_{j=1}^{i−1}(∂αi−1/∂xj)φj) + Σ_{j=1}^{i−1}(∂αi−1/∂xr^(j−1))xr^(j)	(20.62)
	u = ρ̂(αn + xr^(n))	(20.63)

Tuning functions:
	τ1 = φ1z1	(20.64)
	τ2 = τ1 + (φ2 − (∂α1/∂x1)φ1)z2	(20.65)
	τi = τi−1 + (φi − Σ_{j=1}^{i−1}(∂αi−1/∂xj)φj)zi	(20.66)

Parameter update laws:
	θ̂̇ = Γτn	(20.67)
	ρ̂̇ = −γ sign(b)(αn + xr^(n))zn	(20.68)

We choose the Lyapunov function

	Vn = (1/2)Σ_{i=1}^{n} zi² + (1/2)θ̃ᵀΓ⁻¹θ̃ + (|b|/2γ)ρ̃².	(20.69)

Then its derivative is

	V̇n = −Σ_{i=1}^{n} cizi² + (θ̃ᵀ + Σ_{j=2}^{n} zj(∂αj−1/∂θ̂)Γ)(τn − Γ⁻¹θ̂̇) − (|b|/γ)ρ̃(ρ̂̇ + γ sign(b)(αn + xr^(n))zn) = −Σ_{i=1}^{n} cizi² ≤ 0	(20.70)

where the parameter update laws (20.67) and (20.68) have been applied. Since Vn is nonincreasing, z1, …, zn, θ̂, and ρ̂ are bounded, and from LaSalle's theorem, zi → 0 for i = 1, …, n. This further implies that lim_{t→∞}(x1 − xr) = 0.

Since x1 = z1 + xr, x1 is bounded from the boundedness of z1 and xr. The boundedness of x2 follows from the boundedness of xr and α1 in (20.60) and the fact that x2 = z2 + α1 + ẋr. Similarly, the boundedness of xi (i = 3, …, n) can be ensured from the boundedness of xr and αi in (20.61) and (20.62) and the fact that xi = zi + αi−1 + xr^(i−1). Combining this with (20.63), we conclude that the control u(t) is also bounded. Since Vn is nonincreasing, we have

	‖z1‖₂² = ∫₀^∞ |z1(τ)|² dτ ≤ (1/c1)(Vn(0) − Vn(∞)) ≤ (1/c1)Vn(0).	(20.71)

Therefore, boundedness of all signals and asymptotic tracking are ensured, as formally stated in the following theorem.

Theorem 20.1: Consider the closed-loop adaptive system consisting of the plant (20.57) under Assumptions 20.1 and 20.2, the adaptive controller (20.63), the virtual control laws (20.60) through (20.62), and the parameter update laws (20.67) and (20.68). The following statements hold:
• The resulting closed-loop system is globally stable.
• Asymptotic tracking is achieved, i.e., lim_{t→∞}[x1(t) − xr(t)] = 0.
• The transient tracking error performance is given by

	‖x1(t) − xr(t)‖₂ ≤ (1/√c1)√Vn(0).	(20.72)

Remark 20.1: The transient tracking error performance ‖x1(t) − xr(t)‖₂ depends on the initial states xi(0), the initial estimation errors θ̃(0) and ρ̃(0), and the explicit design parameters. It is an explicit function of the design parameters and is thus computable. We can decrease the effects of the initial estimation errors on the transient performance by increasing the adaptation gains Γ and γ and the gain c1.

20.6  Backstepping Control of Unstable Oil Wells

This section illustrates the potential of backstepping control applied to the stabilization of unstable flow in oil wells. The stabilization is related to the purpose of attenuating an oscillation phenomenon, called severe slugging, which exists in pipelines carrying multiphase flow. A simple empirical model is developed that describes the qualitative behavior of the downhole pressure during severe riser slugging. Two nonlinear controllers are designed by the integrator backstepping approach. If the parameters are unknown, an adaptive controller is designed by the adaptive backstepping approach. The proposed backstepping controller is shown in simulations to perform better than proportional-integral (PI) and proportional-derivative (PD) controllers for low pressure set points, and stabilization for open-loop unstable pressure set points is demonstrated. Operation at a low pressure set point is desirable since it corresponds to a high production flow rate.

20.6.1  Unstable Multiphase Flow

Multiphase-flow instabilities are present in all phases of the lifetime of a field. In particular, the likelihood of multiphase-flow instabilities increases when entering oil production; unstable multiphase flow from wells, or severe slugging, is an increasing problem. Unstable flow causes reduced production and oil recovery, as the well must be choked down for the downstream processing equipment on the platforms to be able to handle the resulting variations in liquid and gas flow rates. The diagram of the riser slug rig is shown in Figure 20.1.

Figure 20.1  Schematic diagram of riser slug rig (gas and oil/water feeds, riser, control choke, separator, pressure transducer PT).

Conventionally, this is done by applying PI control to a measured downhole pressure to stabilize it at a specified set point. However, PI control is not robust and requires frequent retuning, or it does not achieve proper stabilization at all. There is significant potential to improve the performance of the control methods; consequently, improved methods for stabilization of slugging wells have significant potential for increased production and recovery. Research on handling severe slugging in unstable wells has received much attention in the literature and in the industry, such as [11–22]. Active control of the production choke at the well head can be used to stabilize or reduce these instabilities, thus stabilizing the flow.

The schematic of the severe slugging cyclic behavior is shown in Figure 20.2, which includes four phases: the slug formation phase, the slug production phase, the blowout phase, and the liquid fallback phase.

Figure 20.2  Schematics of the severe slug cycle in flowline riser systems (slug formation, slug production, blowout, and liquid fallback).

20.6.2  Dynamic Model

In order to understand the underlying instabilities and to predict the controllability of slugging, a simple model is needed that captures the fundamental dynamics of the system, which can be used to develop a model-based stabilizing control law to counteract the destabilizing mechanisms of slugging. The oscillating

behavior of the downhole pressure of a slugging well can be characterized as a stable limit cycle, which exhibits qualitatively the same behavior as the slightly modified van der Pol equation

	ṗ = w	(20.73)
	ẇ = a1(β − p) + a2(ζ − w²)w	(20.74)
	q̇ = −(1/τ)q + (1/τ)δ	(20.75)

where the states p and w are the downhole pressure in the riser and its time derivative, respectively, and q is related to the effect of the differential pressure over the production choke. δ represents the control input and is a strictly decreasing function of the production choke opening. The coefficients in (20.73) through (20.75) can be explained as follows:
• a1: frequency or stiffness of the system
• τ: a time lag due to the transportation delay
• β: steady-state pressure
• ζ: local "degree of stability/instability" and amplitude of the oscillation

Assuming constant reservoir influx, β can be given as

	β(q) = b0 + b1q	(20.76)

where b0 and b1 are positive constants. Assuming constant flow rates of liquid and gas from the reservoir, ζ can be given as

	ζ(q) = c0 − c1q	(20.77)

where c0/c1 denotes the bifurcation point and c0, c1 are positive constants.

20.6.2.1  Simplified Model of Riser Slugging
Based on (20.76) and (20.77), the system dynamics (20.73) through (20.75) can be assembled into

	ṗ = w	(20.78)
	ẇ = −a1p + h(w) + g(w)q + a1b0	(20.79)
	q̇ = −(1/τ)q + (1/τ)δ	(20.80)

where the functions h and g are defined as

	h(w) = a2c0w − a2w³ = h0w − h1w³	(20.81)
	g(w) = a1b1 − a2c1w = g0 − g1w.	(20.82)
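The bifurcation encoded in ζ(q) is easy to see by integrating (20.78) through (20.80) open loop with a constant choke input δ. All coefficient values below are illustrative assumptions (a1 = a2 = b1 = c0 = c1 = τ = 1 and b0 = 10), chosen only so that the bifurcation point sits at q = c0/c1 = 1:

```python
def simulate(delta, T=200.0, dt=1e-3):
    """Open-loop response of (20.78)-(20.80) with constant input delta;
    returns the steady oscillation amplitude (max - min) of p."""
    a1, a2, b0, b1, c0, c1, tau = 1.0, 1.0, 10.0, 1.0, 1.0, 1.0, 1.0  # assumed
    h0, h1, g0, g1 = a2 * c0, a2, a1 * b1, a2 * c1   # per (20.81)-(20.82)
    p, w, q = b0, 0.1, delta
    n = int(T / dt)
    tail = []
    for i in range(n):
        dw = -a1 * p + (h0 * w - h1 * w ** 3) + (g0 - g1 * w) * q + a1 * b0
        p, w, q = p + dt * w, w + dt * dw, q + dt * (delta - q) / tau
        if i >= int(0.8 * n):       # record only the late-time behavior
            tail.append(p)
    return max(tail) - min(tail)
```

For δ = 2, q settles above the bifurcation point (ζ < 0) and the pressure converges to a stable equilibrium; for δ = 0.5 (ζ > 0) it settles into a sustained limit cycle, mimicking slugging.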

The system (20.78) through (20.80) can capture the qualitative properties of the downhole pressure during riser slugging:
• Bifurcation: The model exhibits the characteristic bifurcation that occurs at a certain choke opening c0/c1, i.e., the steady-state response of the downhole pressure changes from a stable equilibrium point when the choke opening is smaller than c0/c1 to a stable limit cycle when the choke opening is larger than c0/c1 (see Figure 20.3).
• Decreasing control gain: A characteristic property of riser slugging is that the static gain decreases with choke opening.
• Time lag: The transportation delay between a change in choke opening and the resulting change in downhole pressure p is modeled by a simple first-order lag.

The positive constants ai, bi, and ci are empirical parameters that are adjusted to produce the right behavior of the downhole pressure p.

Figure 20.3  Bifurcation plot (downhole pressure p versus control choke opening, showing the stable equilibrium point, the bifurcation point, the unstable equilibrium point, and the measured maximum/minimum pressures).

20.6.3  Backstepping Control Design

In this section, we assume that all the parameters in the dynamic model (20.78) through (20.80) are known. Our objective is to design a control law for δ(t) which stabilizes the downhole pressure p(t) at the desired set point pref > b0, and we design stabilizing controllers using integrator backstepping.

20.6.3.1  Control Scheme I
We iteratively look for a change of coordinates in the form

	z1 = p − pref	(20.83)
	z2 = w − αw	(20.84)
	z3 = q − αq	(20.85)

where the functions αw and αq are virtual controls to be determined:

	αw = −C1z1	(20.86)
	αq = (1/g(w))(−C2z2 − z1 + a1(z1 + pref − b0) − h(w) + α̇w)	(20.87)
	δ = −τC3z3 − τg(w)z2 + q + τα̇q	(20.88)

where C1, C2, C3 are positive constants. The time derivative of the control Lyapunov function (CLF) U3 = Σ_{i=1}^{3}(1/2)zi² becomes

	U̇3 ≤ −C1z1² − C2z2² − C3z3².	(20.89)

The LaSalle–Yoshizawa theorem guarantees that the equilibrium (z1, z2, z3) = 0 is globally exponentially stable, and in particular p(t) is regulated to the set point pref. Since U3 is nonincreasing, we have

	‖z1‖₂² = ∫₀^∞ |z1(τ)|² dτ ≤ (1/C1)(U3(0) − U3(∞)) ≤ (1/C1)U3(0).	(20.90)

Lemma 20.1: Consider the riser slugging system (20.78) through (20.80). With the application of the controller (20.88) and the virtual control laws (20.86) and (20.87), the following statements hold:
• The resulting closed-loop system is globally stable.
• Asymptotic tracking is achieved, i.e., lim_{t→∞}[p(t) − pref] = 0.
• The transient displacement tracking error performance is given by

	‖p(t) − pref‖₂ ≤ (1/√C1)√U3(0).	(20.91)

Remark 20.2: We refer to this choice of αq as an exact cancellation design, because we simply cancel the existing dynamics and replace them with the desirable linear feedback terms −C1z1 and −C2z2. Note that this design is not necessarily the best choice of control law, because stabilizing nonlinearities may be canceled, losing robustness to modeling errors, potentially wasting control effort, and making the control law overly complicated.

20.6.3.2  Control Scheme II
By inspection of the second step of backstepping in the previous section, we recognize that the terms −h1w³ and −g1wq are expected to be stabilizing, since physically q > 0. Hence, canceling these terms is not necessary at this point in the design; it is desirable to obtain a simpler control law by avoiding cancellation of useful nonlinearities. We select

	αw = 0	(20.92)
	αq = −((C2 + h0)/g0)z2 + (a1/g0)(pref − b0)	(20.93)
	δ = −τC3z3 − τg0z2 + q + τα̇q.	(20.94)
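Control Scheme II is straightforward to simulate, since α̇q only requires ẇ, which is computable from the (assumed known) model. The parameter values below are illustrative assumptions, not taken from the text; a1 = 1 is chosen so that the z1–z2 cross terms cancel exactly in the CLF of this sketch:

```python
# Assumed model parameters and design gains for a closed-loop demo of Scheme II.
a1, b0, tau = 1.0, 10.0, 1.0
h0, h1, g0, g1 = 1.0, 1.0, 1.0, 0.1
C2, C3 = 1.0, 1.0
pref = 11.0                                   # desired set point, pref > b0

p, w, q = 10.5, 0.5, 1.5                      # perturbed initial condition
dt = 1e-3
for _ in range(int(60.0 / dt)):
    z1, z2 = p - pref, w                      # (20.83), (20.84) with alpha_w = 0
    alpha_q = -(C2 + h0) / g0 * z2 + a1 / g0 * (pref - b0)   # cf. (20.93)
    z3 = q - alpha_q                          # (20.85)
    dw = -a1 * p + h0 * w - h1 * w ** 3 + (g0 - g1 * w) * q + a1 * b0
    dalpha_q = -(C2 + h0) / g0 * dw           # analytic time derivative of alpha_q
    delta = -tau * C3 * z3 - tau * g0 * z2 + q + tau * dalpha_q   # cf. (20.94)
    p, w, q = p + dt * w, w + dt * dw, q + dt * (delta - q) / tau
```

After the transient, p converges to pref and w to zero; note that the law never cancels the useful −h1w³ and −g1wq terms, which only help the convergence.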

Consider now the CLF

	U3 = Σ_{i=1}^{3}(1/2)zi².	(20.95)

Its time derivative is

	U̇3 = −(C2 + g1q)z2² − h1z2⁴ − C3z3² ≤ −C2z2² − C3z3².	(20.96)

The stabilizing terms −h1z2⁴ and −g1qz2² increase the negativity of U̇3. LaSalle's invariance principle now implies that the origin is asymptotically stable. The following result formalizes this.

Lemma 20.2: Consider the riser slugging system (20.78) through (20.80). With the application of the controller (20.94) and the virtual control laws (20.92) and (20.93), the resulting closed-loop system is stable, and asymptotic tracking is achieved, i.e., lim_{t→∞}[p(t) − pref] = 0.

20.6.3.3  Adaptive Backstepping Control
Consider the case where the parameters a1, b0, h0, h1, g0, g1 are unknown constants. Define the parameter vectors

	θ1 = [a1, a1b0, h0]ᵀ,  θ2 = [a1, a1b0, h0, h1, g1]ᵀ	(20.97)

the inverse gain

	ϑ = 1/g0	(20.98)

and the regressors

	φ1 = [−p, 1, w]ᵀ,  φ2 = [−p, 1, w, −w³, −wq]ᵀ.	(20.99)

Using the change of coordinates in (20.83) through (20.85) and the adaptive backstepping technique, the final control δ and the virtual controls αw and αq are determined as

	αw = 0	(20.100)
	αq = ϑ̂(−C2z2 − θ̂1ᵀφ1 − z1)	(20.101)
	δ = τ(−C3z3 + (∂αq/∂p)w + (∂αq/∂w)θ̂2ᵀφ2 + (∂αq/∂θ̂1)θ̂̇1 + (∂αq/∂ϑ̂)ϑ̂̇) + q	(20.102)

and the parameter update laws are given by

	θ̂̇1 = Γ1φ1z2	(20.103)
	ϑ̂̇ = −γz2(−C2z2 − θ̂1ᵀφ1 − z1)	(20.104)
	θ̂̇2 = −Γ2z3((∂αq/∂w)φ2 − e5z2)	(20.105)

where e5 = [0, 0, 0, 0, 1]ᵀ, C2, C3, γ are positive constants, and Γ1, Γ2 are positive definite matrices.

105) T p=w (20. LLC .125.6. With the proposed backstepping control Scheme II. a1 = 0. τ τ ∑ 1 z2 + i i =1 2 3 ∑ 3 i =1 1 θiTΓ −1 θi + g0 g0 = 0.102). Figure 20.1  Backstepping Controller We test our proposed backstepping controller (20.106) (20. and e5 = [0. Ti are positive constants. b0 = 3.109) which is negative semidefinite and proves that the system is stable.3 With the application of the adaptive control law (20. h1 = 50. the following values are selected as “true” parameters for the system: h0 = 1. T  ∂α q 4 2 φ2 − Γ 2 z 3e5z 2   − h1w − g 1w q  ∂w  (20. Figure 20.0.498. Figure 20.100) through (20.0]T. virtual control laws (20.6. g1 = 5.025. Ti (20. Ki. the proposed backstepping controllers are tested for stabilization of unstable flow in oil well. The initials are set as p(0) = 3.2 and C3 = 5.80) can be rewritten as Note that q > 0.4  Simulation Results In this section.78) through (20. γ are positive constants and Γ1.110) (20.51.Backstepping Control 20 -17 where C2. The choke function is δ(t) = e−10u(t). and the parameter update laws (20.6.6 shows that the system loses stability at the pressure © 2011 by Taylor and Francis Group. The design objective is to stabilize p at the desired set point pref = 3.1. 20.107) (20.103) through (20. compared with traditional PI and PD controllers.2  PI Control The conventional way to stabilize riser slugging is by applying a simple control law uPI uPI = uI − K p ( p − pref ) uI = − Ki ( p − pref ) . respectively.103) through (20.111) where Kp. we take the following set of design parameters: C2 = 0.4. The system (20.101).51.5.5 illustrates PI controller applied for stabilization in the region at reference pressure pref = 4. For simulation studies. the resulting closed-loop system is stable and the asymptotic tracking is achieved as limt→∞[p(t) − pref ] = 0.51. and τ = 0. where u is the choke opening. Lemma 20. 20. 
Note that q > 0, and the system (20.78) through (20.80) can be rewritten as

	ṗ = w	(20.106)
	ẇ = θ1ᵀφ1 − h1w³ + g0q − g1wq	(20.107)
	q̇ = −(1/τ)q + (1/τ)δ.	(20.108)

Consider the CLF

	U3 = Σ_{i=1}^{3}(1/2)zi² + Σ_{i=1}^{2}(1/2)θ̃iᵀΓi⁻¹θ̃i + (g0/2γ)ϑ̃².

Its time derivative follows as

	U̇3 = −C2z2² − C3z3² − h1w⁴ − g1w²q − θ̃1ᵀΓ1⁻¹(θ̂̇1 − Γ1φ1z2) − (g0/γ)ϑ̃(ϑ̂̇ + γz2(−C2z2 − θ̂1ᵀφ1 − z1)) − θ̃2ᵀΓ2⁻¹(θ̂̇2 + Γ2z3((∂αq/∂w)φ2 − e5z2)) ≤ −C2z2² − C3z3²	(20.109)

which, with the update laws (20.103) through (20.105) applied, is negative semidefinite and proves that the system is stable.

Lemma 20.3: With the application of the adaptive control law (20.102), the virtual control laws (20.100) and (20.101), and the parameter update laws (20.103) through (20.105), the resulting closed-loop system is stable, and asymptotic tracking is achieved, i.e., lim_{t→∞}[p(t) − pref] = 0.

20.6.4  Simulation Results

In this section, the proposed backstepping controllers are tested for stabilization of unstable flow in an oil well and compared with traditional PI and PD controllers. For the simulation studies, the following values are selected as "true" parameters of the system: a1 = 0.125, b0 = 3.0, h0 = 1.0, h1 = 50, g0 = 0.498, g1 = 5.0, and τ = 0.025. The choke function is δ(t) = e^(−10u(t)), where u is the choke opening.

20.6.4.1  Backstepping Controller
We test the proposed backstepping control Scheme II, (20.92) through (20.94), for stabilization of unstable flow in the oil well. The design objective is to stabilize p at the desired set point pref = 3.51, and we take the design parameters C2 = 0.2 and C3 = 5. The initial states are set as p(0) = 3.0 and w(0) = q(0) = 0, and the runs are initialized with choke openings u(0) = 10% and u(0) = 90%. Figure 20.4 illustrates the backstepping controller applied for stabilization in the unstable region at the reference pressure pref = 3.51.

20.6.4.2  PI Control
The conventional way to stabilize riser slugging is by applying a simple control law uPI:

	uPI = uI − Kp(p − pref)	(20.110)
	u̇I = −Ki(p − pref)	(20.111)

where Kp and Ki are positive constants. Figure 20.5 illustrates the PI controller applied for stabilization in the stable region at the reference pressure pref = 4.498, with Kp and Ki chosen to satisfy the stability conditions according to the Hurwitz criterion. Figure 20.6 shows that the system loses stability at the pressure

Figure 20.4  Simulations of stabilization at pref = 3.51 using backstepping control Scheme II (downhole pressure p and choke control input u for u(0) = 10% and u(0) = 90%).

Figure 20.5  Simulations of PI stabilization at pref = 4.498 (downhole pressure p and choke control input u).

Figure 20.6  Simulations of an attempt to stabilize at pref = 4.45 using PI control (downhole pressure p and choke control input u).

20.6.4.3  PD Control
Another way to stabilize riser slugging is by applying a simple control law uPD:

	uPD = −Kp(p − pref) + uD	(20.112)
	uD = −Kd d(p − pref)/dt = −Kdw	(20.113)

where Kp and Kd are positive constants. Figure 20.7 illustrates the PD controller applied for stabilization at the reference pressure pref = 4.51, with the design parameters chosen as Kp = 2 and Kd = 2. Figure 20.8 shows that the system loses stability at the pressure pref = 3.60. When the pressure set point is small, feasible Kp and Kd that satisfy the stability conditions according to the Hurwitz criterion give such an aggressive actuation that the choke saturates repeatedly and stabilization is not achieved.

20.6.4.4  Adaptive Control
If some parameters are unknown, the adaptive backstepping control (20.100) through (20.102) with the parameter update laws (20.103) through (20.105) is applied to stabilize the pressure p at the desired set point pref = 3.8, which satisfies the requirement pref > b0. The initial states are set as p(0) = w(0) = q(0) = 0, and the initial estimates include ĥ0(0) = 1, ĥ1(0) = 0, and ĝ1(0) = 1. The design parameters are chosen as C2 = 0.2, C3 = 5, γ = 0.01, Γ1 = diag{0.01, 0.01, 0.01}, and Γ2 = diag{0.01, 0.01, 0.01, 10, 10}. Figure 20.9 illustrates the adaptive backstepping controller applied for stabilization at pref = 3.8.

Figure 20.7  Simulations of PD stabilization at pref = 4.51 (downhole pressure p and choke control input u).

Figure 20.8  Simulations of an attempt to stabilize at pref = 3.60 using PD control (downhole pressure p and choke control input u).
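The benchmark laws (20.110) through (20.113) discretize directly, with a forward-Euler state for the integrator; the class names, gains, and sample time below are illustrative assumptions:

```python
class PIController:
    """Discrete PI law of (20.110)-(20.111): u_PI = u_I - Kp*(p - pref)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_i = 0.0
    def update(self, p, p_ref):
        e = p - p_ref
        self.u_i += -self.ki * e * self.dt     # u_I' = -Ki*(p - pref), Euler step
        return self.u_i - self.kp * e

class PDController:
    """Discrete PD law of (20.112)-(20.113), using the measured rate w."""
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
    def update(self, p, p_ref, w):
        return -self.kp * (p - p_ref) - self.kd * w   # u_D = -Kd * d(p - pref)/dt
```

The PD law avoids integrator windup entirely but leaves a steady-state offset, which is why the text pairs it with the pressure rate w rather than a numerically differentiated measurement.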

Figure 20.9  Simulations of stabilization at pref = 3.8 using adaptive control (downhole pressure p and choke control input u).

References

1. M. Krstic, I. Kanellakopoulos, and P. Kokotovic, Nonlinear and Adaptive Control Design, Wiley, New York, 1995.
2. H. K. Khalil, Nonlinear Systems, 3rd edn., Prentice-Hall, Englewood Cliffs, NJ, 2002.
3. R. Lozano and B. Brogliato, Adaptive control of a simple nonlinear system without a priori information on the plant parameters, IEEE Transactions on Automatic Control, 37(1):30–37, 1992.
4. C. Wen, Decentralized adaptive regulation, IEEE Transactions on Automatic Control, 39:2163–2166, 1994.
5. Y. Zhang, C. Wen, and Y. C. Soh, Robustness of an adaptive backstepping controller without modification, Systems & Control Letters, 36:87–100, 1999.
6. Y. Zhang, C. Wen, and Y. C. Soh, Adaptive backstepping control design for systems with unknown high-frequency gain, IEEE Transactions on Automatic Control, 45:2350–2354, 2000.
7. J. Zhou, C. Wen, and Y. Zhang, Adaptive backstepping control of a class of uncertain nonlinear systems with unknown backlash-like hysteresis, IEEE Transactions on Automatic Control, 49:1751–1757, 2004.
8. J. Zhou, C. Zhang, and C. Wen, Adaptive output control of a class of time-varying uncertain nonlinear systems, Journal of Nonlinear Dynamics and System Theory, 5:285–298, 2005.
9. J. Zhou, C. Wen, and Y. Zhang, Robust adaptive output control of uncertain nonlinear plants with unknown backlash nonlinearity, IEEE Transactions on Automatic Control, 52:503–509, 2007.
10. C. Wen and J. Zhou, Decentralized adaptive stabilization in the presence of unknown backlash-like hysteresis, Automatica, 43:426–440, 2007.
11. M. Dalsmo, E. Halvorsen, and O. Slupphaug, Active feedback control of unstable wells at the Brage field, SPE Annual Technical Conference and Exhibition, SPE 77650, San Antonio, TX, September 29–October 2, 2002.
12. T. Drengstig and S. Magndal, Slugcontrol of production pipeline, In Proceedings of SIMS2001, Porsgrunn, Norway, pp. 361–366, 2001.

13. V. Henriot, A. Courbot, E. Heintzé, and L. Moyeux, Simulation of process to control severe slugging: Application to Dunbar pipeline, SPE Annual Technical Conference and Exhibition, SPE 56461, Houston, TX, 1999.
14. P. F. Pickering, G. F. Hewitt, M. J. Watson, and C. P. Hale, The prediction of flows in production risers—Truth & myth? In IIR Conference, 2001.
15. J. P. Kinvig and P. Molyneux, Slugging control, US patent US6716268, 2004.
16. P. Molyneux, A. Tait, and J. Kinvig, Characterisation and active control of slugging in a vertical riser, In Proceedings from BHR Group Conference: Multiphase Technology, Banff, Canada, 2000.
17. E. Storkaas, Stabilizing control and controllability: Control solutions to avoid slug flow in pipeline-riser systems, PhD thesis, Norwegian University of Science and Technology, Trondheim, Norway, 2005.
18. J.-M. Godhavn, M. P. Fard, and P. H. Fuchs, New slug control strategies, tuning rules and experimental results, Journal of Process Control, 15:454–463, 2005.
19. H. B. Siahaan, O. M. Aamo, and B. A. Foss, Suppressing riser-based slugging in multiphase flow by state feedback, In Proceedings of the 44th IEEE Conference on Decision and Control and the European Control Conference, Seville, Spain, pp. 6251–6256, 2005.
20. G.-O. Kaasa, V. Alstad, J. Zhou, and O. M. Aamo, Nonlinear adaptive observer control for a riser slugging system in unstable wells, In Proceedings of the American Control Conference, Seattle, WA, pp. 2951–2956, 2008.
21. G.-O. Kaasa, V. Alstad, J. Zhou, and O. M. Aamo, Attenuation of slugging in unstable oil wells by nonlinear control, In 17th International Federation of Automatic Control World Congress, Seoul, Korea, 2008.
22. J. Zhou, G.-O. Kaasa, and O. M. Aamo, Nonlinear control for a riser-slugging system in unstable wells, Journal of Modeling, Identification and Control, 28:69–79, 2007.
light is guided toward a target by focusing lenses and it is diverted back to detectors by the reflector.... .5 Temperature Sensors........... Wilamowski Auburn University 21.................1 Introduction....1  Introduction A sensor is a device that provides an electrical output responding to a stimulation of a nonelectrical signal............................................................................. 21-17 Induction Coil Sensors  •  Hall Effect Sensors  •  Fluxgate Sensors  •  Magnetoresistive Sensors Bogdan M...................... 21-7 21............ various types of sensors for measuring different physical parameters are described......................................... and light guidance devices.

light is guided toward a target by focusing lenses and is diverted back to the detectors by a reflector. The time the light takes to travel to the target and back to the receiver is proportional to the distance.

Laser triangulation sensors are specially designed optical sensors for more precise position measurements over short and long ranges. A laser diode projects a spot of light onto the target, and its reflection is focused via an optical lens on a light-sensitive array. If the target changes its position, the position of the reflected spot of light on the detector changes as well. The most critical element in this arrangement is the receiver/detector, commonly a charge-coupled device (CCD). The CCD registers the bright spot on its pixel array and is read out by transferring the information across the array; the intensity distribution of the imaged spot is viewed, and image processing is incorporated for the linear triangulation measurement. These sensors can be manufactured to meet stringent accuracy and resolution requirements, for example, accuracy better than 0.2%.

Figure 21.1  Laser distance sensor using triangulation principle (laser, optics, CCD, target).

21.2.2  Sensors for Medium Distances

Ultrasonic distance sensors measure the traveling time of an ultrasonic pulse and are used for distances ranging from several centimeters up to a few meters. To generate the ultrasound, piezoelectric materials are used. When the waves are incident on the target, part of their energy is reflected in a diffuse manner. The distance d to the target can be calculated from the speed v of the ultrasonic waves in the medium:

	d = vt/2	(21.1)

where t is the time the ultrasonic waves take to travel from the source to the target and back to the receiver.

Figure 21.2  Schematic structure of ultrasonic distance sensor (transmitter, target at distance d).

Another displacement sensor is the linear variable differential transformer (LVDT), which is used to measure displacements ranging from fractions of a millimeter to several centimeters. The transformer comprises three coils: a primary center coil and two outer secondary coils. The transfer of current between the primary and the secondary coils depends on the position of a magnetic core, as shown in Figure 21.3. At the center of the position measurement stroke, the two secondary voltages of the displacement transducer are equal, but the output from the sensor is zero since they are connected oppositely. As the core moves away from the center, the result is an increase in one of the secondary voltages and a decrease in the other, which produces an output from the measurement sensor.
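The time-of-flight relation (21.1) translates directly into code; the default speed of sound (air at room temperature, in m/s) is an assumption about the operating medium:

```python
def ultrasonic_distance(t_round_trip, v=343.0):
    """Distance to the target from the echo round-trip time, d = v*t/2 (21.1).
    v defaults to the speed of sound in air (m/s), an assumed medium."""
    return v * t_round_trip / 2.0
```

For example, a 10 ms round trip in air corresponds to a target about 1.7 m away, squarely within the stated centimeters-to-meters operating range.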
Laser triangulation sensors are specially designed optical sensors for more precision position measurements over short and long ranges. part of their energy is reflected in a diffuse manner. The transformer comprises three coils: a primary center coil and two outer secondary coils.3. the position of the target is determined by measuring reflection from the target surface. If the target changes its position.2. To generate the ultrasonic.1. They can be manufactured to meet stringent accuracy and resolution requirements. The LVDT is used to measure displacement ranging from fractions of a millimeter to several centimeters. the result is an increase position sensor in one of the secondary coils and a decrease in the other. the position of the reflected spot of light on the detector changes as well. CC Optics Laser Target Figure 21. piezoelectric materials are used. As shown in Figure 21. and its reflection is focused via an optical lens on a light-sensitive array. At the center of the position measurement stroke. which results in an output from the measurement sensor. the output from the sensor is zero. A distance d to the target can be calculated though the speed v of the ultrasonic waves in the media: d= vt 2 Transmitter D d Figure 21.

The capacitance position sensor can operate over a range of a few millimeters. one plate of a capacitor is the base and fixed. There are two fundamental types of capacitive position sensors: spacing and area variations as shown in Figure 21. while the other plate is movable [B97]. C= Aε0εr d (21.2) can be transferred to linear sensor using this method.4  Two types of capacitive position sensors: (a) spacing variation and (b) area. the output voltage is calculated by Vout = − 1/sC2 C d Vin = − 1 Vin = −C1Vin 1/sC1 C2 Aε r ε 0 (21.2) where εr is the electric permittivity of the material ε0 is the electric permittivity of the vacuum In capacitive position sensors. In this circuit. d (a) (b) l Figure 21. distance between two plates are being measured. In the first type of sensor (Figure 21. but it is sensitive to temperature and humidity.3) The nonlinear dependence on d (see Equation 21. if the circuit in Figure 21.Sensors Vout 21-3 – + – + Figure 21.4. the nonlinear relationship can be eliminated.5 is used for measurement. However. especially if the dielectric is air.4a). © 2011 by Taylor and Francis Group. The capacitance is inversely proportional to the distance between plates. LLC .3  Structure of LVDT variation. This relationship is nonlinear.

For small and medium distances, the area variation (Figure 21.4b) is preferred. For larger distances, the grating capacitive displacement sensor, which is based on a principle resembling that of the caliper, is used for precision displacement measurement [PST96]. The sensor is fabricated with two overlapping gratings with slightly different pitch, which serve as a capacitive modulator: one grating is fixed while the other is movable. As the plates slide transversely, the capacitance of the area-variation structure changes as a linear function of the sliding length, since it is the overlapping area that induces the modulated capacitance. When the opaque sectors of the moving grating are maximally aligned with the transmitting sectors, the overlapping area is the least (Figure 21.6a). The overlapping area varies periodically as shown in Figure 21.7, and the displacement of the moving grating can be obtained by counting the number of changing periods of the capacitance. For high sensitivity, the grating pitch can be made very small, so that very short movements of the grating will result in a large output signal.

Figure 21.5  Circuit of linear spacing variation capacitive distance sensor.

Figure 21.6  Grating type displacement sensor: (a) minimum capacitance area and (b) maximum capacitance area.

Figure 21.7  Transfer function of grating type displacement sensor.

© 2011 by Taylor and Francis Group, LLC

21.3  Pressure Sensors

Pressure sensors are devices used to measure pressure. They can be used to measure the flow of liquid, the weight or force exerted by one object on another, atmospheric pressure, and anything else involving force [N97]. Many modern pressure sensors are much more sensitive than scales and give an accurate output that can be measured electronically.

21.3.1  Piezoresistive Pressure Sensors

Piezoresistivity is a material property whereby the bulk resistivity is influenced by the mechanical stress applied to the material. A piezoresistive pressure sensor consists of a thin silicon membrane supported by a thick silicon rim, as shown in Figure 21.8a and b. When a pressure difference is applied across the device, the thin diaphragm will bend downward or upward, causing tensile stress on the diaphragm surface at the edges. The diaphragm acts as a mechanical stress amplifier. The applied stress is proportional to the change of resistance, which is precisely measured; the resistance change caused by this stress can be measured by a Wheatstone bridge (Figure 21.8c). The common piezoresistors can be silicon, polysilicon, silicon dioxide, zinc oxide, etc. By placing one piezoresistor parallel to two edges of the diaphragm and the other perpendicular to the other two edges, the resistance changes of the two piezoresistors will always be opposite. When the diaphragm is bent downward, the parallel resistors are under lateral stress and show a decrease in resistance, while the perpendicular ones are under longitudinal stress and show an increase in resistance, indicating traction or compression on the piezoresistors. Figure 21.8c shows the four piezoresistors connected in the Wheatstone bridge configuration.

Figure 21.8  Piezoresistive pressure sensors: (a) cross section, (b) top view, and (c) Wheatstone bridge.
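As a quick numerical check of this bridge arrangement (the resistance, fractional change δ, and bias voltage below are illustrative assumptions, not values from the text), the opposite resistance changes of the parallel and perpendicular piezoresistors add up, and the full-bridge output reduces to δ·Vbias:

```python
def bridge_output(v_bias, r1, r2, r3, r4):
    """Wheatstone bridge output: difference between the two half-bridge taps."""
    return v_bias * (r2 / (r1 + r2) - r3 / (r3 + r4))

# Full bridge: two arms decrease and two increase by the same fraction delta,
# as for the parallel/perpendicular piezoresistors on the diaphragm.
r0, delta, v_bias = 1000.0, 0.01, 5.0
v_out = bridge_output(v_bias, r0 * (1 - delta), r0 * (1 + delta),
                      r0 * (1 - delta), r0 * (1 + delta))
```

With no applied stress (all four arms equal), the bridge is balanced and the output is zero, so only the stress-induced resistance changes are read out.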

21.3.2  Capacitive Pressure Sensors

Capacitive pressure sensors use capacitance-sensitive components to transform pressure into an electrical signal. The structure of a capacitive pressure sensor is shown in Figure 21.9. A diaphragm is suspended between two parallel metallic plates so as to form two capacitances C1 and C2. The capacitances of C1 and C2 will change if the diaphragm is deflected due to a pressure difference between its two sides. The capacitive pressure sensor is more sensitive and less temperature dependent than the piezoresistive sensor. This type of sensor is mostly used to measure small changes of a fairly low static pressure.

Figure 21.9  Schematic of a capacitive pressure sensor.

21.3.3  Piezoelectric Pressure Sensors

A piezoelectric pressure sensor is a device that uses the piezoelectric effect to measure force by converting it to an electrical signal [SC00]. The scheme of a piezoelectric pressure sensor is shown in Figure 21.10. The applied pressure deforms the piezoelectric material, so as to generate charge. The circuit part in Figure 21.10 converts the input current into a voltage; therefore, the pressure can be calculated by measuring the output voltage. Depending on the application requirements, dynamic force can be measured as compression, tensile, or torque force. The electrical signal generated by the piezoelectric material decays rapidly after the application of force, which makes these devices unsuitable for the detection of static force.

Figure 21.10  Scheme of piezoelectric pressure sensor.
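The rapid decay noted above can be illustrated with a simple first-order model: the charge produced by a step force leaks away through the finite resistance of the readout circuit, so the output returns to zero even while the force is still applied. The time constant is an assumed value for illustration only:

```python
import math

def piezo_step_response(t, tau=0.1):
    """Normalized sensor output after a step force applied at t = 0.

    The generated charge decays exponentially with an assumed time
    constant tau, which is why static force cannot be detected.
    """
    return math.exp(-t / tau)

v_initial = piezo_step_response(0.0)  # full signal just after the step
v_later = piezo_step_response(1.0)    # practically zero after 10 time constants
```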

21.4  Accelerometer

An accelerometer is an electromechanical device that measures acceleration forces. These forces may be static or dynamic. Accelerometers have been used in engineering, building and structural monitoring, medical applications, etc. An accelerometer requires a component whose movement lags behind that of the accelerometer's housing, which is coupled to the object under study. This component is usually called the inertial mass; it behaves like a pendulum. No matter how the sensors are designed or what the conversion technique is, the ultimate goal of the measurement is the detection of the mass displacement with respect to the housing. Therefore, any suitable displacement sensor capable of measuring microscopic movements under strong vibrations or linear acceleration can be used as an accelerometer.

21.4.1  Piezoelectric Accelerometer

The piezoelectric accelerometer utilizes the piezoelectric effect of materials to measure dynamic changes in mechanical variables. The structure of a piezoelectric accelerometer is shown in Figure 21.11. When a physical force is exerted on the accelerometer, the seismic mass loads the piezoelectric element according to Newton's second law of motion (F = ma). The force exerted on the piezoelectric material can be observed through the change of the voltage generated by the piezoelectric material; hence, the acceleration can be obtained. Single crystals, like quartz, and ceramic piezoelectric materials, such as barium titanate and lead-zirconate-lead-titanate, can be used for the purposes of the accelerometer. These sensors are suitable from frequencies as low as 2 Hz up to about 5 kHz. They possess high linearity and a wide operating temperature range.

Figure 21.11  Structure of piezoelectric accelerometer.

21.4.2  Piezoresistive Accelerometer

The structure of a piezoresistive accelerometer is the same as that of a piezoelectric accelerometer: the piezoelectric material is substituted by a Wheatstone bridge of resistors incorporating one or more legs that change value when strained. The piezoresistive material's resistance value decreases when it is subjected to a compressive force and increases when a tensile force is applied.

21.4.3  Capacitance Accelerometer

Capacitance accelerometers are usually designed as parallel-plate air-gap capacitors in which the motion is perpendicular to the plates. The sensor of a typical capacitance accelerometer is constructed of three silicon elements bonded together to form a hermetically sealed assembly, shown in Figure 21.12a. Two of the elements are the electrodes of an air-dielectric, parallel-plate capacitor; the middle element is a rigid central mass. A seismic mass is fabricated at the center of the sensor die, responding to acceleration and causing deflection of the diaphragm. Changes in capacitance due to acceleration are sensed by a pair of current detectors that convert the changes into a voltage output. Using the circuit in Figure 21.12b, the relationship between the output voltage and the capacitance change can be described as

    Vout = ((C2 − C1)/Cf) Vin    (21.4)

By combining Equations 21.2 and 21.4, it can be obtained that

    Vout = (A ε0 εr Vin / Cf)(1/d2 − 1/d1)    (21.5)

With Newton's second law,

    (d1 − d2)/2 = a t²/2    (21.6)

where t is the time cost for the change and a is the acceleration. Since

    d1 + d2 = d    (21.7)

where d is the distance between the two fixed capacitor plates, the acceleration can be calculated by combining Equations 21.5 through 21.7 as

    a = [√(4 + (Vout/Vin)² (Cf² d² / A² εr² ε0²)) − 2] / [(Vout/Vin)(Cf /(A εr ε0)) t²]    (21.8)

Figure 21.12  Capacitance accelerometer: (a) structure and (b) circuit.
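Equation 21.8 can be verified numerically by simulating the forward chain of Equations 21.4 through 21.7 and then inverting it; all component values below are illustrative assumptions:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def voltage_ratio(a, t, d, area, eps_r, c_f):
    """Forward model: Vout/Vin when the mass has moved x = a*t**2/2."""
    d1 = d / 2 + a * t**2 / 2
    d2 = d / 2 - a * t**2 / 2
    return area * eps_r * EPS0 / c_f * (1 / d2 - 1 / d1)

def acceleration(ratio, t, d, area, eps_r, c_f):
    """Invert the forward model for a, as in Equation 21.8."""
    k = c_f / (area * eps_r * EPS0)
    return (math.sqrt(4 + ratio**2 * k**2 * d**2) - 2) / (ratio * k * t**2)

a_true = 9.81
r = voltage_ratio(a_true, t=1e-3, d=1e-3, area=1e-4, eps_r=1.0, c_f=1e-12)
a_est = acceleration(r, t=1e-3, d=1e-3, area=1e-4, eps_r=1.0, c_f=1e-12)
```

The round trip recovers the assumed acceleration, confirming that Equation 21.8 is the exact inverse of the forward relations.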

21.5  Temperature Sensors

The temperature of electronic systems needs to be measured at regular intervals, so that actions can be taken to counteract unwanted temperature changes. The temperature readings must be interpreted and processed properly. There is an abundance of applications where temperature must be monitored and controlled. Many devices have been developed to match the widely varying technical and economic requirements, including thermistors, thermocouples, and integrated circuit (IC) temperature sensors.

21.5.1  Thermistor

Thermistors are special solid-state temperature sensors that behave like temperature-sensitive electrical resistors and are fabricated in the form of droplets, bars, cylinders, rectangular flakes, and thick films. Thermistors have high sensitivity, accuracy, and fast response; however, they have a limited temperature range and a nonlinear resistance–temperature relationship. There are basically two types: negative temperature coefficient (NTC) thermistors, used mostly in temperature sensing, and positive temperature coefficient (PTC) thermistors, used mostly in electrical current control.

The resistances of NTC thermistors decrease with increasing temperature. They are composed of metal oxides; the most commonly used oxides are those of manganese, nickel, cobalt, iron, copper, and titanium. The relationship between the resistance and temperature is highly nonlinear, and the most popular equation is the exponential form

    Rt = Rt0 exp[β(1/T − 1/T0)]    (21.9)

where T0 is the calibrating temperature, Rt0 is the resistance at the calibrating temperature, and β is the material's characteristic temperature. The NTC sensitivity α can be found as

    α = (1/R)(∂R/∂T) = −β/T²    (21.10)

The sensitivity α varies over the temperature range from 2% to 8%/°C, which implies that this is a very sensitive device. One may notice that NTC thermistors depend on both β and T; an NTC thermistor is much more sensitive at lower temperatures, and its sensitivity drops fast with a temperature increase. In reality, β is not constant and depends on temperature. Errors may also be generated from self-excitation currents being dissipated by the thermistors.

PTC thermistors are generally made by introducing small quantities of semiconducting material into a polycrystalline ceramic, usually barium titanate or solid solutions of barium and strontium titanate. Above the Curie temperature of the composite material, the ferroelectric properties change rapidly, resulting in a rise in resistance. These ceramic PTC materials are characterized, in a certain temperature range, by a very large temperature dependence. The temperature coefficient varies significantly with temperature and may be as large as 2/°C.

21.5.2  Thermocouple

Thermocouples operate under the Seebeck principle. Thermocouples for practical measurement of temperature are junctions of specific alloys that have a predictable and repeatable relationship between temperature and voltage.
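The NTC characteristic of Equation 21.9 and the sensitivity of Equation 21.10 can be evaluated directly; the 10 kΩ/25°C calibration point and β = 3900 K below are typical illustrative values, not values from the text:

```python
import math

def ntc_resistance(t_kelvin, r_t0=10e3, t0=298.15, beta=3900.0):
    """NTC resistance from the exponential model of Equation 21.9."""
    return r_t0 * math.exp(beta * (1 / t_kelvin - 1 / t0))

def ntc_sensitivity(t_kelvin, beta=3900.0):
    """Sensitivity alpha = (1/R) dR/dT = -beta / T**2 (Equation 21.10)."""
    return -beta / t_kelvin**2

r_25 = ntc_resistance(298.15)       # equals r_t0 at the calibration point
alpha_25 = ntc_sensitivity(298.15)  # about -4.4 %/K near room temperature
```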

The Seebeck voltage developed by the two junctions is V1 − V2. Absolute temperature measurements can be made once one of the junctions is held at a known temperature, or an electronic reference junction is used. The typical structure of a thermocouple is shown in Figure 21.13. Thermocouples can be made in very tough designs; they are very simple in operation and measure temperature at a point. The three most common thermocouple materials for moderate temperatures are Iron-Constantan (Type J), Copper-Constantan (Type T), and Chromel-Alumel (Type K). Over the different types, they cover from −250°C to +2500°C.

Figure 21.13  The structure of thermocouple.

21.5.3  IC Temperature Sensors

IC temperature sensors employ the principle that a bipolar junction transistor (BJT)'s base-emitter voltage depends on the collector current and varies with temperature:

    VBE = (kT/q) ln(ICE/IS)    (21.11)

The fundamental design of the IC temperature sensor results from the diode and silicon temperature sensor [FL02]. Figure 21.14 shows an example of a circuit utilizing this principle. If currents in a fixed ratio, or equal currents, flow through one transistor and a set of N identical paralleled transistors, the current change can be calculated by

    I = ΔVBE/R = [(kT/q) ln(Iref/IS) − (kT/q) ln(I/IS)]/R = (kT/qR) ln(Iref/I)    (21.12)

One may notice that the current change I is proportional to absolute temperature. The measurement range of IC temperature sensors is small compared to that of thermocouples, and their use is limited to applications where the temperature is within a −55°C to 150°C range. However, they have several advantages: they are small, accurate, and inexpensive, and they are easy to interface with other devices such as amplifiers, regulators, DSPs, and microcontrollers.

Figure 21.14  Circuit of silicon temperature sensor.

21.6  Radiation Sensors

Radiation sensors convert incident radiant signals into standard electrical output signals. Radiation can be classified by frequency, from low to high, as microwave, infrared, visible, and ultraviolet radiation. The common sources of radiation can be thermal radiation, which includes radiation of the sun, earth, atmosphere, and human body, as well as incandescent and gas-discharge light sources, solid-state lasers, and semiconductor light sources.
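The proportionality to absolute temperature can be seen by differencing Equation 21.11 for two transistors run at a 1:N current-density ratio: the unknown saturation current IS cancels and ΔVBE = (kT/q) ln N. The ratio N = 8 below is an illustrative choice:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def delta_vbe(t_kelvin, n):
    """Base-emitter voltage difference for a 1:n current-density ratio (PTAT)."""
    return K_B * t_kelvin / Q_E * math.log(n)

v_300 = delta_vbe(300.0, 8)  # about 54 mV near room temperature
v_600 = delta_vbe(600.0, 8)  # doubles exactly with absolute temperature
```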

There are mainly two fundamental detectors for radiation detecting: the bolometer and the photon detector.

21.6.1  Bolometer

A bolometer is a device that heats up by absorbing radiation; it can be used to measure the energy of incident electromagnetic radiation. There are two fundamental components in the bolometer: a sensitive thermometer and a high cross-section absorber. Both thermometer and absorber are connected by a weak thermal link to a heat sink. The incoming energy is converted to heat in the absorber; the temperature increase is proportional to the incoming energy and is sensed in some way, resulting in the signal. Then the temperature increase decays as the power in the absorber flows out to the heat sink. There are several types of bolometer, based on the different sensitive thermometers used, such as a bulk resistor, a diode, or a pyroelectric material.

The resistive bolometer uses a high thermo-resistance material such as a suitable metal, VOx, amorphous silicon, or a semiconductor. In order to determine the amount of absorbed radiation, the resistor has to be biased by a pulse with its duration limited to tens of microseconds; otherwise, the bolometer would be damaged by the excessive Joule heat dissipated within the bolometer membrane.

A semiconductor diode made of single-crystal silicon has been applied to replace the resistive temperature sensor due to its excellent stability and low noise. Diodes as temperature sensors can be operated in various modes, such as constant current or constant voltage, either forward biased or reverse biased. The major problem is the thermal isolation of the silicon island containing the diode from the substrate, which is required for the bolometer operation; the problem can be solved by the electrochemical etch-stop technique or by utilizing silicon-on-insulator wafers.

The pyroelectric bolometer is based on a pyroelectric crystal covered by an absorbing layer (silver or silver blackened with carbon). Once the radiation is absorbed by the pyroelectric bolometer and its temperature rises, the polarization of the dipolar domains inside the crystal becomes more chaotic, which results in an electric current flowing across the crystal. The current can be amplified by a current-to-voltage converter based on an operational amplifier; the amplifier should be placed as close as possible to the detector to reduce the noise. Pyroelectric sensors have a flat response over a wide spectral range; however, they are very sensitive to temperature variation.

21.6.2  Photon Detectors

In photon detectors, light energy quanta produce free electrons and cause a change of the electric signal. The photon must have sufficient energy to exceed some threshold; that is, the wavelength must be shorter than the cutoff wavelength. The photodiode, photoresistor, and photomultiplier are three typical photon detectors that utilize the photoelectric effect.

21.6.3  Photoresistor

The photoresistor is a photoconductive device. Figure 21.15 shows a schematic diagram of a photoresistor. It requires a power source because it does not generate a signal by itself: the photoconductive effect is manifested in a change in the material's resistance. In darkness, the resistance of the material is high, and the applied voltage V results in a small dark current due to temperature effects. When light is incident on the surface, current flows. Consider an optical beam of power P and frequency ν, which is incident on a photoconductive detector. Taking the probability of excitation of a carrier by an incident photon as the quantum efficiency η, the carrier generation rate is

    G = Pη/hν    (21.13)

If the carriers last on the average τ0 seconds before recombining, the average number of carriers Nc is found by equating the generation rate to the recombination rate:

    Nc = G τ0 = P η τ0 / hν    (21.14)

Each carrier contributes, to the current in the external circuit,

    ie = e v̄ / l    (21.15)

where v̄ is the drift velocity and l is the length of the semiconductor. The total current is

    i = Nc ie = P η τ0 e v̄ / (hν l) = (eη/hν)(τ0/τl) P    (21.16)

where τl = l/v̄ is the drift time for a carrier across the length l. The resistance is obtained as

    R = V/i = (V hν / e η τ0 P) τl    (21.17)

where V is the bias voltage. It can be seen that R ∝ 1/P. This equation describes the response of a photoconductive detector to a constant optical flux. It can be shown that for large sensitivity, the carrier lifetime τ0 should be large. The photoresistor has a nonlinear responsivity and cannot be used to detect short pulses; in addition, it is very sensitive to temperature variation. Depending on their spectral responsivity function, photoresistors are divided into photoconductive detectors for the visible wavelength range (such as cadmium sulfide or CdS photoconductive detectors), the near-infrared wavelength range (such as lead sulfide or PbS photoconductive detectors), and the infrared wavelength range (such as silicon doped with arsenide or Si:As photoconductive detectors and the mercury–cadmium–telluride or HgCdTe photoconductive detector).

Figure 21.15  Schematic diagram of a photoresistor.
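The photoconductive gain τ0/τl in Equation 21.16 is what makes a long carrier lifetime desirable: each absorbed photon is effectively counted τ0/τl times. The beam power, wavelength, efficiency, and time constants below are illustrative assumptions:

```python
PLANCK = 6.62607015e-34  # Planck constant, J*s
Q_E = 1.602176634e-19    # elementary charge, C
C_LIGHT = 3.0e8          # speed of light, m/s

def photocurrent(power, wavelength, eta, tau0, tau_transit):
    """Photoconductor current i = (e*eta/(h*nu)) * (tau0/tau_l) * P (Eq. 21.16)."""
    nu = C_LIGHT / wavelength
    return Q_E * eta / (PLANCK * nu) * (tau0 / tau_transit) * power

# 1 uW at 550 nm, eta = 0.6, lifetime 1 ms, transit time 100 ns: gain of 1e4.
i = photocurrent(1e-6, 550e-9, 0.6, 1e-3, 1e-7)
```

Doubling the optical power doubles the current, reflecting the linear response to a constant optical flux.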

21.6.4  Photodiode

The photodiode is a type of photodetector that converts light into either current or voltage, depending on the mode of operation. When a photon of sufficient energy strikes the diode, it excites an electron, thereby creating a mobile electron and a positively charged electron hole. If the absorption occurs in the junction's depletion region, or one diffusion length away from it, these carriers are swept from the junction by the built-in field of the depletion region: holes move toward the anode, electrons toward the cathode, and a photocurrent is produced. The photodiode thus directly converts photons into charge carriers. If a p-n junction is forward biased and exposed to light of the proper frequency, the current increase will be very small with respect to the dark current; if the junction is reverse biased, the current will increase quite noticeably.

There are two general operating modes for a photodiode: the photoconductive and the photovoltaic. For the photoconductive operating mode, a reverse bias voltage is applied to the photodiode. As the reverse bias is increased, the depletion layer widens, which decreases the capacitance and increases the drift velocity. The result is a wider depletion region, lower junction capacitance, lower series resistance, shorter rise time, and a linear response in photocurrent over a wider range of light intensities. However, the shot noise increases as well, due to the increase in the dark current. For the photovoltaic mode, no bias voltage is applied. The result is that there is no dark current, so only thermal noise is present. This allows much better sensitivities at low light levels; however, the speed of response is worse due to an increase in the internal capacitance, and the efficiency of the direct conversion of optical power into electric power becomes quite low.

The p-n photodiode possesses a 1–3 μm wide depletion layer and is usable for visible light with Si and near-infrared light with Ge. In the PIN photodiode, an additional high-resistivity intrinsic layer is present between the p and n types of the material, so the PIN diode has a wider depletion region due to the very lightly doped region between the p-type and n-type semiconductors. The wide intrinsic region largely improves the response time and thus enhances the sensitivity for longer wavelengths.

In the sensor technologies, the phototransistor can do the same and provide additional current gain. Its collector-base junction is a reverse-biased diode that functions as described above. If the transistor is connected into a circuit containing a battery, a photoinduced current flows through the loop, which includes the base-emitter region. This current is amplified by the transistor as in a conventional transistor, resulting in a significant increase in the collector current and hence a much higher sensitivity, but a slower response.

21.6.5  Photomultiplier

The photomultiplier tube is a typical photoemissive detector, whose structure is shown in Figure 21.16. Photomultipliers are constructed from a glass envelope with a high vacuum inside, which houses a photocathode, several dynodes, and an anode. Incident photons strike the photocathode material, with electrons being produced as a consequence of the photoelectric effect. The electrons leave the photocathode and are accelerated toward the first dynode. Upon striking the first dynode, having enough incoming energy, more low-energy electrons are emitted, and these electrons are in turn accelerated toward the second dynode. The electron multiplier consists of a number of dynodes, each held at a more positive voltage than the previous one; as the electrons move toward each dynode, they are accelerated by the electric field and arrive with much greater energy, so the number of electrons is multiplied at each stage. When the electrons reach the anode, the accumulation of charge results in a sharp current pulse, which indicates the arrival of a photon at the photocathode.
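The multiplication along the dynode chain compounds geometrically: if each dynode releases δ secondary electrons per incident electron (δ and the number of stages below are assumed figures; the text does not give values), the anode receives δ^N electrons per photoelectron:

```python
def pmt_gain(delta, n_dynodes):
    """Overall current gain of a photomultiplier dynode chain."""
    return delta ** n_dynodes

g = pmt_gain(4, 10)  # about a million electrons per photoelectron
```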

Figure 21.16  Structure of photomultiplier tube.

21.7  Magnetic Field Sensors

Magnetic field sensors are used for detecting a magnetic field, or another physical parameter via a magnetic field. Based on their principles, there are four fundamental types of magnetic field sensors: the induction coil sensor, the Hall effect sensor, the fluxgate sensor, and the magnetoresistive sensor.

21.7.1  Induction Coil Sensors

The induction coil sensor is one of the oldest and most well-known magnetic sensors. Its transfer function results from the fundamental Faraday's law of induction:

    V = −n dΦ/dt = −nA dB/dt = −μ0 nA dH/dt    (21.18)

where Φ is the magnetic flux passing through a coil with an area A and n is the number of turns. The output signal V of a coil sensor depends on the rate of change of the flux density dB/dt. In order to improve the sensitivity, the coil should have a large number of turns and a large active area. Coil sensors are widely used in detecting displacement, magnetic fields, etc.

21.7.2  Hall Effect Sensors

The Hall element is another basic magnetic field sensor. When a current-carrying conductor is placed into a magnetic field, a voltage will be generated perpendicular to both the current and the field. This voltage is the Hall voltage and is given by

    Vout = IB/(qnd)    (21.19)

where d is the thickness of the Hall plate, n is the carrier density, and q is the charge of the electron.
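Equation 21.19 can be evaluated directly; the drive current, carrier density, and plate thickness below are illustrative assumptions chosen to give a millivolt-level output:

```python
Q_E = 1.602176634e-19  # elementary charge, C

def hall_voltage(current, b_field, carrier_density, thickness):
    """Hall voltage V = I*B / (q*n*d) from Equation 21.19."""
    return current * b_field / (Q_E * carrier_density * thickness)

# 1 mA drive, n = 1e21 m^-3, 100 um plate, 0.5 T field: about 31 mV.
v_h = hall_voltage(1e-3, 0.5, 1e21, 1e-4)
```

Reversing either the current or the field reverses the sign of the output, as the formula makes explicit.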

From the above equation, in order to increase the sensitivity of the Hall device, a small thickness d and a low carrier concentration n are needed. This principle is known as the Hall effect, which is shown in Figure 21.17. The Hall voltage is a low-level signal, on the order of 30 mV/T. If the current changes direction or the magnetic field changes direction, the polarity of the Hall voltage flips. The Hall effect sensor can be used to measure the magnitude and direction of a field; the Hall effect can also be used for measuring angular displacement, power, etc.

Figure 21.17  Principle of Hall effect.

21.7.3  Fluxgate Sensors

The fluxgate sensor is a device measuring the magnitude and direction of a DC or low-frequency AC magnetic field in the range 10⁻¹⁰–10⁻⁴ T with 10 pT resolution [R92]. The fluxgate sensor exploits the hysteresis of soft magnetic materials. The basic ring-core fluxgate sensor configuration is shown in Figure 21.18. The core is made of a soft magnetic material. Two coils are wrapped around the core: the drive coil and the pick-up coil. The core under the drive coil can be thought of as two separate half cores. As the current flows through the drive coil, one half of the core will generate a field in the same direction as the external magnetic field, and the other will generate a field in the opposite direction. When the external measured magnetic field is present, the half of the core in which the excitation and measured fields have the same direction becomes saturated; at some critical value of the magnetic field, the magnetic resistance of that part of the circuit rapidly increases and its effective permeability decreases, while the other half core comes out of saturation later. Consequently, the hysteresis loop of half of the core is distorted, and there is a net change in flux in the pick-up coil. According to Faraday's law, this net change in flux induces a voltage in the pick-up coil. As a result, there are two spikes in the voltage for each transition in the drive, and the induced voltage is at twice the drive frequency.

Figure 21.18  Ring core fluxgate sensor.

21.7.4  Magnetoresistive Sensors

The magnetoresistive effect is the change of the resistivity of a material due to a magnetic field. There are two basic principles of the magnetoresistive effect [M97].

The first, which is similar to Hall elements, is known as the geometrical magnetoresistive effect: the magnetic field forces electrons to take a longer path, and thus the resistance of the material increases. It is dependent on the carrier mobility in the material used, and the relation between field and resistance is proportional to B² for most configurations. The Corbino ring shown in Figure 21.19 is a particularly useful configuration for a magnetoresistor. Without the magnetic field, a radial current flows in the conducting annulus due to the battery connected between the conducting rims. When a magnetic field along the axis is turned on, the Lorentz force drives a circular component of current, and the resistance between the inner and outer rims goes up. The resistance of the device thus becomes a measure of the field. The geometrical magnetoresistor is used in a manner similar to Hall elements, or in magnetoresistive read heads where Hall elements cannot be used.

The second principle is based on those materials with highly anisotropic properties, such as ferromagnetics, whose resistances change in the presence of a magnetic field when a current flows through them; such a device is called an anisotropic magnetoresistor. The sample has an internal magnetization vector parallel to the flow of current. When the magnetic field is applied perpendicular to the current, the internal magnetization changes direction by an angle α. Unlike the geometrical magnetoresistor, the effect is due to the change of the magnetization direction upon application of the field.

In use, a magnetoresistive material is exposed to the magnetic field to be sensed while a current passes through it at the same time, and a relationship between the magnetic field and the current is established. The magnetoresistor offers several advantages as compared with other magnetic sensors. When compared with Hall effect sensors, the magnetoresistor shows increased sensitivity, temperature range, and frequency passband, and it is much more sensitive than Hall elements. Magnetoresistors can be used for position sensors, pressure sensors, accelerometers, etc. Their mathematical model is a zero-order system, like that of Hall sensors; this differs from inductive sensors, in which the response depends on the time derivative of the magnetic flux density and which therefore have a first-order model.

Figure 21.19  Scheme of a Corbino ring.

21.8  Sensorless Control System

Instead of measuring a parameter that is difficult to measure, other parameters are measured, and the value of the difficult parameter is evaluated from them through one of countless physical effects. The sensorless control system estimates the parameters without using a detector. For example, the induction motor utilizes the sensorless control system to estimate torque, speed, and flux without sensors.

The permanent-magnet synchronous (PMS) motor with high-energy permanent-magnet materials provides fast dynamics, efficient operation, and very good compatibility. The electric windings of PMS motors are spaced on the armature at regular angles. When excited with current, they each produce magnetic fluxes that add vectorially in space to produce the stator flux vector. By controlling the proportions of the currents in the windings, the flux magnitude and orientation are determined. There are two competing control strategies for PMS motors: vector control (VC) and direct torque control (DTC) technology.

Sensorless VC is a variable frequency drive (VFD) control strategy that improves motor performance by regulating the VFD output based on a mathematical determination of motor characteristics and operating conditions. Operating conditions are estimated from measurements of electrical parameters.

Both types of VC provide improved performance over the basic control strategy called "V/Hz control." With the simplest type of V/Hz control, the drive simply acts as a power supply that provides an adjustable output frequency with an output voltage proportional to the frequency. With this approach, the motor voltage needs to be "fine tuned" to suit the exact motor characteristics and to compensate for changes in those characteristics due to load changes and motor temperature changes.

The DTC technique is based on the direct stator flux and torque control; the required measurements for this control technique are only the input currents. The input voltage (vs) and current (is) of the motor on the stationary reference frame can be expressed as

    vs = vsα + j vsβ
    is = isα + j isβ

The actual stator flux can be estimated from the equivalent circuit of the motor as follows:

    ψs = ∫ (vs − Rs is) dt + ψs0
    φs = √(ψsα² + ψsβ²)

where ψs is the flux vector, ψs0 is the initial flux vector, φs is the flux vector rms value, and Rs is the stator resistance. The electromagnetic torque of the motor is

    Te = ψsα isβ − ψsβ isα

The control command for the system is the speed, which is directly applied by the pedal of the vehicle; the input of the motor controller is the reference speed. The reference torque can be calculated using the difference between the reference speed and the instantaneous speed. The flux reference can be calculated based on the speed. Below the rated speed, the rated flux is used as a reference (constant torque region). Above the rated speed, a flux-weakening method generates the flux reference for higher-than-rated speed, proportional to the inverse of the reference speed. The reference flux is selected to provide the maximum possible motor torque with the minimum possible current; above the rated speed, a relatively large rated flux may imply the need to exceed the supply voltage limits to maintain speed.

The information from the terminal characteristics of the induction motor can be combined with the fault identification of electrical motors by neural networks. Such motors are widely used in industries; in applications such as oil well diagnosis [B00], the quality of the oil well can be monitored continuously and proper adjustments can be made.
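The flux and torque estimation above can be sketched as a discrete-time integration; the voltage and current samples, stator resistance, and step size below are illustrative values, not from the text:

```python
import math

def estimate_flux_and_torque(v_ab, i_ab, r_s, dt, psi0=(0.0, 0.0)):
    """Discrete-time stator-flux and torque estimator used in DTC.

    Integrates psi = integral(v - Rs*i) dt on the alpha and beta axes and
    returns the flux magnitude and Te = psi_a*i_b - psi_b*i_a at the end.
    """
    psi_a, psi_b = psi0
    for (va, vb), (ia, ib) in zip(v_ab, i_ab):
        psi_a += (va - r_s * ia) * dt
        psi_b += (vb - r_s * ib) * dt
    ia, ib = i_ab[-1]
    return math.hypot(psi_a, psi_b), psi_a * ib - psi_b * ia

# Constant voltage on the alpha axis with zero current: flux ramps linearly.
n, dt = 100, 1e-4
flux, torque = estimate_flux_and_torque([(10.0, 0.0)] * n,
                                        [(0.0, 0.0)] * n, r_s=1.0, dt=dt)
```

With zero current, no torque is produced and the flux magnitude simply equals the integrated volt-seconds (10 V over 10 ms, i.e. 0.1 Wb).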

References

[ABLMR01] M. Amann, T. Bosch, M. Lescure, R. Myllylä, and M. Rioux, Laser ranging: A critical review of usual techniques for distance measurement, Optical Engineering, 40, 10–19, 2001.
[B97] L. Baxter, Capacitive Sensors: Design and Applications, Institute of Electrical & Electronics Engineers, New York, 1997.
[B00] B. M. Wilamowski and O. Kaynak, Oil well diagnosis by sensing terminal characteristics of the induction motor, IEEE Transactions on Industrial Electronics, 47, 1100–1107, 2000.
[FL02] I. M. Filanovsky and Su Tam Lim, Temperature sensor application of diode-connected MOS transistors, IEEE International Symposium on Circuits and Systems, 2, 149–152, 2002.
[M97] D. J. Mapps, Magnetoresistive sensors, Sensors and Actuators A, 59, 9–19, 1997.
[N97] N. T. Nguyen, Micromachined flow sensors—A review, Flow Measurement and Instrumentation, 8, 7–16, 1997.
[PST96] O. Parriaux, V. A. Sychugov, and A. V. Tishchenko, Coupling gratings as waveguide functional elements, Pure and Applied Optics, 5, 453–469, 1996.
[R92] P. Ripka, Review of fluxgate sensors, Sensors and Actuators A, 33, 129–141, 1992.
[SC00] J. Sirohi and I. Chopra, Fundamental understanding of piezoelectric strain sensors, Journal of Intelligent Material Systems and Structures, 11, 246–257, 2000.

22
Soft Computing Methodologies in Sliding Mode Control

22.1 Introduction ...................................................................................... 22-1
22.2 Key Technical Issues in SMC ........................................................... 22-2
        Fundamentals of SMC and Its Design Methods  •  Key Technical Issues in SMC and Applications
22.3 Key SC Methodologies ..................................................................... 22-5
        Neural Networks  •  Fuzzy Systems  •  Evolutionary Computation  •  Integration of SC Methodologies
22.4 Sliding Mode Control with SC Methodologies ............................. 22-7
        SMC with NNs  •  SMC with FL  •  SMC with Integrated NN, FL, and EC
22.5 Conclusions ..................................................................................... 22-10
References ................................................................................................ 22-10

Xinghuo Yu
RMIT University

Okyay Kaynak
Bogazici University

22.1  Introduction

Sliding mode control (SMC) is a special class of the variable structure systems (VSSs) [34], which has been studied extensively for over 50 years and widely used in practical applications due to its simplicity and robustness against parameter variations and disturbances [35,46]. The essence of SMC is that, in a vicinity of a prescribed switching manifold, the velocity vector of the controlled state trajectories always points toward the switching manifold. Such motion is induced by imposing disruptive (discontinuous) control actions, commonly in the form of switching control strategies. An ideal sliding mode exists only when the system state satisfies the dynamic equation that governs the sliding mode for all time, which in general requires infinitely fast switching to ensure the sliding motion. For details about the fundamentals of SMC, readers are referred to Chapter 13 [27] of this handbook.

Despite the sustained active research on SMC over the last 50 years, the key technical problems, such as chattering, the removal of the effects of unmodeled dynamics, disturbances and uncertainties, adaptive learning, and the improvement of robustness, remain the key research challenges that have attracted continuing attention. Various approaches have been developed to address these problems, the integration of other research methodologies has been considered, and many new technologies have been introduced, though there has not been a perfect solution. One key objective of the recent SMC research is to make it more intelligent; naturally, this leads to the introduction of intelligent agents into SMC paradigms, such as those that will be discussed in this chapter.

Soft computing (SC, as opposed to hard computing), coined by Professor Lotfi Zadeh as one of the key future intelligent systems technologies, has been researched and applied extensively in solving various practical problems. SC methodologies include neural networks (NNs), fuzzy logic (FL), and evolutionary computation (EC).

The diffusion of SC methodologies into SMC architectures has seen numerous successes and has become a major research topic in SMC theory and applications. This chapter introduces the basic SC methodologies used in SMC. Section 22.2 outlines the key technical issues associated with SMC. Section 22.3 briefs the basics of the SC techniques that are relevant to SMC. Section 22.4 presents the key developments of SC methodologies in SMC. Section 22.5 concludes the chapter.

22.2 Key Technical Issues in SMC

22.2.1 Fundamentals of SMC and Its Design Methods

In recent years, the majority of research in SMC has been done with regard to multi-input and multi-output (MIMO) systems; we will therefore use the MIMO system framework as a platform for discussing the integration of SC methodologies in SMC. As detailed in [27], the MIMO SMC systems considered are of the form

    ẋ = f(x, t) + B(x, t)u + ξ(x, t)    (22.1)

where x ∈ Rⁿ is the system state vector, u ∈ Rᵐ is the control, and ξ(x, t) ∈ Rⁿ represents all the factors that affect the performance of the control system, such as disturbances and uncertainties in the parameters of the system. If ξ ∈ range B, that is, ξ = B u_ξ for some u_ξ, the uncertainty enters through the same channel as the control. It is well known that when this condition (the matching condition) is satisfied, the celebrated invariance property of SMC stands [7].

Commonly, the switching manifolds can be denoted as s(x) = 0 ∈ Rᵐ, where s = (s₁, …, s_m)ᵀ is an m-dimensional vector underpinned by the desired dynamical properties. The SMC u ∈ Rᵐ is characterized by the control structure defined by

    u_i = u_i⁺(x) for s_i(x) > 0,  u_i = u_i⁻(x) for s_i(x) < 0    (22.2)

where i = 1, 2, …, m. The design procedure of SMC includes two major steps encompassing the two main phases of SMC:

1. Reaching phase, where the system state is driven from any initial state to reach the switching manifolds (the anticipated sliding modes) in finite time
2. Sliding mode phase, where the system is induced into the sliding motion on the switching manifolds, that is, the switching manifolds become an attractor

These two phases correspond to the following two main design steps in the context of the system (22.1):

1. Switching manifold selection: A set of switching manifolds is selected with prescribed desirable dynamical characteristics. Common candidates are linear hyperplanes.
2. Discontinuous control design: A discontinuous control strategy is formed to ensure the finite time reachability of the switching manifolds. The controller may be either local or global, depending upon the specific control requirements.

Readers can also refer to the comprehensive tutorial [6] on the fundamentals of MIMO SMC systems.
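As a concrete illustration, the two design steps can be walked through on a double integrator. The plant, the manifold gain c, the switching gain M, and the disturbance below are hypothetical choices for this sketch, not taken from the chapter.

```python
import numpy as np

def simulate_smc(c=2.0, M=5.0, dt=1e-3, T=5.0):
    """Two-step SMC design for the double integrator x1' = x2, x2' = u + d:
    1) switching manifold selection: s = c*x1 + x2 with c > 0 (a linear
       hyperplane, so the sliding motion obeys x1' = -c*x1);
    2) discontinuous control design: u = -c*x2 - M*sgn(s), with M larger
       than the disturbance bound so that s*s' < 0 off the manifold."""
    x1, x2 = 1.0, 0.0
    for k in range(int(T / dt)):
        d = 0.5 * np.sin(2 * np.pi * k * dt)   # bounded matched disturbance
        s = c * x1 + x2
        u = -c * x2 - M * np.sign(s)           # closed loop: s' = -M*sgn(s) + d
        x2 += (u + d) * dt                     # simple Euler integration
        x1 += x2 * dt
    return x1, c * x1 + x2

x1_final, s_final = simulate_smc()
```

With M above the disturbance bound, s reaches a small neighborhood of zero in finite time, after which x1 decays along the sliding dynamics ẋ1 = −c x1; the residual band around s = 0 is the discrete-time image of chattering.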

It is well known [27] that when the sliding mode occurs, the system dynamics is immune to the matched uncertainties and external disturbances; that is, a so-called equivalent control u_eq is induced which drives the system dynamics to ṡ = 0, and it can be written as (assuming (∂s/∂x)B(x, t) nonsingular)

    u_eq = −[(∂s/∂x)B(x, t)]⁻¹ (∂s/∂x)( f(x, t) + ξ(x, t))    (22.3)

In the sliding mode, the equivalent control governs the motion of the system on the switching manifolds.

Central to the SMC design is the use of the Lyapunov stability theory, in which the Lyapunov function of the form

    V = (1/2) sᵀs    (22.4)

is commonly taken. The control design task is then to find a suitable discontinuous control such that V̇ < 0 in the neighborhood of the equilibrium s = 0. However, if one would like to embed learning and adaptation in SMC to enhance its performance, say for a set of system parameters p ∈ Rˡ to converge to its true values p*, one common alternative Lyapunov function may be constructed as

    V = (1/2) sᵀQs + (1/2) zᵀRz    (22.5)

where z = 0 represents a desirable outcome, and R and Q are symmetric nonnegative definite matrices of appropriate dimensions. For example, to enable system parameter learning while controlling, one may set z = p − p*. Then V̇ < 0 in the neighborhood of V = 0 would lead to the convergence of the system states, during which certain parameters would be learned simultaneously. However, it should be pointed out that the set of parameters is not necessarily convergent toward the true values; z may converge to some nonzero constants.

There are several SMC controller types seen in the literature, and the choice of the type of controller to be used depends upon the specific problems to be dealt with. Typical SMC strategies for MIMO systems include

1. Equivalent control-based SMC: This is a control of the type

    u = u_eq + u_s    (22.6)

where u_s is a switching control component that may have two types of switching:
• u_si = −α_i(x) sgn(s_i) for α_i(x) > 0, or
• u_s = −β s/||s||, β > 0
and there are other variants.

2. Bang-bang type SMC: This is a control of the direct switching type u_si = −M sgn(s_i), where M > 0 should be large enough to suppress all bounded uncertainties and unstructured system dynamics; M could be a constant or a function of the states as well [29].
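For a linear plant with a linear switching function, the equivalent control of Equation 22.3 can be evaluated in closed form. The matrices below are illustrative assumptions, not from the text.

```python
import numpy as np

# Equivalent control for a linear system x' = A x + B u with s = C x
# (the linear special case of Equation 22.3, with ds/dx = C and the
# uncertainty term omitted): on the manifold s' = C(Ax + Bu) = 0, hence
# u_eq = -(CB)^(-1) C A x.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[2.0, 1.0]])            # s = 2*x1 + x2, CB = 1 (nonsingular)
x = np.array([[1.0], [-0.5]])

u_eq = -np.linalg.inv(C @ B) @ (C @ A @ x)
residual = float(abs(C @ (A @ x + B @ u_eq)).max())   # s' under u_eq
```

The residual ṡ is zero (up to floating point), confirming that u_eq holds the state on the manifold in the absence of unmatched disturbances.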

3. Reaching-law-based SMC: Enforce

    ṡ = −R sgn(s) − K σ(s)    (22.7)

to realize finite time reachability [18], where R > 0 and K > 0 are diagonal matrices and σ is chosen such that s_i σ(s_i) > 0 for s_i ≠ 0. The design of such a control relies on the Filippov method, by which a sufficiently large local attraction region is created to suck in all system trajectories.

4. Adaptive SMC: Embedding adaptive estimation and learning of the parameters or uncertainties in (1), (2), and (3), which largely follows the methodologies of adaptive control [16] and iterative learning control mechanisms, for example, [38].

22.2.2 Key Technical Issues in SMC and Applications

There are several key issues that are commonly seen as obstacles affecting the widespread application of SMC, which have prompted the extensive use of SC methodologies in SMC systems in recent years. Some of these are listed in the following.

22.2.2.1 Chattering

As has been mentioned above, SMC is a special class of VSSs, in which the sliding modes are induced by disruptive control forces. Despite the advantages of simplicity and robustness, SMC generally suffers from the well-known problem of chattering, which is a motion oscillating around the predefined switching manifold(s). Two causes are commonly conceived [42]:

• The switching nonidealities can cause high-frequency oscillations. These may include small time delays due to sampling (e.g., the zero-order-hold [ZOH]), the execution time required for the calculation of the control, and, more recently, transmission delays in networked control systems.
• The presence of parasitic dynamics in series with the control systems causes a small amplitude, high-frequency oscillation. These parasitic dynamics represent the fast actuator and sensor dynamics that are normally neglected during the control design.

Various methods have been proposed to "soften" the chattering, for example:

• The continuous approximation method, in which the sign function is replaced by a continuous approximation sgn_a(s_i) = s_i/(|s_i| + ε), where ε is a small positive number [3]. This, in fact, gives rise to a high gain control when the states are in the close neighborhood of the switching manifold.
• The boundary layer control, in which the sign function is replaced by sgn_b(s_i) = sgn(s_i) if |s_i| > δ_i, and sgn_b(s_i) = ρ(s_i) if |s_i| ≤ δ_i, where δ_i > 0 is small and ρ(s_i) is a function of s_i.

22.2.2.2 Matched and Unmatched Uncertainties

It is known that the celebrated invariance property of SMC lies in its insensitivity to the matched uncertainties when in the sliding mode [7]. However, if the matching condition is not satisfied, the sliding mode (motion) is dependent on the uncertainties ξ, which is not desirable because the condition for the well-known robustness of SMC does not hold. Research efforts in this respect have been focused on restricting the influence of the uncertainties ξ within a desired bound.

22.2.2.3 Unmodeled Dynamics

Mathematically, it is impossible to model a practical system perfectly. There always exist some unmodeled dynamics. The situation may get worse if the unmodeled dynamics contain high-frequency oscillatory dynamics that may be excited by the high-frequency control switching of SMC.
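The two softening substitutes for the sign function can be written down directly. The widths ε and δ below are arbitrary illustrative values, and ρ(s) = s/δ is one common linear choice inside the layer.

```python
import numpy as np

def sgn_a(s, eps=0.05):
    """Continuous approximation of sign(s): s/(|s| + eps) [3].
    Smooth everywhere, but acts as a high-gain linear control near s = 0."""
    return s / (abs(s) + eps)

def sgn_b(s, delta=0.05):
    """Boundary-layer version: hard switching outside |s| <= delta, and the
    linear function rho(s) = s/delta inside the layer (one common choice)."""
    return np.sign(s) if abs(s) > delta else s / delta

# Far from the manifold both behave like sign(s); inside the layer the
# discontinuity is removed, trading chattering for a residual band around s = 0.
far = (sgn_a(1.0), sgn_b(1.0))
near = (sgn_a(0.025), sgn_b(0.025))
```

Both substitutes sacrifice the exact invariance of ideal SMC inside the layer: the state is only guaranteed to stay within a neighborhood of the manifold whose size grows with ε or δ.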

There are several aspects to dealing with these unmodeled dynamics:

• The unmodeled dynamics can refer to the presence of parasitic dynamics in series with the control system, as shown above. Often, the unmodeled dynamics are lumped with the uncertainties, and a sufficiently large switching control has to be imposed. Certainly, the more knowledge of the uncertainties there is, the less crude the SMC force will be.
• Due to the complexity of practical systems, the object to be controlled is difficult to model without a significant increase in the dimensions of the problem space. More broadly, in the complex systems environment, the unmodeled dynamics may have different dimensions and characteristics and cannot be modeled at all; it is then more feasible to study the component subsystems linked through aggregation. The challenge is how to design an effective SMC without knowing the dynamics of the entire system (apart from the aggregative links and the local models). This research topic has been very popular in recent years in the intelligent control community.

22.3 Key SC Methodologies

In this section, we will give a brief introduction to the key SC techniques that are commonly used in dynamical systems and control. The components of the SC methodologies are NNs, FL, EC, and PR, the last of which subsumes belief networks, chaos theory, and parts of learning theory [1].

22.3.1 Neural Networks

An NN is a computational structure that is inspired by observed processes in the natural networks of biological neurons in the brain [11]. It consists of a mass of neurons, each of which is a simple computational unit, yet the interconnections between the neurons emulate the enormous learning capability of the brain. Learning is done by adjusting the so-called weights, which represent the interconnection strengths of the neurons. The learning can be supervised or unsupervised. A common structure is the feedforward neural network (FNN) [11], where the information flow in the network is directional. Supervised learning in FNNs appears to be the most popular research area in dynamical systems and control applications.

In a supervised learning algorithm, learning is guided by specifying, for each training input pattern, the target, for example, the class to which the pattern is supposed to belong. The adjustments of the weights are done to minimize the difference between the desired and the actual outputs incrementally, according to certain learning algorithms. In general, the problem can be formulated as follows. Denote the inputs, the weights, the desired outputs, and the actual outputs of an NN as x ∈ Rⁿ, w ∈ Rˡ, y_d ∈ Rᵐ, and y ∈ Rᵐ, respectively. The objective is to minimize the performance index

    J = (1/τ) ∫ from t−τ to t (1/2)||y_d(θ) − y(θ)||² dθ    (22.8)

where τ is the length of the time window and ||·|| represents the Euclidean norm. Most supervised learning (backpropagation, BP) algorithms for NNs can be considered as finding the zeros of ∂J/∂w, which correspond to its local as well as global minima. The search performance of this class of learning algorithms somewhat relies on the initial weights and, frequently, the search traps into local minima.
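A minimal sketch of this supervised-learning setup uses a batch analogue of the index (22.8) minimized by plain gradient-descent BP. The network size, learning rate, and target mapping below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy FNN: one hidden tanh layer, trained by backpropagation to fit
# y_d = sin(x) over a batch of samples (a batch stand-in for index 22.8).
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
Yd = np.sin(X)

W1 = rng.normal(0.0, 0.5, (1, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, Y0 = forward(X)
J0 = 0.5 * np.mean((Y0 - Yd) ** 2)        # initial performance index

for _ in range(2000):
    H, Y = forward(X)
    E = Y - Yd                             # output error
    dW2 = H.T @ E / len(X); db2 = E.mean(0)
    dH = (E @ W2.T) * (1.0 - H ** 2)       # backpropagate through tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2         # gradient-descent weight updates
    W1 -= lr * dW1; b1 -= lr * db1

_, Y1 = forward(X)
J1 = 0.5 * np.mean((Y1 - Yd) ** 2)        # index after training
```

The index J decreases monotonically for a small enough learning rate, illustrating BP as a descent toward a zero of ∂J/∂w; nothing here guarantees a global minimum.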

One particular FNN that offers a much faster way of learning is the so-called radial basis function-based neural network (RBFNN) [28], whose outputs are of the form

    y_i(t) = Σ from j=1 to m of w_ij φ_j(x − μ_j, θ_j)

where φ_j(x − μ_j, θ_j) is a Gaussian function. In this setting, the network structure is much simpler, and the learning of the weights becomes much simpler as well, although extensive computation is still required and efficient algorithms can be derived [36]. This class of NNs has also been popular in the area of dynamical systems. As shown in [43], while the conventional BP-based NN learning paradigm is usable, all FNN learning can be generally regarded as finding the minimum of the Lyapunov function

    V = J + (1/2) μ ||∂J/∂w||²    (22.9)

where the parameter μ determines the relative importance of each term in the function.

Another class of NNs is the so-called recurrent NNs (RNNs), which have a feedback mechanism [11], similar to dynamic feedback systems.

22.3.2 Fuzzy Systems

A fuzzy system is any system whose variables (or, at least, some of them) range over states that are fuzzy sets based on FL, a form of multi-valued logic derived from fuzzy set theory to deal with reasoning that is vague rather than precise. In FL, the degree of truth of a statement can range between 0 and 1. For each variable, the fuzzy sets are defined on some relevant universe of discourse, which is often an interval of real numbers; in this case, the fuzzy sets are fuzzy numbers, and the associated variables are linguistic variables. Representing the states of variables by fuzzy sets is a way of quantifying the variables. Fuzzy systems provide an alternative representation framework to express problems that cannot easily be described using deterministic and probabilistic mathematical models.

In the domain of control systems, dynamical systems are commonly dealt with. The output of a fuzzy system can be computed by a mechanism of Takagi–Sugeno–Kang (TSK) type IF-THEN rules [31,32], which can be described as

    R_i: IF z₁ is F₁ⁱ AND … AND z_n is F_nⁱ THEN y_i = c_i    (22.10)

for i = 1, …, m, where R_i represents the ith fuzzy inference rule, m is the number of inference rules, F_jⁱ (j = 1, …, n) are the fuzzy sets, z(t) is the system state, and y_i is the system output, which could be a function c_i. Denote w_i(z(t)) as the normalized fuzzy membership function of the inferred fuzzy set Fⁱ, where Fⁱ = ∩ from j=1 to n of F_jⁱ. Using the standard fuzzy inference approach, the output of the fuzzy system is formulated as

    y = ( Σ from i=1 to m of w_i y_i ) / ( Σ from i=1 to m of w_i )

In this special but important case, the output function y_i = c_i can be replaced with a dynamical equation, which can be viewed as a local dynamics. Hence, the TSK method is more suitable for model-based fuzzy control systems. Another well-known fuzzy control method is the Mamdani method [19], which makes use of the much more intuitive nature of human expert control.
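The TSK inference of (22.10) and its weighted-average output can be sketched directly. The Gaussian memberships and the two rules below are hypothetical illustration values.

```python
import numpy as np

def gauss(z, c, sig):
    """Gaussian membership with center c and width sig."""
    return np.exp(-0.5 * ((z - c) / sig) ** 2)

def tsk_output(z, rules):
    """TSK inference: each rule carries one (center, sigma) antecedent per
    input dimension and a crisp consequent y_i; the output is the
    firing-strength-weighted average  y = sum(w_i*y_i) / sum(w_i)."""
    w = np.array([np.prod([gauss(zj, c, s) for zj, (c, s) in zip(z, ant)])
                  for ant, _ in rules])
    y = np.array([yi for _, yi in rules])
    return float(w @ y / w.sum())

# Two single-input rules: "z near 0 -> y = 0" and "z near 1 -> y = 1".
rules = [([(0.0, 0.5)], 0.0), ([(1.0, 0.5)], 1.0)]
y_mid = tsk_output([0.5], rules)   # both rules fire equally
y_low = tsk_output([0.0], rules)   # first rule dominates
```

Replacing each crisp consequent y_i with a local dynamical model turns this same weighting into the model-based aggregation used later in the chapter.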

In control theory, there is another trend of using the control design techniques to derive the learning algorithms, with convergence to the true parameter values being a secondary requirement. Unlike conventional learning of constant parameters or functions, in adaptive learning and control, the learning speed is very important in dynamical environments; reducing the learning errors, rather than obtaining the exact parameter values, is of paramount importance.

22.3.3 Evolutionary Computation

EC techniques have been a most widely used technology for optimization in SMC. The EC paradigm attempts to mimic the evolution processes observed in nature and utilize them for solving a wide range of optimization problems. EC performs directed random searches using mutation and/or crossover operations through evolving populations of solutions, with the aim of finding the best solutions (the fittest survives). The criterion, which is expressed in terms of an objective function, is usually referred to as a fitness function [10]. EC technologies include genetic algorithms (GAs), evolutionary algorithms, genetic programming, and evolutionary strategies. Emerging technologies, such as swarm optimization, chaos theory, and complex network theory, also provide alternative tools for optimization and for explaining emerging behaviors, and they are predicted to be widely used in the future.

22.3.4 Integration of SC Methodologies

The aforementioned SC paradigms offer different advantages. It should be noted that NNs and EC are about a process that enables learning and optimization, while the FL systems are a representation tool. The integration of these paradigms would give rise to powerful tools for solving difficult practical problems.

22.4 Sliding Mode Control with SC Methodologies

There has been extensive research done in using SC methodologies for SMC systems. The integration of SMC and SC has two dimensions: one is the application of SC methodologies in SMC to make it "smarter," and the other is using SMC to enhance SC capabilities. This chapter will be confined within the scope of the former, that is, the application of SC methodologies in SMC, to alleviate the problems associated with chattering, matched and unmatched uncertainties, and unmodeled dynamics, the three essential interconnected aspects that affect the usability and applications of SMC. We will use the same categories as in Section 22.3 to summarize the developments.

22.4.1 SMC with NNs

As has been outlined before, NNs offer a "model-free" mechanism to learn the underlying dynamics from examples. Conventional NNs learn by employing the BP type of learning as stipulated in [11,43]. This learning process is usually very time consuming. In control systems, NNs adopt the same topological structures, but the learning is done through real-time estimation, in the context of dynamical learning focusing on achieving stability and fast convergence. The convergence of the learning parameters to their true values becomes secondary, though, when the persistence excitation condition is satisfied, such convergence is guaranteed under certain conditions [29].

One of the early attempts at using NNs in SMC [20] aimed to estimate the equivalent control. Instead of using the conventional NN performance index (22.8) for learning, a performance index of the adaptive control type is used: an online NN estimator is constructed, and the NN is trained using the alternative function V = 0.5(ṡ + ds)² instead of (22.4). The same approach is also contemplated in [14]. In another work [8], two NNs are used to approximate the equivalent control and the correction control, respectively, with a Lyapunov function of the form (22.5) in which z is defined as functions that enable the convergence of the NN weights so as to deliver the convergence V → 0.
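In the spirit of the EC paradigm above, and only as a toy sketch rather than a reproduction of any cited algorithm, a small GA can tune the switching gain M of a simple bang-bang SMC on a double integrator, trading residual error against a chattering proxy. The plant, the fitness function, and the GA settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(M, c=2.0, dt=1e-3, T=2.0):
    """Cost of running u = -c*x2 - M*sgn(s) on x1' = x2, x2' = u + d:
    integrated squared error plus a chattering proxy that grows with M."""
    x1, x2, cost = 1.0, 0.0, 0.0
    for k in range(int(T / dt)):
        d = 0.5 * np.sin(2 * np.pi * k * dt)   # bounded disturbance
        s = c * x1 + x2
        x2 += (-c * x2 - M * np.sign(s) + d) * dt
        x1 += x2 * dt
        cost += (x1 ** 2 + 0.01 * M ** 2) * dt
    return cost

pop = rng.uniform(0.1, 20.0, 12)                 # initial gain candidates
for _ in range(15):
    scores = [fitness(M) for M in pop]
    parents = pop[np.argsort(scores)[:4]]        # truncation selection
    children = np.clip(parents.repeat(2) + rng.normal(0.0, 0.5, 8), 0.1, 20.0)
    pop = np.concatenate([parents, children])    # elitism + Gaussian mutation

best = float(pop[int(np.argmin([fitness(M) for M in pop]))])
```

Because the elite individuals are carried over unchanged, the best fitness is nonincreasing over generations; gains too small to dominate the disturbance and gains so large that the chattering penalty dominates are both bred out.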

In [21], a NN estimator is used to estimate the modeling errors to compensate the SMC so as to reduce the tracking errors in discrete nonlinear systems; adaptive algorithms are derived for estimating the parameters and guaranteeing stability. For example, in [2], the switching manifolds are used as performance criteria and adaptive algorithms are derived for learning and control; in such a way, the stability as well as the modeling of the uncertainties are guaranteed for an electronic throttle. A dynamical NN approach has also been used to estimate the unmodeled dynamics to reduce the requirement of a large switching control magnitude, using the Lyapunov function (22.5). In a recent work [39], a NN is used to estimate the lumped unmatched uncertainties with time-delayed states; it is indicated that the normal requirement of bounded uncertainties is not needed in the control scheme.

22.4.2 SMC with FL

The strength of fuzzy systems based on FL has been taken advantage of in SMC design due to its simple representation, underpinned by the heuristic nature of human reasoning. The main idea behind the use of SMC with FL is to use human control intuition and logic to enhance the SMC performance. The key advantages include introducing heuristics underpinned by human expert experience to reduce (or soften) chattering, modeling the unmodeled uncertainties based on the partial knowledge of experts accumulated through their years of experience, and controlling complex systems that are modeled through aggregated linear models via fuzzy logic. The rationale central to the use of fuzzy systems with SMC is similar to that for NNs. This section will outline the major developments in this respect.

22.4.2.1 Dealing with Chattering

One of the early works in this respect uses FL in a low pass filter to "smooth" the SMC signals, which would reduce chattering [13]. One of the earliest works in which the fuzzy concepts are introduced is [22]. In [24], the softening FL idea is introduced in tuning the switching magnitude according to the distance from the state to the switching manifold. In [26], the control action is derived from an FL-based control table, formed as a specifically constructed fuzzy rule table; this approach is proved to be effective in reducing chattering and is practically implemented on a see-saw system. In [40], a robust fuzzy sliding-mode controller is proposed for the active suspension of a nonlinear half-car model, in which the ratio of the derivative of the error to the error is used as the input to the controller.

Another approach to dealing with chattering is to fuzzify the SMC. Fuzzifying the sliding-mode concept and the reaching phase to the sliding manifold involves replacing the otherwise well-defined mathematical derivations with heuristic rules underpinned by FL [25,37]. That is, instead of using the well-defined SMC functions, one simple and direct way to design a fuzzified SMC is to use the actual position relative to the switching manifold and the velocity toward it as the two inputs, and to derive rules as to what control action is required for the state to reach the switching manifold. Consider the case when the following standardized fuzzy values are used: NB = "negative big," NS = "negative small," ZO = "zero," PS = "positive small," and PB = "positive big." If the state of the trajectory relative to the switching manifold is PB and the velocity toward the switching manifold is NB, an intuitive control action should be PB, since only a drastic action can reverse the speedy trend of the trajectory leaving the switching manifold. If the state of the trajectory is NB and the velocity toward the switching manifold is NS, meaning that the system dynamics has a useful tendency toward the switching manifold, one may choose NS or NB as the control output to further accelerate the convergence. In such a way, the landing on the switching manifold is softened and the overshooting over it is reduced, which eases the chattering. Certainly, the control choices are not unique; there are many choices, compounded by the fuzzy membership functions, depending upon what control performance is required. Indeed, the key to fuzzified SMC lies in the selection of the membership functions associated with the fuzzy values. The above example serves as a typical case of fuzzified SMC, and similar ideas have been used in a number of papers.
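One common way to realize such a rule table is an anti-diagonal lookup over quantized levels. The table below is an illustrative construction, not the chapter's; note that the sign convention here uses the raw rate ṡ = ds/dt rather than the "velocity toward the manifold," so the labels are mirrored relative to the worked example above.

```python
import numpy as np

LEVELS = {-2: "NB", -1: "NS", 0: "ZO", 1: "PS", 2: "PB"}

def level(v, scale=1.0):
    """Quantize a signed quantity to one of the five linguistic levels."""
    return int(np.clip(round(v / scale), -2, 2))

def fuzzy_smc(s, s_dot, gain=1.0):
    """Rule-table SMC: the control level opposes s + s_dot, so a state far
    from the manifold and still diverging draws the most drastic action,
    while a state already converging draws a mild one."""
    u_level = -int(np.clip(level(s) + level(s_dot), -2, 2))
    return LEVELS[u_level], gain * u_level / 2.0

label_far, u_far = fuzzy_smc(1.8, 1.2)   # far from the manifold and diverging
label_zero, _ = fuzzy_smc(0.0, 0.0)      # on the manifold
```

A genuinely fuzzy version would replace the hard quantizer with overlapping membership functions and blend neighboring table entries, which is exactly what smooths the landing on the manifold.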

22.4.2.2 Dealing with Uncertainties

Uncertainties are another cause of chattering. Efforts to reduce chattering are directed to obtaining as much information about the uncertainties and the unmodeled dynamics as possible, in order to reduce the requirement of high (crude) switching gains to suppress them while securing the existence of the sliding modes and hence the robustness. In this section, we also discuss the uncertainties associated with lumped unmodeled dynamics, which may refer to the presence of parasitic dynamics in series with the control systems that cause small-amplitude, high-frequency oscillations; these parasitic dynamics are normally neglected during the control design. They may also refer to other residual dynamics that cannot be modeled; the latter is difficult to handle due to the uncontrollable nature of the unmodeled dynamics.

The estimation of uncertainties using fuzzy systems has been quite a popular approach. An example of using FL in SMC includes a fuzzy system architecture [4] to adaptively model the nonlinearities and the uncertainties. In [41], schemes are proposed to estimate the unknown functions in order to reduce the magnitudes of the switching controls.

22.4.2.3 Dealing with Complex Systems

More broadly, in the complex systems environment, the object to be controlled is difficult to model without a significant increase in the dimension of the problem space. The feasible solutions in such cases are often to study the component subsystems, networked through a topological structure. The challenge is how to design an effective SMC without knowing the dynamics of the entire system (apart from the topological links). This direction of research has become very popular in recent years in the intelligent control community using, for example, NNs, FL, and EC technologies.

For complex dynamical systems, there may exist a number of operating points that should be considered during control. It is possible to linearize the system around some given operating points such that the well-developed linear control theory can be applied in the local regions; such a treatment is quite common in practice. However, to aggregate the locally linearized models into a global model that represents the complex system is not an easy task. One effective approach is the FL approach, by which the set of linearized mathematical models is aggregated into a global model that is equivalent to the complex system. Various fuzzy models and their controls have been discussed by, for example, Takagi and Sugeno [32], Sugeno and Kang [30], and Feng et al. [9]. Such a formulation can be easily accommodated by replacing y_i = c_i in (22.10) by a dynamic linear system [9]:

    ẋ(t) = A_i x(t) + B_i u(t),  y_i(t) = D_i x(t) + E_i

where the matrices A_i, B_i, D_i are of appropriate dimensions and E_i is a constant vector. Using the standard fuzzy inference approach, the global fuzzy state space model can be obtained as

    ẋ(t) = A x(t) + B u(t),  y = D x(t) + E    (22.11)

where

    A = Σ from i=1 to m of μ_i A_i,  B = Σ from i=1 to m of μ_i B_i,
    D = Σ from i=1 to m of μ_i D_i,  E = Σ from i=1 to m of μ_i E_i
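The aggregation in Equation 22.11 can be sketched with two hypothetical local models blended by normalized Gaussian memberships; the operating points and matrices below are illustration values only.

```python
import numpy as np

def mu(z):
    """Normalized Gaussian memberships for two operating points z = -1, +1."""
    w = np.array([np.exp(-0.5 * (z + 1.0) ** 2), np.exp(-0.5 * (z - 1.0) ** 2)])
    return w / w.sum()

# Two hypothetical local linear models (A_i, B_i) around the operating points.
A1, B1 = np.array([[0.0, 1.0], [-1.0, -1.0]]), np.array([[0.0], [1.0]])
A2, B2 = np.array([[0.0, 1.0], [-4.0, -2.0]]), np.array([[0.0], [2.0]])

def global_model(z):
    """Membership-weighted blend: A = sum(mu_i A_i), B = sum(mu_i B_i)."""
    m = mu(z)
    return m[0] * A1 + m[1] * A2, m[0] * B1 + m[1] * B2

A_mid, B_mid = global_model(0.0)   # halfway between the operating points
```

Near either operating point the blend reduces to the corresponding local model; in the overlap region both models are active at once, which is precisely where the global stability caveat discussed below arises.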

While the global fuzzy model can approximate the complex system, the aggregated SMC cannot guarantee the global stability unless it satisfies a sufficient condition [44]. This is because globally asymptotical stability is not guaranteed in the overlapping regions of the fuzzy sets, where several subsystems are activated at the same time. In [12], robust adaptive fuzzy SMC is proposed for nonlinear systems approximated by the Takagi–Sugeno fuzzy model, and a fuzzy adaptive control scheme is then developed to deal with the model following control problem. For example, in [33], the dynamics of (22.7) is used to enforce the reachability of the sliding modes for MIMO nonlinear systems represented in a similar form to that shown above.

22.4.3 SMC with Integrated NN, FL, and EC

It is recognized that NNs and EC are processes that enable learning and optimization, while the fuzzy systems are a representation tool [15]. For complex systems that are difficult to model and represent in an analytic form, it is certainly advantageous to use fuzzy systems as a paradigm to represent the complex system as an aggregated set of simpler models, just like the role local linearization plays in nonlinear function approximation by piecing together locally linearized models. NNs, on the other hand, can be used as a learning mechanism to learn the dynamics, while the EC approaches can be used to optimize the representation and the learning. Such ideas have been used quite extensively in the dynamic systems and control areas. In another work [5], two fuzzy NNs are used to learn the control as well as to identify the uncertainties to eliminate chattering in distributed control systems; the conventional BP-based learning mechanism is employed for learning.

22.4.4 SMC with EC

The optimization of the control parameters and the model parameters is vitally important in reducing chattering and improving robustness. The complexity of the problem and the sheer number of parameters make the application of the conventional search methods rather hard. For conventional optimization, various techniques can be used [11], such as the gradient-based search, the Levenberg–Marquardt algorithm, etc.; however, a significant amount of information about the tendency of the search parameters toward the optima (i.e., the derivative information) is required. The key process of using EC is to treat the set of parameters (often a large number) as the attributes of an individual in a population, and to generate enough individuals randomly to ensure a rich diversity of "genes" in the population, which then evolves through bio-inspired operations such as mutation and/or crossover.

The advantages mentioned above have motivated various researchers to use EC as an alternative to find "optimal" solutions for SMC. In [23], a GA is used to optimize the matrix selection to obtain better membership functions for smoother SMC. Lin et al. [17] use a GA to reach an online estimation of the magnitude of the switching control to reduce chattering. For a comprehensive coverage of the literature on this topic, readers are referred to the survey paper [45].

22.5 Conclusions

In this chapter, the key developments of SC methodologies in SMC have been discussed.

References

1. The Berkeley Initiative in Soft Computing (BISC). http://www-bisc.cs.berkeley.edu
2. M. Baric, I. Petrovic, and N. Peric. Neural network based sliding mode control of electronic throttle. Engineering Applications of Artificial Intelligence, 18:951–961, 2005.
3. A. Burton and A. Zinober. Continuous approximation of variable structure control. International Journal of Systems Science, 17(6):875–885, 1986.

4. C.-S. Chen and W.-L. Chen. Robust adaptive sliding mode control using fuzzy modeling for an inverted-pendulum system. IEEE Transactions on Industrial Electronics, 45:297–306, 1998.
5. F. Da. Decentralized sliding mode adaptive controller design based on fuzzy neural networks for interconnected uncertain nonlinear systems. IEEE Transactions on Neural Networks, 11(6):1471–1480, 2000.
6. R. A. DeCarlo, S. H. Zak, and G. P. Matthews. Variable structure control of nonlinear multivariable systems: A tutorial. Proceedings of the IEEE, 76:212–232, 1988.
7. B. Drazenovic. The invariance conditions in variable structure systems. Automatica, 5:287–295, 1969.
8. M. Ertugrul and O. Kaynak. Neuro sliding mode control of robotic manipulators. Mechatronics, 10:239–263, 2000.
9. G. Feng, S. G. Cao, N. W. Rees, and C. K. Chak. Design of fuzzy control systems based on state feedback. Journal of Intelligent and Fuzzy Systems, 3:203–213, 1995.
10. D. B. Fogel. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, New York, 1995.
11. S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice-Hall, Englewood Cliffs, NJ, 1999.
12. T. Ho. A novel Takagi–Sugeno based robust adaptive fuzzy sliding mode controller. 2010.
13. G.-C. Hwang and M. Tomizuka. Fuzzy smoothing algorithms for variable structure systems. IEEE Transactions on Fuzzy Systems, 2:277–284, 1994.
14. K. Jezernik, M. Rodic, R. Saaric, and B. Curk. Neural network sliding mode robot control. Robotica, 15:23–30, 1997.
15. O. Kaynak, K. Erbatur, and M. Ertugrul. The fusion of computationally intelligent methodologies and sliding-mode control: A survey. IEEE Transactions on Industrial Electronics, 48:4–17, 2001.
16. T.-P. Leung, Q.-J. Zhou, and C.-Y. Su. An adaptive variable structure model following control design for robot manipulators. IEEE Transactions on Automatic Control, 36(3):347–353, 1991.
17. F.-J. Lin, W.-D. Chou, and P.-K. Huang. Adaptive sliding-mode controller based on real-time genetic algorithm for induction motor servo drive. IEE Proceedings: Electric Power Applications, 150:1–13, 2003.
18. X.-Y. Lu and S. K. Spurgeon. Robust sliding mode control of uncertain nonlinear systems. Systems and Control Letters, 32:75–90, 1997.
19. E. H. Mamdani and S. Assilian. An experiment in linguistic synthesis with a fuzzy logic controller. International Journal of Man-Machine Studies, 7(1):1–13, 1975.
20. H. Morioka, K. Wada, A. Sabanovic, and K. Jezernik. Neural network based chattering free sliding mode control. In Proceedings of the 34th SICE Annual Conference, 1995, pp. 1303–1308.
21. D. Munoz and D. Sbarbaro. An adaptive sliding mode controller for discrete nonlinear systems. IEEE Transactions on Industrial Electronics, 47(3):574–581, 2000.
22. R. Palm. Sliding mode fuzzy control. In Proceedings of the 1992 IEEE International Conference on Fuzzy Systems, San Diego, CA, 1992, pp. 519–526.
23. K. C. Ng, Y. Li, D. J. Murray-Smith, and K. C. Sharman. Genetic algorithms applied to fuzzy sliding mode controller design. In Proceedings of the First International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications, 1995, pp. 220–225.
24. R. Palm. Robust control by fuzzy sliding mode. Automatica, 30(9):1429–1437, 1994.
25. M. De Neyer and R. Gorez. Use of fuzzy concepts in adaptive sliding mode control. In Proceedings of the IEEE Conference on Systems, Man, and Cybernetics, San Antonio, TX, 1994.
26. S.-C. Lin and Y.-Y. Chen. Design of self-learning fuzzy sliding mode controllers based on genetic algorithms. Fuzzy Sets and Systems, 86:139–153, 1997.
27. A. Sabanovic and N. Sabanovic-Behlilovic. Variable structure control techniques. In B. M. Wilamowski and J. D. Irwin (eds.), The Industrial Electronics Handbook, 2nd edn: Control and Mechatronics, Chapter 13. Taylor & Francis, Boca Raton, FL, 2011.
28. K. P. Seng, Z. Man, and H. R. Wu. Lyapunov theory based radial basis function networks for adaptive filtering. IEEE Transactions on Circuits and Systems I, 49(8):1215–1220, 2002.
29. J.-J. E. Slotine and W. Li. Applied Nonlinear Control. Prentice Hall, Englewood Cliffs, NJ, 1991.
30. M. Sugeno and G. T. Kang. Structure identification of fuzzy model. Fuzzy Sets and Systems, 28:15–33, 1988.
31. M. Sugeno and G. T. Kang. Fuzzy modelling and control of multilayer incinerator. Fuzzy Sets and Systems, 18:329–346, 1986.
32. T. Takagi and M. Sugeno. Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics, 15:116–132, 1985.
33. S. Tong and H.-X. Li. Fuzzy adaptive sliding mode control of MIMO nonlinear systems. IEEE Transactions on Fuzzy Systems, 11(3):354–360, 2003.
34. V. I. Utkin. Variable structure systems with sliding modes. IEEE Transactions on Automatic Control, 22:212–222, 1977.
35. V. I. Utkin. Sliding Modes in Control and Optimization. Springer-Verlag, Berlin, Germany, 1992.
36. B. M. Wilamowski, N. J. Cotton, O. Kaynak, and G. Dundar. Computing gradient vector and Jacobian matrix in arbitrarily connected neural networks. IEEE Transactions on Industrial Electronics, 55(10):3784–3790, 2008.
37. J.-C. Wu and T.-S. Liu. A sliding-mode approach to fuzzy control design. IEEE Transactions on Control Systems Technology, 4:141–151, 1996.
38. J.-X. Xu and Y. Tan. Linear and Nonlinear Iterative Learning Control. Lecture Notes in Control and Information Sciences. Springer-Verlag, Berlin, Germany, 2003.
39. Y. Niu, J. Lam, X. Wang, and D. W. C. Ho. Sliding mode control for nonlinear state-delayed systems using neural network approximation. IEE Proceedings: Control Theory and Applications, 150(3):233–239, 2003.
40. N. Yagiz, Y. Hacioglu, and Y. Taskin. Fuzzy sliding-mode control of active suspensions. IEEE Transactions on Industrial Electronics, 55(11):3883–3890, 2008.
41. B. Yoo and W. Ham. Adaptive fuzzy sliding mode control of nonlinear systems. IEEE Transactions on Fuzzy Systems, 6:315–321, 1998.
42. K. D. Young, V. I. Utkin, and U. Ozguner. A control engineer's guide to sliding mode control. IEEE Transactions on Control Systems Technology, 7:328–342, 1999.
43. X. Yu, M. O. Efe, and O. Kaynak. A general backpropagation algorithm for feedforward neural networks learning. IEEE Transactions on Neural Networks, 13(1):251–254, 2002.
44. X. Yu, Z. Man, and B. Wu. Design of fuzzy sliding mode control systems. Fuzzy Sets and Systems, 95:295–306, 1998.
45. X. Yu and O. Kaynak. Sliding-mode control with soft computing: A survey. IEEE Transactions on Industrial Electronics, 56:3275–3285, 2009.
46. X. Yu and J.-X. Xu (eds.). Variable Structure Systems: Towards the 21st Century. Volume 274 of Lecture Notes in Control and Information Sciences. Springer-Verlag, Berlin, Germany, 2002.
A sliding mode approach to fuzzy control design.22-12 Control and Mechatronics 28. M. 28:15–33. Kang. Fuzzy modeling and control of multilayer incinerator. 2002. Efe. Tan. X. P. Variable Structure Systems: Towards the 21st Century. Springer Verlag. S. 1992. V. Utkin. Fuzzy identification of systems and its application to modeling and control. and B. Sliding mode neuro controller for uncertain systems. Ham. A general backpropagation algorithm for feedforward neural networks learning. Man. Applied Nonlinear Control. Wu. 32. Li. 41. D. 4:141–151. K. 95(3):295–306. Hacioglu. M. 2003. Dundar. and H. 2008. O. 55(10):3784–3790. Ham. 1977. Fuzzy sliding-mode control of active suspensions. 1996. 1988. Xu. 18:329–346. J. 13:251–259. Cliffs. 2008. Computing gradient vector and jacobian matrix in arbitrarily connected neural networks. Taskin. O. Young. and O. Yildiz. I. 1998. Seng. X. Wu. and Cybernetics.-J. IEEE Transactions on Industrial Electronics. N. Man. B.

III  Estimation, Observation, and Identification

23  Adaptive Estimation  Seta Bogosyan, Metin Gokasan, and Fuat Gurleyen ........ 23-1
    Introduction  •  Adaptive Parameter Estimation Schemes  •  Model Reference Adaptive Schemes  •  Lyapunov-Based Adaptive Parameter Estimation for Nonlinear Systems  •  Adaptive Observers  •  Experimental Results  •  References

24  Observers in Dynamic Engineering Systems  Christopher Edwards and Chee Pin Tan ........ 24-1
    Introduction  •  Linear Observers  •  Reduced-Order Observers  •  Noise and Design Tradeoffs  •  Kalman–Bucy Filtering  •  Nonlinear Observers: Thau Observer  •  High-Gain Observers  •  Sliding-Mode Observers  •  Applications  •  Conclusions  •  Further Reading  •  References

25  Disturbance Observation–Cancellation Technique  Kouhei Ohnishi ........ 25-1
    Why Estimate Disturbance?  •  Plant and Disturbance  •  Higher-Order Disturbance Approximation  •  Disturbance Observation  •  Disturbance Cancellation  •  Examples of Application  •  Conclusions  •  Bibliography

26  Ultrasonic Sensors  Lindsay Kleeman ........ 26-1
    Overview  •  Speed of Sound  •  Attenuation of Ultrasound due to Propagation  •  Target Scattering and Reflection  •  Beamwidth—Round Piston Model  •  Transducers  •  Polaroid Ranging Module  •  Estimating the Echo Arrival Time  •  Estimating the Bearing to Targets  •  Specular Target Classification  •  Ultrasonic Beam-Forming Arrays  •  Advanced Sonar Sensing  •  Sonar Rings  •  Conclusion  •  References

27  Robust Exact Observation and Identification via High-Order Sliding Modes  Leonid Fridman, Arie Levant, and Jorge Angel Davila Montoya ........ 27-1
    Introduction  •  High-Order Sliding-Modes State Observation  •  System Identification  •  Bibliography Review  •  References

23  Adaptive Estimation

Seta Bogosyan
University of Alaska

Metin Gokasan
Istanbul Technical University

Fuat Gurleyen
Istanbul Technical University

23.1  Introduction ........ 23-1
23.2  Adaptive Parameter Estimation Schemes ........ 23-3
      Least-Squares Estimation
23.3  Model Reference Adaptive Schemes ........ 23-12
      MRAS with Full-State Measurements  •  Generalized Error Models in Estimation with Full-State Measurements  •  Generalized Error Models in Estimation with Partial-State Measurements
23.4  Lyapunov-Based Adaptive Parameter Estimation for Nonlinear Systems ........ 23-15
      Standard Parameterization Approach  •  New Parameterization Approach
23.5  Adaptive Observers ........ 23-21
      Luenberger Observer  •  Adaptive Estimation in Deterministic Nonlinear Systems Using the Extended Luenberger Observer  •  Adaptive Estimation in Stochastic Nonlinear Systems Using the Extended Kalman Filter
23.6  Experimental Results ........ 23-30
References ........ 23-33

23.1  Introduction

The process of estimation involves the mathematical determination of unavailable or hard-to-access system states and uncertain parameters with the use of measurable states and a system model that may be available in analytic form or may be derived recursively as part of the estimation process. The diverse applications of estimation theory range from the design of state observers and/or parameter identifiers for control systems to prediction and decision support systems in a multitude of areas, that is, schemes that fuse data from multiple sensors or predict a certain future state of the system or environment.

Most control systems use output feedback, while some also require partial- or full-state feedback. In both cases, it is obvious that the control performance very much depends on the proper use of sensor information. For state feedback, theoretically, the number of states and sensors has to match, which calls for the acquisition and processing of a large number of sensors and related data. However, realistic considerations, such as cost, space, noise, and system robustness, as well as availability, are major reasons leading to the replacement of sensors wherever possible. Estimator (or observer) schemes serve as mathematical substitutes for sensors and are the most widely used solutions against these limitations. The use of estimators is not limited to control systems, but is extended to system identification and modeling with diverse applications in monitoring and fault detection, as well as sensorless control systems, where estimation schemes may be needed to calculate a single state, a parameter, or both parameters and states.

In this chapter, we will first group estimators in two main categories based on their estimation domain, that is, parameter estimation, state estimation, or simultaneous parameter and state estimation. While all the methods to be

discussed below can be applied for both linear and nonlinear systems (if necessary, after linearizing the given model), some estimator schemes are specifically designed for deterministic and some for stochastic systems:

• The first category, mostly referred to as adaptive parameter estimators (APE) or parameter identifiers, involves the estimation of the unknown parameters of a system model based on the principles of optimization or stability theory. Methods based on optimization theory require a system model given as an input–output transfer function or in phase-canonical state-space representation and use the system output, measurement noise, and/or input signals to solve for parameter values that will minimize a given performance index. Least-squares estimation (LSE) and recursive LSE (RLSE) schemes are the commonly known estimation techniques in this category. There are also APE schemes developed based on the principles of stability. One such estimator is the model reference adaptive scheme (MRAS), in which the parameter vector is updated while minimizing the error between the actual system output and the output of a desired system based on Lyapunov's stability theorem. Similarly, a parameter update law can be obtained based on Lyapunov's theory to estimate the system parameter vector, θ, using the error between the output signal, y(t), and the control reference signal, r(t), as input for the parameter update law. While linear system models must be developed for these two schemes, adaptive estimation may also be performed for nonlinear systems provided the system satisfies a certain parameter-regressor vector form to be discussed further on in this chapter. An important condition for accurate estimation with methods in this category is persistency of excitation, which is the requirement that either the system or the input signal be rich in terms of frequency content, so as to avoid parameter drift and assure convergence in steady state. APE schemes to be discussed in this chapter can be applied for both constant and varying system parameters and used in combination with model-based adaptive control schemes that require updates of system states and parameters, such as self-tuning regulators (STR). LSE and Lyapunov-based estimation schemes can also be used in combination with model-based nonlinear control schemes, such as PD+, feedback linearization, and inverse dynamic control approaches.

• Estimation schemes in the second category are developed based on linear, nonlinear, or recursive system models and can perform the estimation of states only (i.e., the deterministic extended Luenberger observer, ELO) or the simultaneous estimation of both system states and unknown system parameters (i.e., the stochastic extended Kalman filter, EKF). The estimated states and parameters derived from observers or estimators can further be used to update a control law online. Estimators and observers in this latter category may also be used to predict the future values of a system state or output with the use of the estimated model and measured signals (i.e., LSE applications in sensor networks or KF applications in sensor fusion). Although estimation schemes in this category are referred to both as "estimators" and as "observers" in the literature, the term "observer" is used for estimators performing state estimation only. In this chapter, we will comply with this common approach.

An essential requirement for observer and estimator schemes is the fast convergence of the estimated states to their actual values. Normally, an open-loop observer structure warranting the determination of the system output, y, based on the system input, u, may appear to be good enough. Such an open-loop structure may be sufficient to calculate the state values based on u when the system at hand is linear and time invariant (LTI) and has a well-defined model, with known system parameters and initial conditions. However, when the system's initial conditions are not known, the estimated states will take a longer time to converge to their actual values and will not converge at all if the model and system parameters do not match. Fast convergence of the estimated states in this case can only be ensured with the use of closed-loop observers/estimators, in which the output estimation error (the deviation between measured and calculated output) is also taken into consideration to update the observer control input.
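The open-loop versus closed-loop distinction above can be made concrete with a small numerical sketch (not from this chapter): a Luenberger-type observer whose correction gain L feeds back the output estimation error. The plant matrices, gain, and step sizes below are made-up illustrative values.

```python
import numpy as np

# Closed-loop (Luenberger-type) observer sketch for x' = A x + B u, y = C x.
# The correction term L (y - C x_hat) is what distinguishes the closed-loop
# observer from a purely open-loop model of the plant.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[5.0], [6.0]])   # illustrative gain; chosen so A - L C is stable

def observer_step(x_hat, u, y, dt=1e-3):
    """One Euler step of x_hat' = A x_hat + B u + L (y - C x_hat)."""
    y_hat = C @ x_hat
    return x_hat + dt * (A @ x_hat + B * u + L @ (y - y_hat))

# Simulate plant and observer from different initial conditions.
dt = 1e-3
x = np.array([[1.0], [0.0]])       # true initial state (unknown to the observer)
x_hat = np.zeros((2, 1))           # observer starts from zero
for _ in range(20000):
    u = 1.0
    y = C @ x                      # measured output
    x_hat = observer_step(x_hat, u, y, dt)
    x = x + dt * (A @ x + B * u)   # plant update

err = float(np.linalg.norm(x - x_hat))
```

With L set to zero, the same loop degenerates to an open-loop model, and the initial state mismatch would decay only as fast as the plant's own dynamics allow.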

Such observer configurations are more widely used than their open-loop counterparts and are known as closed-loop estimators or observers.

An important consideration in the design of observers is the unknown or varying system parameters in the observer model, which will naturally affect the estimation performance negatively. While such estimators can easily be developed for systems with analytical models, model-based control laws may also require accurate knowledge of system parameters and disturbances (i.e., load observers). In the more likely event that system parameters and properties are time varying, or where it is impossible to perform separate identification experiments for the system to establish an observer model, a recursive identification approach can be conducted in an online manner to infer the properties of the system. To remedy this problem, these unknown/variable parameters could be estimated and used to update the observer model online in the same process, or in a separate estimation process running in parallel.

To summarize, observers and estimators can be regarded as "model-based," as they rely on a model adapted online using available measurements. Usually, the model is given as a state-space representation; it may be finite-dimensional or infinite-dimensional, in continuous time or discrete time, deterministic or stochastic, smooth or "with singularities" [B07]. It is also worth mentioning that there are numerous examples in the literature that refer to all online estimators (and observers) as adaptive [B07], or that use the terms adaptive estimators and adaptive observers interchangeably. Consequently, schemes for the online determination of suitable regulators/predictors/filters (mostly associated with the first category) can inherently be considered adaptive and hence could be called adaptive estimators. In this chapter, estimators performing parameter estimation or identification only, or simultaneous state and parameter estimation, will be referred to as adaptive observers, and the term "observer" will be used only for state estimators.

This chapter aims to address the most widely used linear, nonlinear, deterministic, and stochastic estimation and observer schemes with corresponding simulation and experimental examples wherever possible. While most schemes can be applied for both linear and nonlinear systems after linearization or application of a proper parameterization approach, methods based on LSE and MRAS will be discussed in relation to linear system models, while the Lyapunov-based parameter estimation scheme will be addressed within the context of nonlinear system applications, to comply with the common application trends in the literature.

23.2  Adaptive Parameter Estimation Schemes

Adaptive parameter estimation (APE) schemes, also called parameter identifiers, are online estimation schemes that guarantee the convergence of the estimated parameters to their unknown actual values. In this chapter, the most commonly used APEs will be briefly discussed. The design of APEs includes the selection of the plant input so that the signal vector W(t) (also called the regressor vector), which contains input and output measurements, is persistently exciting (PE). The PE property of W(t) is necessary for the convergence of the estimated parameters to the unknown parameter values. The PE property is

k1 I ≥ ∫_t^(t+T) W(τ)Wᵀ(τ) dτ ≥ k2 I

where k1 and k2 > 0. A lack of this property may bring about parameter drift in steady state, which is one of the most important problems encountered with APEs in practice.
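The PE condition can be checked numerically by approximating the excitation Gramian with a sum over sampled regressor vectors. The sketch below (with illustrative signals, not taken from the text) contrasts a frequency-rich regressor with a non-exciting constant one.

```python
import numpy as np

# Numerical check of persistency of excitation (PE): over a window of length T,
# the Gramian  G = sum_k W(k) W(k)^T * dt  must satisfy k1*I >= G >= k2*I with
# k2 > 0, i.e., its smallest eigenvalue must be bounded away from zero.
def excitation_gramian(W_samples, dt):
    """W_samples: (N, n) array of regressor samples over the window."""
    return dt * sum(np.outer(w, w) for w in W_samples)

dt = 1e-2
t = np.arange(0.0, 10.0, dt)

# A "rich" regressor (two distinct frequencies) versus a constant one.
W_rich = np.column_stack([np.sin(t), np.sin(2.0 * t)])
W_poor = np.column_stack([np.ones_like(t), np.ones_like(t)])

lam_rich = np.linalg.eigvalsh(excitation_gramian(W_rich, dt)).min()
lam_poor = np.linalg.eigvalsh(excitation_gramian(W_poor, dt)).min()
# lam_rich is clearly positive, while lam_poor is (numerically) zero,
# so the constant regressor cannot guarantee parameter convergence.
```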

23.2.1  Least-Squares Estimation

The least-squares method, created by Gauss, is probably the oldest and most used estimation method. The method serves the purpose of finding the parameters of a mathematical model using the measured inputs and outputs of the system. The approach is based on finding a parameter set that yields the minimum sum of the squares of the errors, found as the difference between the measured output and the output calculated based on the system model with the calculated parameters. This error minimization approach is known as LSE. In most applications, the mathematical model is in linear form; hence, the method can be applied to a very wide range of systems with discrete or continuous-time models of stochastic or deterministic nature, and even to nonlinear systems. The implementation can be done either offline or online depending on the application or system. For example, with a system with slow-varying parameters, it makes sense to find the parameters offline; inversely, when dealing with a system that has fast-varying parameters in an operation range, online. This chapter will discuss LSE methods for both continuous-time and discrete-time dynamical systems [IF06,IS96,AW95].

23.2.1.1  Least-Squares Estimation Algorithms for Discrete-Time Systems

The discrete-time model of a system can be given in two forms:

1. As a transfer function:

y(z) = [ B(z^-1) / A(z^-1) ] u(z)   (23.1)

where z^-1 is the backward shift operator, u is the control input, and y is the output [AW95]. The parameters to be calculated are given in

A(z^-1) = 1 + a1 z^-1 + ⋯ + a_na z^-na,  B(z^-1) = b0 + b1 z^-1 + ⋯ + b_nb z^-nb   (23.2)

2. As a difference equation, (23.1) can also be represented as

y(t) = a1 y(t − 1) + ⋯ + a_na y(t − na) + b0 u(t) + b1 u(t − 1) + ⋯ + b_nb u(t − nb)   (23.3)

Here, the system output, y(t), is given in terms of output values at previous instants, (t − 1), …, (t − na), and input values at the current instant, (t), as well as previous instants, (t − 1), …, (t − nb), with ai (i = 1, …, na) and bj (j = 0, …, nb) being the unknown parameters. Since the discrete model in (23.3) is the one mostly used, our derivations will take that model as basis.

Next, these parameters are combined in a vector, P = [a1 … a_na  b0 … b_nb]ᵀ, while the measured inputs and outputs, u(t − j) and y(t − i), are combined in Wᵀ(t) = [−y(t − 1) … −y(t − na)  u(t)  u(t − 1) … u(t − nb)]. Hence, the model can now be given in the following parameter-linearizable form:

y(t) = Wᵀ P   (23.4)

P = [P1 P2 … Pn]ᵀ is known as the parameter vector, with dimension n = na + nb, and Wᵀ(i) = [w1(i) w2(i) … wn(i)] is known as the regression vector. This leads to the calculation of the output, y(i), for each instant i based on the model y(i) = Wᵀ(i)P, i = 1, 2, …, t.
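As a minimal illustration of the regressor construction Wᵀ(t) = [−y(t − 1) … u(t) …] and the batch least-squares solution, the sketch below identifies a made-up second-order (na = 2, nb = 1) discrete system from noise-free data; the parameter values are hypothetical. The signs follow the regressor convention above, so the a-coefficients enter the simulation with negated sign.

```python
import numpy as np

# Batch least-squares identification sketch for the regressor
# W^T(t) = [-y(t-1) ... -y(t-na)  u(t) u(t-1) ... u(t-nb)],  y(t) = W^T(t) P.
rng = np.random.default_rng(0)
a1, a2, b0, b1 = 0.4, 0.1, 1.0, 0.5          # true parameters (illustrative)
N = 500
u = rng.standard_normal(N)                    # white-noise input: frequency-rich
y = np.zeros(N)
for t in range(2, N):
    # Simulation consistent with y(t) = W^T(t) P and the -y regressor entries:
    y[t] = -a1 * y[t-1] - a2 * y[t-2] + b0 * u[t] + b1 * u[t-1]

# Stack regressor rows for t = 2..N-1 and solve W P = Y in the LS sense.
W = np.array([[-y[t-1], -y[t-2], u[t], u[t-1]] for t in range(2, N)])
Y = y[2:]
P_hat, *_ = np.linalg.lstsq(W, Y, rcond=None)
# With noise-free data and a full-rank regressor, P_hat recovers (a1, a2, b0, b1).
```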

The deviation between the estimated output, ŷ(i), calculated at an instant, i, and its measured value, y(i), gives the error

e(i) = y(i) − ŷ(i) = y(i) − Wᵀ(i)P̂(i)   (23.5)

The calculation of the unknown parameter vector P can be achieved by solving the least-squares cost function below, that is, by finding the P that minimizes the cost function, J(P, t) [AW95]:

J(P, t) = (1/2) Σ_{i=1}^{t} ( y(i) − Wᵀ(i)P )²   (23.6)

The cost function in (23.6) can be further organized by making the following definitions:

Y(t) = [y(1) y(2) … y(t)]ᵀ,  E(t) = [e(1) e(2) … e(t)]ᵀ,  P = [P1 P2 … Pn]ᵀ,
W(t) = [ w1(1) w2(1) … wn(1) ; … ; w1(t) w2(t) … wn(t) ]  (rows Wᵀ(i))   (23.7)

E = Y − Ŷ = Y − WP   (23.8)

J(P, t) = (1/2)[ e(1)² + ⋯ + e(t − 1)² ] + (1/2)e(t)² = J(P, t − 1) + (1/2)e(t)²   (23.9)

In compact form, the cost function can be expressed as

J(P, t) = (1/2)EᵀE = (1/2)Pᵀ(Wᵀ(t)W(t))P − PᵀWᵀ(t)Y + (1/2)YᵀY   (23.10)

We will now discuss some commonly used methods for calculating the unknown parameter vector that will minimize the above cost function. These methods can be listed as the gradient method, recursive least squares, and recursive least squares with forgetting factor. We will start our discussion with the gradient methods.

23.2.1.1.1  Gradient Methods

Standard gradient methods, such as steepest descent, Newton's method, etc., are commonly used for the minimization of J. As can be seen from (23.8), the minimization of the cost function, J(P, t), can also be achieved by minimizing each error-square term, as the deviation at each i is actually calculated in terms of the estimated parameter vector. The cost function is rewritten below by replacing P with P̂:

J(P̂(t)) = (1/2)e²(t) = (1/2)( y(t) − Wᵀ(t)P̂(t) )²   (23.11)

The cost function should decrease in the direction of its negative gradient, as expressed with the relationship below:

ΔP̂(t) = P̂(t) − P̂(t − 1) = −Γ(t) ∂J(P̂(t))/∂P̂(t)   (23.12)

where Γ(t) = Γᵀ(t) > 0 is a symmetric positive definite matrix.
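A quick numerical check (synthetic data, not from the handbook) that the quadratic cost J(P, t) = (1/2)EᵀE is minimized by the normal-equations solution P = (WᵀW)⁻¹WᵀY:

```python
import numpy as np

# The quadratic least-squares cost J(P) = 0.5 * ||Y - W P||^2 is minimized by
# the solution of the normal equations (W^T W) P = W^T Y.
rng = np.random.default_rng(1)
W = rng.standard_normal((100, 3))
P_true = np.array([1.0, -2.0, 0.5])
Y = W @ P_true + 0.01 * rng.standard_normal(100)   # slightly noisy measurements

def J(P):
    E = Y - W @ P
    return 0.5 * float(E @ E)

P_star = np.linalg.solve(W.T @ W, W.T @ Y)

# Any perturbation of P_star increases the cost (strict convexity,
# since W^T W is positive definite here).
worse = all(J(P_star + d) > J(P_star)
            for d in 0.1 * rng.standard_normal((20, 3)))
```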

As can be seen in the above equations, the parameter convergence rate can be adjusted by the proper selection of Γ, in practice often based on trial and error. Setting the gradient of (23.10) to zero gives

∇J(P, t) = ∂J(P, t)/∂P = −Wᵀ(t)(Y − WP) = 0   (23.13)

This is also equivalent to

∂J(P̂(t))/∂P̂(t) = ( ∂e(t)/∂P̂(t) ) e(t) = −W(t)e(t)   (23.14)

Hence, by substituting (23.14) in Equation 23.12, a gradient-descent-like iteration is obtained, which is also known as the MIT rule:

P̂(t) = P̂(t − 1) − Γ(t)( ∂e(t)/∂P̂(t) ) e(t) = P̂(t − 1) + Γ(t)W(t)e(t)   (23.15)

With the assumption that the parameter changes are slow in comparison to those of the other variables in the system, the derivative ∂e(t)/∂P̂(t) can be evaluated under the assumption that P is constant. There are many alternatives to the cost function given by Equation 23.11. If it is chosen to be J(P̂(t)) = |e(t)|, the gradient method gives

P̂(t) = P̂(t − 1) − Γ(t)( ∂e(t)/∂P̂(t) ) sgn[e(t)] = P̂(t − 1) + Γ(t)W(t) sgn[e(t)]   (23.16)

23.2.1.1.2  Recursive Least-Squares Estimation

In adaptive processes, the observations are obtained sequentially in real time. Accordingly, computations of the least-squares estimate should be carried out in such a way that the results obtained at time t − 1 can be used to get the estimates at time t (Figure 23.1). We may assume that Wᵀ(t)W(t) is positive definite for t ≥ n. In that case, the least-squares estimate is unique and can be formulated as

P̂(t) = ( Wᵀ(t)W(t) )⁻¹ Wᵀ(t)Y(t) = S(t)Wᵀ(t)Y(t)   (23.17)

such that a vector P is a least-squares solution of WP = Y if and only if P is a solution of Wᵀ(t)W(t)P = Wᵀ(t)Y, that is, if the Euclidean norm of the residual vector, E = Y − WP, is minimal over all parameter vectors P.
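The MIT-rule update P̂(t) = P̂(t − 1) + Γ W(t)e(t) can be sketched as a streaming loop; the system, adaptation gain, and regressor statistics below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Gradient (MIT-rule-style) parameter update sketch:
#   e = y - W^T P_hat,   P_hat <- P_hat + gamma * W * e
# A small positive gain keeps the iteration stable; white-noise regressors
# provide persistent excitation, so P_hat converges to P_true.
rng = np.random.default_rng(2)
P_true = np.array([0.8, -0.3])
gamma = 0.05                      # scalar gain, i.e., Gamma = gamma * I
P_hat = np.zeros(2)
for _ in range(5000):
    W = rng.standard_normal(2)    # persistently exciting regressor sample
    y = W @ P_true                # noise-free measurement
    e = y - W @ P_hat
    P_hat = P_hat + gamma * W * e
```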
ables in the system. by substituting (23. is minimal for all parameters. In that case. J(P (23. t ) = 1 ∂J ( P . This is also equivalent to ˆ (t ) ) ∂e(t ) ∂J ( P e(t ) = −W (t )e(t ) = ˆ (t ) ˆ (t ) ∂P ∂P (23. J(P.11) should decrease in the direction of its negative gradient as expressed with the relationship below: ˆ ˆ (t ) − P ˆ (t − 1) = − G (t ) ∂J ( P (t ) ) ˆ (t ) = P ∆P ˆ ∂P (t ) (23.13) in Equation 23. however.

where S(t ) = WT (t )W(t ) ( ) −1  =  ∑  W (i)W T (i) . The most common way is to define the residual error variable e(t) as defined sion for P in Equation 23.20) (23. the recursion for S(t) will be combined with the recursion for W T(t)Y(t) to give a direct recurˆ(t) from P ˆ (t − 1).21) as S −1(t ) = W T (t )W (t ) = S −1(t − 1) + W (t )W T (t ) (23.18): In order to calculate P Using the definitions below. which gives S(t ) = S −1(t − 1) + W (t )W T (t ) ( ) −1 = S(t − 1) − S(t − 1)W (t ) 1 + W T (t )S(t − 1)W (t ) ( ) −1 W T (t )S(t − 1) (23.22) Hence.23).5 by ˆ (t − 1) ˆ (t ) = y(t ) − W T (t )P e(t ) = y(t ) − y (23.23) To obtain S(t) from (23.19) ˆ (t) iteratively using P ˆ (t − 1). t must be placed by t − 1 in (23.25) © 2011 by Taylor and Francis Group.1  Schematic representation of the RLSE. t ≥ n  i =1 t −1 (23.21) ˆ (t − 1) = S(t − 1)WT (t − 1)Y (t − 1) P (23. the Matrix Inversion Lemma [AW95] is used. (23.19) can be written using (23.24) Finally. LLC .  W(t − 1) T T WT (t )W(t ) = [WT (t − 1) W (t )]  T  = W (t − 1)W(t − 1) + W (t )W (t ) W ( ) t     Y (t − 1) T WT (t )Y (t ) = [WT (t − 1) W (t )]   = W (t − 1)Y (t − 1) + W (t ) y(t ) y ( t )   (23.Adaptive Estimation 23 -7 W P + – ˆ (t – 1) P e y(t) ˆ(t – 1) ˆ(t) = WT(t) P y Update mechanism Figure 23.

and 23. This gives ˆ (t − 1) + W (t )e(t ) W T (t )Y (t ) = W T (t − 1)Y (t − 1) + W (t )W T (t )P (23. S(t) can be expressed in the following recursive form: S(t ) = S(t − 1) − L(t )(S(t − 1)W (t ))T (23.2). with online applications of RLSE.3  Recursive Least-Squares Estimation with Exponential Forgetting Factor [AW95] Forgetting factor λ is a number between 0 and 1. 23.27) we obtain ˆ (t ) = P ˆ (t − 1) + L(t )e(t ) P (23. e.2.21.26) Substituting for W T(t − 1)Y(t − 1). culating P ( ) © 2011 by Taylor and Francis Group.27) and (23.27) ˆ (t) in (23.27).22. P λ + W T (t )S(t − 1)W (t ) (23. the S and L expressions reflect the inclusion of the forgetting factor:  W (t )W T (t )S(t − 1)  S(t ) = λ −1S(t − 1) In −  λ + W T (t )S(t − 1)W (t )   (23.27 as a column vector of adjustment gains.30) ˆ (t) recursively (Figure 23. the definition can also be taken as As an alternative procedure to update the parameter P L(t) = S(t)W(t) in Equation 23.24). L(t ) = S(t − 1)W (t ) ˆ (t ) = P ˆ (t − 1) + L(t )e(t ) .29) Consequently. we can obtain the following equation for L(t): L(t ) = S(t − 1)W (t ) 1 + W T (t )S(t − 1)W (t ) (23. Initial value of S J (P .2  Flowchart for calˆ(t) recursively with RLSE.28) remain the same.1.28) By combining Equation 23. t ) = 1 2 ∑ λ ( y(i) − W P ) t −i T i =1 t 2 (23.24 gives ˆ (t ) = P ˆ (t − 1) + S(t )W (t )e(t ) P (23.18. Substituting the vector.23 -8 Control and Mechatronics and substitute for y(t) in Equation 23. LLC . Below is a flowchart for the procedure of calculating P 23. the expression for S(t) and L(t) are given as follows.32) S(t) = S(t – 1) – L(t)(S(t – 1)W(t))T Thus. that is.1. The use of this factor becomes necessary especially when high amounts of measurement and/or input data are involved.22 and the expression for P(t) in (23. 
Thus, S(t) can also be given in the following recursive form:

S(t) = ( In − L(t)Wᵀ(t) ) S(t − 1) / λ   (23.33)

Below is a flowchart for the procedure of calculating P̂(t) recursively for RLSE with forgetting factor (Figure 23.3).

Figure 23.3  Flowchart for calculating P̂(t) recursively for RLSE with forgetting factor: starting from an initial value of S, L(t) = S(t − 1)W(t)/(λ + Wᵀ(t)S(t − 1)W(t));  P̂(t) = P̂(t − 1) + L(t)e(t);  S(t) = (In − L(t)Wᵀ(t))S(t − 1)/λ.

23.2.1.2  Extended Least-Squares Estimation Method

In this section, we discuss the LSE approach for the case in which the output signals of the system in consideration are corrupted by noise. The approach is based on the system model given below:

A(q) y(t) = B(q) u(t) + C(q) ε(t)   (23.34)

where A(q), B(q), and C(q) are polynomials in the forward shift operator q, and ε(t) is white noise.

Let us consider that the degrees of the A, B, and C polynomials are confined to be n = degree(A) = degree(C) = degree(B) + 1:

A(q) = q^n + a1 q^(n−1) + ⋯ + an,  B(q) = b1 q^(n−1) + b2 q^(n−2) + ⋯ + bn,  C(q) = q^n + c1 q^(n−1) + c2 q^(n−2) + ⋯ + cn   (23.35)

Due to the infeasibility of measuring past random noise values, [ε(t − 1) … ε(t − n)], we can make the assumption that when [ŷ(t) ŷ(t − 1) … ŷ(t − n)], calculated by the LSE algorithm, converges to the actual [y(t) y(t − 1) … y(t − n)] values, the modeling error, [e(t) e(t − 1) … e(t − n)], will also converge to the random noise, [ε(t) ε(t − 1) … ε(t − n)]. This assumption leads to the extended regressor vector given below, augmented by the unknown noise coefficients. The model can then be approximated by

y(t) = Weᵀ(t − 1) P̂e + ε(t)   (23.36)

Weᵀ(t − 1) = [ −y(t − 1) … −y(t − n)  u(t − 1) … u(t − n)  ε(t − 1) … ε(t − n) ]   (23.37)

The variables ε(t) are approximated by the modeling errors e(t), so that ε(t) can be written in the following form:

e(t) = y(t) − Weᵀ(t − 1) P̂e(t − 1)   (23.38)

The resulting approach is called extended least squares (ELS). The recursive extended least-squares algorithm can be given as follows:

P̂eᵀ = [ â1 … ân  b̂1 … b̂n  ĉ1 … ĉn ]   (23.39)

The equations for updating the estimates are given by

P̂e(t) = P̂e(t − 1) + Se(t)We(t − 1)e(t)   (23.40)

Le(t) = Se(t − 1)We(t) / ( 1 + Weᵀ(t)Se(t − 1)We(t) )

Se(t) = Se(t − 1) − Le(t)Weᵀ(t)Se(t − 1)

Se(t) can also be given in the following form, as in RLSE:

Se(t) = Se(t − 1) [ In − We(t)Weᵀ(t)Se(t − 1) / ( 1 + Weᵀ(t)Se(t − 1)We(t) ) ]

The ELSE algorithm also uses the flowchart given for RLSE (Figure 23.4).

Figure 23.4  Flowchart for calculating P̂e(t) recursively for ELSE: Le(t) = Se(t − 1)We(t)/(1 + Weᵀ(t)Se(t − 1)We(t));  P̂e(t) = P̂e(t − 1) + Le(t)e(t);  Se(t) = Se(t − 1) − Le(t)Weᵀ(t)Se(t − 1).
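A minimal ELS sketch, assuming a made-up first-order ARMAX system: past residuals stand in for the unmeasurable noise terms in the extended regressor.

```python
import numpy as np

# Extended least squares (ELS) sketch: the regressor is augmented with the
# previous residual e(t-1) as a surrogate for the unmeasurable noise eps(t-1).
rng = np.random.default_rng(5)
a1, b1, c1 = -0.5, 1.0, 0.3        # illustrative ARMAX parameters
N = 3000
u = rng.standard_normal(N)
eps = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    # y(t) = -a1*y(t-1) + b1*u(t-1) + eps(t) + c1*eps(t-1)
    y[t] = -a1 * y[t-1] + b1 * u[t-1] + eps[t] + c1 * eps[t-1]

theta = np.zeros(3)                # estimates of [a1, b1, c1]
S = 1000.0 * np.eye(3)
e_prev = 0.0
for t in range(1, N):
    We = np.array([-y[t-1], u[t-1], e_prev])  # extended regressor
    e = y[t] - We @ theta                     # residual, approximates eps(t)
    L = S @ We / (1.0 + We @ S @ We)
    theta = theta + L * e
    S = S - np.outer(L, We @ S)
    e_prev = e
```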

Given an initial value of S_e, the ELSE recursion computes

L_e(t) = S_e(t − 1)W_e(t) / [1 + W_e^T(t) S_e(t − 1) W_e(t)]
P̂_e(t) = P̂_e(t − 1) + L_e(t) e(t)
S_e(t) = S_e(t − 1) − L_e(t) W_e^T(t) S_e(t − 1)

Figure 23.4  Flowchart for calculating P̂(t) recursively for ELSE. The ELSE algorithm thus uses the same flowchart structure given for RLSE.

23.2.3 Adaptive Parameter Estimation for Continuous-Time Systems

Single-input, single-output (SISO) linear, time-invariant systems can be modeled with an input–output transfer function as below, even when all the system states may not be available:

y = [N_m(s)/D_n(s)] u  (23.41)

N_m(s) and D_n(s) have no common factors and can be given as below:

N_m(s) = b_m s^m + b_(m−1) s^(m−1) + … + b₁s + b₀  (23.42)
D_n(s) = s^n + a_(n−1) s^(n−1) + … + a₁s + a₀  (23.43)

N_m(s) is of degree m, while D_n(s) is of degree n, with m ≤ n − 1. We can start with (23.41) to obtain the continuous-time parametric model as an nth-order differential equation given by

y^(n) + a_(n−1) y^(n−1) + … + a₀y = b_m u^(m) + b_(m−1) u^(m−1) + … + b₀u  (23.44)

The unknown parameters of N_m(s) and D_n(s), constituting the parameter vector P of dimension (n + m + 1), can be written as

P = [b_m  b_(m−1) … b₁  b₀  a_(n−1)  a_(n−2) … a₁  a₀]^T  (23.45)

We can lump all the parameters in (23.44) in the parameter vector P and all the corresponding input–output signal derivatives in the signal vector as below:

X^T = [u^(m)  u^(m−1) … u   −y^(n−1)  −y^(n−2) … −y] = [α_m^T(p)u   −α_(n−1)^T(p)y]  (23.46)

Here, α_i^T(p) = [p^i  p^(i−1) … 1] and p = d/dt. The definitions in (23.46) can further be used in the representation of y^(n) in the following compact form:

y^(n) = P^T X  (23.47)

It should be noted that the calculation of the parameter vector in (23.47) requires the mth- and nth-order derivatives of u and y, respectively. However, problems related to differentiation may cause limitations in practical applications. To address this limitation, low-pass filters (LPF) may be designed with orders

that are the same or higher than that of the differential equation describing the system. To this aim, the plant in (23.41) can be rewritten with the inclusion of Λ(s), with degree n, in the following way:

y = [N_m(s)/Λ(s)] / [D_n(s)/Λ(s)] u  (23.48)

Here, 1/Λ(p) is the nth-order low-pass filter, with Λ(p) = p^n + λ^T α_(n−1)(p) and λ^T = [λ_(n−1)  λ_(n−2) … λ₀]. In order to avoid differentiation, both sides of (23.47) are divided by Λ(p) to yield

y^(n)/Λ(p) = [p^n/Λ(p)] y   and   X^T(t) = [ (α_m^T(p)/Λ(p)) u(t)   −(α_(n−1)^T(p)/Λ(p)) y(t) ]  (23.49)

Below we will discuss the LSE approach developed for the estimation of parameters for a system model given with a continuous input–output transfer function. To develop the LSE for continuous-time observation, the unknown parameters are assumed to appear in a linear form, such as in the linear parametric model given below:

z(t) = P^T W(t),   t ∈ [0, ∞)  (23.50)

W^T(t) = [ (α_m^T(p)/Λ(p)) u(t)   −(α_(n−1)^T(p)/Λ(p)) y(t) ]  (23.51)

Following the use of this stable filter and the z and W definitions above, the input–output equation in (23.41) takes on this parameterization model, which is then used to develop adaptive laws for estimating P, where

z(t) = [p^n/Λ(p)] y(t) = [(Λ(p) − λ^T α_(n−1)(p))/Λ(p)] y(t) = y(t) + λ^T W₂

with W₂ denoting the filtered-output part of the regressor W. Other methods, such as RLSE and RLSE with forgetting factor, are also widely applied and can be found in [IF06,IS96,AW95] in further detail. In the next section, we will briefly discuss the gradient method as a commonly applied approach to perform the least-square minimization of (23.52).

23.2.3.1 Gradient Methods for Minimization of J in Continuous-Time Systems

We start with the instantaneous quadratic cost function in the form

J(P̂) = e²/2 = (z − P̂^T W)²/2  (23.52)

which will be minimized with respect to
P̂. The estimate ẑ = P̂^T W of z in Equation 23.50 results in the estimation error

e = z − ẑ = z − P̂^T W  (23.53)
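Minimizing this cost by steepest descent yields, as developed in the sequel, the continuous adaptive law Ṗ̂ = Γ e W. The sketch below integrates that law with a forward-Euler step; the regressor, gain, and true parameters are illustrative assumptions.

```python
import numpy as np

# Gradient parameter estimation for the linear parametric model z = P^T W:
# Euler discretization of dP_hat/dt = gamma * e * W, with e = z - P_hat^T W.
dt, gamma = 2e-3, 5.0                     # step size and adaptation gain
P_true = np.array([2.0, -1.0])            # unknown parameters (illustrative)
P_hat = np.zeros(2)

for k in range(100_000):                  # 200 s of simulated time
    t = k * dt
    W = np.array([np.sin(t), np.cos(2 * t)])  # persistently exciting regressor
    e = P_true @ W - P_hat @ W                # error e = z - P_hat^T W
    P_hat += dt * gamma * e * W               # adaptive law, Euler step

print(P_hat)  # converges toward [2.0, -1.0] for a PE regressor
```

The two distinct regressor frequencies make W persistently exciting, which is what guarantees that the parameter error, and not just e, converges to zero.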

P̂̇ = −Γ ∇J(P̂)  (23.54)

where Γ = Γ^T > 0 is a scaling matrix, also known as the adaptive gain. From (23.53), we have

∇J(P̂) = −(z − P̂^T W)W = −eW  (23.55)

Hence, the adaptive law for calculating P̂(t) can be obtained as

P̂̇ = Γ e W  (23.56)

23.2.4 APE Schemes for System Models in State-Space Form

APE methods discussed so far are based on relationships between input–output transfer functions and noise signals given as transfer function models. We will now discuss APE methods developed for system models given in state-space form. To this aim, consider an LTI plant represented by the state equation

ẋ = Ax + Bu,   x(0) = x₀,   y = Cx  (23.57)

where x ∈ Rⁿ, u ∈ Rᵐ, y ∈ Rˡ and n ≥ m. The plant parameters A ∈ Rⁿˣⁿ, B ∈ Rⁿˣᵐ, and C ∈ Rˡˣⁿ are unknown constant parameter matrices. The plant input is piecewise continuous and is a uniformly bounded function of time, while the state vector x ∈ Rⁿ may be fully or partially available for measurements. We assume that the plant (23.57) is completely controllable and observable; we also assume that A is stable and u is bounded. Similar to all other identification methods, the objective of this section is to design online parameter identifiers (or parameter estimation schemes) that guarantee convergence of the estimated plant parameters to their actual values; once again, this requires the control input to be PE. With these assumptions, and also using the previously described parameter estimators, we will now discuss parameter identification techniques based on MRAS (with full- or partial-state measurements) to estimate A and B from the measurements of x and u.

23.3 Model Reference Adaptive Schemes

An LTI system with unknown plant parameters can be modeled by the LTI differential equation in (23.57); the plant parameters to be estimated are the constant matrices A, B, C in (23.57).

23.3.1 MRAS with Full-State Measurements

MRAS involves the calculation of unknown system parameters via an adjustable model, the parameters of which are updated based on output measurements in a way that the deviation between the actual output and the model output is minimized. The parameter identifier can be designed using Gradient and Lyapunov methods. Now, first with the selection of a plant model, let us assume that the actual plant and the estimated plant have the following models, respectively:

ẋ = Ax + Bu,   x(0) = x₀  (23.58)

where A and B are constant matrices with unknown components.
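As a concrete preview of the identifiers developed in this section, the sketch below adjusts estimates (Â, B̂) of the plant matrices with Lyapunov-type laws of the form Â̇ = −Γ₁Te x̂ᵀ, B̂̇ = −Γ₂Te uᵀ with e = x̂ − x, the kind of update rules derived below. The plant, gains, and input are illustrative assumptions, and A is chosen so that T = I solves AᵀT + TA = −Q with Q > 0; in practice, selecting T requires prior knowledge about A.

```python
import numpy as np

# Full-state MRAS identification sketch: plant x' = Ax + Bu with unknown
# (A, B); adjustable model x_hat' = A_hat x_hat + B_hat u.
A = np.array([[-1.0, 0.5],
              [-0.5, -2.0]])           # stable; A + A^T = -diag(2, 4), so T = I
B = np.array([[1.0], [0.5]])
g1 = g2 = 10.0                         # adaptation gains Gamma1, Gamma2
dt, steps = 2e-3, 150_000              # 300 s, forward-Euler integration

x = np.zeros((2, 1)); x_hat = np.zeros((2, 1))
A_hat = np.zeros((2, 2)); B_hat = np.zeros((2, 1))

for k in range(steps):
    t = k * dt
    u = np.array([[np.sin(t) + np.sin(3 * t) + 0.5 * np.sin(5 * t)]])  # rich
    e = x_hat - x                      # state estimation error
    dA = -g1 * (e @ x_hat.T)           # Lyapunov-type adaptive laws (T = I)
    dB = -g2 * (e @ u.T)
    dx = A @ x + B @ u                 # plant
    dxh = A_hat @ x_hat + B_hat @ u    # adjustable model
    x += dt * dx; x_hat += dt * dxh
    A_hat += dt * dA; B_hat += dt * dB

print(np.linalg.norm(A_hat - A), np.linalg.norm(B_hat - B))  # shrink over time
```

The three-sinusoid input supplies the richness needed for the six unknown entries of (A, B) to be identifiable; with a less exciting input, e still converges but the parameter errors need not.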

x̂̇ = Âx̂ + B̂u  (23.59)

We will use the models in (23.58) and (23.59) to discuss the Lyapunov method in order to derive the parameter update rules for Â, B̂. The Lyapunov theory-based identification algorithm [IF06,IS96,NA05] requires the derivation of the state error dynamics equation as

ė(t) = x̂̇ − ẋ = Ae(t) + Ã(t)x̂(t) + B̃(t)u(t)  (23.60)

where the parameter error matrices are defined by

Ã(t) = Â(t) − A   and   B̃(t) = B̂(t) − B  (23.61)

Next, a Lyapunov function candidate is proposed to guarantee the asymptotic stability of the error dynamics in (23.60). Here T, the symmetric positive definite matrix of order n, must satisfy the following matrix equation:

A^T T + TA = −Q,   Q = Q^T > 0  (23.62)

and the candidate function and its derivative are

V(e, Ã, B̃) = (1/2)[e^T Te + tr{Ã^T Γ₁⁻¹Ã} + tr{B̃^T Γ₂⁻¹B̃}]  (23.63)

where Γ₁ = Γ₁^T > 0 and Γ₂ = Γ₂^T > 0 are positive definite gain matrices. After substituting ė(t) from (23.60) into the derivative of (23.63), negative semidefiniteness is guaranteed by requiring the terms that include the parameter errors to equal zero. This condition yields the parameter adaptive laws

Â̇ = −Γ₁Te(t)x̂^T(t),   B̂̇ = −Γ₂Te(t)u^T(t)  (23.64)

while the rest of the terms in the derivative of (23.63) give

V̇(e, Ã, B̃) = −(1/2) e^T Qe ≤ 0  (23.65)

When the plant is stable and u is bounded and sufficiently rich, the Lyapunov function-based identification laws (23.64) and (23.65) guarantee convergence of the parameter estimates to their true values with uniform asymptotic stability. That is, x̂(t), Â(t), B̂(t) are bounded and e(t) asymptotically converges to zero as t → ∞; furthermore, if the input u(t) is PE, then Ã(t), B̃(t) asymptotically converge to zero as t → ∞.

23.3.2 Generalized Error Models in Estimation with Full-State Measurements [NA05]

This approach involves unifying the Â, B̂ parameter matrices of the system in (23.58) and (23.59) in one matrix P̂, while x and u in (23.58) are unified in one regressor vector W:

ẋ = P^T W,   x̂̇ = P̂^T W

where P^T(t) = [A(t)  B(t)] and W(t) (with W^T(t) = [x^T(t)  u^T(t)]) is an s-vector. Hence, the resulting dynamics of the error between the adjustable model and plant states can be given as below:

ė(t) = Ke(t) + P̃^T(t)W(t)  (23.66)

where K (K = A) is a stable matrix of order n and P̃^T(t) = [Ã(t)  B̃(t)]. P(t) is unknown, but P̂(t) can be adjusted in such a way that

lim_(t→∞) e(t) = 0   and   lim_(t→∞) P̃(t) = 0  (23.67)

This adaptive law can be found via the Lyapunov function V(e, P̃) = (1/2)[e^T Te + tr(P̃(t)Γ⁻¹P̃^T(t))], where Γ = Γ^T > 0, T = T^T > 0 and K^T T + TK = −Q with Q = Q^T > 0, as a result of which

P̃̇^T(t) = −ΓTe(t)W^T(t)

Figure 23.5 illustrates the MRAS identification scheme with full-state measurement.

Figure 23.5  Generalized error models in estimation and control with full-state measurements.

23.3.3 Generalized Error Models in Estimation with Partial-State Measurements [NA05]

Figure 23.6 illustrates the MRAS scheme with partial-state measurement, which arises when only some of the outputs of the plant are accessible. In such a case, the error between the plant and model outputs can be described by

ė(t) = Ke(t) + DP̃^T(t)v(t),   η(t) = He(t)  (23.68)

Figure 23.6  Generalized error models in estimation and control with partial-state measurements.

In (23.68), it is assumed that the output vector belongs to R^m and that the transfer matrix H(sI − K)⁻¹D is strictly positive real, such that K^T T + TK = −Q, Q = Q^T > 0, T = T^T > 0 and H = D^T T. The adaptive law for updating the m × s matrix P̃^T(t) in this case is found to be

P̃̇^T(t) = −ΓD^T Te(t)v^T(t) = −Γη(t)v^T(t)  (23.69)

As in the previous case, V(e, P̃) = (1/2)[e(t)^T Te(t) + tr(P̃(t)Γ⁻¹P̃^T(t))] is a Lyapunov function, and, if the input u(t), and hence v(t), is sufficiently rich, e(t) and P̃(t) tend to zero. For a single-output system, (23.68) specializes to ė(t) = Ke(t) + dP̃^T(t)v(t), e₁(t) = he(t), with the adaptive law P̃̇(t) = −e₁(t)v(t). If the transfer function h(sI − K)⁻¹d is strictly positive real and the input u(t) is sufficiently rich, then e(t) and P̃(t) tend to zero.

23.4 Lyapunov-Based Adaptive Parameter Estimation for Nonlinear Systems

Besides its use in systems with linear models, Lyapunov-based APE proves to be a very effective approach for adaptive linearization in nonlinear systems. A common use of the approach is in combination with the feedback linearization technique, which aims to transform a nonlinear system into a (fully or partially) linear system, thereby allowing the use of linear techniques to complete the control design. Feedback linearization is based on the idea of using a change of coordinates and feedback control to cancel all or most of the nonlinear terms so that the system behaves as a linear or partially linear system. A large class of nonlinear control systems can be made to have linear input–output behavior through a choice of nonlinear state feedback control laws. The approach has been used to solve a number of practical nonlinear control problems involving important classes of nonlinear systems, such as input–output or input-state linearizable, or minimum-phase, systems.

Consider the SISO system given in control-affine form as

ẋ(t) = f(x) + g(x)u,   y(t) = h(x)  (23.70)

with x ∈ ℜⁿ and smooth f, g, h. Here, L_f h and L_g h stand for the Lie derivatives of h with respect to f and g, respectively. Differentiating y with respect to time until the control input, u, appears in the equation, one obtains

y^(r) = L_f^r h + L_g L_f^(r−1) h u  (23.71)

which, with the condition L_g L_f^(r−1) h(x) ≠ 0 ∀x ∈ ℜⁿ, yields the control law below:

u = [1 / (L_g L_f^(r−1) h)] (−L_f^r h + ν)  (23.72)

Equation 23.83 thus satisfies the so-called matching condition defined in Taylor et al. [TKM89]. However, feedback linearization does not guarantee robustness in the face of parameter uncertainty or disturbances, and it requires (preferably online) parameter and/or load updates for accurate tracking of reference inputs. A brief discussion of the Lyapunov-based adaptive linearization scheme will be provided below. Exact linearization in practical implementations poses some limitations, due to the fact that any
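As a concrete instance of (23.70) through (23.72), consider a pendulum-like plant with relative degree r = 2: differentiating y = x₁ twice exposes u, and the control u = (ν − L_f²h)/(L_g L_f h) renders the input–output map a double integrator ÿ = ν, on which a linear PD design completes the loop. The plant coefficients and gains below are illustrative assumptions.

```python
import numpy as np

# x1' = x2, x2' = -a*sin(x1) - b*x2 + c*u, y = x1  (control-affine, r = 2)
# Lie derivatives: L_f^2 h = -a*sin(x1) - b*x2, L_g L_f h = c (nonzero).
a, b, c = 9.81, 0.5, 2.0
kp, kd = 4.0, 4.0                  # outer linear law: nu = -kp*(x1-ref) - kd*x2
ref = 1.0                          # constant position reference
dt, steps = 1e-3, 20_000           # 20 s, forward-Euler integration

x1, x2 = 0.0, 0.0
for _ in range(steps):
    nu = -kp * (x1 - ref) - kd * x2            # linear design on y'' = nu
    u = (nu - (-a * np.sin(x1) - b * x2)) / c  # cancel L_f^2 h, as in (23.72)
    dx1, dx2 = x2, -a * np.sin(x1) - b * x2 + c * u
    x1 += dt * dx1
    x2 += dt * dx2

print(x1, x2)  # settles at (ref, 0): closed loop is y'' + kd*y' + kp*y = kp*ref
```

Note that the cancellation here uses the exact values of a, b, c; the adaptive schemes discussed next exist precisely because such exact knowledge is rarely available.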

uncertainty in the knowledge of the nonlinear functions f and g, which may naturally be variable or uncertain, will prevent the cancellation from being exact, resulting in an input–output equation that is not linear. The use of parameter adaptive control to be discussed in this section can help achieve exact cancellation, at least asymptotically, hence calling for online parameter estimation schemes. Input–output feedback linearizable systems often have the structure

ẋ = [ẋ₁; ẋ₂; …; ẋₙ] = [x₂; x₃; …; fₙ(x₁, x₂, …, x_(n−1))] + [0; 0; …; gₙ(x₁, x₂, …, x_(n−1))]u = f(x) + g(x)u
y = x₁ = h(x)  (23.73)

A system in the form above can be linearized by the proper application of a feedback control law. There are two existing approaches to the design of the control law.

23.4.1 Standard Parameterization Approach

The nth state equation in (23.73) can be parameterized linearly as

ẋₙ = fₙ(x) + gₙ(x)u = W_f^T(x)P_f + W_g^T(x)P_g u  (23.74)

The control input assumes the following form in [SI89]:

u = [1/(W_g^T(x)P_g)] (−W_f^T(x)P_f + ν)  (23.75)

where ν is the tracking controller designed for the desired system dynamics. This controller determines the behavior of the linear system arising after the proper cancellation of the nonlinearities. The performance of the linearizing controller in (23.75) depends on how well the parameter vector, P, is known. Since the parameters may naturally be variable or uncertain, it makes better sense to present (23.75) with the use of estimated parameter values P̂_f, P̂_g, as below:

u = [1/(W_g^T(x)P̂_g)] (−W_f^T(x)P̂_f + ν)  (23.76)

With the substitution of (23.76) into (23.74),

ẋₙ = ν − W_f^T P̃_f − u W_g^T P̃_g  (23.77)

Thus, it is clear that if the parameters converge to their true values, the system becomes asymptotically linearized. Now, ν must be chosen for tracking purposes and the update law for parameter estimation must be determined. For simplicity, consider a second-order system such as

ẋ = [ẋ₁; ẋ₂] = [x₂; f₂(x₁, x₂)] + [0; g₂(x₁, x₂)]u
y = x₁ = h(x₁, x₂)

Considering this second-order system, ν can be chosen as a simple proportional-derivative (PD) controller,

ν = ẋ₂^ref + K_D(x₂^ref − x₂) + K_P(x₁^ref − x₁) = ẋ₂^ref + K_D ė + K_P e  (23.78)

with which the error equation will be obtained as

ë + 2σė + σ²e − W_f^T P̃_f − u W_g^T P̃_g = 0  (23.79)

Usually,

K_P = σ²,   K_D = 2σ  (23.80)

so that the convergence rate σ of the asymptotic behavior is determined to provide critical damping. Now, for the determination of the parameter update law, a Lyapunov function is chosen as below:

V = (1/2)[(ė + σe)² + σ²e² + P̃_f^T Γ₁⁻¹P̃_f + P̃_g^T Γ₂⁻¹P̃_g]  (23.81)

which, upon substitution of (23.79), gives the V̇ as

V̇ = −σ(ė + σe)² + σ²eė + P̃_f^T [Γ₁⁻¹P̃̇_f + (ė + σe)W_f] + P̃_g^T [Γ₂⁻¹P̃̇_g + (ė + σe)W_g u]  (23.82)

For the satisfaction of the Lyapunov stability criteria, the parameter update laws must satisfy the equations

P̂̇_f = P̃̇_f = −Γ₁(ė + σe)W_f
P̂̇_g = P̃̇_g = −Γ₂(ė + σe)W_g u  (23.83)

with Γ₁, Γ₂ chosen as positive definite adaptation gain matrices, which leaves V̇ = −σ(ė² + σeė + σ²e²) ≤ 0.

23.4.2 New Parameterization Approach

The main drawback of the parameterization approach in (23.75) and (23.76) is that changes in P̂_g may affect the numerator terms negatively, especially when the P̂_g values become very small, hence deteriorating the convergence of the algorithm. To remedy this condition, another parameterization approach has been proposed [CP93,BG95], in which the parameters multiplying the control input are separated into those that are constant or known and those that are uncertain, as below:

W_g^T(x)P_g u = P_gn u + W̄_g^T(x)P̄_g u = −W_f^T(x)P_f + ν

The resulting control structure is as below:

u = (1/P_gn)[−W̄_g^T(x)P̄_g u − W_f^T(x)P_f + ν]  (23.84)
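A simulation sketch of the standard parameterization scheme of Section 23.4.1, the linearizing control (23.76) with the update laws (23.83), is given below. The plant, gains, initial estimates, and the lower clamp that keeps the estimated input gain away from zero (the very weakness just discussed) are all illustrative assumptions.

```python
import numpy as np

# Plant: x1' = x2, x2' = Wf^T Pf + (Wg^T Pg) u, Wf = [sin x1, x2], Wg = [1].
Pf_true = np.array([-2.0, -0.5]); Pg_true = np.array([2.0])
G1 = G2 = 2.0; sigma = 1.0             # adaptation gains and damping rate
ref = 1.0                              # constant reference for x1
dt, steps = 1e-3, 150_000              # 150 s

x1 = x2 = 0.0
Pf_hat = np.zeros(2); Pg_hat = np.array([3.0])

for _ in range(steps):
    Wf = np.array([np.sin(x1), x2]); Wg = np.array([1.0])
    e, de = ref - x1, -x2                            # tracking error, rate
    nu = sigma**2 * e + 2.0 * sigma * de             # PD law: Kp = s^2, Kd = 2s
    u = (nu - Wf @ Pf_hat) / max(Wg @ Pg_hat, 0.3)   # clamp avoids division by ~0
    s_e = de + sigma * e                             # combined error (e' + s*e)
    Pf_hat += dt * (-G1 * s_e * Wf)                  # update laws (23.83)
    Pg_hat += dt * (-G2 * s_e * Wg * u)
    dx2 = Wf @ Pf_true + (Wg @ Pg_true) * u
    x1 += dt * x2
    x2 += dt * dx2

print(x1)  # approaches ref as the tracking error is driven to zero
```

With a constant reference there is no persistency of excitation, so the estimates settle on a combination consistent with the equilibrium rather than the true values; the tracking error still converges.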

The new parameterization approach is performed by rewriting the equation of motion (23.77) in the following form:

ẋₙ = P_gn u + W^T P  (23.85)

where

W^T = [W_f^T   u·W̄_g^T],   P = [P_f^T   P̄_g^T]^T  (23.86)

Thus, the control variable, u, is now derived in a different form:

u = (1/P_gn)(ν − W^T P̂)  (23.87)

Here P_f, P_g are the actual parameters, P̂_f, P̂_g are the parameter estimates, and P̃_f = P̂_f − P_f, P̃_g = P̂_g − P_g represent the parameter errors. P_gn can be replaced by the constant input parameter if it is known for the system, or it can be given some constant value. With the substitution of (23.87) into (23.85), ẋₙ = ν − W^T P̃ is obtained, so that, as before, if the parameters converge to their true values, the system becomes asymptotically linearized. Again, ν must be chosen for tracking purposes and the update law for parameter estimation must be determined.

Example 23.1:  Experimental Implementation of Parameter Identification for Torque Ripple Minimization in Direct Drive Systems [CP93]

Below is a description of the experimental implementation of the Lyapunov-based parameter estimation for the estimation of load and friction uncertainties, as well as the cancellation of undesirable nonlinear torque-ripple terms, in permanent magnet synchronous motors (PMSMs) [BG95] in a 1-degree-of-freedom (DOF) robotic arm configuration under gravity effects. PMSMs find a wide range of applications in direct-drive robotics (DDR), which require that load torque variations and torque pulsations (i.e., torque ripple and cogging torque) be eliminated either mechanically or via control; this example considers the latter approach. The PMSM in this example drives a single-link arm with a variable load. The motor has 12 poles on its permanent magnet rotor and 39 teeth on its stator surface. With those assumptions, the mathematical model of the system can be described by the following equations:

dθ/dt = ω  (23.88)
dω/dt = [K_c39 cos 39θ + K_s39 sin 39θ − T_L sin θ + ξ] + [K_t + K_c6 cos 6θ + K_s6 sin 6θ] i_q  (23.89)

where the first bracketed term corresponds to f(x), the second to g(x), and i_q to the input u.

Here K_t, K_c6, K_s6 are the torque constant and the pulsation torque constants of the 6th components (the biggest pulsation torque components), respectively, all divided by the total inertia (1/(A·s²)); K_c39, K_s39 are the cogging torque constants of the 39th components (the biggest cogging torque components), divided by the total inertia (1/s²); T_L is the load torque divided by the total inertia, due to the gravity effect (1/s²); ξ is the residue of the pulsation and cogging torques (1/s²), an effect that is small when compared with the other components; p is the number of pole pairs, with p = 6; and i_q is the torque current (A).

Implementation of the standard parameterization scheme

The torque pulsations in the torque function can be eliminated by an appropriately chosen current input, with current harmonics canceling the effects due to the geometric harmonics; however, this would require accurate knowledge of the parameters (the coefficients of the load and torque pulsations in (23.89)). To solve this problem, we will exploit the flexibility offered by the two parameter estimation schemes described above and perform adaptive online identification of the unknown load and torque-ripple coefficients. Both methods are implemented for the direct-drive system in consideration with the use of the DS1104 motion controller. Both parameterization approaches require the determination of ν for the desired tracking dynamics as well as the parameter update law, as given in (23.78) and (23.83). To further the implementation of the two schemes on the given direct-drive system, the following substitutions are made in (23.88) and (23.89):

x₁ = θ,   x₂ = ω,   u = i_q

ν is selected as below:

ν = ω̇_d + K_D(ω_d − ω) + K_P(θ_d − θ)  (23.92)

where θ_d and ω_d are the desired angular position and velocity, respectively, for both schemes. With these substitutions, the parameter update laws assume the form

P̂̇_f = −Γ₁(ė + σe)W_f  (23.93)
P̂̇_g = −Γ₂(ė + σe)W_g i_q  (23.94)

where Γ₁, Γ₂ are arbitrary positive definite adaptation gain matrices. With the parameters updated according to (23.93) and (23.94), the control variable i_q is obtained in the following form for the standard parameterization scheme:

i_q = [θ̈_d + K_P e + K_D ė + T̂_L sin θ − K̂_c39 cos 39θ − K̂_s39 sin 39θ] / [K̂_t + K̂_c6 cos 6θ + K̂_s6 sin 6θ]  (23.95)

with the parameter vector to be estimated and the regressor vector given by

P^T = [K_t   K_c6   K_s6   K_c39   K_s39   T_L]  (23.96)
W^T = [1   cos 6θ   sin 6θ   cos 39θ   sin 39θ   −sin θ]  (23.97)

Inspection of (23.95) reveals the drawback of the standard adaptive linearization as applied to the direct-drive system. Due to the variables in the denominator of i_q, convergence of e and ė cannot actually be guaranteed unless special measures are taken on the variation of those terms in the denominator, which results in the lack of persistency of excitation. Parameter convergence is also not ensured for this reason, since K_t adaptation is also included in the control. A theoretical coverage of this issue can be found in [CP93].

Implementation of the new parameterization scheme

To improve the convergence performance, we will now implement the new parameterization approach by rewriting the equation of motion (23.89) in the following form:

ω̇ = K_t i_q + W^T P  (23.98)

As can be seen from the above equation, the torque terms are no longer grouped according to multiplication with the control variable, i_q. The parameter vector and regressor vector have the same form as in (23.96) and (23.97), respectively. The control variable, i_q, is now derived in a different form, with only the constant K_tn in the denominator:

i_q = (1/K_tn)(ν − W^T P̂)  (23.99)

K_tn can be replaced by the nominal torque constant if it is known for the system, or it can be given some constant value. This basically eliminates all the problems induced by the division by variables in i_q. The new parameterization approach also allows the choice of a simpler Lyapunov structure, such as

V = (1/2)[(ė + σe)² + σ²e² + P̃^T Γ⁻¹P̃]  (23.100)

where P̃ = P̂ − P is the parameter error vector, with its update law obtained by the same procedure as before:

P̂̇ = P̃̇ = −(ė + σe)ΓW  (23.101)

The only difference is that the update law has the same form for all parameters, unlike the standard approach. The new parameterization, as can also be observed from (23.99), also establishes guaranteed stability, robustness, and parameter convergence [CB93].

The experimental results for both parameterization approaches are given below. Parameter estimation is performed to determine the unknown coefficients of the torque pulsations as well as the unknown load mass, which is varied from zero to M = 150 g. Comparative results are provided for both schemes, under both no load and load. Figure 23.7a through d present the performance for a constant speed reference trajectory, while Figure 23.8a and b depict the torque spectrum for no load and for M = 150 g, respectively. Finally, for a better demonstration of the performance of the two schemes, torque-ripple spectra are also provided: Figure 23.8c and d present the torque-ripple spectrum for the standard and new parameterization schemes, respectively. The improved performance of the new parameterization applied to the direct-drive system can be noted in the experimental results.

Figure 23.7  Comparative performance of the standard and new parameterization schemes for constant speed (ω_d = 2): (a) variation of error, (b) variation of θ (angular position), (c) variation of K_c6, K_s6, K_c39, and K_s39, and (d) variation of T_L and K_t.

A careful inspection of the figures demonstrates that the load parameters appearing in the denominator of the control law drift or do not converge at all, while the parameters and load variations appearing in the numerator are estimated with high performance for both schemes. Hence, the representation of all the variable parameters in the numerator, achieved by the new parameterization approach, results in an overall improved performance in terms of accuracy and robustness.

23.5 Adaptive Observers

The main drawback of the schemes described in the previous sections is that they depend on the accessibility of all state variables, or on the availability of a strictly positive real transfer function model with partial-state measurements of the plant to be identified. However, in most practical situations, it should be expected that only some of the state variables of the plant can be measured. Hence, it is necessary to construct a scheme that estimates both the parameters of the plant (23.58), that is, A, B, C, as well as the state vector x, using only input–output measurements. We refer to such schemes as adaptive observers or adaptive estimators. A good starting point for choosing the structure of an adaptive observer is the state observer, known as the Luenberger observer, which is used to estimate only the state vector x of the plant (23.58) when the parameters A, B, C are known.

Figure 23.8  Torque ripple spectrum (a) for M = 0 with PD control, (b) for M = 150 g with PD control, (c) with adaptive linearization using standard parameterization, and (d) with adaptive linearization using new parameterization.

23.5.1 Luenberger Observer

When the initial condition x₀ is unknown and A is not a stable matrix, or A is stable but the state observation error e = x̂ − x is required to converge to zero faster than the rate with which ‖e^(At)‖ goes to zero, the following observer, known as the Luenberger full-state observer, is used:

x̂̇ = Ax̂ + Bu + K(y − ŷ),   ŷ = Cx̂,   x̂(0) = x̂₀  (23.102)

where K is a column matrix to be chosen by the designer. The Luenberger observer is seen to have a feedback term that depends on the output observation error ỹ = y − ŷ. The state observation error e = x̂ − x satisfies

ė = (A − KC)e,   e(0) = x̂₀ − x₀  (23.103)

Because (C, A) is an observable pair and (A^T, C^T) is a controllable pair, we can choose K so that the characteristic polynomial |sI − A + KC| = |sI − A^T + C^T K^T| is a stable polynomial with the desired eigenvalues of A − KC, so the rate of convergence of the observation error e(t) to zero can be chosen arbitrarily by designing K appropriately.
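A numerical sketch of (23.102) and (23.103): for the observable pair below, K is chosen so that A − KC has eigenvalues {−5, −6}, placed by matching the characteristic polynomial by hand. The plant, input, and initial conditions are illustrative assumptions.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
# Want |sI - A + KC| = s^2 + 11 s + 30 (poles -5, -6). With K = [k1, k2]^T the
# polynomial is s^2 + (3 + k1) s + (3 k1 + k2 + 2), so k1 = 8, k2 = 4.
K = np.array([[8.0], [4.0]])
assert np.allclose(sorted(np.linalg.eigvals(A - K @ C).real), [-6.0, -5.0])

dt, steps = 1e-3, 5_000                  # 5 s, forward-Euler integration
x = np.array([[1.0], [-0.5]])            # unknown true initial state
x_hat = np.zeros((2, 1))                 # observer starts from zero

for k in range(steps):
    u = np.array([[np.sin(k * dt)]])
    y = C @ x
    x_hat += dt * (A @ x_hat + B @ u + K @ (y - C @ x_hat))  # observer (23.102)
    x += dt * (A @ x + B @ u)                                # plant

print(np.linalg.norm(x_hat - x))  # error decays through the -5, -6 modes
```

For larger systems the same gain can be obtained numerically, for example with a pole-placement routine applied to the dual pair (Aᵀ, Cᵀ).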

23.5.2 Adaptive Estimation in Deterministic Nonlinear Systems Using the Extended Luenberger Observer

As previously discussed, the Luenberger observer has a linear structure and is used to estimate the system states with the use of input–output signals and under the assumption of known system parameters. However, it is a well-known fact that, generally, system parameters are not known or may vary during system operation. To address such problems, in addition to the commonly encountered nonlinearities in practical systems, the ELO has been developed as an extension of the Luenberger observer, using deterministic system models based on the assumption of slow parameter variations. ELOs can perform the simultaneous estimation of states and unknown parameters. The ELO is designed for nonlinear systems in the following form:

ẋ(t) = f(x(t), P(t)) + Bu(t)
y(t) = Cx(t)  (23.104)

where P(t) is the variable parameter vector. To perform the estimation of slowly varying system parameters, an "extended" system model is obtained as below:

ẋ_e(t) = [ẋ(t); Ṗ(t)] = [f(x(t), P(t)); 0] + [B; 0]u(t) = F(x_e(t)) + B_e u(t)
y(t) = [C  0] x_e(t) = C_e x_e(t)  (23.105)

Next, the nonlinear system in (23.105) is linearized around the estimated state vector x̂_e(t) as

ẋ_e(t) ≈ F(x̂_e(t)) + (∂F(x_e)/∂x_e)|_(x̂_e) (x_e − x̂_e) + B_e u(t)  (23.106)

Using the relationship

A_e(t) = (∂F(x_e(t))/∂x_e(t))|_(x_e = x̂_e)  (23.107)

the equations below are obtained for the ELO scheme:

x̂̇_e(t) = F(x̂_e(t)) + B_e u(t) + L(t)(y(t) − ŷ(t))  (23.108)
ŷ(t) = C_e x̂_e(t)  (23.109)

or, equivalently,

x̂̇_e(t) = F(x̂_e(t)) + B_e u(t) + L(t)(y(t) − C_e x̂_e(t))  (23.110)

It should be noted that in the ELO scheme, similar to that of the LO, the gain matrix L(t) will vary as the estimated system parameters are updated.

Defining the state estimation error vector as

x̃_e(t) = x_e(t) − x̂_e(t)  (23.111)

the error dynamics can be represented as below, with the design of an appropriate observer gain matrix L(t) to yield the desired estimation dynamics:

x̃̇_e(t) = [A_e(t) − L(t)C_e] x̃_e(t)  (23.112)

The gain matrix L(t) is calculated at every time step, t, by matching the actual characteristic polynomial with a characteristic polynomial representing the desired observer dynamics, sⁿ + γ₁sⁿ⁻¹ + … + γₙ = 0 (γᵢ: constant), as can be seen below:

|sI − A_e(P) + L(P)C_e| = sⁿ + α₁(L(P))sⁿ⁻¹ + … + αₙ(L(P)) = 0  (23.113)

23.5.3 Adaptive Estimation in Stochastic Nonlinear Systems Using the Extended Kalman Filter

The extended Kalman filter (EKF) scheme is another well-established adaptive estimation scheme for the simultaneous estimation of states and unknown parameters in nonlinear systems with stochastic models. Similar to the approach taken in the previous section, we consider the problem where both the state x and the parameters A, B, C are to be estimated online, simultaneously, using an adaptive observer. The optimization is performed based on the minimization of the mean of the squared error, and the filter is capable of providing estimations of past, present, and future states even when the model of the system is not accurately known. We will first discuss the basics of the Kalman filter (KF), which is the optimal "nonadaptive" state estimation scheme, upon which the "nonoptimal" EKF scheme is built.

23.5.3.1 Kalman Filter

The KF is a well-known recursive algorithm that takes the stochastic linear state-space model of the system into account, together with measured outputs, to achieve the optimal estimation of states [BGH01] in multi-input, multi-output systems. The KF algorithm involves two main steps: the prediction step, in which predictions of the state and error covariance estimates, namely the a priori estimates, are calculated for the next time step; and the correction step, in which an improved a posteriori estimate is obtained by incorporating a new measurement into the a priori estimate. The design of the KF is based on the following stochastic linear difference model of the system for which state estimation will be performed:

x(k) = Ax(k − 1) + Bu(k − 1) + w(k − 1)  (23.114)

with a measurement signal that can be given as

z(k) = Hx(k) + ε(k)  (23.115)

The random variables w and ε in (23.114) and (23.115) represent the process and measurement noise, respectively. They are considered to be white and with normal probability distributions, p(w) ∼ N(0, Q) and p(ε) ∼ N(0, R), where Q and R are the process noise covariance and the measurement noise covariance matrices, respectively. The prediction–correction algorithm can thus be summarized with the following equations:
by matching the actual characteristic polynomial with a characteristic polynomial representing the desired observer dynamics. in which an improved a posteriori estimate is obtained by incorporating a new measurement into the a priori estimate. Similar to the approach taken in the previous section. present. The KF algorithm involves two main steps: prediction step. The prediction–correction algorithm can thus be summarized with the following equations: © 2011 by Taylor and Francis Group. multi-output systems. respectively. with sn + γ1sn−1 + … + γn = 0 (γi: constant) as can be seen below: sI − A e ( P ) + L(P )Ce = s n + α1(L(P ))s n −1 + + α n (L(P )) = 0 (23. t. which is the optimal “nonadaptive” state estimation scheme. respectively. C are to be estimated online simultaneously using an adaptive observer. p(v) ∼ N(0.113) Now.5.

Prediction

x̂(k | k − 1) = Ax̂(k − 1 | k − 1) + Bu(k − 1)  (23.116)

U(k | k − 1) = AU(k − 1 | k − 1)Aᵀ + Q  (23.117)

Correction

K(k) = U(k | k − 1)Hᵀ (HU(k | k − 1)Hᵀ + R)⁻¹
x̂(k | k) = x̂(k | k − 1) + K(k)(z(k) − Hx̂(k | k − 1))  (23.118)
U(k | k) = (I − K(k)H) U(k | k − 1)

23.5.3.2 Extended Kalman Filter

The EKF is an extension of the KF scheme designed specifically for nonlinear systems, taking system/process errors and measurement noises directly into account. With this method, it is possible to perform the online estimation of states while simultaneously identifying parameters in a relatively short time interval [SST93,BGH01,BBG02]. The method ensures a high convergence rate, while the system noise taken into account in the design process inherently fulfills the persistency of excitation requirement for accurate estimation. The recursive nature of the algorithm is yet another attraction, as it provides accurate estimation based on data from previous time steps, which also makes the algorithm suitable for practical implementation in spite of its computational complexity; the latter has ceased to be a burden thanks to the developments in high-performance processor technology. These are the main reasons why the EKF has found wide application in industrial control systems.

Similar to the design of the ELO, the design of the EKF scheme is based on the extended model of the given nonlinear system as in (23.108), which is first discretized in the following form:

x_e(k) = x_e(k − 1) + T f_e(x_e(k − 1), u(k − 1)) = F(x_e(k − 1), u(k − 1))  (23.119)

z(k) = Hx_e(k) + ε(k)  (23.120)

Here,
w is the Gaussian system noise with zero mathematical expectation and variance Q
H is the measurement matrix
ε is the Gaussian measurement noise with zero mathematical expectation and variance R

The system output, y, in (23.108) is replaced with the measured output, z, which includes the measured signal as well as the measurement noise. Next, the quasi-linearization of (23.119) is performed around the estimates x̂_e(k − 1) and û(k − 1) of the previous step, yielding

x_e(k) = F[x̂_e(k − 1), û(k − 1)] + F_x[x_e(k − 1) − x̂_e(k − 1)] + F_u[u(k − 1) − û(k − 1)] + w(k − 1)  (23.121)

Here,

F_x = [∂F/∂x_e] and F_u = [∂F/∂u], both evaluated at (x̂_e(k − 1), û(k − 1))
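As a concrete illustration, the KF prediction–correction recursion (23.116) through (23.118) can be sketched in Python/NumPy as follows; this is a minimal sketch, with function names of our choosing, not an implementation from the text:

```python
import numpy as np

def kf_predict(x_hat, U, A, B, u, Q):
    """Prediction step, Eqs. (23.116)-(23.117): a priori state and covariance."""
    x_pred = A @ x_hat + B @ u              # x^(k|k-1) = A x^(k-1|k-1) + B u(k-1)
    U_pred = A @ U @ A.T + Q                # U(k|k-1) = A U(k-1|k-1) A^T + Q
    return x_pred, U_pred

def kf_correct(x_pred, U_pred, H, z, R):
    """Correction step, Eq. (23.118): gain, a posteriori state and covariance."""
    S = H @ U_pred @ H.T + R                # innovation covariance
    K = U_pred @ H.T @ np.linalg.inv(S)     # K(k)
    x_hat = x_pred + K @ (z - H @ x_pred)   # x^(k|k)
    U = (np.eye(len(x_pred)) - K @ H) @ U_pred
    return x_hat, U
```

Calling `kf_predict` and then `kf_correct` once per sample realizes one pass of the recursion; the a posteriori covariance returned by the correction step is never larger than the a priori one, reflecting the information gained from the new measurement.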

The estimation of x_e(k) is performed based on the Bayes approach, using the fact that the a posteriori probability density, p[x_e(k)/z^k], contains all the statistical information on the value of x̂_e(k) and can be expressed using the Bayes formula as

p[x_e(k)/z^k] = p[x_e(k)/z^(k−1)] p[z(k)/x_e(k), z^(k−1)] / p[z(k)/z^(k−1)]  (23.122)

Here, z^k = {z(1), z(2), …, z(k)} and z^(k−1) = {z(1), z(2), …, z(k − 1)}.

The accuracy of the estimate is determined by the covariance matrix of this distribution; hence, the parameter identification algorithm can be given as

x̂_e(k) = F[x̂_e(k − 1), û(k − 1)] + U(k)Hᵀ R⁻¹ {z(k) − HF[x̂_e(k − 1), û(k − 1)]}  (23.123)

U(k) = N(k) − N(k)Hᵀ [R + HN(k)Hᵀ]⁻¹ HN(k)  (23.124)

N(k) = F_x U(k − 1)F_xᵀ + F_u D_u(k − 1)F_uᵀ + Q  (23.125)

Here,
N is the covariance matrix of the extrapolation error
U is the covariance matrix of the estimation error
D_u is the variance of the input signal

The EKF given by (23.123) through (23.125) provides the optimal value of the vector x_e based on the optimization criteria and under the measurement noise and external noise affecting the states.

Two practical examples are provided below to further clarify the implementation of the EKF scheme. The first involves the online estimation of the variable load for the high-performance control of a nonlinear single-link arm, while the second is the estimation of velocity, flux, load, and rotor resistance for the control of squirrel-cage induction motors (IMs) without the use of mechanical sensors, commonly known as sensorless control of IMs. The control and estimation algorithms for both examples are performed on the high-performance motion control board DS1104.

Example 23.2: Parameter Identification for the Control of a Single Link Arm [BGH01]

The example below is a practical implementation of the EKF scheme for the high-performance control of a 1 DOF arm. A PD+ controller is designed for the control of the arm and the compensation of the variable load; hence, an EKF scheme will be developed to accurately estimate the load. The estimator is specifically designed to calculate the total inertia of the system for use in the calculation of the load, while also estimating, at the same time, the angular position, θ, and velocity, ω. The PMSM-driven single-link arm in Figure 23.9 is modeled as follows:

θ̇ = ω,  ω̇ = −(B/J)ω + (k_t/J)i_q − (Mgl/J) sin θ  (23.126)

Figure 23.9  Single link arm.

where
θ is the angular position of the arm (rad)
ω is the angular velocity of the arm (rad/s)
J is the total moment of inertia reduced to the motor shaft (kg m²)
B is the coefficient of viscous friction (Nm/rad/s)
k_t is the torque constant (Nm/A)
M is the mass of the payload (kg)
g is the acceleration of gravity (m/s²)
l is the length of the arm (m)
i_q is the control input, u (A)
J_m is the moment of inertia of the motor
J = J_m + Ml²

As discussed in the previous section, the EKF scheme given by (23.123) through (23.125) will provide the near-optimal value of the vector x under measurement and system noise. The estimation algorithm will be used to estimate θ̂(k), ω̂(k), and Ĵ(k) using actual measurements of θ(k). The estimated total inertia value, Ĵ(k), will then be used to calculate the unknown load mass using the relationship

M̂(k) = (Ĵ(k) − J_m)/l²  (23.127)

The PD+ controller is designed as below, aiming for the cancellation of the variable load term while providing the desired tracking dynamics:

i_q = K_p e + K_d ė + (M̂gl/k_t) sin θ  (23.128)

Here,
e = θ_ref − θ
K_p is the proportional gain
K_d is the derivative gain
θ_ref is the reference angular position value

The system parameters are given in Table 23.1. The experimental results obtained with the EKF-based PD+ controller for load masses of M = 0.2 and 0.6 kg are presented in Figure 23.10, which demonstrates the actual θ values, while Figure 23.11 depicts the tracking error, e = θ_ref − θ, between the reference and the angular position. Figure 23.11 also provides zoomed-in sections of the output in the transient and steady state for both load values under a step-type reference trajectory. A careful inspection of the diagrams demonstrates steady

TABLE 23.1  System Parameters
J = 0.0268 kg m²      J_min = 0.0042 kg m²
B = 0.0007 Nm s       k_t = 0.2 Nm/A
l = 0.14 m            M = 0.2, 0.6 kg
θ_ref = π/2           K_p = 20
K_d/T = 5000          T = 0.0008 s
D_σ = 2 × 10⁻⁵        D_iq = 4.88 × 10⁻⁴
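In implementation terms, one pass of the identification recursion (23.123) through (23.125) can be sketched as below in Python/NumPy. This is a minimal sketch with function names of our choosing; in the arm example, F would be the discretized augmented model and the Jacobians would follow from the quasi-linearization (23.121):

```python
import numpy as np

def ekf_step(x_hat, u_hat, U_prev, F, jacobians, H, R, Q, D_u, z):
    """One EKF recursion, Eqs. (23.123) through (23.125).
    F(x, u)          -- discretized extended model, Eq. (23.119)
    jacobians(x, u)  -- returns (F_x, F_u) evaluated at (x_hat, u_hat)
    """
    Fx, Fu = jacobians(x_hat, u_hat)
    # Eq. (23.125): covariance of the extrapolation error
    N = Fx @ U_prev @ Fx.T + Fu @ D_u @ Fu.T + Q
    # Eq. (23.124): covariance of the estimation error
    U = N - N @ H.T @ np.linalg.inv(R + H @ N @ H.T) @ H @ N
    # Eq. (23.123): combined state and parameter update
    x_pred = F(x_hat, u_hat)
    x_new = x_pred + U @ H.T @ np.linalg.inv(R) @ (z - H @ x_pred)
    return x_new, U
```

Note that the gain U(k)HᵀR⁻¹ in (23.123) is algebraically the same Kalman gain N(k)Hᵀ[R + HN(k)Hᵀ]⁻¹ used in the KF correction step; the recursion merely factors it through the updated covariance.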
2 3. 100 90 80 PD+ adaptive load terms 0. with EKF and PD+ for M = 0.5 1 1.01 0.6 kg M = 0. LLC .5 4 Figure 23.03 0.6 3.2 kg.02 0.03 M = 0.12  Estimated total inertia with EKF and PD+ for M = 0.10  Angular position.8 4 80 60 50 70 eθ (degree) 60 30 40 20 0 40 0 0 0.5 3 3.1 0.5 4 Figure 23.035 0.23 -28 90 80 70 θ (degree) 60 50 40 30 20 10 0 0 0.025 0.6 and 0.4 3.5 3 3.015 0.05 0.5 2 t (s) 2.01 0 0.6 kg Control and Mechatronics 4 Figure 23.2 kg. θ.6 and 0.02 0.2 kg.2 kg PD+