## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

To my parents and my wife Lei Tang

ADVANCES IN SPACE MAPPING TECHNOLOGY EXPLOITING IMPLICIT SPACE MAPPING AND OUTPUT SPACE MAPPING

By QINGSHA CHENG, M.ENG.

A Thesis Submitted to the School of Graduate Studies in Partial Fulfilment of the Requirements for the Degree Doctor of Philosophy

McMaster University © Copyright by Qingsha Cheng, February 2004

159 ii .). Chongqing University) B. Cheng M. Ph. (Department of Automation. Chongqing University) J.Eng. Engineering Institute of Canada Fellow. FIEE (United Kingdom) Fellow.) (University of London) D. Department of Electrical and Computer Engineering B.I.D. Professor Emeritus. Ontario TITLE: Advances in Space Mapping Technology Exploiting Implicit Space Mapping and Output Space Mapping AUTHOR: Qingsha S.C. (Eng. Royal Society of Canada Fellow. (Eng. (Imperial College) P. D. Bandler. Canadian Academy of Engineering SUPERVISOR: NUMBER OF PAGES: xx.Sc. IEEE Fellow.Eng.Eng.Eng.. (Province of Ontario) C.Sc. (Department of Automation.DOCTOR OF PHILOSOPHY (2004) (Electrical and Computer Engineering) McMASTER UNIVERSITY Hamilton.W.

ABSTRACT

This thesis contributes to advances in Space Mapping (SM) technology in computer-aided modeling, design and optimization of engineering components and devices. Our developments in modeling and optimization of microwave circuits include the SM framework and SM-based surrogate modeling; implicit SM optimization exploiting preassigned parameters; implicit, frequency and output SM surrogate modeling and design; an SM design framework and implementation techniques. We review the state of the art in space mapping and the SM-based surrogate (modeling) concept and applications. In the review, we recall proposed SM-based optimization approaches including the original algorithm, the Broydenbased aggressive SM algorithm, various trust region approaches, neural space mapping and implicit space mapping. Parameter extraction (PE) is developed as an essential SM subproblem. Different approaches to enhance uniqueness of PE are reviewed. Novel physical illustrations are presented, including the cheesecutting problem. A framework of space mapping steps is extracted. Implicit Space Mapping (ISM) optimization exploits preassigned parameters. We introduce ISM and show how it relates to the now welliii

ABSTRACT established (explicit) space mapping between coarse and fine device models. Through comparison a general space-mapping concept is proposed. A simple ISM algorithm is implemented. It is illustrated on the contrived “cheese-cutting problem” and applied to EM-based microwave modeling and design. An

auxiliary set of parameters (selected preassigned parameters) is extracted to match the coarse model with the fine model. The calibrated coarse model (the surrogate) is then (re)optimized to predict an improved fine model solution. This is an easy SM technique to implement since the mapping itself is embedded in the calibrated coarse model and updated automatically in the procedure of parameter extraction. We discuss the enhancement of the ISM by “output space” mapping (OSM) specifically, response residual space mapping (RRSM), when the model cannot be aligned. ISM calibrates a suitable coarse (surrogate) model against a fine model (full-wave EM simulation) by relaxing certain coarse model preassigned parameters. Based on an explanation of residual response

misalignment, our new approach further fine-tunes the surrogate by the RRSM. We present an RRSM approach. A novel, simple “multiple cheese-cutting” problem

illustrates the technique. The approach is implemented entirely in the Agilent ADS design environment. A new design framework which implements various SM techniques is presented.

We demonstrate the steps, for microwave devices, within the ADS (2003) schematic design framework. The design steps are friendly. The framework runs with Agilent Momentum, HFSS and Sonnet em.

Finally, we review various engineering

iv

ABSTRACT applications and implementations of the SM technique.

v

ABSTRACT vi .

for useful discussions during earlier parts of this work. S.A. Hailu. for his continuous support and interest. Thanks are extended to Dr. S.ACKNOWLEDGEMENTS The author wishes to express his sincere appreciation to Dr. Mr. now with Com Dev Ltd. Koziel of the Department of Electrical and Computer Engineering at McMaster vii . The author also thanks Dr. D. Tamás Terlaky in the Department of Computing and Software at McMaster University. A. N. Mostafa A.. for his expert guidance. Bandler. The author would like also to express his appreciation to Dr. Mr. Mohamed.K. Mohamed H.S. Nikolova. Supervisory Committee member. John W. Dr. continued support. Dakroury. director of research in the Simulation Optimization Systems Research Laboratory at McMaster University and President of Bandler Corporation. Mr. Ismail. The author thanks Dr. encouragement and supervision throughout the course of this work. Bakr for his continuous encouragement and useful discussions as a colleague in the Simulation Optimization Systems Research Laboratory and later as a Supervisory Committee member from the Department of Electrical and Computer Engineering at McMaster University.

J. NY. through the Micronet Network of Centres of Excellence. Mexico. Sistemas e Informática. Research Assistantship and Scholarship.ACKNOWLEDGEMENTS University and Dr. Wolfgang J. viii . USA. now part of Agilent EEsof EDA. The author is also grateful to Dr. now with Dept. North Syracuse. and through a Nortel Networks Ontario Graduate Scholarship in Science and Technology (OGSST). de Electrónica. Santa Rosa. from the Department of Electrical and Computer Engineering through a Teaching Assistantship. The author has benefited from working with the OSA90/hope™ and Empipe™ microwave computer-aided design systems developed by Optimization Systems Associates Inc. Jones. Donald R. University of Victoria. Financial assistance was awarded to the author from several sources: from the Natural Sciences and Engineering Research Council of Canada through grants OGP0007239 and STR234854. Technical University of Denmark. General Motors. Søndergaard of the Institute of Mathematical Modeling and Mr. The author was motivated to explore the idea of implicit SM after relevant remarks by Dr. J.E. for making em™ available and to Agilent Technologies. ITESO. The author thanks Dr. Pedersen of the Department of Civil Engineering. J. President. Canada. and comments by Dr. Inc. Sonnet Software. for making ADS Momentum™ and HFSS available. Hoefer. Madsen and Dr. CA.. for their good company and useful discussions.R. Rayas-Sánchez. K. for productive collaboration and stimulating discussions..C. Rautio. F.

ix . patience and continuous loving support. understanding.ACKNOWLEDGEMENTS Finally. thanks are due to my family for encouragement.

ACKNOWLEDGEMENTS x .

...4........2.......16 Aggressive Space Mapping Approach.5 Constrained Update.2 The Space Mapping Concept ..................................................20 2.............4..................4.17 2......21 2............1 The Optimization Problem..........................4 Jacobian Based Updates.......................22 2.....................................................................20 2.......1 Theory ....2..............................1 2..............4.......................19 2..........................15 2......................2.........2.............4............2 Multipoint Parameter Extraction (MPE)............................3 Broyden-like Updates ...18 2...........3 Statistical Parameter Extraction...................3 2....CONTENTS ABSTRACT ACKNOWLEDGEMENTS LIST OF FIGURES LIST OF TABLES Chapter 1 Chapter 2 2...............19 2...................................................23 xi .....5......................................................2 INTRODUCTION SPACE MAPPING: THE STATE OF THE ART iii vii xv xix 1 7 Introduction...................................................13 2............................................................................................3 Jacobian Relationships.............................5 Parameter Extraction.....4 Interpretation of Space Mapping Optimization ..................2 Unit Mapping .1 Single Point Parameter Extraction (SPE) ..................................7 The Space Mapping Approach...........18 2..........................................4 Original Space Mapping Approach ...........4........13 2..............................5..................6 Cheese-cutting Problem ...........5.....23 2...13 2.......15 2.

..2 Implicit Space Mapping (ISM): An Algorithm ......2......................37 Space Mapping Technology .......8...........................3....................................................5.6 Gradient Parameter Extraction (GPE) ............9 Chapter 3 3.................7 Other Considerations ....5 2.................................................7 2......CONTENTS 2.............4 3............44 Relationship with Explicit Space Mapping...39 3...29 The Space Mapping Concept .................32 Conclusions....4 Penalized Parameter Extraction ............5 PE Involving Frequency Mapping ..............4 2..........................................................................7 Chapter 4 4....8..30 Space Mapping Classification..............23 2...2 2..46 Cheese-Cutting Illustration ................................60 RESPONSE RESIDUAL SPACE MAPPING TECHNIQUES 63 Introduction........3.............48 Three-section 3:1 Microstrip Transformer Illustration ..................3 2...............5..............................................2 Building and Using Surrogates (Dennis) .....3..........50 3.................8.23 2.52 Frequency Implicit Space Mapping .....................................3..............5 Implicit Space Mapping..42 Interpretation and Insight........................5........................1 4......25 2.39 3..................................................................................64 xii .................34 IMPLICIT SPACE MAPPING OPTIMIZATION 37 Introduction.........6 2.......................................40 3......................1 3.....8...............28 Discussion of Surrogate Modeling and Space Mapping.......................................................3 Implicit Space Mapping (ISM): The Concept ...........4 3...............5.............1 3..........................1 2...............42 3....3 3.......28 2................30 Space Mapping Framework Optimization Steps .....................26 Output Space Mapping ...5 3..........................................6 3.........................................8............................8 Expanded Space Mapping Exploiting Preassigned Parameters............................55 Conclusions...........63 Response Residual Space Mapping ...............2............................................25 2......................................................................................1 Explicit Space Mapping.......................................2 Implicit Space Mapping.............................................54 HTS Filter Example ...........2 3.........28 Building and Using Surrogates .............3....

........................................................3 4......................................................................2.....................1 RF and Microwave Implementation .....3.....76 4....................2 Six-Section H-plane Waveguide Filter ...........5 Implicit and Response Residual Space Mapping Surrogate ......3...............1 Cheese-cutting problem ......................................................1 Optimization Steps..........................6 H-plane Waveguide Filter Design ..........................2..........81 ADS Schematic Design Framework for SM........................................85 5...............................76 4.............................7 Chapter 5 5....6.........92 5.4 4.............98 CONCLUSIONS 101 107 Appendix A SMX OBJECT-ORIENTED OPTIMIZATION SYSTEM A...........................2....2 ADS Schematic Design Framework for SM.........2 A........3 Three-Section Microstrip Transformer .82 5...........1 5...................82 5..83 5..........2 Multiple Cheese-cutting Problem .................................1 A...............................................3 Review of Other SM Implementations and Applications ....................4 Chapter 6 Conclusions...........1 Surrogate ..92 5...................................1 ADS Schematic Design Framework ................................................CONTENTS 4...........67 Response Residual Space Mapping Approach .........................2 Electrical Engineering Implementation .................98 5.................................................3 SMX Architecture.....................................................71 4...............97 5...........3.........................115 xiii ...........................2......2 Conclusions.............................4 Response Residual SM Implementation of HTS Filter.......................................................5....................71 4..........76 4..........3 Other Engineering Implementation.....5......................6..66 HTS Filter Example ....110 HTS Filter Example ..........................72 4...108 Algorithm Core: SMX Engine ........................111 115 Appendix B CHEESE-CUTTING PROBLEMS B..................78 IMPLEMENTABLE SPACE MAPPING DESIGN FRAMEWORK 81 Introduction.........91 5...........

.......2 Multiple Cheese-cutting Problem ...2..............118 B....4 B............................2 B.124 Response Residual SM Surrogate......127 Implicit SM and RRSM .............3 B.......................................................................2........1 B..CONTENTS B.122 Implicit SM ................................2..........2.....................128 131 141 151 xiv ...............................5 BIBLIOGRAPHY AUTHOR INDEX SUBJECT INDEX Aggressive Space Mapping..................2....................119 Response Residual ASM.......................

4 Fig..........39 Illustration of ISM............................ 2................ 2..... 3... ............................................. 2........... 2............. Here.... 3...... Here............................................. .........51 Implicit space mapping diagram for cheese problem........ 3.......LIST OF FIGURES Fig.......................1 Fig.............8 Fig........................2 Fig............ .............. 3...32 Illustration of explicit SM........ 3..2 Fig.................1 Fig. ..... (b) with extra mapping and output mapping.........................3 Fig........5 Fig....6 Linking companion coarse (empirical) and fine (EM) models through a mapping..49 Cheese-cutting problem⎯illustration of an intermediate parameter....................... .... .........4 Fig........47 Cheese-cutting problem⎯a demonstration of the ISM algorithm.................. 3............43 * Illustration of ISM prediction.......................................................... 3....... (a) The original SM. Q = 0 is solved for xc .......................................9 ............. 3.. Bandler and Snowden...... 3......... 2002)...................41 Illustration of ISM modeling........................45 When we set the preassigned parameters x = ∆ xc .. ISM is consistent with the explicit SM process..8 Illustration of the fundamental notation of space mapping (Steer.....................................14 Cheese-cutting problem solved by aggressive space mapping of model lengths...3 Fig..43 Synthetic illustration of ISM optimization with intermediate parameters............................................................. (a) implicit mapping within the surrogate................ (b) The ISM process interpreted in the same spaces.......... Q = 0 is solved for x.7 Fig.................................................................................................................51 xv Fig...... xi = w × l ............................22 Space mapping framework.

.........58 Fig.......... 3........ 4.. ...................................................... Getsinger...................57 The Momentum fine (○) and optimal coarse ADS model (⎯) responses of the HTS filter at the initial solution (a) and at the final iteration (b) after 2 iterations (3 fine model evaluations)..................66 The fine (○) and optimal coarse model (⎯) magnitude responses of the HTS filter...........................3 Fig....................... at the final iteration using ISM (b).................................. where P (⋅) is a mapping from optimizable design parameters to preassigned parameters..72 Multiple cheese-cutting problem: (a) the coarse model (b) and fine model..10 Fig...............5 Fig....12 Fig............................................................... ......................14 The Sonnet em fine (○) and optimal coarse ADS model (⎯) responses of the HTS filter at the initial solution (a) and at the final iteration (b) after one iteration (2 fine model evaluations)................ (a) the physical structure... 3............68 The fine (○) and optimal coarse model (⎯) responses of the HTS filter in dB at the initial solution (a).....................................6 ... and at the final iteration using ISM and RRSM (c).............. .......... 1995)........................... 3............................................70 Illustration of the RRSM surrogate......... Biernacki......... ...................................... 2003) exhibiting the quasi-global effectiveness of space mapping (light grid) versus a classical Taylor approximation (dark grid)......................... (b) the coarse model as implemented in Agilent ADS (2000).... Chen......53 The HTS filter (Bandler........... followed by one iteration of RRSM (b)..................... 4.........51 Calibrating (optimizing) the preassigned parameters x in Set A results in aligning the coarse model (b) or (c) with the fine model (a)...................................1 Fig.73 “Multiple cheese-cutting” problem: implicit SM and RRSM optimization: step by step.... 3............................ at the final iteration using ISM (a)......11 Three-section 3:1 microstrip transformer illustration............ 4........... Moskowitz and Talisa............ Ismail and Rayas-Sánchez.LIST OF FIGURES Fig.4 Fig...... Grobelny.................... 4..... .....59 Error plots for a two-section capacitively loaded impedance transformer (Søndergaard.. See text. In (c) we illustrate the ESMDF approach (Bandler....... 4.74 xvi Fig................2 Fig................................ 4............................ 2002).13 Fig..... 3...........

.......... 5.............. The process satisfies the stopping criteria.......... 5....2 Fig......86 Coarse (—) and fine (○) model responses |S11| at the initial solution of the three-section transformer..............................91 xvii Fig.................. 5......5 Fig...... This schematic extracts preassigned parameters x...... The goal is to match the coarse and fine model real and imaginary S11 from 5 to 15 GHz....... Coarse model optimization of the three-section impedance transformer........... The coarse and fine models are within the broken line........................................................................................ ........... ............................. 5...3 Fig............ 4.....................................9 .....................77 H-plane filter optimal coarse model response (⎯)..........8 Fig.....................................88 Reoptimization of the coarse model of the three-section impedance transformer using the fixed preassigned parameter values obtained from the previous calibration (parameter extraction)...89 Optimal coarse (—) and fine model (○) responses |S11| for the three-section transformer using Momentum after 1 iteration (2 fine model simulations)................79 S2P (2-Port S-Parameter File) symbol with terminals................................................. 5..............................75 (a) Six-section H-plane waveguide filter (b) ADS coarse model...7 Fig................ .........................................4 Fig.... ....................6 Fig......................................... .... ................86 Fine model simulated in ADS Momentum. and the fine model response at: (a) initial solution (○)... 4...82 The three-section 3:1 microstrip impedance transformer.......8 Fig......... xf = xf* = 12.............................. This schematic uses the minimax optimization algorithm...........85 Coarse model optimization.. The optimization algorithm uses the Quasi-Newton method..7 Fig........LIST OF FIGURES Fig........ (b) solution reached via RRSM after 4 iterations (○)..... 5............87 The calibration of the coarse model of the three-section impedance transformer............................ 5... 5...90 Implementation of response residual space mapping in Agilent ADS..................................1 Fig...... ...................... Finally..... 4...... 5............9 Parameter difference between the RRSM design and minimax direct optimization.. The coarse model is optimized using the minimax algorithm. The goal is to minimize |S11| of the calibrated coarse model.

5 in the final iteration (different heights for the fine model)................ Here xf = 12................................................................................ B............... A............1mil...............................111 Case 1: The initial response of the HTS filter for the “fine” model (OSA90).129 Parameter errors between the RRSM algorithm and minimax direct optimization.. ...........112 Case 2: The SMX optimized fine model (Agilent Momentum) response of the HTS filter......9 xviii .....127 RRSM step-by-step iteration demonstration––different heights for the fine model....................................................................2808 and xf* = 12....... B............................ A...... B...2808 vs.......... minimax solution xf = 12.....7 The modules of SMX.........................................117 The multiple cheese-cutting problem: (a) the coarse model (b) and fine model..... B.. ....... the ISM solution is x f = 12 vs................3 Fig... B...121 The response residual ASM algorithm converges in two iterations. B..118 The ASM algorithm may not converge in this case.....................2 Fig...........5 and xf* = 12....... minimax solution xf = 12........................5 Fig.............. A....6 Fig......3 Fig................... B.. A..............4 Fig.........1 Fig............... ......................109 Illustration of the derivation of basic optimizer class..............113 The aggressive SM may not converge to the optimal fine model solution in this case......................................113 Case 2: The final Momentum optimized fine model response of the HTS filter with a fine interpolation step of 0......................................... A.................................. ..................................................125 Case 1: c0 = 2...116 The RRSM solves the problem in one iteration (two fine model evaluations)...................2 Fig............................................... the ISM solution is x f = 12........ B. ................... f0 = 4..6 Fig...............112 Case 1: The optimal “fine” model (OSA90) response of the HTS filter.................. A................. Here xf = 12..........................8 Fig...130 Fig........ case 2: c0 = 4....LIST OF FIGURES Fig.............123 Parameter errors between the ISM algorithm and minimax direct optimization.........................................................1 Fig........... .........4 Fig.......5 Fig........... f0 = 4. B.........

........................1 OPTIMIZABLE PARAMETER VALUES OF THE THREESECTION IMPEDANCE TRANSFORMER............2 OPTIMIZABLE PARAMETER VALUES OF THE SIXSECTION H-PLANE WAVEGUIDE FILTER ..............................................80 TABLE 5.................................61 TABLE 4............................69 TABLE 4.LIST OF TABLES TABLE 2...................61 TABLE 3..........90 TABLE A...........114 xix .......................................................1 THE INITIAL AND FINAL DESIGNS OF THE FINE MODEL (OSA90) FOR THE HTS FILTER.................2 THE INITIAL AND FINAL PREASSIGNED PARAMETERS OF THE CALIBRATED COARSE MODEL OF THE HTS FILTER ...1 OPTIMIZABLE PARAMETER VALUES OF THE HTS FILTER.33 TABLE 2.......34 TABLE 3...................2 SPACE MAPPING CATEGORY EXPLANATION ...............................1 SPACE MAPPING CLASSIFICATION..............................1 AGILENT MOMENTUM/SONNET em OPTIMIZABLE PARAMETER VALUES OF THE HTS FILTER ........................................................................................................................

LIST OF TABLES xx .

conventional electrical models for components are no longer adequate. Temes and Calahan (1967) advocated CAD technology for filter design. EM simulators could be built to solve Maxwell’s equations for circuits of arbitrary geometrical shapes. As signal speed and frequency increase. The first CAD techniques in circuit design appeared in the sixties of the last century. We can single out the High Frequency Structure Simulator 1 . electromagnetic (EM)/physics effects. Since then design and modeling of microwave circuits applying optimization techniques have been extensively researched. With the dramatic increase in computer hardware performance. 1987b) presented excellent agreement between EM field solvers and measurements. become necessary. etc. including Design and modeling with physical/geometrical information. are used.. Analysis technologies such as the finite element method (FEM).Chapter 1 INTRODUCTION The manufacturability-driven design and time-to-market products in the electronics industry demand powerful computer-aided design (CAD) techniques. Rautio and Harrington (1987a. the method of moments (MoM).

Biernacki. Biernacki. Madsen and Søndergaard. 2002. Ismail and Rayas-Sánchez. Chen. Bandler. Bandler. Bakr. Mohamed. 2002. Bandler and Snowden. 1995. Chen. Bakr. Bandler. Chen and Madsen. Bakr. Ismail. Steer. Madsen and Søndergaard.S. Space Mapping (SM) is an optimization concept. Cheng. Bandler. 2002). a task beyond the reach of available design tools. 1998. Bandler and Snowden. because of extremely long or prohibitive computation. 2004. Cheng. 2 . Hemmers and Madsen. 2001. Cheng McMaster – Electrical and Computer Engineering (HFSS) from Ansoft and HP (Agilent) as the flagship FEM solver(s) and the MoM product em from Sonnet Software as the benchmark planar solver.PhD Thesis – Q. Rayas-Sánchez and Zhang. high-frequency. Steer. 2001. simply substituting conventional equivalent electrical models by EM/physics models into the design optimization process will not work. Due to the computational expense of EM/physics models. Bandler. Bakr. Grobelny and Hemmers. allowing expensive EM optimization to be performed efficiently with the help of fast and approximate “coarse” or surrogate models (Bandler. we need EM/physics-based component solutions on a much larger scale. circuit and systems design optimization. CAD procedures such as statistical analysis and yield optimization taking into account process variations and manufacturing tolerances in the components demands that the component models are accurate and fast so that design solutions can be achieved feasibly and reliably (Bandler. Bandler. 1994. Georgieva. Dakroury. To achieve success in modern. Bandler. Madsen and Søndergaard. Bandler. Biernacki. Mohamed. Bakr. 2004. Cheng. Dakroury. Rayas-Sánchez and Zhang. 2000. Ismail.

Chen. Research is being carried out on mathematical motivation. Cheng McMaster – Electrical and Computer Engineering Nikolova and Ismail. 3 . In its ten-year history. frequency and output SM surrogate modeling and design (Bandler. An objective is to present our developments in modeling and optimization of microwave circuits. Pedersen and Søndergaard. Dakroury. 2003). Mohamed. Grobelny and Hemmers (1994). Nikolova and Ismail. to place SM into the context of classical optimization. Hailu and Nikolova.PhD Thesis – Q. 2004). Gebre-Mariam. The theory is still in its infancy. design and optimization. but SM is accepted by the mathematical and engineering communities as a significant contribution. implicit SM optimization exploiting preassigned parameters (Bandler. Cheng.S. Madsen and Søndergaard. an implementable SM design framework (Bandler. 2004) and various other implementations and applications. 2004). Madsen. Space mapping was first introduced by Bandler. Cheng. 2004). Bakr. implicit. Procedures iteratively update and optimize surrogates based on fast physically-based “coarse” models. Biernacki. These developments include the SM framework and surrogate modeling for SM technology (Bandler. The aim of SM is to achieve a satisfactory design solution with a minimal number of computationally expensive “fine” model evaluations. This thesis addresses advances in SM technology in the computer-aided modeling. It has been applied with great success to otherwise expensive direct EM optimizations of microwave components and circuits with substantial computation speedup. researchers explored a number of variations and enormously successful applications. Cheng. Cheng.

A simple ISM algorithm is implemented. We show how it relates to the well-established (explicit) SM between coarse and fine device models. illustrations are presented. An auxiliary set of parameters (selected preassigned parameters) is extracted to match the coarse model with the fine model. SM framework steps are extracted. we discuss the enhancement of ISM by an “output space” 4 . We recall proposed approaches to SM-based optimization. In Chapter 4. the Broyden-based aggressive SM algorithm. The calibrated coarse model (the surrogate) is then (re)optimized to predict a better fine model solution. Through comparison a general SM concept is proposed. including the cheese-cutting problem. various trust region approaches. It is illustrated on a contrived “cheese-cutting problem” and applied to EM-based microwave modeling and design.S. we discuss implicit SM (ISM) optimization exploiting preassigned parameters. including the original algorithm. We illustrate our approach through optimization of an HTS filter using Agilent ADS (2000) with Momentum (2000) and Agilent ADS with Sonnet’s em (2001). Different approaches to Novel physical enhance uniqueness of parameter extraction are reviewed. The mapping itself is embedded in the calibrated coarse model and updated automatically in the procedure of parameter extraction. neural space mapping and implicit space mapping.PhD Thesis – Q. Cheng McMaster – Electrical and Computer Engineering Chapter 2 addresses basic concepts and notation of Space Mapping. We review the state of the art of SM technology and the SM-based surrogate (modeling) concept and applications in engineering optimization. In Chapter 3.

S. Cheng McMaster – Electrical and Computer Engineering mapping (OSM) or specifically. We present and demonstrate an Agilent ADS schematic framework for SM. a number of SM techniques are implemented in ADS with Momentum. We conclude in Chapter 6 with some suggestions for future research.PhD Thesis – Q. and Agilent HFSS in an interactive way. A new “multiple cheese-cutting” design problem illustrates the concept. Our approach is implemented entirely in the ADS framework. Using this framework. We present a significant improvement to ISM for EM-based microwave modeling and design. Sonnet em. In Chapter 5. “response residual space” mapping (RRSM) when the coarse and fine model cannot be aligned. emerges after only four EM simulations using ISM and RRSM with sparse frequency sweeps. easily implemented in Agilent ADS. Based on an explanation of residual response misalignment. we discuss a number of SM implementation frameworks and examples. our approach further fine-tunes the surrogate by exploiting RRSM. We review significant practical applications done by various groups and researchers from different engineering and mathematical communities. The author contributed substantially to the following original developments presented in this thesis: (1) Development of a space-mapping framework for space mapping 5 . An H-plane filter design demonstrates the method. We also present an RRSM approach. An accurate design of an HTS filter. ISM calibrates a suitable coarse (surrogate) model against a fine model (full-wave EM simulation) by relaxing certain coarse model preassigned parameters.

2000).” (5) Implementation of the implicit and output space-mapping (RRSM) algorithm. (8) Design of an ADS schematic framework for SM. (4) Contribution to the review paper: “Space mapping: the state of the art. (2) McMaster – Electrical and Computer Engineering Development of an implicit space-mapping algorithm to simplify microwave device design. Bandler. Cheng algorithms. Entirely within this framework. Madsen.PhD Thesis – Q. Rayas-Sánchez and Søndergaard. 6 .S. (7) Development of the demonstration “cheese-cutting” problem and “multiple cheese-cutting” problem. (3) Development of an implicit and output space-mapping (RRSM) algorithm to fine-tune microwave designs. (6) Development of the software package SMX to automate the SM optimization exploiting surrogates algorithm (Bakr. implicit SM and output space mappings (RRSM) are implemented.

For complex problems this cost may be prohibitive. Bandler and Snowden. long used for design verification. need to be exploited in the optimization process.Chapter 2 SPACE MAPPING: THE STATE OF THE ART 2. Alternative design schemes combining the speed and maturity of circuit simulators with the accuracy of EM solvers are desirable. 1988) directly utilize the simulated responses and possibly available derivatives to force the responses to satisfy the design specifications. Circuit-theory based simulation and CAD tools using empirical device models are fast: analytical solutions or available exact derivatives may promote optimization convergence. The recent exploitation 7 . Traditional optimization techniques (Bandler. But the higher the fidelity (accuracy) of the simulation the more expensive direct optimization is expected to be. 2002). The target of component design is to determine a set of physical parameters to satisfy certain design specifications. 1985.1 INTRODUCTION Engineers have been using optimization techniques for device. Bandler and Chen. Kellermann and Madsen. Electromagnetic (EM) simulators. component and system modeling and CAD for decades (Steer.

Bandler et al. 2. Cheng.g. practical or design parameters fine model responses design parameters coarse model responses ∇ ×H = jω D+J design parameters D=ε E ∇ ×E =− jω B B=µ H ∇ B= 0 responses Z ∇ D= ρ design parameters . 1995) demonstrated how SM intelligently links companion “coarse” (ideal. The SM approach updates the surrogate to better approximate the corresponding fine model.. a suitable surrogate is obtained. Cheng McMaster – Electrical and Computer Engineering of iteratively refined surrogates of fine. 8 . Madsen and Søndergaard.S. accurate or high-fidelity models. Bakr. fast or low-fidelity) and “fine” (accurate. e.d) C 3 = f (w responses fine space find a mapping to match the models coarse space Fig. Mohamed. and the implementation of Space Mapping (SM) methodologies (Bandler. This surrogate is faster than the “fine” model and at least as accurate as the underlying “coarse” model.1 Linking companion coarse (empirical) and fine (EM) models through a mapping. Dakroury. Through the construction of a space mapping. (1994. 2004) address this issue. Bandler conceived the SM approach in 1993 for modeling and design of engineering devices and systems. RF and microwave components using EM simulators.PhD Thesis – Q.

determined by a quasiNewton step. Hence. the surrogate is a linearly mapped coarse model. Biernacki. The original SM-based optimization algorithm was introduced by Bandler. Chen.PhD Thesis – Q. Generally. (1) (2) (3) (4) fine model simulation (verification) extraction of the parameters of a coarse or surrogate model updating the surrogate (re)optimization of the surrogate.S. 2. A low fidelity EM simulation or empirical circuit model could be a coarse model (see Fig. Biernacki. 1994) by exploiting each fine model iterate as soon as it is available. This iterate. optimizes the (current) surrogate model. nonuniqueness of the PE step 9 . Chen. Grobelny and Hemmers (1994). An EM simulator could serve as a fine model. where a linear mapping is assumed between the parameter spaces of the coarse and fine models. Hemmers and Madsen. The aggressive space mapping (ASM) approach (Bandler. Grobelny and Hemmers. Biernacki. Parameter Extraction (PE) is the key to establishing the mapping and updating the surrogate. 1995) eliminates the simulation overhead required in (Bandler. SM-based optimization algorithms comprise four steps. However. Chen. In this step. It is evaluated by a least squares solution of the linear equations resulting from associating corresponding data points in the two spaces. Cheng McMaster – Electrical and Computer Engineering high-fidelity) models of different complexities.1). the surrogate is locally aligned with a given fine model through various techniques.

1999) are such approaches. Biernacki. The SM model is then expected to yield a good approximation over a large region. i.. The hybrid aggressive SM algorithm 10 . SM techniques require sufficiently faithful coarse models to assure good results. Bakr. however. Many approaches are suggested to improve the uniqueness of the PE step. Mohamed.S. a penalty PE (Bandler. Mohamed. Biernacki and Chen. only small steps are needed. i. Bandler and Georgieva. Multi-point PE (Bandler. Bandler.e. Biernacki. Chen and Omeragic.t. Cheng McMaster – Electrical and Computer Engineering may cause breakdown of the algorithm (Bandler. A combination of the two strategies gives the highest solution accuracy and fast convergence.. Sometimes the coarse model and fine models are severely misaligned. 1997) and an aggressive PE (Bakr.e. If the coarse model reflects the nonlinearity of the fine model then the space mapping is expected to involve less curvature (less nonlinearity) than the two physical models. A recent gradient PE approach (Bandler. a statistical PE (Bandler.PhD Thesis – Q. 1996). Biernacki and Chen. 1999). it generates large descent iteration steps. 1996. Cheng. Chen and Omeragic. Madsen and Søndergaard. Chen and Huang. Bakr. 1999). A mathematical motivation (Bandler. Biernacki. Madsen and Søndergaard. it is hard to make the PE process work. 2004) places SM into the context of classical optimization based on local Taylor approximations. in which case the classical optimization strategy based on local Taylor models is better. 2002) takes into account not only the responses of the fine model and the surrogate.r. but the corresponding gradients w. Dakroury. Close to the solution. design parameters as well.

Cheng. Bandler. 1999) overcomes this by alternating between (re)optimization of a surrogate and direct response matching. Neural inverse SM simplifies (re)optimization by inversely connecting the ANN (Bandler. is then (re)optimized to evaluate the next iterate (fine model point). Rayas-Sánchez and Zhang. Ismail. Ismail. Ismail. 2004). 2003) utilize Artificial Neural Networks (ANN) in EM-based modeling and design of microwave devices. An auxiliary set of parameters (selected preassigned parameters such as dielectric constant or substrate height) is extracted to match the coarse and fine model responses. Neural space mapping approaches (Bandler. After updating an ANN-based surrogate (Bandler.S. the surrogate model based SM (Bakr. The resulting (calibrated) coarse model. 1999. Ismail. Madsen. Rayas-Sánchez and Zhang. Bakr. The next fine model iterate is then only an ANN evaluation. the surrogate. Rayas-Sánchez and Zhang. Cheng McMaster – Electrical and Computer Engineering (Bakr. Nikolova and Ismail. Bandler. a fine model optimal design is predicted in NSM (Bakr. 11 . 2000) by (re)optimizing the surrogate. Rayas-Sánchez and Zhang. The latest development of SM is implicit space mapping (ISM) (Bandler. Rayas-Sánchez and Zhang. Rayas-Sánchez and Søndergaard. Ismail. Bandler. 2003). Rayas-Sánchez and Zhang. Bandler. 1999). This is consistent with the knowledge-based modeling techniques of Zhang and Gupta (2000). 2000. More recently. 2000) optimization algorithm combines a mapped coarse model with a linearized fine model and defaults to direct optimization of the fine model. Georgieva and Madsen.PhD Thesis – Q. Bandler. Ismail.

2003a. Gould and Toint (2000) have stated that trust region methods have been effective in the SM framework. Ismail. 2000. 1999. Vicente studies convergence 12 . Søndergaard. 2000. Madsen’s group (Bakr. 2003b. Bakr. Ismail and Rayas-Sánchez. 2001. Gould and Toint. Bandler. Madsen and Søndergaard investigate convergence properties of SM algorithms (Madsen and Søndergaard. Bandler. 2001) considers the SM as an effective preprocessor for engineering optimizations. 2001) studies SM-based model enhancement.S. especially in circuit design. 1999. Cheng McMaster – Electrical and Computer Engineering The SMX (Bakr. The SM technology has been recognized as a contribution to engineering design (Zhang and Gupta. 2003b). Cheng. 2001. Rayas-Sánchez. Bandler. Vicente. Pedersen. Bakr (2000) introduces advances in SM algorithms. Hong and Lancaster. 2001. 2003a. Søndergaard. 2004). Zhang and Gupta (2000) have considered the integration of the SM concept into neural network modeling for RF and microwave design. Hong and Lancaster (2001) describe the aggressive SM algorithm as an elegant approach to microstrip filter design. 2002. Madsen and Søndergaard. Rayas-Sánchez (2001) employs artificial neural networks and Ismail (Ismail. 2000. For example.PhD Thesis – Q. especially in the microwave and RF arena. 2001). 2001. Conn. 2001. Pedersen. 2001) system is a first attempt to automate SM optimization by linking different simulators. Mathematicians are addressing mathematical interpretations of the formulation and convergence issues of SM algorithms (Bakr. Søndergaard. Conn. 2003a. Madsen and Søndergaard.

2. For example.g.2 THE SPACE MAPPING APPROACH The SM approach introduced by Bandler. optimal solution to be determined. |S11| at m selected frequency points ω or sample points. xf* is the 2. Chen. 2. 2. This simple CAD methodology embodies the learning process of a designer. Biernacki. e. U could be the minimax objective function with upper and lower specifications. the coarse and fine model design parameters are 13 .1 The Optimization Problem The design optimization problem to be solved is given by x* f arg min U ( R f ( x f )) xf (2-1) where Rf ∈ ℜm×1 is a vector of m responses of the model.PhD Thesis – Q. 2002.2.2 The Space Mapping Concept As depicted in Fig. 2. Grobelny and Hemmers (1994) involves a calibration of a physically-based “coarse” surrogate by a “fine” model to accelerate design optimization.2.S. xf ∈ ℜn×1 is the vector of n design parameters and U is a suitable objective function. 2003b). It makes effective use of the surrogate’s fast evaluation to sparingly manipulate the iterations of the fine model. It is assumed to be unique. and introduces SM to solve optimal control problems (Vicente. Cheng McMaster – Electrical and Computer Engineering properties of SM for design using the least squares formulation (Vicente.. 2003a).

respectively. given by xf * P −1 ( xc ) (2-4) as a good estimate of xf*. i. Cheng McMaster – Electrical and Computer Engineering denoted by xc and xf ∈ ℜn×1. The corresponding response vectors are denoted by Rc and Rf ∈ ℜm×1. respectively. where xc* is the result of coarse model optimization. 14 . 2002). xf fine model Rf (x f ) xc coarse model Rc ( xc ) xf xc = P ( x f ) such that Rc ( P ( x f )) ≈ R f ( x f ) xc Fig.PhD Thesis – Q. 2.S.e. We propose to find a mapping P relating the fine and coarse model parameters as xc = P ( x f ) such that (2-2) Rc ( P ( x f )) ≈ R f ( x f ) in a region of interest. solving (2-1) to find xf*..2 Illustration of the fundamental notation of space mapping (Steer. we declare x f . Bandler and Snowden. (2-3) Then we can avoid using direct optimization. Instead.

provided that Jc has full rank and m≥n. a mapped solution is found by minimizing the objective function g 2 . respectively. for instance in the minimax sense..2.PhD Thesis – Q. where g is defined by 2 15 . An expression for B which satisfies (2-6) can be derived as (Bakr. 2. This relation can be used to estimate the fine model Jacobian if the mapping is already established. i. Using (2-3) we obtain (Bakr. the Jacobian of P is given by ⎛ ∂P T JP (x f ) = ⎜ ⎜ ∂x f ⎝ T ⎞ ⎛ ∂ ( xc ) ⎞ ⎟ =⎜ ⎟ ⎜ ∂x f ⎟ ⎟ ⎠ ⎝ ⎠ T T JP (2-5) An approximation to the mapping Jacobian is designated by the matrix B∈ℜn×n.2. Georgieva and Madsen. B ≈ JP(xf ).S.e.3 Jacobian Relationships McMaster – Electrical and Computer Engineering Using (2-2). Cheng 2. Bandler. the mapping can be established through (2-7). 1999) T T B = ( J c J c ) −1 J c J f (2-7) If the coarse and fine model Jacobians are available. Bandler. Georgieva and Madsen. Subsequently.4 Interpretation of Space Mapping Optimization SM algorithms initially optimize the coarse model to obtain the optimal design xc*. 1999) J f ≈ Jc B (2-6) where Jf and Jc are the Jacobians of the fine and coarse models.

Here. 1994). 16 . Grobelny and Hemmers. Biernacki. the problem formulation can be rewritten as x f = arg min U ( Rc ( P ( x f )) xf (2-9) where x f may be close to xf* if Rc is close enough to Rf .S. Bandler.” Thus. an initial approximation of the mapping.e. Bandler. Grobelny and Hemmers (1994) assumed a linear mapping between the two spaces. Madsen and Søndergaard (2001). If xc* is unique then the solution of (2-9) is equivalent to driving the following residual vector f to zero f = f (x f ) * P ( x f ) − xc (2-10) 2. Biernacki. P(0) is obtained by performing fine model analyses at a pre-selected set of at least m0 base points. Cheng McMaster – Electrical and Computer Engineering g = g (x f ) * R f ( x f ) − Rc ( xc ) (2-8) Correspondingly.PhD Thesis – Q. Rc(P(xf)) is an expression of an “enhanced” coarse model or “surrogate. Rc(P(xf)) is optimized in the effort of finding a solution to (2-1).3 ORIGINAL SPACE MAPPING APPROACH In this approach (Bandler. according to Bakr.. m0 ≥ n+1. i. Chen. Chen. A corresponding set of coarse model points is then constructed through the parameter extraction (PE) process ( xc j ) arg min R f ( x (f j ) ) − Rc ( xc ) xc (2-11) The additional m0 – 1 points apart from xf(1) are required to establish fullrank conditions leading to the first mapping approximation P(0).

mj points which are used to establish the updated mapping P(j). m0 upfront high-cost fine model analyses are needed.4 AGGRESSIVE SPACE MAPPING APPROACH The aggressive SM algorithm (Bandler. Cheng McMaster – Electrical and Computer Engineering xc = P ( j ) ( x f ) = B ( j ) x f + c ( j ) where B (j) ∈ ℜn×n and c (j) ∈ ℜn×1. augments the set of base points in the coarse space. (2-12) At the jth iteration.S. we set the SM design as in (2-13). Upon termination. This algorithm is simple but has pitfalls. as determined by (2-11). P(j) is assumed close enough to our desired P. Hemmers and Madsen. First. 2. in general. If so. and xc ( m j +1) . Third. the sets of points in the two spaces may be expanded to contain. A rapidly improved design is anticipated following each fine 17 . Second. SM uses the current approximation P(j). Biernacki. Since the analytical form of P is not available. Chen. nonuniqueness in the PE process may lead to an erroneous mapping estimation and algorithm breakdown. a linear mapping may not be valid for significantly misaligned models. the set of base points in the fine space is augmented by x f ( m j +1) .PhD Thesis – Q. 1995) incorporates a quasi-Newton iteration using the classical Broyden formula (1965). If not. to estimate xf* at the jth iteration as xf ≈ xf ( m j +1) * = ( P ( j ) ) −1 ( xc ) ( m j +1) (2-13) ) is close enough to Rc The process continues iteratively until Rf ( x f (xc*).

PhD Thesis – Q.2 Unit Mapping A “steepest-descent” approach may succeed if the mapping between the two spaces is essentially represented by a shift. In this case Broyden’s updating 18 .4. The quasi-Newton step in the fine space is given by B ( j ) h( j ) = − f ( j ) (2-15) where B(j). Coarse model optimization produces xc .1 Theory The aggressive SM technique iteratively solves the nonlinear system f (x f ) = 0 (2-14) for xf. the approximation of the mapping Jacobian Jp defined in (2-5). The output * of the algorithm is an approximation to x f = P −1 ( xc ) and the mapping matrix B.4. that at the jth iteration. Solving (2-15) for h(j) provides the next iterate xf(j+1) x (f j +1) = x (f j ) + h( j ) (2-16) The algorithm terminates if ||f (j)|| becomes sufficiently small. 2. from (2-10).S. parameter extraction) is carried out in the coarse model space. This is executed indirectly through the PE (evaluation of * xc(j)). while the bulk of the computational effort (optimization. Note. The matrix B can be obtained in several ways. 2. Cheng McMaster – Electrical and Computer Engineering model simulation. the error vector f (j) requires an evaluation of P(j)(xf(j)). is updated using Broyden’s rank one update.

2002.S. We can solve (2-15) keeping the matrix B(j) fixed at B(j) = I. xf and xc at corresponding points we can use them to obtain B at each iteration through a least squares solution (Bandler. Georgieva and Madsen. Guillon (1998) and Pavio (1999) utilized this special case. Mohamed. 1999) as given in (2-7).4.4. One approach incorporates finite difference approximations and the 19 .t. Hybrid schemes can be developed following the integrated gradient approximation approach to optimization (Bandler. Daijavad and Madsen. Chen. 1988). Cheng McMaster – Electrical and Computer Engineering formula is not utilized. Bandler.4 Jacobian Based Updates If we have exact Jacobians w. Madsen and Søndergaard. Verdeyme. Bakr. 2.PhD Thesis – Q. (2-17) can be simplified using (2-15) to B ( j +1) = B ( j ) + f ( j +1) ( j )T h h ( j )T h ( j ) (2-18) 2.3 Broyden-like Updates An initial approximation to B can be taken as B(0) = I. the identity matrix. Note that B can be fed back into the PE process and iteratively refined before making a step in the fine model space. Bila. Bakr. B(j) can be updated using Broyden’s rank one formula (1965) B ( j +1) = B ( j ) + f ( j +1)− f ( j ) − B ( j ) h( j ) ( j )T h h ( j )T h ( j ) (2-17) When h(j) is the quasi-Newton step. Baillargeat.r.

demonstrates the aggressive SM approach.4. 2002).4. defined as E = J f − Jc B ∆B = B − I The analytical solution of (2-19) is given by T T B = ( J c J c + η 2 I ) −1 ( J c J f + η 2 I ) (2-20) (2-21) 2. The goal is a desired weight. 2.S. Bakr. 20 . Finite difference approximations could provide initial estimates of Jf and Jc. These are then used to obtain a good approximation to B(0). Our “response” is weight.5 Constrained Update On the assumption that the fine and coarse models share the same physical background. Bakr.6 Cheese-cutting Problem This simple physical example. depicted in Fig. Mohamed.3.PhD Thesis – Q. Cheng McMaster – Electrical and Computer Engineering Broyden formula (Bandler. ei and ∆bi are the ith columns of E and ∆B. A density of one is assumed. The designable parameter is length. Bandler. 2. respectively. Madsen and Søndergaard. Madsen and Søndergaard (2000) suggested that B could be better conditioned in the PE process if it is constrained to be close to the identity matrix I by letting B = arg min B T [e1 T T en η∆b1 T η∆bn ]T 2 2 (2-19) where η is a user-assigned weighting factor. The Broyden formula is subsequently used to update B.

2. Cheng McMaster – Electrical and Computer Engineering Our idealized “coarse” model is a uniform cuboidal block (top block of Fig. Note that we assume that the actual block (fine model) perfectly matches its coarse model. Let the actual block (“fine” model) be similar but imperfect (second block of Fig.3).5 PARAMETER EXTRACTION Parameter Extraction (PE) is crucial to successful SM. 2.3). we have corresponding values xf(1) and xc(1). cutting the cheese so that xf(1) = xc*. an 21 . This does not satisfy our goal. Thus. 2. We realign our coarse model to match the outcome of the cut. The optimal length xc* is easily calculated. Observe that the length of the coarse model is shrunk during PE to match our first outcome.S.3). i. except for the missing piece. This procedure can be repeated until the goal is satisfied.3).PhD Thesis – Q. also that the first and second attempts (cuts) to achieve our goal are confined to a uniform section. Our goal is achieved in one SM step. Assuming a unit mapping. a result consistent with expectations. 2. Typically.. This is a PE step in which we obtain a solution xc(1) (third block of Fig. 2.e. The difference between the proposed initial length of the block and the shrunk length is applied through the (unit) mapping to predict a new cut. we can write for j = 1 * ( x (f j +1) = x (f j ) + xc − xc j ) (2-22) to predict the next fine model length (last block of Fig. We take the optimal coarse model length as the initial guess for the fine model solution.

For example. Inadequate response data in the PE process may lead to nonunique solutions.3 Cheese-cutting problem solved by aggressive space mapping of model lengths.S. Grobelny and Hemmers. * xc optimal coarse model x (1) f initial guess * x (1) = xc f xc(1) PE prediction x (2) f * (1) x (2) = x (1) + P −1 ( xc − xc ) f f * (1) x (2) = x (1) + xc − xc f f Fig.PhD Thesis – Q. 2. 22 .5. Cheng McMaster – Electrical and Computer Engineering optimization process extracts the parameters of a coarse model or surrogate to match the fine model. we may use responses such as real and imaginary parts of the S-parameters in the PE even though the design criteria may include the magnitude of S11 only. Biernacki. 1994)is described by the optimization problem given in (2-11). Sufficient data to overdetermine a solution should be sought.1 Single Point Parameter Extraction (SPE) The traditional SPE (Bandler. It is simple and works in many cases. Chen. 2.

Biernacki and Chen. The SPE process is initiated from several starting points and is declared unique if consistent extracted parameters are obtained. Here. Chen and Huang.5.S. 1996. Bandler. 2. Cheng McMaster – Electrical and Computer Engineering 2. Biernacki. Ismail. 2.3 Statistical Parameter Extraction Bandler. Bandler. Rayas-Sánchez and 23 .2 Multipoint Parameter Extraction (MPE) The MPE approach (Bandler.4 Penalized Parameter Extraction Another approach is suggested in (Bandler. Chen and Omeragic. the point xc(j+1) is obtained by solving the penalized SPE process ( xc j +1) = arg min xc * Rc ( xc ) − R f ( x (f j +1) ) + w xc − xc (2-23) where w is a user-assigned weighting factor.5. Biernacki.5 PE Involving Frequency Mapping Alignment of the models might be achieved by simulating the coarse model at a transformed set of frequencies (Bandler. Chen and Omeragic suggest a statistical approach to PE. solution is selected. A more reliable algorithm is presented by Bakr.5.PhD Thesis – Q. Otherwise. Bandler and Georgieva (1999). Chen and Madsen (1998) and improved by Bakr. 1999) simultaneously matches the responses at a number of corresponding points in the coarse and fine model spaces. Biernacki.5. the best 2. 1997). Biernacki.

r. one using the l1 norm and the other is suitable for minimax optimization (Bandler. an EM model of a microwave structure usually exhibits a frequency shift w. Chen.PhD Thesis – Q. Biernacki. 1995). which can be alleviated by frequency transformation. 24 .. Biernacki. Chen. the coarse model point xc is extracted to match Rc with Rf. 1995) can implement this phase: a sequential algorithm and two exact-penalty function algorithms. Bandler. 2000).δ 0 (2-25) 0 In Phase 2.S.t. 1999).. an idealized representation. i = 1. Chen. Biernacki. This is done by finding arg min Rc ( xc . Cheng McMaster – Electrical and Computer Engineering Zhang. Hemmers and Madsen. Hemmers and Madsen.. Chen. Also. Madsen. Rayas-Sánchez and Søndergaard. Biernacki. 2. starting with σ = σ0 and δ = δ0. Frequency mapping introduces new degrees of freedom (Bakr. we determine σ0 and δ0 that align Rf and Rc in the frequency domain. k σ . Three algorithms (Bandler. Hemmers and Madsen.. 1995). Hemmers and Madsen (1995) ωc = P (ω ) σω + δ ω where σ represents a scaling factor and δ is an offset (shift). available quasistatic empirical models exhibit good accuracy over a limited range of frequencies. For example. σ 0ωi + δ 0 )) − R f ( x f ) . (2-24) The approach can be divided into two phases (Bandler. A suitable mapping can be as simple as frequency shift and scaling given by Bandler. In Phase 1.

1993. We can also consider neural weights in neural SM. Finite differences can be employed to estimate derivatives if exact ones are unavailable. 2002a. 2002b. Georgieva. as in the generalized SM tableau approach (Bandler. This approach reflects the idea of the MPE (Bandler. Zhang. but permits the use of exact or implementable sensitivity techniques (Bandler. We already discussed the scaling factor and shift parameters in the frequency mapping. Cheng McMaster – Electrical and Computer Engineering 2. 2000). Bandler. 2002) exploits the availability of exact Jacobians Jf and Jc. 2001) and surrogate model-based SM (Bakr. Bakr and Bandler.6 Gradient Parameter Extraction (GPE) GPE (Bandler. GPE matches not only the responses but also the derivatives of both models through the optimization problem.PhD Thesis – Q. Rayas-Sánchez and Zhang. 1988. Zhang and Biernacki. Alessandri. Madsen. preassigned parameters in implicit SM.S. Madsen and Søndergaard. mapping coefficients B. 25 . Nikolova. etc. At the jth iteration xc(j) is obtained through a GPE process. Bakr. Ismail. Mohamed. Song and Biernacki. 1990. Georgieva. Rayas-Sánchez and Søndergaard.7 Other Considerations We can broaden the scope of parameters that are varied in an effort to match the coarse (surrogate) and fine models. Glavic. Biernacki and Chen. Mongiardo and Sorrentino.5. Bandler and Bakr. 2004). 2. Bandler.. 1996) process.5.

the SM technique is expanded to calibrating the coarse model by allowing some preassigned parameters (we call them key preassigned parameters (KPP)) to change in certain coarse model components. Cheng McMaster – Electrical and Computer Engineering 2. 1997). this is done by adding circuit components to nonadjacent individual coarse model elements. Georgieva. Bandler. 2001. 2002). In Ye and Mansour (1997). 1994. Chen. Ismail. Rayas-Sánchez and Zhang (2001). In Bandler. As a result. Chen. Ismail and RayasSánchez. 26 . the coarse model can be calibrated to align with the fine model.6 EXPANDED SPACE MAPPING EXPLOITING PREASSIGNED PARAMETERS A design framework for microwave circuits is proposed by Bandler. Rayas-Sánchez and Zhang. Ismail and Rayas-Sánchez (2002). Ismail. Georgieva. Grobelny and Hemmers. The concept of calibrating coarse models (circuit based models) to align with fine models (typically an EM simulator) in microwave circuit design has been exploited by several authors (Bandler. Biernacki. Biernacki.PhD Thesis – Q.S. Ye and Mansour. Grobelny and Hemmers (1994) and Bandler. Here. The original SM technique is expanded by allowing some preassigned parameters (which are not used in optimization) to change in some components of the coarse model (Bandler. this calibration is performed by means of optimizable parameter space transformation known as space mapping. Those components are referred to as “relevant” components and a method based on sensitivity analysis is used to identify them.

The coarse model is decomposed into two sets of components. Lewis and Torczon. The trust region size is updated (Bakr. Then it establishes a mapping from some of the optimizable parameters to the KPP. 2001). 1998. Cheng McMaster – Electrical and Computer Engineering Examples of KPP are dielectric constant and substrate height in microstrip structures. the Expanded Space Mapping Design Framework (ESMDF) algorithm calibrates the coarse model by extracting the KPP such that the coarse model matches the fine model. The optimization step is accepted only if it results in an improvement in the fine model objective function.S. Therefore. Possible practical stopping criteria are elaborated in (Ismail. Søndergaard. junctions. Dennis.PhD Thesis – Q. It is assumed that the coarse model consists of several components such as transmission lines. At each iteration. Biernacki. Bandler. Ismail (2001) presents a method based on sensitivity analysis to perform this decomposition. 27 . Alexandrov. The mapped coarse model (the coarse model with the mapped KPP) is then optimized subject to a trust region size. The KPPs are allowed to change in the first set and are kept intact in the second set. Some solutions to overcome the problems associated with the KPP extraction process is also presented. 1999) according to the agreement between the fine and mapped coarse model. the algorithm enhances the coarse model at each iteration either by extracting the KPP and updating the mapping or by reducing the region in which the mapped coarse model is to be optimized. The algorithm terminates if one of certain relevant stopping criteria is satisfied. 1998. etc. Chen and Madsen.

Dennis uses the term “surrogate” to denote the function s to which an optimization routine is applied in lieu of applying optimization to the fine model f. 2000) on the Workshop on Surrogate Modelling and Space Mapping (2000) Dennis integrates the terminology “coarse” and “fine” from the SM community with his own. Gebre-Mariam. 2001.7 OUTPUT SPACE MAPPING The “output” or response SM concept could address a residual misalignment in the optimal responses of the coarse and fine models.PhD Thesis – Q.8 DISCUSSION OF SURROGATE MODELING AND SPACE MAPPING 2. a coarse model such as Rc = x2 will never match the fine model Rf = x2 – 2 around its minimum with any mapping xc=P(xf). also known as experimental designs.S. Another piece of terminology he uses is “surface” to denote a function (it may be vector valued) trained to fit or to smooth fine model data. The surfaces are generated from the data sites. 2002). For example. xc. He notes 28 . Pedersen and Søndergaard. 2. Cheng. Cheng McMaster – Electrical and Computer Engineering 2. Dennis mentions several ways to choose fine model data sites. 2003). xf ∈ ℜ.1 Building and Using Surrogates (Dennis) In his summarizing comments (Dennis.8. Current research is directed to this topic (Bandler. An “output” or response mapping can overcome this deficiency by introducing a transformation of the coarse model response based on a Taylor approximation (Dennis. Madsen.

Some surrogates attempt to fit the fine model directly (e. Bandler and Madsen (2001) emphasize that “surrogate optimization” refers to the process of applying an optimization routine directly to a coarse model. a surrogate. The former do this by Broyden updates and the latter by the derivatives of the surface. an approximation to which needs to be updated in each iteration.” Dennis’s definition of surrogate agrees with our definition in the sense that the surrogate is an enhanced coarse model.S. The mapping (surface) is the same during all iterations. by polynomials). He classifies SM in terms of “local space mappings and methods that use poised designs implicitly or explicitly approximate derivatives. in other cases the information gained during the optimization 29 .” Then he used the surface concept to interpret SM.” Dennis discusses “heuristics” that optimize the surrogate and (perhaps) correct the surface part of the surrogate.2 Building and Using Surrogates In an editorial.8. Here.g. Cheng McMaster – Electrical and Computer Engineering that “surfaces (are) designed to correct a coarse model and to be combined with the coarse model to act as a surrogate in optimization. “The surrogate is the coarse model applied to the image of the fine model parameters under the space mapping surface. 2. Dennis regards the mapping as a surface. which is a function (or a model) that replaces the original fine model.. We think of the mapping as that part of the surrogate.PhD Thesis – Q.

neural or ISM. surrogates of increasing fidelity are developed during the optimization process.g. aggressive SM. in which the coarse model is (re)aligned with the fine model to permit (re)calibration. The fine model is verified and checked to see if it satisfies the design specifications. This suggests a new fine model design iterate.r.4. Step 2 Select a mapping process (original. 30 . etc.8. design parameters. At last the aligned. correcting) the optimization variables. 2. Cheng McMaster – Electrical and Computer Engineering process is used to train the surrogate to fit the data derived from evaluation of the fine model (e. calibrated.) Step 3 Optimize the coarse model (initial surrogate) w.S. The second is PE.PhD Thesis – Q. 2.3 The Space Mapping Concept The SM-based optimization algorithms we review have four major steps. Step 1 Select a coarse model suitable for the fine model. mapped or enhanced coarse model (the surrogate) is (re)optimized..4 Space Mapping Framework Optimization Steps A flowchart of general SM is shown in Fig. 2. The first is fine model simulation (verification). The third is updating or (re)mapping the surrogate using the information obtained from the first two steps. In this case. coarse models may be enhanced by mapping (transforming.8. by artificial neural networks).t. In the SM approach.

etc. Cheng McMaster – Electrical and Computer Engineering Step 4 Simulate the fine model at this solution. we use symbol .4. the surrogate is rebuilt by the neural network training process and prediction is obtained by evaluating the neural network). However. or in the prediction (Step 8).S. Ismail. 7 and 8. neuron weights (Bandler. Ismail and Rayas-Sánchez. aggressive SM. Rayas-Sánchez and Zhang. 2002). Step 6 Apply parameter extraction using preassigned parameters (Bandler. 7 and 8 are separate steps in neural space mapping (training data is obtained by parameter extraction. e.r. 2..g.PhD Thesis – Q. ISM. We let operator (·) represent implied.g. where the surrogate is not explicitly rebuilt. Step 8 Reoptimize the “mapped coarse model” (surrogate) w. Step 7 Rebuild surrogate (may be implied within Step 6 or Step 8). Step 6 can be termed modeling 31 . response meets specifications.g. coarse space parameters. e. and to represent Step 6. e.. design parameters (or evaluate the inverse mapping if it is available). Step 9 Go to Step 4. Steps 6. Comments As shown in Fig. where the surrogate is rebuilt by extracting preassigned parameters. Step 7 may be implied in either the parameter extraction process (Step 6). 1999).. We can see that rebuilding the surrogate (Step 7) may be implied in either the PE process (Step 6) or in the reoptimization (Step 8). Step 5 Terminate if a stopping criterion is satisfied. respectively.t.

4 yes End may be implied Space mapping framework. TABLE 2. McMaster – Electrical and Computer Engineering Start select models and mapping framework optimize coarse model Neural Space Mapping Implicit Space Mapping Aggressive Space Mapping ( ) ( ) simulate fine model criterion satisfied no parameter extraction update surrogate (update mapping) optimize surrogate (invert mapping) Fig.S. Cheng for some cases. 2. Their properties are in two categories: model alignment and fine model prediction.8. Each category has a few sub-categories to specify the details of the SM techniques. In this table. implicit and output SM. 2.5 Space Mapping Classification TABLE 2.2 32 .PhD Thesis – Q. a number of SM technologies are categorized in 3 types: explicit/input.1 classifies SM.

can be used in optimization 8 by solving system of equations 33 .1 SPACE MAPPING CLASSIFICATION explicit SM/input SM original SM neural SM tableau approach partial SM ISM IOSM NISM SMN ASM OSM SMIS • •1 • • • • • using upfront fine model points fine model gradient required multi-point PE by design parameters by nondesignable parameters model generation linear mapping nonlinear mapping by optimizing surrogate by inverting mapping8 by evaluating inverse mapping • •1 • • • • • •2 • • • •4 • • • • • • • • aligning surrogate/ realign (updating) surrogate • • •4 • •7 • • •4 • • • • • • •3 • • • •5 • •7 • • •6 • predicting fine model • • • 1 2 applicable but not implemented extract neural weights 3 extract preassigned parameters 4 for single-layer perceptron case 5 output SM only 6 implicit SM only 7 modeling approach. Cheng McMaster – Electrical and Computer Engineering shows the explanations of these categories. TABLE 2.S.PhD Thesis – Q.

S. The simple CAD methodology follows the traditional experience 34 .PhD Thesis – Q.e.2 SPACE MAPPING CATEGORY EXPLANATION category using upfront fine model points fine model gradient required multi-point PE by design parameters by non-designable parameters model generation linear mapping nonlinear mapping by optimization surrogate by inverting mapping by evaluating inverse mapping explanation using more than one fine model point to start the design using fine model Jacobian in the loop using several points to extract parameters extracting designable parameters in PE extracting non-designable parameters in PE a calibrated model (surrogate) is available for modeling purposes the mapping is linear the mapping is nonlinear obtain the prediction by optimizing the surrogate obtain the prediction by inverting the mapping i. solving a system of equations obtain the prediction by evaluating the inverse mapping 2. Cheng McMaster – Electrical and Computer Engineering TABLE 2.9 CONCLUSIONS In this chapter we reviewed the SM techniques and the SM-oriented surrogate (modeling) concept and their applications in engineering design optimization.

A design framework we call expanded space mapping exploiting preassigned parameters is reviewed. trust region aggressive space mapping. Proposed approaches to SM-based optimization include the original SM algorithm.S. the Broyden-based aggressive space mapping. yet appears to be amenable to rigorous mathematical treatment. It is used to align the surrogate with the fine model at each iteration. Cheng McMaster – Electrical and Computer Engineering and intuition of engineers. Different approaches to enhance the uniqueness of parameter extraction are reviewed.PhD Thesis – Q. The mapped coarse model is then optimized subject to a trust region size. This technique expands original space mapping allowing some preassigned parameters (which are not used in optimization) to change in some components of the coarse model. SM techniques are categorized through their properties. hybrid aggressive space mapping. Parameter extraction is an essential subproblem of any SM optimization algorithm. SM concepts and an SM framework are discussed. neural space mapping and implicit space mapping. The general steps for building surrogates and SM are indicated. including the gradient parameter extraction process. 35 . The aim and advantages of SM are described.

S. Cheng McMaster – Electrical and Computer Engineering 36 .PhD Thesis – Q.

Ismail. The calibrated coarse model (the surrogate) is then (re)optimized to predict a better fine model solution. Through A simple ISM algorithm is implemented. we introduce the idea of implicit space mapping (ISM) (Bandler. In Bandler. This is an easy SM technique to implement since the mapping itself is embedded in the calibrated coarse model and updated automatically in the procedure of parameter extraction. Cheng.1 INTRODUCTION In this chapter. Bandler. 2004) and show how it relates to the wellestablished (explicit) SM between coarse and fine device models. We illustrate our approach through optimization of an HTS filter using Agilent ADS with Momentum and Agilent ADS with Sonnet’s em. comparison a general SM concept is proposed. It is illustrated on a contrived “cheese-cutting problem” and is applied to EM-based microwave modeling and design. Chen. An auxiliary set of parameters (selected preassigned parameters) is extracted to match the coarse model with the fine model. Rayas-Sánchez 37 . Hemmers and Madsen (1995). Biernacki. Grobelny and Hemmers (1994). Biernacki. Nikolova and Ismail. Bandler.Chapter 3 IMPLICIT SPACE MAPPING OPTIMIZATION 3. Chen.

In Ye and Mansour (1997). The component values are updated iteratively. Cheng McMaster – Electrical and Computer Engineering and Zhang (1999). Bandler. The ESMDF algorithm (Bandler. Madsen. geometrical parameters such as substrate height or mathematical concepts such as frequency transformation parameters. It establishes an explicit mapping from the optimizable design parameters to preassigned (non-optimized) parameters. The preassigned parameters. a calibration is performed through a mapping between optimizable design parameters of the fine model and precisely corresponding parameters of the coarse model such that their responses match.PhD Thesis – Q. 2002) calibrates the coarse model by extracting certain preassigned parameters such that corresponding responses match. The ISM approach does not establish an explicit mapping. Examples of preassigned parameters are physical parameters such as dielectric constant in microstrip structures. We repeat this process until the fine model response is sufficiently close to the target response. calibrate the “mapping”.S. the coarse model is calibrated against the fine model by adding circuit components to nonadjacent individual coarse model elements. With these preassigned parameters now fixed. Typically. we reoptimize the calibrated coarse model. Then we assign its optimized design parameters to the fine model. Rayas-Sánchez and Søndergaard (2000). Ismail and Rayas-Sánchez. This mapping is iteratively updated. Bakr. In each iteration we extract selected preassigned parameters to match the coarse model with the fine model. they are not optimized but clearly they influence the 38 . which are updated. We suggest an indirect approach.

PhD Thesis – Q.S. Cheng responses.

McMaster – Electrical and Computer Engineering

As in Bandler, Ismail and Rayas-Sánchez (2002) we allow the

preassigned parameters (of the coarse model) to change in some components and keep them intact in others. We implement our technique in Agilent ADS (2000).

3.2

**SPACE MAPPING TECHNOLOGY
**

We categorize space mapping into (1) the original or explicit SM and (2)

implicit space mapping. Both share the concept of “coarse” and “fine” models. Both use an iterative approach to update the mapping and predict the new design. 3.2.1 Explicit Space Mapping In explicit SM, we should be able to draw a clear distinction between a physical coarse model and the mathematical mapping that links it to the fine

design parameters

fine model

responses

space mapping

coarse model surrogate

responses

Fig. 3.1

Illustration of explicit SM.

39

PhD Thesis – Q.S. Cheng

McMaster – Electrical and Computer Engineering

model. See Fig. 3.1. Here, the mapping together with the coarse model constitute a “surrogate”. In each iteration, only the mapping is updated, while the physical coarse model is kept fixed. If the inverse mapping is available at each iteration, then the solution (best current prediction of the fine model) can be evaluated directly. Otherwise an optimization is performed on the mapping itself (not the mapped coarse model) to obtain the prediction. Examples of explicit SM are the original SM (Bandler, Biernacki, Chen, Grobelny and Hemmers, 1994), aggressive SM (Bandler, Biernacki, Chen, Hemmers and Madsen, 1995), neural SM (Bandler, Ismail, Rayas-Sánchez and Zhang, 1999), etc. 3.2.2 Implicit Space Mapping Sometimes identifying the mapping is not obvious: it may be buried within the coarse model. If the “mapping” is integrated with the coarse model, the (mapped) coarse model becomes a calibrated coarse model or enhanced coarse model, which we also call a “surrogate”. See Fig. 3.2(a) (the rectangular box). In the next step, the calibrated or enhanced coarse model is optimized to obtain an “inverse” mapped solution. If the implicitly mapped model is not sufficiently good after calibration, we may add an explicit mapping or output mapping (Bandler, Cheng, Dakroury, Mohamed, Bakr, Madsen and Søndergaard, 2004, Bandler, Cheng, Gebre-Mariam, Madsen, Pedersen and Søndergaard, 2003). See Fig. 3.2(b). Both explicit and implicit SM iteratively calibrate the mapped model when approaching the fine model solution. Interestingly, the explicit mapping could be 40

PhD Thesis – Q.S. Cheng

McMaster – Electrical and Computer Engineering

design parameters

fine model

responses

**surrogate space coarse mapping model preassigned parameters
**

(a)

responses

design parameters

fine model

responses

surrogate space mapping extra mapping space coarse mapping model preassigned parameters responses space mapping output mapping

(b) Fig. 3.2 Illustration of ISM, (a) implicit mapping within the surrogate, (b) with extra mapping and output mapping.

expressed in the form of ISM by using a simple mathematical substitution. We discuss this in Section 3.3.

41

S. x ( j ) ) (3-3) over a region in the parameter space. for example. 3. Cheng McMaster – Electrical and Computer Engineering 3. a set of other (auxiliary) parameters. As in Fig.r. Here. x ( j ) ) . We solve Q ( x f . ISM then utilizes the mapping to obtain a prediction of x f . The corresponding coarse *( model (the surrogate) response vector is Rc ( xc j ) . x to obtain x ( j ) indirectly by an optimization algorithm.PhD Thesis – Q. at the jth iteration. we denote by xc j ) a coarse model optimum point (usually designable parameters) for given x ( j ) . 3. We think of this as a modeling procedure.t.1 IMPLICIT SPACE MAPPING (ISM): THE CONCEPT Implicit Space Mapping *( At the jth iteration.3. we set 42 . preassigned parameters. As indicated in Fig.3 3. x ) = 0 (3-1) w. ISM aims at establishing an implicit mapping Q between the spaces x f . xc and x .4. also referred to as parameter extraction in this case.3. xc . during which we set *( x f = xc = xc j −1) (3-2) such that *( *( R f ( xc j −1) ) ≈ Rc ( xc j −1) .

xc . Here.3 Illustration of ISM modeling. x such that Rc ( xc .S. x Fig. x ) = 0 such that * xc = arg min U ( Rc ( xc . Cheng McMaster – Electrical and Computer Engineering mapping fine model Rf (x f ) xc x coarse model Rc ( xc . 3.PhD Thesis – Q. mapping fine model Rf (x f ) xc x xf coarse model Rc ( xc . x ) = 0 xc . xc .4 * Illustration of ISM prediction. 3. x ) xf xf Q ( x f . Q = 0 is solved for x. Q = 0 is solved for xc . x ) xf Q ( x f . x )) xc xc . Here. 43 . x ) ≈ R f ( x f ) Fig.

5. design parameters such as physical length and width of a microstrip line can be mapped to intermediate parameters 44 . mapping is usually nonlinear and implicit. See Fig.e. the prediction is obtainable by optimizing a mapped coarse model or surrogate.t.r.3. 3. x f is a solution. A set of intermediate parameters xi is introduced for this purpose.PhD Thesis – Q. i..e.S. Cheng McMaster – Electrical and Computer Engineering x = x( j) (3-4) Since the where x ( j ) is obtained from the foregoing modeling procedure. 3. In a physically based simulation. However. of the nonlinear system (3-1) which is enforced through a parameter extraction (modeling) w. we can construct and connect a known mapping to a physical coarse model to study the behavior of the mapping. x ( j ) )) xc (3-5) Then the fine model parameters are assigned (predicted) as *( x f = xc j ) (3-6) In general. we find *( xc j ) arg min U ( Rc ( xc . the mapping is buried in the coarse model. and subsequent prediction of the fine model solution (through optimization of the calibrated coarse model). found iteratively..2 Interpretation and Insight As mentioned before. i. ISM optimization obtains a space-mapped design x f whose response approximates an optimized Rc target. x. we can synthesize examples to develop insight into ISM.

3. a corresponding hidden mapping in the modeling procedure can be thought of as finding *( xi( j ) = P ( xc j −1) . and the intermediate parameters may not be directly accessible. in that case. Assuming the intermediate parameters xi are accessible. 45 .S. be extractable (detachable). 1998). design parameters fine model responses intermediate parameters coarse model surrogate responses space mapping Fig. Cheng McMaster – Electrical and Computer Engineering such as electrical length and characteristic impedance through well-known empirical formulas (Pozar. the transformation from circuit parameters to physical parameters may be implicit. The prediction is then obtained through optimizing suitably (the preassigned parameters of) calibrated microstrip components.PhD Thesis – Q. x ) (3-7) to match the coarse and fine model responses.5 Synthetic illustration of ISM optimization with intermediate parameters. The mapping may. and it can be (re)optimized to obtain an “inverse” mapped solution (the prediction). For a library of microstrip components.

the prediction is * ( x (f j ) = x (f j −1) + xc − xc j ) (3-10) This agrees with the steps of aggressive space mapping (Bandler. Cheng McMaster – Electrical and Computer Engineering * Let xi* be the intermediate solution producing coarse model optimum Rc .S. 3. on the other hand. Biernacki. The ISM in this case. We can show that after the modeling procedure. The corresponding * * response is denoted by Rc . Hemmers and Madsen. xc j ) starts with xc and depends on the current value of x and will change from iteration to iteration through reoptimization. that ISM extracts ∆ xc rather than xc during parameter extraction. Chen. An interesting point that relates the ISM to the explicit mapping is when we set the preassigned parameters at jth iteration ( x ( j ) = ∆ xc j ) ( *( xc j ) − xc j −1) (3-9) ( where xc j ) is obtained through parameter extraction. 1995) using a unit mapping.3. xc is fixed. Once obtained. In *( * ISM. the prediction procedure can then be expressed as *( xc j ) = P −1 ( xi* . highlighted in Fig.6.PhD Thesis – Q.6(b). 46 . as in Fig. 3. as seen in Fig.3 Relationship with Explicit Space Mapping The first step in all SM-based algorithms results in an optimal coarse * model design xc for given nominal preassigned parameters x. is consistent with the original SM with the difference. 3. Correspondingly. x ( j ) ) (3-8) 3.6(a).

PhD Thesis – Q. Rayas-Sánchez and Zhang. Hemmers and Madsen. in the SM-based surrogate approach (Bakr. Preassigned parameters x could also represent other variables such as the space mapping parameters B. Cheng McMaster – Electrical and Computer Engineering In the case of neuro SM (Bandler.6 When we set the preassigned parameters x = ∆ xc . 47 . Madsen. ISM is consistent with the explicit SM process. 3. c. 2000). Ismail. if we set x=w (3-11) where w represents the weights of the neurons. (a) The original SM. in frequency SM (Bandler. Biernacki. σ. xc* • • xf(0) xc*(0) • x (0) • f xc* • xc(1) xc * • • PE xc xf(0) • xc*•(1) • xc*(0) x (1) • f xc*(1) • xc*(0) • PE x • f x (0) x • f (1) xc(1) (a) • • (0) xf xf(0) • (b) Fig. and δ. Rayas-Sánchez and Søndergaard. 1999). neuro space mapping is representable by ISM. (b) The ISM process interpreted in the same spaces. 1995). then by associating the artificial neural networks (ANN) with the coarse model. Chen. Bandler.S.

3.7).4 units to match the fine model weight (parameter extraction). depicted in Fig. Cheng etc. An unbiased cut of the same length in the fine model weighs 24 units (fine model evaluation). The procedure continues in this manner until the irregular block is sufficiently close to the desired weight of 30 units. The width (preassigned parameter) of the (coarse) model is shrunk to 2. w) = l × w × h 48 . 3. we see that the error reaches 1% after 3 iterations. A length of 10 units will give 30 units of weight for the coarse model (top block in Fig. 3. in this case. A unity density and a cross-section of 3 by 1 units are assumed. 3.4 McMaster – Electrical and Computer Engineering Cheese-Cutting Illustration The ISM process can be demonstrated by a simple example. A reoptimization of the length of the calibrated coarse model (the surrogate) is performed to achieve the goal. The weight of the coarse cheese model can be written as Rc (l . The goal is to deliver a segment of cheese of weight 30 units (target “response”). is an indirect approach. Then the new length of 12. The “fine” model has a corresponding cuboidal shape with a defect of 6 missing units of weight (the second block from top). The “coarse” model is a cuboidal block (top block in Fig.5 units is assigned to the irregular block (fine model). ISM. From the illustration. A direct approach would extract the length in the parameter extraction process.S.3.7). the cheese- cutting problem.PhD Thesis – Q.7.

5 x (2) = 2. .7 Cheese-cutting problem⎯a demonstration of the ISM algorithm.PhD Thesis – Q.4 volume =24 target volume =30 volume =31. Cheng McMaster – Electrical and Computer Engineering x (0) = 3 optimal coarse model initial guess parameter extraction (modeling) prediction *(1) xc = 12.5 f *(1) xc = 12.5 volume =31. 3.S.7 × 100% 30 =1% *(2) xc = 11.5 *(0) xc = 10 1 target volume =30 volume =24 x (0) = 10 f 1 xc = 10 x (1) = 2.9 verification x (2) = 11.52 verification parameter extraction (modeling) prediction x (1) = 12.5 target volume =30 volume =29..9 f Fig.7 error = 30 − 29.. 49 .

5 Three-section 3:1 Microstrip Transformer Illustration We use an example of the three-section microstrip impedance transformer (Bakr. The coarse model is shown in Fig. The coarse model utilizes the empirical transmission line circuit models available in the circuit simulator Agilent ADS Schematic (2000) and the circuit parameters are converted (implicitly mapped) into physical dimensions of the microstrip lines using well-known empirical formulas (Pozar. 3. as in Fig. The filter structure is shown in Fig.9. We can feed the parameters and variables of the cheese-cutting problem in implicit SM diagram as shown in Fig.8. 50 . The designable parameters are the width and physical length of each microstrip line. An intermediate variable xi is the area xi = w × l We can see that each prediction procedure returns xi to a fixed xi* = 30.10(b). 1998). 3. respectively. w and h are the length. 3. Biernacki and Chen. width and height. Bandler. 1997). 3.10(a). Cheng McMaster – Electrical and Computer Engineering where l.PhD Thesis – Q. The fine model utilizes a full-wave electromagnetic simulator (Agilent Momentum (2000)).S. The intermediate variables xi are the circuit parameters. 3. which produces the optimal coarse model design.3.

x ) Fig.PhD Thesis – Q. 51 . 3. Cheng McMaster – Electrical and Computer Engineering xi l h Fig.10 Three-section 3:1 microstrip transformer illustration.9 Implicit space mapping diagram for cheese problem. 3. xi = w × l . L1 L2 L3 W1 W2 W3 x f = [W1 W2 W3 L1 L2 L3 ]T (a) xc = [W1 W2 W3 L1 L2 L3 ]T TLIN TLIN TLIN x = [ε1 H1 ε 2 H 2 ε 3 H 3 ]T E1 Z1 E2 Z2 (b) E3 Z3 xi = [ E1 E2 E3 Z1 Z 2 Z3 ]T = P ( xc . 3.S. l fine model xi space mapping coarse model surrogate Rf Rc w h Fig.8 w Cheese-cutting problem⎯illustration of an intermediate parameter.

our proposed approach is easier to implement in commercial microwave simulators. Cheng McMaster – Electrical and Computer Engineering 3. As implied in Fig. In Set B. Therefore. where the mapping is explicit (see Fig. Step 3 Obtain the optimal (calibrated) coarse model parameters by solving (3-5). Ismail and Rayas-Sánchez (2002). Ismail and Rayas-Sánchez (2002) or through experience. We can follow the sensitivity approach of Bandler. 3. we keep preassigned parameters x0 fixed. we vary certain preassigned parameters x. In Set A. 3. We catalog the preassigned parameters into two sets as in (Bandler.11 we represent a microwave circuit whose coarse model is decomposed. Ismail and Rayas-Sánchez (2002) to formally select components for Sets A and B. Ismail and Rayas-Sánchez.4 IMPLICIT SPACE MAPPING (ISM): AN ALGORITHM In Fig.11(b). Step 2 Set j = 0 and initialize x(0). 2002).11(c)).PhD Thesis – Q. The algorithm is summarized as follows Step 1 Select candidate preassigned parameters x as in Bandler. This contrasts with Bandler.S. in each iteration of the parameter extraction process we set xc = x (f j ) (3-12) Notice also that we do not explicitly establish a mapping between the optimizable parameters and the preassigned parameters. 52 . 3.

53 . In (c) we illustrate the ESMDF approach (Bandler. Cheng ω xf x0 McMaster – Electrical and Computer Engineering fine model Rf (a) ω xf coarse model Set A Set B Rc ≈ R f x x0 (b) ω xf coarse model x Set A Set B Rc ≈ R f P(.S. 3.PhD Thesis – Q. 2002). Ismail and RayasSánchez. where P (⋅) is a mapping from optimizable design parameters to preassigned parameters.11 Calibrating (optimizing) the preassigned parameters x in Set A results in aligning the coarse model (b) or (c) with the fine model (a).) x0 (c) Fig.

Step 2 Select the frequency transformation and initialize associated preassigned parameters. Step 5 Simulate the fine model at x (f j ) . 54 . we can use a linear transformation of frequency ωc = σω + δ (Bandler. For example..S.5 FREQUENCY IMPLICIT SPACE MAPPING Frequency implicit SM is a special kind of implicit SM. 3. Biernacki. In each iteration. Algorithm Step 1 Select a coarse model and a fine model. Step 7 Calibrate the coarse model by extracting (parameter extraction step) the preassigned parameters x (noting (3-12)) x ( j+1) = arg min R f ( x (f j ) ) − Rc ( x (f j ) . Step 6 Terminate if a stopping criterion (e. x ) x (3-13) Step 8 Increment j and go to Step 3.g. response meets specifications) is satisfied. we extract selected frequency transforming preassigned parameters to match the updated surrogate model with the fine model. We repeat this process until the fine model response is sufficiently close to the target (optimal original coarse model) response.PhD Thesis – Q. Then we assign its optimized design parameters to the fine model. Cheng McMaster – Electrical and Computer Engineering Step 4 Predict x (f j ) from (3-6).

initialized as [1 0]T. Step 8 Go to Step 4. response meets specifications.425. Cheng McMaster – Electrical and Computer Engineering Chen.6 HTS FILTER EXAMPLE We consider the HTS bandpass filter of Bandler. namely. H= 20 mil and 55 .PhD Thesis – Q. Examples involving frequency implicit SM have been investigated. e. Biernacki. design parameters (or evaluate the inverse mapping if it is available).S.g. The preassigned parameters are then [σ δ]T. design parameters.t. The physical structure is shown in Fig. 3. Moskowitz and Talisa (1995). Step 3 Optimize the coarse model (initial surrogate) w. Chen. Hemmers and Madsen. Grobelny. Step 5 Terminate if a stopping criterion is satisfied. Step 7 Reoptimize the “frequency mapped coarse model” (surrogate) w.t. 1995). Step 4 Simulate the fine model at this solution. Design variables are the lengths of the coupled lines and the separation between them.r.12(a).r. Step 6 Apply parameter extraction (PE) to extract frequency transforming preassigned parameters. 3. x f = [ S1 S2 S3 L1 L2 L3 ]T The substrate used is lanthanum aluminate with εr= 23.. Getsinger.

Cheng McMaster – Electrical and Computer Engineering substrate dielectric loss tangent of 0.11(b)) is empty.25% bandwidth.. where we notice severe misalignment.13(a).00003. We choose εr and H as the preassigned parameters of interest. Sonnet em (2001)has also been used as a fine model.058 GHz The design This corresponds to 1. 3. The relevant responses at the initial solution are shown in Fig. The algorithm requires 2 iterations (3 fine model simulations). The preassigned parameter vector is x = [ε r1 H1 ε r 2 H 2 ε r 3 H 3 ]T The fine model is simulated first by Agilent Momentum (2000). The total time taken is 26 min (one fine model simulation takes approximately 9 min on an Athlon 1100 MHz).967 GHz S21 ≥ 0. Set A (Fig. 3.S. The length of the input and output lines is L0=50 mil and the lines are of width W= 7 mil. Responses at the final iteration are shown in Fig. 3. coupled lines “CLin5” are identical to “CLin1” and “CLin4” to “CLin2”.e.95 for 4.13(b). Notice the symmetry in the HTS structure. 3.008 GHz ≤ ω ≤ 4.425]T. i. Our Agilent ADS (2000) coarse model consists of empirical models for single and coupled microstrip transmission lines. 56 .05 for ω ≥ 4.099 GHz and for ω ≤ 3. See Fig. Set B (Fig. 3. specifications are S21 ≤ 0.PhD Thesis – Q. with ideal open stubs. Here.12(b).11(b)) consists of the three coupled microstrip lines. thus x0=[20 mil 23.

(b) the coarse model as implemented in Agilent ADS (2000).PhD Thesis – Q. 57 H W S1 S2 . 3. (a) the physical structure. Grobelny. Getsinger. 1995). Chen.S.12 The HTS filter (Bandler.0 mil Term Term2 Num=2 Z=50 Ohm (b) Fig. Biernacki. Moskowitz and Talisa. Cheng McMaster – Electrical and Computer Engineering S2 L 0 S1 L 2 L 1 S3 L L L1 2 3 L 0 εr (a) Term Term1 MLIN Num=1 TL1 Z=50 Ohm W=W mil L=50 mil MCLIN CLin1 W=W mil S=S1 mil L=L1 mil MCLIN CLin3 W=W mil MCLIN S=S3 mil CLin2 W=W mil L=L3 mil S=S2 mil L=L2 mil MCLIN CLin4 W=W mil S=S2 mil L=L2 mil MCLIN CLin5 W=W mil S=S1 mil L=L1 mil MLIN TL2 W=W mil L=50.

20 frequency (GHz) (b) Fig. The initial solution and the final result in 1 iteration (2 fine model simulations) are shown in Fig.14(a) and (b). TABLE 3.00 4. respectively. Cheng McMaster – Electrical and Computer Engineering It takes 74 minutes to complete a sweep on an Intel P4 2200 MHz machine.05 4.05 4.90 3.PhD Thesis – Q. 58 .90 3.S. 3.10 4.20 frequency (GHz) (a) 0 -20 ⏐S21⏐dB -40 -60 -80 3.15 4.15 4.95 4. 3.13 The Momentum fine (○) and optimal coarse ADS model (⎯) responses of the HTS filter at the initial solution (a) and at the final iteration (b) after 2 iterations (3 fine model evaluations).1 shows initial and final 0 -20 ⏐S21⏐dB -40 -60 -80 3.10 4.95 4.00 4.

The parameter extraction process uses real and imaginary S parameters and the ADS quasi-Newton optimization algorithm. 3.S.00 4.20 frequency (GHz) (a) 0 -20 ⏐S21⏐dB -40 -60 -80 -100 3.05 4.00 4.15 4.10 4. while coarse model optima are obtained by the ADS minimax optimization algorithm.20 frequency (GHz) (b) Fig.14 The Sonnet em fine (○) and optimal coarse ADS model (⎯) responses of the HTS filter at the initial solution (a) and at the final iteration (b) after one iteration (2 fine model evaluations).05 4.2 shows the variation in the preassigned (coarse model) parameters.95 4.15 4. McMaster – Electrical and Computer Engineering TABLE 3. 59 .10 4.90 3.90 3.PhD Thesis – Q.95 4. Cheng designs. 0 -20 ⏐S21⏐dB -40 -60 -80 -100 3.

Cheng McMaster – Electrical and Computer Engineering 3. The HTS filter design is entirely carried out by Agilent ADS and Momentum (3 frequency sweeps) or Sonnet em.7 CONCLUSIONS Based on a general concept. we present an effective technique for microwave circuit modeling and design w. A general SM concept is presented which enables us to verify that our implementation is correct and that no redundant steps are used. (only 2 frequency sweeps) with no matrices to keep track of. full-wave EM simulations. 60 .S.PhD Thesis – Q.t.r. We vary preassigned parameters in a coarse model to align it with the EM (fine) model. We believe this is the easiest to implement “Space Mapping” technique offered to date.

30 186.67 mil 23.02 95.68 185.PhD Thesis – Q.53 104.425 23.97 22.00 mil 24.19 88.65 196.45 23.404 24.80 mil 19.56 104.334 Final iteration em 18.81 24.2 THE INITIAL AND FINAL PREASSIGNED PARAMETERS OF THE CALIBRATED COARSE MODEL OF THE HTS FILTER Preassigned parameters H1 H2 H3 Original values 20 mil 20 mil 20 mil 23.425 Final iteration Momentum 19.S.86 22.80 192.86 Solution (mil) Sonnet em 186. Cheng McMaster – Electrical and Computer Engineering TABLE 3.50 23.1 AGILENT MOMENTUM/SONNET em OPTIMIZABLE PARAMETER VALUES OF THE HTS FILTER Parameter L1 L2 L3 S1 S2 S3 Initial solution (mil) 189.05 mil 19.245 24.42 TABLE 3.79 93.03 189.10 191.12 103.79 mil 17.95 Solution (mil) Agilent Momentum 187.94 ε r1 ε r2 ε r3 61 .425 23.42 mil 17.

S. Cheng McMaster – Electrical and Computer Engineering 62 .PhD Thesis – Q.

2004). exploits preassigned parameters such as the dielectric constant and substrate height (Bandler. Bakr. we present a significant improvement to ISM.Chapter 4 RESPONSE RESIDUAL SPACE MAPPING TECHNIQUES 4. It is consistent with the idea of pre-distorting design specifications to permit the fine model greater 63 . our new approach further fine-tunes the surrogate by exploiting an “response residual space” mapping (RRSM). In this chapter. 2004). presented in Chapter 3. Dakroury. Madsen and Søndergaard. Based on an explanation of residual misalignment close to the optimal fine model solution. where a classical Taylor model is seen to be better than SM. Mohamed. The RRSM we suggest is very simple to apply. Nikolova and Ismail.1 INTRODUCTION As presented in Chapter 2. The novel implicit space mapping (ISM) concept. In the parameter extraction process these parameters were exploited to match the fine model. the space mapping (SM) concept exploits coarse models (usually computationally fast circuit-based models) to align with fine models (typically CPU intensive full-wave EM simulations) (Bandler. Cheng. Cheng.

(In SM. An intuitive “multiple cheesecutting” example demonstrates the concept. An accurate design of an HTS filter. See. emerges after only four EM simulations using ISM and RRSM with sparse frequency sweeps (two iterations of ISM. 1995). The RRSM surrogate is matched to the fine model through parameter extraction.S. Chen. waveguide filter design emerges after four iterations. Cheng McMaster – Electrical and Computer Engineering latitude—anticipating violations and making the specifications correspondingly stricter.2 RESPONSE RESIDUAL SPACE MAPPING The RRSM addresses residual misalignment between the optimal coarse model response and the true fine model optimum response Rf (xf*).g. followed by one application of the RRSM). easily implemented in Agilent ADS (2000). (Bandler. We use sparse 4. e.PhD Thesis – Q. We embed a linear mapping to relate the actual (fine model) frequency and the transformed (coarse model) frequency into the surrogate. In this chapter we also broaden the concept of auxiliary (preassigned) parameters to frequency transformation parameters. an ADS A six-section H-plane framework implements the SM steps interactively. frequency sweeps and do not use the Jacobian of the fine model. using the implicit SM and RRSM optimization entirely within the design framework. Our RRSM exploits this to fine tune the surrogate model. we present a microwave design framework for implementing an implicit and RRSM approach. For the first time. Biernacki. Hemmers and Madsen. At the end of this chapter. an 64 .

2002).PhD Thesis – Q. Fig. xc.23 79. 2003) at the final iterate xf(i). This is the deviation of the fine model from its classical Taylor approximation. 2003) for a twosection capacitively loaded impedance transformer (Søndergaard.27]T. the light grid shows || R f ( x (fi ) + h) − Rc ( Lp ( x (fi ) + h)) || . Cheng McMaster – Electrical and Computer Engineering exact match between the fine model and the mapped coarse model is unlikely. a linearized mapping) from the fine model. It is seen that the Taylor approximation is most accurate close to xf(i) whereas the mapped coarse model is best over a large region.) For example. a coarse model such as Rc = x2 will never match the fine model Rf = x2 – 2 around its minimum with any mapping xc=P(xf). approximately [74. Centered at h = 0. 2001..1 depicts model effectiveness plots (Søndergaard.e. 4.S. The dark grid shows || R f ( x (fi ) + h) − L f ( x (fi ) + h) || . i. An “output” or response mapping can overcome this deficiency by introducing a transformation of the coarse model response based on a Taylor approximation (Dennis. This represents the deviation of the mapped coarse model (using the Taylor approximation to the mapping. Response residual SM aims at establishing a mapping O between Rs (output mapped surrogate response) and Rc (mapped coarse model response) 65 . xf ∈ ℜ.

See text.1 Error plots for a two-section capacitively loaded impedance transformer (Søndergaard. Cheng McMaster – Electrical and Computer Engineering Rs = O ( Rc ) (4-1) (4-2) such that Rs ≈ R f We can predict the fine model solution using this surrogate.PhD Thesis – Q. Cheng. 4. 4.3 IMPLICIT AND RESPONSE RESIDUAL SPACE MAPPING SURROGATE Our proposed algorithm starts with ISM (Bandler. 2003) exhibiting the quasi-global effectiveness of space mapping (light grid) versus a classical Taylor approximation (dark grid). 66 . Nikolova and xf1 xf2 Fig.S.

. x ))) xc (4-5) Then we predict an update to the fine model solution as *( x f = xc i +1) (4-6) 4. Cheng. normally unity. we consider the HTS bandpass filter of (Bandler. then we create a surrogate with response Rs. The coarse model parameters xc are obtained by (re)optimizing the surrogate (4-3) to give *( xc i +1) arg min U (O ( Rc ( xc . x ) (4-4) is the residual between the mapped coarse model response after PE and the fine model response. In this chapter we consider a mapping of the form Rs = O ( Rc ) Rc ( xc . λm } ∆ R (4-3) where *( ∆R = R f ( x f ) − Rc ( xc i ) . λ2 . Cheng McMaster – Electrical and Computer Engineering Ismail. Getsinger. Moskowitz and Talisa. Design variables are the lengths of the coupled 67 . x ) + diag {λ1 .S. Biernacki. Grobelny. The physical structure is shown in Section 3.4 HTS FILTER EXAMPLE Again. 2004) does not improve the match.PhD Thesis – Q. Nikolova and Ismail. If the calibration (PE) step in (Bandler. 1995) as in Chapter 3. and where the λj are user-defined weighting parameters.5. 2004). which will eventually happen close to xf*. Chen.

68 . 4.98 4.10 4. After two iterations (3 fine model 1.08 4.2(a) and Fig.12 4.90 |S21| 0.08 4.2 The fine (○) and optimal coarse model (⎯) magnitude responses of the HTS filter. Cheng McMaster – Electrical and Computer Engineering lines and the separations between them. 4. The relevant responses at the initial solution are shown in Fig.85 0.14 frequency (GHz) (b) Fig.75 3. 4.75 3.04 4.96 3.06 4.00 4.2 (a). The fine model is simulated by Agilent Momentum (2000). followed by one iteration of RRSM (b).00 4.S.80 0.80 0.06 4.02 4.00 0.96 3. The design specifications are the same as described in Section 3. 4.04 4. Fig.94 3.94 3.98 4.10 4.95 0.85 0. where we notice severe misalignment.00 0.02 4.12 4.95 |S21| 0. at the final iteration using ISM (a).3(b) show the response after running the ISM algorithm.PhD Thesis – Q.90 0.14 frequency (GHz) (a) 1.5.

65 196.245 19. m as initial values. 4.80 mil 24. The initial and final preassigned parameters of the calibrated coarse model of the HTS filter have the same values as in (Bandler. x(3) = [24.18 86.5.28 200.1 OPTIMIZABLE PARAMETER VALUES OF THE HTS FILTER Solution reached Solution by the by the ISM ISM and RRSM algorithm 187. 4. After one iteration of RRSM. Since we believe we are close to the true optimal solution.86 177. The PE uses real and imaginary S parameters and the ADS quasi-Newton optimization algorithm.86 178.10 191.1 shows initial and final designs.3(c). 4. i.53 104. we introduce the output space mapping and use the output space mapped response in (4-3) with λj = 0. 2004). j = 1.S. as seen in Fig.404 19. the calibration step does not improve further.17 Parameter L1 L2 L3 S1 S2 S3 Initial solution 189. Nikolova and Ismail.03 189.2(b) and Fig..56 104.97 22.79 93.05 mil 24.00 mil]T. while coarse model and RRSM surrogate optima are TABLE 4.PhD Thesis – Q. 2. we obtain the improved response shown in Fig.95 All values are in mils 69 .50 23. ….334 19. Cheng McMaster – Electrical and Computer Engineering simulations).e.30 186.15 85. This is achieved in only 4 fine model evaluations.99 20. Cheng.02 95. The total time taken is 35 min (one fine model simulation takes approximately 9 min on an Athlon 1100 MHz).3(b). TABLE 4.

20 frequency (GHz) (a) 0 -20 ⏐S21⏐dB -40 -60 -80 3.15 4.00 4.00 4.95 4. Cheng McMaster – Electrical and Computer Engineering obtained by the ADS minimax optimization algorithm.05 4. and at the final iteration using ISM and RRSM (c).00 4.10 4.3 The fine (○) and optimal coarse model (⎯) responses of the HTS filter in dB at the initial solution (a).10 4.95 4.10 4.15 4.90 3.15 4.90 3. 0 -20 ⏐S21⏐dB -40 -60 -80 3. at the final iteration using ISM (b).95 4.20 frequency (GHz) (b) 0 -20 |S21| dB -40 -60 -80 3.05 4.20 frequency (GHz) (c) Fig. 4.S.05 4.90 3. 70 .PhD Thesis – Q.

Pedersen and Søndergaard. we match the response residual SM surrogate with the fine model in a parameter extraction (PE) process.1 Surrogate The response residual surrogate is a calibrated (implicitly or explicitly space mapped) coarse model plus an output or response residual as defined in the previous section.4. 1963. Each residual element (sample point) may be weighted using a weighting parameter λj. Cheng McMaster – Electrical and Computer Engineering 4. Here. Entirely in ADS. 2003). The residual is a vector whose elements are the differences between the calibrated coarse model response and the fine model response at each sample point after parameter extraction. 1964) design is achieved after only five EM simulations (Agilent HFSS (2000)) or four iterations. j = 1…m. GebreMariam.5. 4. a good six-section H-plane waveguide filter (Young and Schiffman.S.PhD Thesis – Q. A novel and simple “multiple cheese-cutting” problem is used as an illustration. The surrogate is shown in Fig. where m is the number of sample points. Cheng. 71 . Young and Jones. Matthaei. It differs from the approach described in (Bandler.5 RESPONSE RESIDUAL SPACE MAPPING APPROACH In this section we introduce the response residual space mapping (RRSM) approach. An implementation in an ADS (2003) design framework is presented in the next chapter. 4. Madsen.

GebreMariam. we match the previous output residual SM surrogate (instead of the calibrated coarse model of (Bandler. 4.5.4 Illustration of the RRSM surrogate. A density of one is assumed. The goal is to cut through the slices to obtain a weight for each one as close to a desired weight s as possible.5(a)]. The designable parameter is the length of the top slice [see Fig. Madsen. Cheng McMaster – Electrical and Computer Engineering design parameters from previous iteration fine fine model model calibrated coarse model residual weighting parameters + design parameters calibrated coarse model surrogate response Fig. In the parameter extraction. Our “responses” are the weights of individual cheese slices.2 Multiple Cheese-cutting Problem We develop a physical example suitable for illustrating the optimization process. Note that we measure the length 72 . Pedersen and Søndergaard. 4. Cheng.PhD Thesis – Q. 4. 2003) to the fine model at each sample point.S.

The coarse model involves 3 slices of the same height x. We cut on the left-hand side (the broken line).S. 4. The responses of the coarse model are given by Rc1 = x ⋅ xc ⋅1 Rc 2 = x ⋅ ( xc − c) ⋅1 Rc 3 = x ⋅ ( xc − c) ⋅1 The fine model is similar but the lower two slices are f1 and f2 units shorter. Cheng McMaster – Electrical and Computer Engineering from the right-hand end. 73 . The lengths of the two lower slices are c units shorter than the top one. the preassigned parameter shown in Fig. 4. namely.5 Multiple cheese-cutting problem: (a) the coarse model (b) and fine model. The optimal length xc* can be calculated to minimize the differences between the weights of the slices and the desired weight s. The heights of the slices are x1. We use minimax optimization.5(a). 4. The corresponding responses of the fine model are xc x c x c x xf x1 f1 x2 f2 x3 (a) (b) Fig. than the top slice [Fig. respectively.PhD Thesis – Q.5(b)]. respectively. x2 and x3.

PhD Thesis – Q.S.9813 fine model verificaiton length = 11. 4.8862 5 calculate residual 0 0 5 (10) 10 15 0 0 5 (11) 10 15 0 0 5 (12) 10 15 new surrogate with residual added .t.1457 5 0 0 5 (7) 10 15 0 0 5 (8) 10 15 0 0 5 (9) 10 15 fine model verificaiton length = 12. heights heights = 0.r. 74 . optimize surrogate new length = 11.t. Cheng McMaster – Electrical and Computer Engineering R f 1 = x1 ⋅ x f ⋅1 R f 2 = x2 ⋅ ( x f − f1 ) ⋅1 R f 3 = x3 ⋅ ( x f − f 2 ) ⋅1 5 5 5 0 0 5 (1) 10 15 0 0 5 (2) 10 15 0 0 5 (3) 10 15 optimal coarse model length = 11 5 5 initial fine model length = 11 5 PE w.8728 0 0 5 (4) 10 15 0 0 5 (5) 10 15 0 0 5 (6) 10 15 calculate residual 5 5 new surrogate with residual added optimize surrogate new length = 12..6 “Multiple cheese-cutting” problem: implicit SM and RRSM optimization: step by step. heights heights = 0.1457 5 5 PE w.9813 Fig.r..

i. Fig.e. x1 = x2 = x3 = 1.xf*|| 10 -10 10 -15 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 iteration Fig. We set c = 2 and f1 = f2 = 4. The heights of the slices are fixed at unity for the fine model. 75 .7 Parameter difference between the RRSM design and minimax direct optimization. 4. The specification s is set to 10. Finally. The coarse model preassigned parameter x is initially unity. We demonstrate the implicit and RRSM optimization process.6 shows the first two iterations of the algorithm. The RRSM algorithm converges to the optimal fine model solution as shown in Fig.S. step by step.7. 4. 4.. Cheng 10 0 McMaster – Electrical and Computer Engineering 10 -5 ||xf . xf = xf* = 12.PhD Thesis – Q.

85. The setup of the problem follows Bakr.PhD Thesis – Q. Georgieva and Madsen (1999).8(a). L2. |S11| ≥ 0. Young and Jones. Cheng McMaster – Electrical and Computer Engineering 4.58 76 . A waveguide with a cross-section of 1. 1963. j = 1…m (λj ≤ 1 because the optimizer has difficulty reoptimizing the surrogate with the full residual added). a second implicit SM and RRSM iteration with the full residual added. We use 23 sample points.2 Six-Section H-plane Waveguide Filter The six-section H-plane waveguide filter (Young and Schiffman. 1964) is shown in Fig.S. W3. W4}. W1. The design parameters are the lengths and widths: {L1.0 GHz.4≤ ω ≤ 9. 4. for frequency ω ≥ 9. for frequency range 5. W2. 4.16.622 inches (3.2 GHz. Matthaei.5 GHz. Bandler.372 × 0.485 × 1. The following iterations are employed: two iterations of implicit SM to drive the design to be close to the optimal solution.6.6 4.1 H-PLANE WAVEGUIDE FILTER DESIGN Optimization Steps We use the ADS framework exploiting implicit SM and RRSM to design an H-plane filter. Design specifications are |S11| ≤ 0. |S11| ≥ 0.5.6.5. one implicit SM and RRSM iteration using weighting parameters λj = 0. for frequency ω ≤ 5. L3.

4.PhD Thesis – Q. We the select waveguide width of each section as the preassigned parameter to calibrate the coarse model. The coarse model is simulated using ADS (2003) as in Fig. 77 . We utilize a simplified version of a formula due to Marcuvitz (1951) in evaluating the inductances. The six sections are separated by seven H-plane septa.8 (a) Six-section H-plane waveguide filter (b) ADS coarse model.S. which have a finite thickness of 0. There are various approaches to calculate the equivalent inductive susceptance corresponding to an H-plane septum.508 mm). The frequency coefficient of each (a) Term Term1 TLIN TL1 INDQ RWG L3 RWG2 INDQ L4 RWG RWG7 INDQ L9 TLIN TL2 Term Term2 … (b) Fig.02 inches (0. The coarse model consists of lumped inductances and waveguide sections. Cheng McMaster – Electrical and Computer Engineering cm) is used.8(b). 4.

Based on an explanation of residual misalignment.9(b) shows the fine model response after running the algorithm using the Agilent HFSS simulator. Our approach is implemented entirely in the ADS framework. We present a RRSM modeling technique that matches the RRSM surrogate with the fine model.7 CONCLUSIONS We propose significant improvements to implicit space mapping for EM- based modeling and design. The fine model exploits Agilent HFSS. The required HTS filter models and RRSM surrogate are easily implemented by Agilent ADS and Momentum with no matrices to keep track of. is also harnessed as a preassigned parameter to compensate for the suceptance change.S. An accurate HTS microstrip filter design solution emerges after only four EM simulations with sparse frequency sweeps. for convenience PI. TABLE 4. Fig.9(a) shows the fine model response at the initial solution. the total time taken for five fine model simulations is 15 minutes on an Intel P4 3 GHz computer. Since no Jacobian is needed. A good H-plane filter design emerges after only five EM simulations using the implicit and RRSM with sparse 78 . 4.2 shows the initial and optimal design parameter values of the six-section H-plane waveguide filter. 4. 4. our new approach further fine-tunes the surrogate by exploiting an “response residual space” mapping. Fig.5 minutes on an Intel Pentium 4 (3 GHz) computer with 1 GB RAM and running in Windows XP Pro.PhD Thesis – Q. Cheng McMaster – Electrical and Computer Engineering inductor. One frequency sweep takes 2. A new “multiple cheese-cutting” design problem illustrates the concept.

4.6 0.6 0.9 H-plane filter optimal coarse model response (⎯).8 |S11| 0. (b) solution reached via RRSM after 4 iterations (○).8 |S11| 0.4 0. and the fine model response at: (a) initial solution (○).0 5 6 7 8 9 10 frequency (GHz) (b) Fig. frequency sweeps and no Jacobian calculations.0 0.0 0. Cheng McMaster – Electrical and Computer Engineering 1.S.2 0.2 0.0 5 6 7 8 9 10 frequency (GHz) (a) 1.PhD Thesis – Q. 79 .4 0.

49926 0.591645 0.660396 0.PhD Thesis – Q.519416 0.44544 0. Cheng McMaster – Electrical and Computer Engineering TABLE 4.2 OPTIMIZABLE PARAMETER VALUES OF THE SIX-SECTION H-PLANE WAVEGUIDE FILTER Parameter W1 W2 W3 W4 L1 L2 L3 Initial solution 0.555849 0.644953 0.630762 0.S.67667 Solution reached via RRSM 0.665449 all values are in inches 80 .499802 0.5033 0.463828 0.44168 0.

Empipe (1997) and Momentum_Driver (Ismail. A set of design or preassigned parameters and frequencies have to be sent to the different simulators and corresponding responses retrieved.1 INTRODUCTION The required interaction between coarse model. The Datapipe technique allows the algorithm to carry out nested optimization loops in two separate processes while maintaining a functional link between their results (e. 2001) have been designed to drive and communicate with Sonnet’s em (2001) and Agilent Momentum (2000) as fine models. the next increment to xf is a function of the result of parameter extraction). It uses Agilent ADS circuit models as coarse models. The steps of the framework are listed. Aggressive SM optimization of 3D structures (Bandler.g.. fine model and optimization tools makes SM difficult to automate within existing simulators. ADS 81 . Biernacki and Chen. 1996) has been automated using a two-level Datapipe (1997) architecture of OSA90. Software packages such as OSA90 or Matlab can provide coarse model analyses as well as optimization tools.Chapter 5 IMPLEMENTABLE SPACE MAPPING DESIGN FRAMEWORK 5. We present an ADS schematic framework for SM.

thereby placing our own work into context. The ADS component S-parameter file enables S-parameters to be imported in Touchstone file format from different EM simulators (fine model) such as Sonnet’s em and Agilent Momentum. Cheng McMaster – Electrical and Computer Engineering has a suite of built-in optimization tools.1 ADS SCHEMATIC DESIGN FRAMEWORK FOR SM ADS Schematic Design Framework Term Term1 1 Ref 2 Term Term2 S2P SNP1 Fig. In this chapter we provide a brief summary of applications of SM by other researchers and engineers. In the algorithm iteration we fill each design with proper data and optimize it. 82 .2 5. 5. 5.2. We implement these steps upfront in ADS Schematic designs.PhD Thesis – Q.S.1 S2P (2-Port S-Parameter File) symbol with terminals. These major steps of SM are friendly for engineers to apply. This PE procedure can be done simply by proper setup of the ADS optimization components (optimization algorithm and goals). Imported S-parameters can be matched with the ADS circuit model (coarse model) responses.

Step 2 Optimize the coarse model using the ADS optimizer. Step 3 Copy and paste the parameters into the parameterized fine model (Agilent Momentum. e. n is the port number. Here. the Momentum fine model can also be generated using the Generate/Update Layout command. Step 4 Simulate the fine model and save the responses in Touchstone 83 . Fig. Cheng McMaster – Electrical and Computer Engineering Agilent ADS (2003) has a huge library of circuit models that can be used as “coarse” models.S. Many EM simulators (“fine” model) such as Sonnet’s em (2001).. Agilent Momentum (2000).g. 5. and Agilent HFSS (2000) support Touchstone file format.2. gradient search. 5. we import S-parameters and match them with the ADS circuit model (coarse model) responses in the PE procedure.2 ADS Schematic Design Framework for SM Step 1 Set up the coarse model in ADS schematic. The residual between the calibrated coarse model and fine model can also be obtained using the SnP file and MeasEqn (Measurement Equation) component.PhD Thesis – Q. Quasi-Newton search. Using this file. discrete search.1 is a symbol of 2-port S-Parameter File component S2P with terminals. genetic algorithm. These major steps of SM are friendly for engineers to apply. An S-parameter file SnP in ADS can import data files (S-parameters) in Dataset or Touchstone format. In ADS. or Sonnet’s em). ADS also has a suite of easy-to-use optimization tools. HFSS/Empipe3D (2000). random search.

PhD Thesis – Q.S. Cheng

McMaster – Electrical and Computer Engineering

format (Agilent Momentum, HFSS, or Sonnet’s em) or Dataset (Momentum). Step 5 If stopping criteria are satisfied, stop. Step 6 Parameter extraction (a) Import the responses to the ADS schematic using SnP component under Data Items. (b) Set up ADS (calibrated) coarse model or response residual SM (RRSM) surrogate to match the SnP component. (c) Run ADS optimization to perform parameter extraction. Comment: Here, you may extract the coarse model design parameter or the preassigned parameters to implement explicit (original or aggressive SM) or implicit space mapping, respectively. Step 7 Predict the next fine model solution by (a) Explicit SM: transfer extracted parameters to MATLAB (2002) (or other scientific computing tool) and calculate a prediction based on the algorithm in (Bandler, Biernacki, Chen, Grobelny and Hemmers, 1994, Bandler, Biernacki, Chen, Hemmers and Madsen, 1995), or, (b) Implicit SM: reoptimize the calibrated coarse model w.r.t. design parameters to predict the next fine model design, and/or, (c) RRSM: reoptimize the surrogate (calibrated coarse model plus 84

PhD Thesis – Q.S. Cheng

McMaster – Electrical and Computer Engineering

L1

L2

L3

W1

W3

W2

Fig. 5.2

The three-section 3:1 microstrip impedance transformer. response residual) w.r.t. design parameters to predict the next fine model design.

Step 8 Update the fine model design and go to Step 4. We implement implicit and response residual SM optimization in the ADS schematic framework in an interactive way. momentum, HFSS, or Sonnet’s em. The fine model is Agilent

5.2.3 Three-Section Microstrip Transformer An example of ADS implementation ISM optimization is the three-section microstrip impedance transformer (Bakr, Bandler, Biernacki and Chen, 1997) (Fig. 5.2) as we described in Section 3.3. The coarse model is shown in Fig. 5.3. The design specifications are ⏐S11⏐ ≤ 0.11 for 5 GHz ≤ ω ≤ 15 GHz

85

**PhD Thesis – Q.S. Cheng
**

specification coarse model design parameters

**McMaster – Electrical and Computer Engineering
**

optimization algorithm coarse model parameter conversion frequency sweep

Term Term1 Num=1 Z=50 Ohm

**TLIN TL1 Z=Z1 Ohm E=E1 F=f 0 GHz
**

Var Eqn

TLIN TL2 Z=Z2 Ohm E=E2 F=f 0 GHz

TLIN TL3 Z=Z3 Ohm E=E3 F=f 0 GHz

Term Term2 Num=2 Z=150 Ohm

GOAL

Goal OptimGoal1 Expr="db(mag(S11))" SimInstanceName="SP1" Min= Max=-20 Weight=1 RangeVar[1]="f req" RangeMin[1]=5GHz RangeMax[1]=15GHz

Var Eqn

VAR Width_to_Z0 epslon_r=9.7 h=0.635 epslon_e1=(epslon_r+1)/2+(epslon_r-1)/(2* sqrt(1+12*h/W1)) Z1= if ((W1/h)<=1) then ((60/sqrt(epslon_e1))* ln(8*h/W1+ W1/(4*h))) else (120*pi/(sqrt(epslon_e1)*(W1/h+1.393+0.667*ln(W1/h+1.444)))) endif epslon_e2=(epslon_r+1)/2+(epslon_r-1)/(2* sqrt(1+12*h/W2)) Z2= if ((W2/h)<=1) then ((60/sqrt(epslon_e2))* ln(8*h/W2+ W2/(4*h))) else (120*pi/(sqrt(epslon_e2)*(W2/h+1.393+0.667*ln(W2/h+1.444)))) endif epslon_e3=(epslon_r+1)/2+(epslon_r-1)/(2* sqrt(1+12*h/W3)) Z3= if ((W3/h)<=1) then ((60/sqrt(epslon_e3))* ln(8*h/W3+ W3/(4*h))) else (120*pi/(sqrt(epslon_e3)*(W3/h+1.393+0.667*ln(W3/h+1.444)))) endif

VAR Optimizable_Variables_in_mil W1mil=15 opt{ 0.001 to 400 } W2mil=6 opt{ 0.001 to 400 } W3mil=2 opt{ 0.001 to 400 } L1mil=120 opt{ 0.0001 to 1200 } L2mil=120 opt{ 0.0001 to 1200 } L3mil=120 opt{ 0.0001 to 1200 } mil2mm=0.0254 W1=W1mil*mil2mm W2=W2mil*mil2mm W3=W3mil*mil2mm L1=L1mil*mil2mm L2=L2mil*mil2mm L3=L3mil*mil2mm

OPTIM

Optim Optim1 OptimTy pe=Minimax ErrorForm=MM MaxIters=1000 P=2 DesiredError=-1000 StatusLev el=4 UseAllGoals=y es

Var Eqn

VAR Electrical_length_to_phy sical_length f 0=10 E1=4*L1/1000*90*sqrt(epslon_e1)*f 0/c0*1e9 E2=4*L2/1000*90*sqrt(epslon_e2)*f 0/c0*1e9 E3=4*L3/1000*90*sqrt(epslon_e3)*f 0/c0*1e9

S-PARAMETERS

S_Param SP1 Start=5 GHz Stop=15 GHz Step=100 MHz

Fig. 5.3

Coarse model optimization. Coarse model optimization of the threesection impedance transformer. The coarse model is optimized using the minimax algorithm.

The designable parameters are the width and physical length of each microstrip line. Here, the reflection coefficient S11 is used to match the two model responses. The fine model is an Agilent Momentum model. The designable parameters for the fine model are the widths and physical lengths of the three microstrip lines. The thickness of the dielectric substrate is 0.635 mm (25 mil) and its relative permittivity is 9.7. The effect of nonideal dielectric is considered

Fig. 5.4

Fine model simulated in ADS Momentum.

86

PhD Thesis – Q.S. Cheng

McMaster – Electrical and Computer Engineering

0.14 0.12 0.10

⏐S11⏐

0.08 0.06 0.04 0.02 0.00 5 6 7 8 9 10 11 12 13 14 15

frequency (GHz)

Fig. 5.5 Coarse (—) and fine (○) model responses |S11| at the initial solution of the three-section transformer. by setting the loss tangent to 0.002. We use 11 frequency points in the sweep. The first step is to obtain an optimal coarse model design using the ADS Schematic (minimax) optimization utilities as in Fig. 5.3. In this schematic, we show the starting point (in mils) of the coarse model design parameter values. The coarse model parameter conversion components implement well-known empirical formulas (Pozar, 1998). The schematic will sweep S-parameters in the band. When we “simulate” the schematic, ADS provides an optimal coarse

model solution. We apply the obtained design parameters to the fine model (Fig. 5.4). To achieve this, we can create a Momentum layout from schematic layout 87

001 Weight=1 RangeVar[1]="freq" RangeMin[1]=5GHZ RangeMax[1]=15GHz V ar E qn VAR Width_to_Z0 epslon_e1=(epslon_r1+1)/2+(epslon_r1-1)/(2* sqrt(1+12*h1/W1)) Z1= if ((W1/h1)<=1) then ((60/sqrt(epslon_e1))* ln(8*h1/W1+ W1/(4*h1))) else (120*pi/(sqrt(epslon_e1)*(W1/h1+1.001 to 1000 } h3=0.444)))) endif T erm T erm3 Num=3 Z=50 Ohm 1 2 R ef S2P SNP1 ImpMaxFreq=15 GHz ImpDeltaFreq=1 GHz T erm T erm4 Num=4 Z=150 Ohm Fig. 5.635 opt{ 0.001 to 1000 } epslon_r1=9.70331 noopt{ 0.001 to 1000 } h2=0.5.7 opt{ 0.444)))) endif epslon_e2=(epslon_r2+1)/2+(epslon_r2-1)/(2* sqrt(1+12*h2/W2)) Z2= if ((W2/h2)<=1) then ((60/sqrt(epslon_e2))* ln(8*h2/W2+ W2/(4*h2))) else (120*pi/(sqrt(epslon_e2)*(W2/h2+1. Cheng McMaster – Electrical and Computer Engineering directly or copy and paste the parameters to the parameterized Momentum fine model. the fine model real and imaginary responses are used in the parameter extraction (calibration) step (Fig.6).91547 noopt{ 0.PhD Thesis – Q. The goal is to match the real and imaginary parts of S11 at the same optimization frequency algorithm sweep fine model coarse model design parameters preassigned parameters specifications S-PARAMETERS S_Param SP1 Start=5 GHz Stop=15 GHz Step=1000 MHz T erm T erm1 Num=1 Z=50 Ohm GOAL T LIN T L1 Z=Z1 Ohm E=E1 F=f0 GHz Var E qn T LIN T L2 Z=Z2 Ohm E=E2 F=f0 GHz VAR Optimizable_Variables_in_mil W1mil=14. The optimization algorithm uses the Quasi-Newton method.667*ln(W2/h2+1.001 to 1000 } Goal OptimGoal3 Expr="abs(real(S11)-real(S33))" SimInstanceName="SP1" Min= Max=0. In this step.393+0.616 noopt{ 0.0254 W1=W1mil*mil2mm W2=W2mil*mil2mm W3=W3mil*mil2mm L1=L1mil*mil2mm L2=L2mil*mil2mm L3=L3mil*mil2mm T LIN T L3 Z=Z3 Ohm E=E3 F=f0 GHz T erm T erm2 Num=2 Z=150 Ohm OPTIM Optim Optim2 OptimT ype=Quasi-Newton ErrorForm=L2 MaxIters=100000 P=2 DesiredError=-1000 StatusLevel=4 FinalAnalysis="None" NormalizeGoals=no SetBestValues=yes Seed= UseAllGoals=yes V ar E qn VAR Electrical_length_to_physical_length f0=10 E1=4*L1/1000*90*sqrt(epslon_e1)*f0/c0*1e9 E2=4*L2/1000*90*sqrt(epslon_e2)*f0/c0*1e9 E3=4*L3/1000*90*sqrt(epslon_e3)*f0/c0*1e9 V ar E qn VAR Preassigned_Parameters h1=0.001 to 400 } W2mil=5. This schematic extracts preassigned parameters x.444)))) endif epslon_e3=(epslon_r3+1)/2+(epslon_r3-1)/(2* sqrt(1+12*h3/W3)) Z3= if ((W3/h3)<=1) then ((60/sqrt(epslon_e3))* ln(8*h3/W3+ W3/(4*h3))) else (120*pi/(sqrt(epslon_e3)*(W3/h3+1.393+0.844 noopt{ 0.635 opt{ 0. 5. The goal is to match the coarse and fine model real and imaginary S11 from 5 to 15 GHz.7 opt{ 0.8217 noopt{ 0.001 to 400 } L1mil=117.918 noopt{ 0. 88 .S.667*ln(W1/h1+1.393+0.635 opt{ 0.6 The calibration of the coarse model of the three-section impedance transformer.001 to 400 } W3mil=1.001 to 1000 } epslon_r2=9.0001 to 1200 } L2mil=120. In the fine model preassigned parameters are (always) kept fixed at nominal values.001 to 1000 } epslon_r3=9. Imported by S2P (2-Port S-Parameter File).7 opt{ 0. We obtain the fine model response as Fig. The coarse and fine models are within the broken line.0001 to 1200 } L3mil=123.0001 to 1200 } mil2mm=0.667*ln(W3/h3+1. the preassigned parameters of coarse model are calibrated to match the fine and coarse model responses. 5.001 Weight=1 RangeVar[1]="freq" RangeMin[1]=1GHZ RangeMax[1]=15GHz GOAL Goal OptimGoal2 Expr="abs(imag(S11)-imag(S33))" SimInstanceName="SP1" Min= Max=0.

001 to 400 } L1mil=117.8217 opt{ 0.393+0. With fixed preassigned parameters the new coarse model (surrogate) is reoptimized w.0001 to 1200 } L3mil=123.7.4046 noopt{ 0. i. This schematic uses the minimax optimization algorithm.844 opt{ 0.001 to 1000 } h3=0.S. optimization algorithm frequency sweep coarse model design parameters preassigned parameters specifications OPTIM Optim Optim1 OptimType=Minimax ErrorForm=MM MaxIters=1000 P=2 DesiredError=-1000 StatusLevel=4 FinalAnalysis="None" NormalizeGoals=no SetBestValues=yes Seed= UseAllGoals=yes T erm T erm1 Num=1 Z=50 Ohm GOAL TLIN TL4 Z=Z1 Ohm E=E1 F=f0 GHz Var Eqn TLIN TL5 Z=Z2 Ohm E=E2 F=f0 GHz VAR Optimizable_Variables_in_mil W1mil=14. We apply the prediction to the fine model again.PhD Thesis – Q.001 to 1000 } Goal OptimGoal1 Expr="db(mag(S11))" SimInstanceName="SP1" Min= Max=-20 Weight=1 RangeVar[1]="freq" RangeMin[1]=5GHz RangeMax[1]=15GHz Var Eqn VAR Width_to_Z0 epslon_e1=(epslon_r1+1)/2+(epslon_r1-1)/(2* sqrt(1+12*h1/W1)) Z1= if ((W1/h1)<=1) then ((60/sqrt(epslon_e1))* ln(8*h1/W1+ W1/(4*h1))) else (120*pi/(sqrt(epslon_e1)*(W1/h1+1.t. This schematic is similar to Fig.0001 to 1200 } mil2mm=0.0001 to 1200 } L2mil=120. Cheng McMaster – Electrical and Computer Engineering time. 5.001 to 1000 } epslon_r2=10.73242 noopt{ 0.675311 noopt{ 0.001 to 1000 } epslon_r1=10.444)))) endif epslon_e2=(epslon_r2+1)/2+(epslon_r2-1)/(2* sqrt(1+12*h2/W2)) Z2= if ((W2/h2)<=1) then ((60/sqrt(epslon_e2))* ln(8*h2/W2+ W2/(4*h2))) else (120*pi/(sqrt(epslon_e2)*(W2/h2+1.667*ln(W1/h1+1. A quasi-Newton algorithm is used to perform this procedure.8.444)))) endif Fig. The goal is to minimize |S11| of the calibrated coarse model. 89 .91547 opt{ 0.667*ln(W2/h2+1.741192 noopt{ 0.918 opt{ 0.001 to 1000 } h2=0. a set of preassigned parameter values providing the best match are found. 5.3.7 Reoptimization of the coarse model of the three-section impedance transformer using the fixed preassigned parameter values obtained from the previous calibration (parameter extraction).6273 noopt{ 0. the original specification.667*ln(W3/h3+1.0254 W1=W1mil*mil2mm W2=W2mil*mil2mm W3=W3mil*mil2mm L1=L1mil*mil2mm L2=L2mil*mil2mm L3=L3mil*mil2mm T LIN T L6 Z=Z3 Ohm E=E3 F=f0 GHz T erm T erm2 Num=2 Z=150 Ohm Var Eqn VAR Electrical_length_to_physical_length f0=10 E1=4*L1/1000*90*sqrt(epslon_e1)*f0/c0*1e9 E2=4*L2/1000*90*sqrt(epslon_e2)*f0/c0*1e9 E3=4*L3/1000*90*sqrt(epslon_e3)*f0/c0*1e9 Var Eqn S-PARAMETERS S_Param SP1 Start=5 GHz Stop=15 GHz Step=100 MHz VAR Preassigned_Parameters h1=0. Supposedly we obtain a good match between the fine and coarse model. but with a different set of preassigned parameter values.001 to 400 } W3mil=1. we proceed to the next step.. It takes 2 fine model simulations.393+0.70331 opt{ 0. The fine model simulation gives a satisfying result as in Fig.616 opt{ 0. 5.e.r. The ADS minimax algorithm is used again in this case.001 to 1000 } epslon_r3=9. This is done as in Fig. 5.001 to 400 } W2mil=5.393+0.98638 noopt{ 0.444)))) endif epslon_e3=(epslon_r3+1)/2+(epslon_r3-1)/(2* sqrt(1+12*h3/W3)) Z3= if ((W3/h3)<=1) then ((60/sqrt(epslon_e3))* ln(8*h3/W3+ W3/(4*h3))) else (120*pi/(sqrt(epslon_e3)*(W3/h3+1.

8 Optimal coarse (—) and fine model (○) responses |S11| for the threesection transformer using Momentum after 1 iteration (2 fine model simulations).733 90 .S.04 0.34991 1.8217 5.1 OPTIMIZABLE PARAMETER VALUES OF THE THREE-SECTION IMPEDANCE TRANSFORMER Parameter W1 W2 W3 L1 L2 L3 Initial solution 14.354 6. The process satisfies the stopping criteria.10 0.844 all values are in mils Solution reached via ISM 15.918 123.749 117.08 0.00 5 6 7 8 frequency (GHz) 9 10 11 12 13 14 15 Fig.70331 117.70155 113.141 121. 5.PhD Thesis – Q.02 0.616 120. Cheng McMaster – Electrical and Computer Engineering 0.91547 1. TABLE 5.06 ⏐S11⏐ 0.

05 Weig ht=1 Rang eVar[1]="freq " Rang eMin[1]=4.334 noopt{ unconst } OPT IM Optim Optim1 OptimType=Minimax Term Term1 Num=1 Z=50 Ohm MLIN TL1 Subst="MSub" W=W mil L=50 mil Mod=Kirschning MCLIN CLin1 Subst="MSub1" W=W mil S=S1 mil L=L1 mil M CLIN CLin2 Subst="MSub2" W=W mil S=S2 mil L=L2 mil MCLIN CLin3 Subst="MSub3" W=W mil S=S3 mil L=L3 mil MCLIN CLin4 Subst="MSub2" W=W mil S=S2 mil L=L2 mil MCLIN CLin5 Subst="MSub1" W=W mil S=S1 mil L=L1 mil MLIN TL2 Subst="MSub" W=W mil L=50.80 noopt{ unconst } H2=19. The coarse model is calibrated to match the fine model response.97 opt{ unconst } W=7 VAR VAR2 H1=19.9.160GHz Term Term5 Num=5 Z=50 Ohm MLIN TL4 Subst="MSub" W=W mil L=50 mil Mod=Kirschning MCLIN CLin9 Subst="MSub12" MCLIN W=W mil CLin10 S=S12 mil Subst="MSub11" L=L12 mil W=W mil S=S11 mil L=L11 mil MCLIN CLin7 Subst="MSub12" MCLIN W=W mil CLin6 S=S12 mil Subst="MSub13" L=L12 mil W=W mil S=S13 mil L=L13 mil MLIN TL3 Subst="MSub" MCLIN W=W mil CLin8 L=50. 91 .2. Cheng 5.56 noopt{ +/.20 % } S12=93.S.5645 opt{ unconst } S3=104.3 opt{ unconst } L3=186.20 % } S13=104.05 Weig ht=1 Rang eVar[1]="freq " Rang eMin[1]=3. 5.97 noopt{ +/.008GHz Rang eMax[1]=4.855 opt{ unconst } L1=187.86 noopt{ +/.20 % } L13=186. A coarse model is optimized as in the previous sub-section. A fine model is simulated using the coarse model design parameters.0 mil Subst="MSub11" Mod=Kirschning W=W mil S=S11 mil L=L11 mil Term Term6 Num=6 Z=50 Ohm GOAL Goal OptimGoal2 Expr="mag (S21-delta_R)" SimInstanceName="SP1" Min=0.058GHz Me as Eqn Var Eqn 1 2 Re f Term Term3 Num=3 Z=50 Ohm Term Term4 S2P Num=4 SNP1 Z=50 Ohm File="C:\shasha\Research\Documentation\Surrog VAR VAR5 S11=22.099GHz Rang eMax[1]=4.404 noopt{ unconst } Er2=24.967GHz Va r Eqn VAR VAR3 S1=22.05 noopt{ unconst } H3=19.PhD Thesis – Q.7884 opt{ unconst } S2=93.404 noopt{ unconst } Er12=24. 5.9 Implementation of response residual space mapping in Agilent ADS.s2p" previous coarse model parameters previous coarse weighted model preassigned misalignment parameters Fig.79 noopt{ +/.00 noopt{ unconst } Er1=24.20 % } L11=187.254 noopt{ unconst } Er13=24.00 noopt{ unconst } Er11=24.4 McMaster – Electrical and Computer Engineering Response Residual SM Implementation of HTS Filter The Response Residual SM example of the HTS filter is described in Chapter 4.30 noopt{ +/. We discuss the implementation technique in this section. previous coarse model is the fine model design preassigned parameters parameters Var Eq n previous design coarse coarse goal optimizer model model GOAL Goal OptimGoal1 Expr="mag (S21-delta_R)" SimInstanceName="SP1" Min= Max=0.334 noopt{ unconst } Meas Eqn Meas 1 delta_R=(S 65-S43)*.9GHz Rang eMax[1]=3. In Fig.254 noopt{ unconst } Er3=24.10 noopt{ +/.20 % } Var Eqn VAR VAR4 H11=19.075 opt{ unconst } L2=191.0 mil Mod=Kirschning Term Term2 Num=2 Z=50 Ohm GOAL Goal OptimGoal3 Expr="mag (S21-delta_R)" SimInstanceName="SP1" Min= Max=0.05 noopt{ unconst } H13=19.5 ate_SM \HTS_mom_MSL2.80 noopt{ unconst } H12=19.95 Max= Weig ht=1 Rang eVar[1]="freq " Rang eMin[1]=4.20 % } L12=191.

The residual is calculated using weighted misalignment between the fine model and previous coarse model. 1997) architecture of OSA90. A set of design or preassigned parameters and frequencies have to be sent to the different simulators and corresponding responses retrieved. The fine model response is imported from the fine model simulation. fine model and 5.PhD Thesis – Q.g..3. We optimize the surrogate to predict next fine model design. The new surrogate is generated using residual and the calibrated coarse model. 5.1 optimization tools makes SM difficult to automate within existing simulators. Empipe (1997) and Momentum_Driver (Ismail. Software packages such as OSA90 or MATLAB can provide coarse model analyses as well as optimization tools. 2001) have been designed to drive and communicate with Sonnet’s em (2001) and Agilent Momentum (2000) as fine models. Aggressive SM optimization of 3D structures (Bandler. 1996) has been automated using a two-level Datapipe (OSA90/hope. Cheng McMaster – Electrical and Computer Engineering calibrated coarse model in which the design parameters and preassigned parameters are fixed.S. Biernacki and Chen. the next 92 .3 REVIEW OF OTHER SM IMPLEMENTATIONS AND APPLICATIONS RF and Microwave Implementation The required interaction between coarse model. The Datapipe technique allows the algorithm to carry out nested optimization loops in two separate processes while maintaining a functional link between their results (e.

The fine model exploits HP HFSS Ver. 2000). 1963. 1999). There are various approaches to calculate the equivalent inductive susceptance corresponding to an H-plane septum.S. as well as with user-defined simulators. SMX has been linked with Empipe and Momentum_Driver to drive Sonnet’s em and Agilent Momentum. Six-Section H-Plane Waveguide Filter Bakr (2000) consider to apply SM technique in designing a six-section Hplane waveguide filter (Young and Schiffman. Bandler. A good result is obtained using HASM (Bakr. Cheng. Chattaraj. Matthaei. which is automated for the first time. Ismail and RayasSánchez. 2001) (see Appendix A) optimization system implements the surrogate model-based SM algorithm (Bakr.2 through HP Empipe3D. Cheng McMaster – Electrical and Computer Engineering increment to xf is a function of the result of parameter extraction). 1964). It is simulated using OSA90/hope. SMX system The object-oriented SMX (Bakr. Young and Jones. We utilize a simplified version of a formula due to Marcuvitz (1951) in evaluating the inductances. 5.PhD Thesis – Q. Neural Networks and Space Mapping Devabhaktuni. The coarse model consists of lumped inductances and dispersive transmission line sections. The proposed Knowledge-based Automatic Model Generation (KAMG) to 93 . Bandler. Madsen. Bandler. Rayas-Sánchez and Søndergaard. Automatic Model Generation. Yagoub and Zhang (2003) propose a technique for generating microwave neural models of high accuracy using less accurate data. Georgieva and Madsen.

Cheng McMaster – Electrical and Computer Engineering make extensive use of the coarse generator and minimal use of the fine generator.S. Estes and Zhao (2002) apply typical SM techniques in optimization of high-density multilayer RF and microwave circuits. Low Temperature Co-firing Ceramic (LTCC) capacitor. Vahldieck and Amari (2002) present two examples: a direct coupled 4-resonator E-plane filter and a dual-mode filter. The EM solver is based on Mode Matching. SM Models for RF Components Snel (2001) proposed the SM technique in RF filter design for power amplifier circuits. Space Mapping Implementation of Harscher et al. space-mapped RF filter components.PhD Thesis – Q. 94 . These components can be incorporated in the design of ceramic multilayer filters for different center frequencies in wireless communication systems. This technique combines EM simulations with a minimum prototype filter network (surrogate). They apply the SM approach to a three-pole bandstop filter. They illustrated their technique through an HTS filter. CAD Technique for Microstrip Filter Design Ye and Mansour (1997) apply SM steps to reduce the simulation overhead required in microstrip filter design. Ofli. Multilayer Microwave Circuits (LTCC) Pavio. LTCC three-section bandstop filter and an LTCC broadband tapered transformer. The library is implemented in the Agilent ADS design framework. Harscher. He suggests building a library of fast.

Wu. Cheng McMaster – Electrical and Computer Engineering Cellular Power Amplifier Output Matching Circuit Lobeek (2002) demonstrates the design of a DCS/PCS output match of a cellular power amplifier using SM. Lobeek also applies the SM model to monitor the statistical behavior of the design w.t. combline-type filters and multiple-coupled cavity filters. along with the required CAD formulas (knowledge) for typical embedded multilayer passives. a silicon passive integration die. Their technique is implemented in the WATML-MICAD software. Damavandi and Borji (2002) consider a 3-level design methodology for complex RF/microwave structure using an SM concept. Applications include a parallel-coupled line filter.S.PhD Thesis – Q. Zhao. 95 . parameter values. Monte Carlo analysis with EM accuracy based on the space-mapped model shows good agreement with manufactured data. Ehlert and Fang (2003) present an explicit knowledge embedded space mapping optimization technique. Wang and Cheng (2004) propose a concept called the dynamic coarse model and apply to the optimization design of LTCC multilayer RF circuits with the aggressive SM technique. Chaudhuri. LTCC RF Passive Circuits Design Wu. The design uses a 6-layer LTCC substrate. Zhang. discrete surface mount designs as well as bond wires.r. A Multilevel Design Optimization Strategy Safavi-Naeini. They apply the proposed scheme on the design of low temperature cofired ceramic (LTCC) RF passive circuits.

Panariello.PhD Thesis – Q. Cheng Waveguide Filter Design McMaster – Electrical and Computer Engineering Steyn. The proposed methodology uses the SM concept to exploit the benefits of both domains. Smith. With the aggressive SM technique only 4 coupling coefficients were sufficient to obtain the same error. The proposed approach reduces overall tuning time compared with traditional techniques.S. Combline Filter Design Swanson and Wenzel (2001) introduce a design approach based on the SM concept and commercial FEM solvers. From a good starting point. one iteration is needed to implement the design process. Coupled Resonator Filter Pelz (2002) applies SM in realization of narrowband coupled resonator filter structures. CAD of Integrated Passive Elements on PCBs Draxler (2002) introduces a methodology for CAD of integrated passive elements on Printed Circuit Board (PCB) incorporating Surface Mount Technology (SMT). 96 . Lehmensiek and Meyer (2001)consider the design of irises in multi-mode coupled cavity filters. A 5-pole coupled resonator filter design is achieved with fast convergence. Dielectric Resonator Filter and Multiplexer Design Ismail. Wang and Yu (2004) apply SM optimization with FEM (fine model) to design a 5-pole dielectric resonator loaded filter and a 10-channel output multiplexer.

The complete aggressive SM design procedure required 3 iterations to converge (10 times faster than directly using a precise simulation tool). Both examples converge after only 5 iterations (Choi. Boria and Esteban (2000) apply the aggressive SM procedure to build an automated design of inductively coupled rectangular waveguide filters. Cheng Nonlinear Device Modeling McMaster – Electrical and Computer Engineering Zhang. Kim. Comb Filter Design Gentili. Park and Hahn. 2001).PhD Thesis – Q. Xu.2 Electrical Engineering Implementation Magnetic Systems Choi.3. Ding and Zhang (2003) introduce a new Neuro-SM approach for nonlinear device modeling and large signal circuit simulation. Park and Hahn (2001) utilize SM to design magnetic systems. Gomez.S. 5. The Neuro-SM approach is demonstrated by modeling the SiGe HBT and GaAs FET devices. They validate the approach by two numerical examples: a magnetic device with leakage flux and a machine with highly saturated part. Macchiarella and Politi (2003) implement an accurate design of microwave comb filters using SM technique. Bergner. Kim. Yagoub. Inductively Coupled Filters Soto. 97 . They use internal circuit model parameters as preassigned parameters to apply the implicit SM.

Feng and Huang (2003) employ the generalized space mapping (GSM) technique for modeling and simulation of photonic devices. We introduced an easy to use ADS schematic framework for SM. They illustrate their approach with a simple structural problem of minimizing the weight of a beam subject to constraints such as stress.S. 98 We .3. 5.4 CONCLUSIONS In this chapter. Cheng Photonic Devices McMaster – Electrical and Computer Engineering Feng. Their aim is to establish a mapping between the constraints of a fine model and a coarse model. 5. Vehicle Crashworthiness Design Redhe and Nilsson (2002) apply the SM technique and surrogate models together with response surfaces to structural optimization of crashworthiness problems. Zhou and Huang (2003) apply the SM technique for design optimization of antireflection (AR) coatings for photonic devices such as the semiconductor optical amplifiers (SOA). Using the SM technique CPU time is reduced relative to the traditional response surface methodology. Bhaskar and Keane (2001) apply the SM technique in civil engineering structural design.PhD Thesis – Q.3 Other Engineering Implementation Structural Design Leary. we discussed implementations and framework techniques.

It also places our work into context.S.PhD Thesis – Q. Cheng McMaster – Electrical and Computer Engineering demonstrate its implementation. This verifies that SM technology can be applied to substantially different design and modeling areas in science and engineering. 99 . We reviewed various successful implementations by other groups and researchers.

PhD Thesis – Q. Cheng McMaster – Electrical and Computer Engineering 100 .S.

Approaches reviewed include the original SM algorithm. hybrid aggressive space mapping. yet appears to be amenable to rigorous mathematical treatment. Different approaches to enhance the uniqueness of parameter 101 . We also discuss various implementations. Parameter extraction is an essential subproblem of any SM optimization algorithm. neural space mapping and implicit space mapping. The general steps for building surrogates and SM are indicated. The aim and advantages of SM are described. In Chapter 2 we review the SM technique and the SM-oriented surrogate (modeling) concept and their applications in engineering design optimization. the Broyden-based aggressive space mapping.Chapter 6 CONCLUSIONS This thesis presents innovative methods for electromagnetics-based computer-aided modeling and design of microwave circuits exploiting implicit and output space mapping (SM) and surrogate modeling technology. trust region aggressive space mapping. It is used to align the surrogate with the fine model at each iteration. The simple CAD methodology follows the traditional experience and intuition of engineers. These technologies are demonstrated by the so-called “cheese problem” and illustrated by designing several practical microstrip structures.

We believe this is the easiest to implement “Space Mapping” technique offered to date. The HTS filter design is entirely carried out by Agilent ADS and Momentum (3 frequency sweeps) or Sonnet em. In Chapter 4 we propose significant improvements to implicit space mapping for EM-based modeling and design.S. An expanded space mapping design framework is reviewed. We vary preassigned parameters in a coarse model to align it with the EM (fine) model. Based on a general concept. The required HTS filter models and RRSM are easily implemented by Agilent ADS and Momentum with no matrices to keep track of. A general SM concept is presented which enables us to verify that our implementation is correct and that no redundant steps are used. full-wave EM simulations in Chapter 3. This technique expands original space mapping by allowing certain preassigned parameters (which are not used in optimization) to change in some components of the coarse model. we present an effective technique for microwave circuit modeling and design w.r. Based on an explanation of residual misalignment. The space mapping concept and frameworks are discussed. (only 2 frequency sweeps) with no matrices to keep track of.PhD Thesis – Q. Cheng McMaster – Electrical and Computer Engineering extraction are reviewed. SM techniques are categorized based on their properties.t. including the recent gradient parameter extraction process. An accurate HTS microstrip filter design solution emerges after only four EM simulations with 102 . our new approach further fine-tunes the surrogate by exploiting an “output space” mapping (OSM) or “response residual space” mapping (RRSM).

They prove that SM technology can be applied to different design and modeling areas. A new “multiple cheese-cutting” design problem illustrates the concept. A good H-plane filter design emerges after only five EM simulations using the implicit and RRSM with sparse frequency sweeps and no Jacobian calculations. which is automated for the first time. It implements the surrogate model-based SM (SMSM) algorithm.S. Appendix A describes the object-oriented SMX optimization system. We can explore further on this aspect. From the experience and knowledge gained in the course of this work the author is convinced that the following research topics should be interesting to investigate: (1) Implicit SM optimization is emphasized in this thesis. Our approach is implemented entirely in the ADS framework. We introduce an easy ADS schematic design framework for SM and demonstrate in a three-section transformer. We also review other successful implementations both from our group and from other researchers or engineers.PhD Thesis – Q. Chapter 5 discusses the implementation framework techniques. ISM is also a modeling technique. McMaster – Electrical and Computer Engineering We present the RRSM modeling technique that matches the output residual SM surrogate with the fine model. Cheng sparse frequency sweeps. We utilize the simple physics and geometry of ideal or contrived cheese blocks (slices) to study and illustrate various space mapping techniques. Appendix B explains our so-called cheese-cutting problems. For 103 .

PhD Thesis – Q. we can use a set of preassigned parameters to calibrate a coarse (large) grid EM simulator to match a fine (small) grid EM simulator. It is possible to apply the design framework to more SM algorithms. (2) The ISM discussed in this thesis uses the same preassigned parameters for all the frequency (sample) points. (4) Currently. the ADS schematic design framework for SM is applicable to several kinds of space mapping (aggressive or implicit/output SM) algorithms. This may not be sufficient for frequency-sensitive devices. We can introduce a frequency-based ISM. (3) In this thesis. The new ISM could make a better surrogate. We could also support other 104 . We could extend OSM/RRSM to other types of space mapping. For example. output SM or RRSM is only used to calibrate the implicitly mapped coarse model. we can use the output calibrated coarse model in aggressive SM to speed up or increase the probability of convergence. The ISM model uses different preassigned parameters for different frequency points or different frequency bands. This ISM interpolated coarse grid EM simulator could act as a fast but accurate EM simulator. Cheng McMaster – Electrical and Computer Engineering example. Taking advantage of continuous preassigned parameters we could interpolate the responses of structures that are not on the discrete grids of the coarse EM simulator meshes.S.

e..g. Ansoft HFSS under this design framework. It needs human intervention at each iteration because of the current limitations of ADS. In the near future. (5) The ADS schematic design framework for SM described in this thesis is not automated. Cheng McMaster – Electrical and Computer Engineering EM simulators. Agilent is constantly updating its ADS.PhD Thesis – Q. More examples and illustrations could be done in this framework.S. 105 . However. the semi-automated interactive implementation could be automated as soon as a better sequential optimization method is available from Agilent.

S. Cheng McMaster – Electrical and Computer Engineering 106 .PhD Thesis – Q.

The SMX architecture integrates these modules. Bandler. 2000). Object-Oriented Design (OOD) abstracts the basic behavior of the models and optimizers modules. It can exploit a frequencysensitive mapping. Frank. Another advantage of OOD is reusability and extendibility. 1999) of the fine model is iteratively used to solve the original design problem. Torczon and Trosset. SMX has been linked with Empipe (1997) and Momentum_Driver (Ismail. Dennis. Madsen. Ismail and RayasSánchez. Cheng. a surrogate (Booker. Rayas-Sánchez and Søndergaard. 2001) to drive Sonnet’s em and Agilent Momentum. The SMX engine implements the SMSM algorithm. Serafini. as well as with user-defined simulators. 2001) optimization system implements the surrogate model-based SM (SMSM) algorithm (Bakr. A universal parameter setting and results retrieval method is utilized for all simulators and optimizers. 107 SMX is . This surrogate model is a convex combination of a mapped coarse model and a linearized fine model. In the SMSM approach.Appendix A SMX OBJECT-ORIENTED OPTIMIZATION SYSTEM The object-oriented SMX (Bakr. which is automated for the first time. Bandler.

1990). The SMX system is described in the Unified Modeling Language (UML). Here the module or object is an instance of a certain class. Many commercial simulators and optimizers could be derived from these classes.PhD Thesis – Q.1. SMX takes advantage of the multi-thread capability of the Microsoft Windows operating system (Petzold. The A. a complicated system can be decomposed into relatively independent small objects without losing readability and intuitiveness. It includes data structures describing the properties of the object. Cheng McMaster – Electrical and Computer Engineering intended to support a number of EM and circuit simulators. Synchronization and communication between threads are properly arranged. 108 . Here. SMX is capable of optimizing while showing intermediate results and interacting with the user. as shown in Fig. the SMX system is decomposed into 6 modules. Each module can carry out certain functionality.1 SMX ARCHITECTURE SMX automates the algorithm and drives EM/circuit simulators. The user-friendly interface responds smoothly while the SMX core is running in a different thread in the background. the basic functionality of simulators and optimizers is abstracted in the two basic classes Simulator and Optimizer. Using the encapsulation concept. structure of each object can be represented in UML. Using this language.S. Object- oriented design is employed to decompose the algorithm into independent modules (objects). A.

1 The modules of SMX. c ( i ) and γ ( i ) in graphical and numerical format to the user. designable parameters and critical mapping parameters B ( i ) . etc. s ( i ) . The interface initiates the starting point xc(0) .S. the constraints and the control signals for the coarse and fine models. to the user interface. Madsen. A.. 2000). Cheng McMaster – Electrical and Computer Engineering The user interface and SMX engine run in two separate threads concurrently. σ ( i ) . t ( i ) . Bandler. The user chooses the simulators and setup problem specifications through a user interface. SMX user interface file system SMX engine model optimizer simulator Fig. Rayas-Sánchez and Søndergaard. the current status. The SMX engine performs optimization and returns the progress. 109 . responses R f and Rc .PhD Thesis – Q. The engine can optimize a model using either classical optimization methods such as gradient-based minimax or the SMSM algorithm (Bakr. The user interface feeds back the optimization status such as objective function.

Optimizer. Optimizer is shown in 110 . as well as for minimax design optimization. OptimizeCoarseModel This function uses m_pCoarseModelOptimizer. It calls gets the error values and their GetErrors to evaluate the error used for SetConstrantMatrix parameter extraction. additional parameter setup and objective function. GetErrors. three base classes. With override of optimization routines. The Optimizer base class is an abstract class. Simulator and Model. FDF Optimizer. The Huber optimizer is used for parameter extraction in To carry out space mapping.PhD Thesis – Q. a pointer to a * minimax optimizer object to obtain xc . the classes can be derived from are GetNorm. It provides the interface for standard optimization routines. are abstracted and built. Huber. After setup. The inheritance relation of Fig. sets constraints in matrix form.S. the from the coarse model is optimized by the member function (0) starting point xc . Then the member function OptimizeSurrogate is called to optimize the surrogate model Rs(i ) ( x f ) starting from the optimized coarse model. Cheng McMaster – Electrical and Computer Engineering A. Different optimizers use different GetNorm norms as their objective functions. OptimizeSurrogate. Minimax or other optimization Optimizer Some of the important functions in and SetConstraintMatrix. The purely virtual function FDF is overridden to obtain the appropriate norm.2. derivatives by perturbation.2 ALGORITHM CORE: SMX ENGINE The SMX engine is abstracted as the SMX_Engine class. A.

Huber GetNorm SetHuberThreshold . A. The SMX_Engine utilizes SurrogateModel which is derived from a base Model class..3 HTS FILTER EXAMPLE We consider two cases of the HTS filter problem (Bandler. Since Optimizer needs normalized parameters.... OSA90/hopeTM (1997) and Agilent MomentumTM (2000) are commercial simulators currently derived from the Simulator class. The Model class sends data to the simulator and retrieves responses from it. The Model class functions as a wrapper of a simulator. Interface functions are overridden for each new derived simulator class.. Cheng McMaster – Electrical and Computer Engineering Optimizer GetNorm FDF GetError SetConstraintMatrix .PhD Thesis – Q.2 Illustration of the derivation of basic optimizer class.S.. The responses are Simulator obtained independent of the simulator. Similar to simulators. is one of the Model members. scaling factors are added. MinMax GetNorm . Optimizer. 111 . Fig. Obviously. A. Additional parameters may also be added. the Simulator class is a parent class for different Commercial simulators and user defined simulators are derived classes. Biernacki.

112 .3 Case 1: The initial response of the HTS filter for the “fine” model (OSA90). A. L2 and L3 of the coupled Fig. the “coarse” and “fine” models are both empirical models of OSA90/hope. The “coarse” model uses the ideal open circuit for the open stubs while the “fine” model uses empirical models. A. Grobelny. In Case 1. Moskowitz and Talisa.PhD Thesis – Q. 1995). Getsinger. Fig. The designable parameters are the lengths L1.S.4 Case 1: The optimal “fine” model (OSA90) response of the HTS filter. Cheng McMaster – Electrical and Computer Engineering Chen.

Cheng McMaster – Electrical and Computer Engineering lines and their separation S1. Fig. A. S2 and S3.1mil. A. TABLE A. The “fine” model response in the first iteration is shown in Fig. A. 113 .3.6 Case 2: The final Momentum optimized fine model response of the HTS filter with a fine interpolation step of 0.S.PhD Thesis – Q. The SMX system obtained the optimal solution in 4 iterations. A.5 Case 2: The SMX optimized fine model (Agilent Momentum) response of the HTS filter.4.1 shows the initial Fig. The “fine” model response at the final iteration is shown in Fig.

04 98. We use fine interpolation resolution (0. Then the minimax optimizer in Momentum is used to refine this solution. SMX obtains the solution in 4 iterations.5 shows the fine model responses at the fourth SMX iteration. In Case 2 we use Momentum as the fine model. while the coarse model is the same as in Case 1.55 191. Fig.44 114.71 185.03 99.82 21.PhD Thesis – Q.90 all values are in mils 185. A.50 198.S.08 100. See Fig. It takes approximately 32 hours on an IBM Aptiva computer with AMD-K7 650MHz CPU and 384MB RAM.1mil for all parameters).84 187. Cheng McMaster – Electrical and Computer Engineering and final parameters obtained by SMX optimization.6. TABLE A. A.91 20.21 114 .1 THE INITIAL AND FINAL DESIGNS OF THE FINE MODEL (OSA90) FOR THE HTS FILTER Parameter L1 L2 L3 S1 S2 S3 x(f1) x(f4) 187.

The response is evaluated by * Rc = xc ⋅ 3 ⋅1 = 30 In practice. Our idealized “coarse” model is a uniform cuboidal block (top block of Fig. B. Our “response” is weight. For example. The coarse model may not be capable of matching the fine model in the parameter extraction. The goal is a desired weight of 30 units.Appendix B CHEESE-CUTTING PROBLEMS Our so-called cheese-cutting problems utilize the simple physics and geometry of ideal or contrived cheese blocks (slices) to study and illustrate various space mapping techniques. B. demonstrate how the aggressive SM approach may not converge in certain cases. A width of 3 units is used.1 CHEESE-CUTTING PROBLEM The cheese blocks. the coarse model 115 . A height and a density of one are assumed. The designable parameter is length. However.1). B. coarse models may have limitations. in this one-dimensional linear cheese-cutting example. the range of the coarse model may be smaller than the range of the fine model. depicted in Fig.1. The optimal length xc* = 10 is easily calculated.

116 .1.1 The aggressive SM may not converge to the optimal fine model solution in this case. This is a parameter extraction (PE) step in which we obtain a solution xc(1) = 10 (third block of Fig.g.PhD Thesis – Q.3xc } ⋅1 Let the actual block (“fine” model) be similar but imperfect (second block of Fig. we notice that PE is not able to generate a * xc 3 1 optimal coarse model * xc = 10. we will confine the coarse model range. exit Fig. [30. The algorithm exits. Rc = 30 x (1) f initial guess (1) xc * x (1) = xc . To mimic limitations in higher dimensions and for non-linear coarse models. R f =30 − 6 = 24 f PE * xc(1) = 10. B.1). cutting the cheese so that xf(1) = xc* = 10. B. noting the coarse model constraint)..S. In this example. e. +∞) .e. A six-unit part is missing. f = xc(1) − xc = 0. by rewriting the coarse model as follows Rc = max{30. We realign (calibrate the design parameter xc) our coarse model to match the outcome of the cut (on the fine model). Cheng McMaster – Electrical and Computer Engineering range is not limited. This value is equal to xc*. i. We take the optimal coarse model length as the initial guess for the fine model solution. B. The fine model response is R f = x (1) ⋅ 3 ⋅1 − 6 = 24 f This does not satisfy our goal..

Rc = 30 x (1) f initial guess * x (1) = xc . re-optimize the coarse model w. A response residual space mapping (RRSM) takes into account the mismatch in the parameter extraction and is able to overcome the problem. 117 . the new specification.t. We calculate the residual ∆R after the PE (see 3rd block of Fig. or in other words.r.S. This procedure can be thought of as to correct (relax or tighten) the coarse model specification by a residual. B. xc = 12 * x (2) = xc = 12 f Fig. See Fig. the original specification to obtain a new (or updated) xc*. R f =30 − 6 = 24 f xc(1) PE (1) xc = 10. The optimal fine model solution is obtained in 1 iteration (2 fine model evaluations). B.2 The RRSM solves the problem in one iteration (two fine model evaluations). an essential assumption in the ASM.PhD Thesis – Q. We obtain the new fine model response. Cheng McMaster – Electrical and Computer Engineering good match between coarse and fine model.r.2) and generate a surrogate Rs using the original coarse model plus the residual ∆R. ∆R = R f − Rc = −6 x (2) f prediction * Rs =Rc + ∆ R = 3 xc − 6.t. We re-optimize the surrogate w. Let xf(2) = xc* = 12. * xc 3 1 optimal coarse model * xc = 10.2. B.

We cut on the left-hand side. The designable parameter is the length of the top slice [see Fig. namely.S.PhD Thesis – Q. The goal is to cut through the slices to obtain a weight for each one as close to a desired weight s as possible. A width and a density of one are assumed. Our “responses” are the weights of individual cheese slices. We use minimax optimization. the preassigned parameter shown in Fig.3(a). B. Note that we measure the length from the right-hand end.3(a)]. The optimal length xc* can be calculated to minimize the differences between the weights of the slices and a desired weight s. B.3 The multiple cheese-cutting problem: (a) the coarse model (b) and fine model. This example is intended to examine the mismatch problem in the parameter extraction. The lengths of the two lower slices are c units shorter than the top one. The coarse model involves 3 slices of the same height x. The responses of the coarse model are given by xc x c x c x (a) xf x1 f1 x2 f2 x3 (b) Fig. B. Cheng McMaster – Electrical and Computer Engineering B. 118 .2 MULTIPLE CHEESE-CUTTING PROBLEM We develop a physical example suitable for illustrating space mapping optimization.

f (1) = P ( x (1) ) − xc . Step 2 Set x (f j +1) = x (f j ) + h( j ) . we review the following ASM algorithm (Bandler. j = 1 . than the top slice [Fig. respectively. B. i. Hemmers and Madsen. The heights of the slices are x1. 1995) * * Step 0 Initialize x (1) = xc . If f ( j +1) ≤ η .2) B. * Step 4 Compute f ( j +1) = P ( x (f j +1) ) − xc . x2 and x3. Cheng McMaster – Electrical and Computer Engineering Rc1 = x ⋅ xc ⋅1 Rc 2 = x ⋅ ( xc − c) ⋅1 Rc 3 = x ⋅ ( xc − c) ⋅1 (B.1 Aggressive Space Mapping Before we apply ASM to the multiple cheese-cutting problem. Biernacki. 119 . f f Stop if f (1) ≤ η . Step 1 Solve B ( j ) h( j ) = − f ( j ) for h( j ) .e.PhD Thesis – Q. respectively.1) The fine model is similar but the lower two slices are f1 and f2 units shorter.3(b)]. B (1) = 1 . The corresponding responses of the fine model are R f 1 = x1 ⋅ x f ⋅1 R f 2 = x2 ⋅ ( x f − f1 ) ⋅1 R f 3 = x3 ⋅ ( x f − f 2 ) ⋅1 (B.S. ( Step 3 Perform parameter extraction optimization to get xc j +1) . evaluate P ( x (f j +1) ) ..2. stop. Chen.

In this example we set c = 0 and f1 = f2 = 4. Cheng McMaster – Electrical and Computer Engineering Step 5 Update B ( j ) to B ( j+1) using the Broyden formula. see Step 4 of the ASM algorithm). From (B. Step 6 Set j = j +1. B.4(6)].4) (B. coarse mode parameter c = c0 and x1 = x2 = x3 = x = 1. go to Step 1.e.3) is from applying the minimax optimality condition to the coarse model. x = x1 = x2 = x3 = 1. B. The heights of the cheese blocks are fixed at unity. i. We demonstrate the Aggressive Space Mapping algorithm using the coarse and fine model in Fig. We use the optimality conditions of ASM on this multiple cheese-cutting problem to show the possible non-convergence to fine model optimal solution. The specification s is set to 10.4) is the least-squares optimality condition (stationary point) for the parameter extraction and (B.PhD Thesis – Q.5) where (B.5) is the exit condition (stopping criteria.3) (B.S. We assume the fine model parameters f1 = f2 = f0.3. We obtain the optimality conditions for ASM ⎧ x* − c − 10 = 10 − x* c ⎪ c 0 ⎪ ⎨2 ⋅ ( xc − x f ) + 4 ⋅ [( xc − c0 ) − ( x f − f 0 )] = 0 ⎪ * ⎪ xc − xc = 0 ⎩ (B.3) we obtain 120 .4(5). (B.4). We can see there is a significant residual between the coarse and fine model. The algorithm exits after 1 iteration since f = 0 as in Fig. B. B. A new surrogate is needed to find the real solution [Fig. We show the ASM algorithm may not converge to fine model optimal solution in this case (Fig.

6667.PhD Thesis – Q. xc = 10 6 4 2 10 15 0 0 5 (2) * x (1) = xc = 10.6667= 12.5) and (B. B. xc = 7.6) From (B.6667 f 6 4 2 6 4 2 0 0 5 (5) (2) Rc ( xc ) ≈ R f ( x (2) ). x* = 12 f f f =x (2) c − x = 10 − 10 = 0 < η * c exit Fig.3333. R f ( x (1) ) f f 10 15 B 6 4 2 0 0 5 (1) = 1. f (1) * f (1) = xc − xc = 7.4 The ASM algorithm may not converge in this case.6) with (B.7) 6 4 2 0 0 5 (1) * * Rc ( xc ). Cheng McMaster – Electrical and Computer Engineering * xc = 10 + c0 2 (B. xc(2) = 10.6667 B (1) h = − f (1) . 121 . f (2) 10 15 0 0 5 (6) 10 15 R f ( x* ).7) we obtain (B. h(1) = − f (1) / B (1) = 2.S.4) we obtain 2 2 x f = xc − c0 + f 0 3 3 Incorporating (B. x (2) = x (1) + h(1) f f x (2) = 10 + 2. 6 4 2 10 (3) 15 0 0 5 (4) 10 15 (1) (1) Rc ( xc ) ≈ R f ( x (0) ).333 − 10 = −2.

Step 0 Initialize j =1. ( Step 3 Perform parameter extraction optimization to get xc j +1) . f ∆R = 0 and evaluate P ( x (f j +1) ) .PhD Thesis – Q.4(4). The new algorithm uses the ASM stopping criterion to switch to an RRSM surrogate calibration.6667 which is consistent with the solution obtained in Fig. B. f * B (1) = 1 . Step 1 Solve B ( j ) h( j ) = − f ( j ) for h( j ) . i. We * x (1) = xc .2 Response Residual ASM Based on the observation of the non-convergence.2. calibrate the coarse model using the residual.8) 6 3 Equation (B. A misaligned coarse model may result in non-convergence of the ASM. Surrogate Rs = Rc + ∆R is used as a coarse model to perform PE. The optimal solution is not reached. Step 2 Set x (f j +1) = x (f j ) + h( j ) . we revise the ASM algorithm to include the response residual SM surrogate calibration.8) shows the solution is a function of coarse model parameter c0. Cheng McMaster – Electrical and Computer Engineering x f = 10 − c0 2 f 0 + (B.S. f (1) = P ( x (1) ) − xc . we obtain xf = 12. If h ≤ ε . stop. 122 . Letting c0 = 0 and f0 = 4..e. B. * Step 4 Compute f ( j +1) = P ( x (f j +1) ) − xc .

f (3) = xc − xc = 0 h (2) = − f (2) = 0. xc = 9. 6 4 2 0 0 (1) f −1 * c 6 4 2 5 (4) (1) c 6 4 2 0 5 (5) (2) (2) Rc ( xc ) ≈ R f ( x (2) ). xc = 7.3333. xc = 10 6 4 2 10 15 0 0 5 (2) * x (1) = xc = 10. A new 6 4 2 0 0 5 (1) * * Rc ( xc ).6667 6 4 2 0 0 5 (10) 10 (3) f 6 4 2 15 0 0 5 (11) (3) c 10 15 ∆R = R − Rc ( x ) min Rs = Rc ( xc ) + ∆R * (3) * xc = 9.3333 = 0 < η 10 15 min Rs = Rc ( xc ) + ∆R * * xc = 9. minimize * * the surrogate Rs = Rc + ∆R to obtain a new xc and f = xc − xc . exit Fig.5 The response residual ASM algorithm converges in two iterations. xc = 10. f (1) * f (1) = xc − xc = 7. f (2) = xc(2) − xc = 0.6667 10 15 B (1) = 1.PhD Thesis – Q. x (2) = x (1) + h (1) f f 6 4 2 0 0 5 (7) 10 15 6 4 2 0 0 6 4 2 5 (8) 10 15 0 0 5 (9) (3) Rs ( xc(3) ) ≈ R f ( x (3) ).3333. initial condition ∆R = 0 is added in Step 0.S. R f ( x (1) ) f f 6 4 2 10 15 0 0 5 (3) (1) (1) Rc ( xc ) ≈ R f ( x (0) ).3333. B. f ( * f (2) =xc 2) − xc = 10 − 10 = 0 < η 10 15 0 10 15 0 0 5 (6) 10 (2) f 15 x (2) f = x + P ( x − x ) = 10 + (10 − 7. Cheng Step 5 If McMaster – Electrical and Computer Engineering ( ( f ( j +1) ≤ η or xc j +1) = xc j ) .333 − 10 = −2.6667. h(1) = − f (1) / B (1) = 2.6667 ∆R = R − Rc ( x ) (2) c B (1) h = − f (1) .6667 = 12 f f h (2) = − f (2) = −0. Step 6 Set j = j +1. B ( j +1) = 1 else update B ( j ) to B ( j +1) using the Broyden formula. update ∆R = R f − Rc .3333.6667 x (3) = x (2) + h (2) = 12. In this algorithm.3333)= 12.3333 − 9. 123 . f (3) * f (3) =xc − xc = 9.6667 − 0. go to Step 1.

B. B. we obtain the optimal solution xf = 12 as shown in Fig. B. the parameter extraction uses the RRSM calibrated coarse model to match the fine model. Applying optimality conditions..5. which stops the algorithm when a very small prediction is obtained. In 2 iterations (3 fine model evaluations).S. we change the surrogate or update B according to the original stopping criterion.6. We demonstrate this using the multiple cheese-cutting problem.R fn ]T.. initially unity.PhD Thesis – Q. In Step 3. It shows that the algoritm does not converge to the optimal solution. Cheng McMaster – Electrical and Computer Engineering stopping criterion is added in Step 2. We assume that the least-squares solution is used for the parameter extraction and minimax for coarse (surrogate) optimization. In Step 5. B.. We investigate the optimality condition for ISM. We set the fine model parameters f1 = f2 = 4. We consider general coarse model responses Rc = [ Rc1. We obtain the convergence curve and solution xf =12..2808 as in Fig. Here we choose coarse model height x as a preassigned parameter. fine model heights x1 = x2 = x3 = 1 and coarse mode parameter c = 2.2. We use this new algorithm to solve the same multiple cheese-cutting problem as in Fig.3 Implicit SM Implicit SM (ISM) may not converge for certain preassigned parameters if the mismatch of the coarse model and fine model is not compensated.Rcn ]T and fine model responses Rf = [ R f 1.4 again. we obtain 124 .

9 0..… .S. corresponding response functions Rc1 ..8 0.2808 and xf* = 12. Rc1 .9) (B.PhD Thesis – Q...6 Parameter errors between the ISM algorithm and minimax direct optimization.12) Rci − sci < γ for i = k + 1.3 0... s1 .10) x f = xc Rc1 − s1 = Rc 2 − s2 = Rck − sk = γ (B.9) is a necessary 1 0.xf*|| 0.4 0.6 0..11) (B.7 ||xf . 125 . n where γ is the minimax “equal-ripple” peak deviation. Rck are the active calibrated coarse model function values.. Rcn .5 0. Cheng McMaster – Electrical and Computer Engineering ∑ 2 ⋅ ( R fi − Rci ) ∂xci = 0 T i ∂R (B..2 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 iteration Fig. sn are specifications for Equation (B. B. Here xf = 12....

B. the coarse model parameter c of (B.9)-(B.16) Substituting (B. Cheng McMaster – Electrical and Computer Engineering Equation (B.16) into (B. we obtain ⎧2 ⋅ ( xc ⋅ x − x f ) ⋅ xc + 4 ⋅ [( xc − c0 ) ⋅ x − ( x f − f 0 )]( xc − c0 ) = 0 ⎪ ⎨ x f = xc ⎪ ⎩( xc − c0 ) ⋅ x − 10 = 10 − x ⋅ xc (B. we have 3 2 2 2 2 12 xc − (120 + 8 f 0 + 14c0 ) xc + (12c0 f 0 + 4c0 + 160c0 ) xc − 4c0 f 0 − 80c0 = 0 (B.12).15) where (B.2) be f0.1) be c0 and fine model parameters f1 and f2 of (B. It is consistent 126 .2808.7.15) is from the minimax optimality condition (B.13) (B.13) is from the derivative condition (B.17) We define 3 2 2 2 2 g ( xc ) 12 xc − (120 + 8 f 0 + 14c0 ) xc + (12c0 f 0 + 4c0 + 160c0 ) xc − 4c0 f 0 − 80c0 (B. Equation (B.9) and (B. We assume fine model preassigned parameters x1 = x2 = x3 = 1.14) and (B.PhD Thesis – Q. condition enforced in each iteration.12) are the minimax (equal ripple) optimality condition. We apply the conditions to the multiple cheese-cutting problem. From (B.18) is the solution that ISM converges to. If we choose [c0 f0] = [2 4].15) we obtain x= 20 2 xc − c0 (B. as shown in Fig.14) (B.11). Let the specification for all the responses be 10. Using (B. the root is 12.11) and (B.18) The root of equation (B.10) is the condition for minimum in the parameter extraction.S.13).

2.0000 fine model minimax solution 5 6 7 8 9 10 xc 11 12 13 14 15 Fig. minimax solution xf = 12.2808 vs. the ISM solution is x f = 12. B. minimax solution xf = 12. f0 = 4 solution x c=12. a simple RRSM calibration will push the solution to the optimum. This shows that ISM may not converge to the optimal solution if there is a misalignment (c0 ≠ f0) between the coarse and fine model responses.18) is obtained as 12. Cheng McMaster – Electrical and Computer Engineering 7000 c0 = 2. which is the minimax solution of the fine model. When we set [c0 f0] = [4 4].PhD Thesis – Q. with the experimental solution shown in Fig.6.S. f0 = 4. i. the ISM solution is x f = 12 vs. the first-order derivatives match. We again use the multiple cheese-cutting 127 . f0 = 4. B..2808 solution x c=12.e.7 Case 1: c0 = 2. f0 = 4 6000 5000 4000 3000 g(x c ) 2000 1000 0 -1000 -2000 -3000 c0 = 4. the root of (B. case 2: c0 = 4.4 Response Residual SM Surrogate If we assume that the coarse model is aligned with the fine model directionally. B.

For the fine model.. We now show a variation of the problem. and that the parameter extraction cannot get a better match by changing the preassigned parameters. Cheng McMaster – Electrical and Computer Engineering problem as an example.g.S. x2 = 0.6 and x3 = 0. we have the residual between the coarse and the fine models (noting xc = xf) as ∆R1 = R f 1 − Rc1 = xc ⋅ x0 − xc ⋅ x0 = 0 ∆R2 = R f 2 − Rc 2 = ( xc − f 0 ) ⋅ x0 − ( xc − c0 ) ⋅ x0 = (c0 − f 0 ) ⋅ x0 ∆R3 = R f 3 − Rc 3 = ( xc − f 0 ) ⋅ x0 − ( xc − c0 ) ⋅ x0 = (c0 − f 0 ) ⋅ x0 The new surrogate becomes (B. With RRSM calibration. we obtain that of the fine model. The coarse model is the same as the previous coarse model.8.2). e. B.20) has the same form as the fine model (B. the heights of the cheese blocks are x1 =1.20) Equation (B.6 and the lengths are the same: f1 = f2 = 0.1) be c0 and the fine model parameters f1 and f2 of (B.2) be f0. The algorithm will not converge to the fine model optimal solution using ASM or ISM.19) ⎧ Rc1 + ∆R1 = xc ⋅ x0 ⎪ Rs = Rc + ∆R = ⎨ Rc 2 + ∆R2 = R f 2 = ( xc − c0 ) ⋅ x0 + (c0 − f 0 ) ⋅ x0 = ( xc − f 0 ) ⋅ x0 ⎪ ⎩ Rc 3 + ∆R3 = R f 3 = ( xc − c0 ) ⋅ x0 + (c0 − f 0 ) ⋅ x0 = ( xc − f 0 ) ⋅ x0 (B. we are able to find the solution.5 Implicit SM and RRSM We demonstrate the response residual space mapping (RRSM) using this multiple cheese-cutting problem in Chapter 4.PhD Thesis – Q.2. Letting the coarse model parameter c of (B. We show the two RRSM stepwise iterations in Fig. x = x1= x2 = x3 = x0. And 128 . When we find the optimal solution of the surrogate. B. We assume that the coarse and fine model gradients are the same.

x (1) ) + ∆R *(1) xc = 12.8349 5 0 0 5 (10) 10 15 0 0 5 (11) 10 15 0 0 5 (12) 10 15 Rs = Rc ( xc .PhD Thesis – Q. x (2) ) + ∆R *(2) x (2) = xc . x (2) ) f x 5 (1) f = 12. x (1) ) x (0) = 11 f 5 5 x (1) = 0. x (2) ) + ∆R *(1) ∆R = R f ( x (1) ) − Rc ( xc . x (1) ) f Rs = Rc ( xc . x (1) ) + ∆R *(1) min Rs = Rc ( xc .. x0 ) *(0) xc = 11.9.4965 f .8 RRSM step-by-step iteration demonstration––different heights for the fine model.4162 5 5 5 0 0 5 (7) 10 15 0 0 5 (8) 10 15 0 0 5 (9) 10 15 *(1) x (1) = xc . Fig.. R f ( x (0) ) f f *(0) Rc ( xc . R f ( x (2) ) f f x *(2) c = 12. x (2) ) + ∆R *(2) min Rs = Rc ( xc . Cheng McMaster – Electrical and Computer Engineering the convergence is shown in Fig. 5 5 5 0 0 5 (1) 10 15 0 0 5 (2) 10 15 0 0 5 (3) 10 15 *(0) Rc ( xc .S.4965 x (2) = 12. R f ( x (1) ) f f *(1) Rs = Rc ( xc .4162 5 x (2) = 0. B. B.8473 5 0 0 5 (4) 10 15 0 0 5 (5) 10 15 0 0 5 (6) 10 15 *(0) ∆R = R f ( x (0) ) − Rc ( xc . x0 = 1 *(0) x (0) = xc . 129 .

B.9 Parameter errors between the RRSM algorithm and minimax direct optimization.5 in the final iteration (different heights for the fine model).5 and xf* = 12. Cheng McMaster – Electrical and Computer Engineering 10 2 10 0 10 -2 10 -4 ||xf .xf*|| 10 -6 10 -8 10 -10 10 -12 10 -14 1 2 3 4 5 6 7 8 9 10 11 12 13 iteration Fig. Here xf = 12. 130 .S.PhD Thesis – Q.

1400 Fountaingrove Parkway. Lyngby. Torczon (1998).BIBLIOGRAPHY First International Workshop on Surrogate Modelling and Space Mapping for Engineering Optimization. J.W. Bakr. and V.H.H. (2000). 2412-2425. 414-416. Agilent Technologies. vol. Technologies.6. Bakr (2000). CA 95403-1799. 16-23. “New efficient full wave optimization of microwave circuits by the adjoint network method. Sorrentino (1993). CA 95403-1799. Technical University of Denmark. Thesis.H. Jr. vol. R. J. M. M. Chen. Agilent EEsoft EDA. Hamilton. and R. pp. ON. Alessandri.D. CA 95403-1799.” Structural Optimization and Engineering. Chen (1997). Microwave Theory Tech. Santa Rosa. Santa Rosa. Lewis. M. Bandler. Version 4. M. Agilent HFSS (2000).” Report SOS-97-1-R. Santa Rosa. “Design of a three-section 3:1 microstrip transformer using aggressive space mapping. and K. pp.W.H. 1400 F. Version 2003A.M. Canada.M. Alexandrov.M. Agilent Fountaingrove Parkway. Agilent ADS (2003). Santa Rosa. S. R. Department of Electrical and Computer Engineering.E. CA 95403-1799.H. Bakr.. 15. Madsen (1998). 3. Mongiardo. vol. 131 . Biernacki. R. “A trust region aggressive space mapping algorithm for EM optimization. Denmark. J.” IEEE Trans. McMaster University.5.” IEEE Microwave and Guided Wave Letters. Ph. 46. Bandler. 1400 Fountaingrove Parkway.0.. Dennis. 1400 Fountaingrove Parkway. N. and S. Version 5. “A trust region framework for managing the use of approximation models in optimization. Version 1. Agilent Technologies. pp. Agilent ADS (2000). Advances in Space Mapping Optimization of Microwave Circuits. Biernacki. Agilent Momentum (2000).

J. Madsen (1995). Grobelny.H.H. 132 .H. J.” IEEE MTT-S IMS Digest.H. and K. 5.W.H. Phoenix.W. Bandler. J. RF and Microwave CAE. vol. 47.E. pp. and Q. “Fully automated space mapping optimization of 3D structures.. Microwave Theory Tech. 369-384.H. “Electromagnetic optimization exploiting aggressive space mapping. 2307-2315. S. San Francisco. K. S. “SMX—A novel object-oriented optimization system. pp. R. 48.BIBLIOGRAPHY M. M.J.W. Bakr. M. Zhang (2000). J.S. “Space mapping technique for electromagnetic optimization. and K. pp.” IEEE MTT-S IMS Digest. pp. pp. vol.W. Microwave Theory Tech. Grobelny. Bandler. 2083-2086. Moskowitz. M.M. 1.” Optimization and Engineering. Bandler.” IEEE Trans. Biernacki. “A hybrid aggressive space-mapping algorithm for EM optimization. J. Talisa (1995). CA. J.. Madsen. K.H. 2428-2439. and J. J.” IEEE Trans. Microwave Theory Tech.” IEEE Trans. N..W.K. 43. 2. and J. Madsen. R.W. Bandler. vol. K.” IEEE Trans.K. Hemmers. “Review of the space mapping approach to engineering optimization and modeling. vol. 2297-2306. Madsen (1999). 48. Ismail. Chen. Søndergaard (2000). J.. R. Hemmers (1994). Bandler.W. Bandler. and S. C. P.H. “Space-mapping optimization of microwave circuits exploiting surrogate models. pp. Bandler. vol. Biernacki. Chen. vol.M. Rayas-Sánchez (2001). 241-276. Cheng.H. P. “Electromagnetic design of high-temperature superconducting microwave filters.H. Bandler. and S. Chen (1996).W. Microwave Theory Tech. and N.A. Bakr. pp. Microwave Theory Tech. 42.H. M..A. Bakr.. Microwave Theory Tech.M. “An aggressive approach to parameter extraction. J. 2440-2449. Bakr.W. J. Bandler. pp. M. Biernacki. S. J. Bakr.W. R. vol. R.” IEEE Trans. 331-343.H.” Int.” IEEE Trans. “An introduction to the space mapping technique. Søndergaard (2000).E. 753-756. J. AZ. M. Ismail.A. Getsinger. 2874-2882. Bakr. M. Madsen. Bandler. Georgieva.M. Q.J. pp. Rayas-Sánchez. M. W. and J. vol. Georgieva (1999). “Neural space-mapping optimization for EM-based design. Søndergaard (2001).E.A. Biernacki. Rayas-Sánchez. Bakr.” Optimization and Engineering. Chen.H. 47. pp. 2536-2544. J. Bandler.W. pp.H. and J. and R. J. vol.

M. Biernacki.” IEEE Trans. Dakroury.A.S. 444-455. Microwave Theory Tech.” IEEE Trans. Madsen.E.S.K. pp. Zhang (1999). Chen. 1565-1568.M. Søndergaard (2003). K. “EM-based surrogate modeling and design exploiting implicit.H. Q. pp. and J.H. “An implementable space mapping design framework. D. 67-79. 1003-1006. 424-443. J.K. J. D. 52. Huang (1997). Bandler..J. Q.H. 761-769. Q. and J. Bandler and S. Omeragic (1999).A. Pedersen (2004).H. 36. Microwave Theory Tech. Bakr. Microwave Theory Tech.” IEEE Trans.W. Rayas-Sánchez. Microwave Theory Tech. Bandler. Madsen (1988).S.M. S. J.. “Space mapping optimization of waveguide filters using finite element and modematching electromagnetic simulators. M. Munich. vol. Fort Worth. Cheng. Ismail. R. N..W.H. Pedersen.W.BIBLIOGRAPHY J. 36. M. Fort Worth. Chen. vol. Q.M. J. Nikolova.A. Philadelphia.” IEEE Trans. Ismail. 52. and F. J. “A generalized space mapping tableau approach to device modeling. 9. 378-385.W. Madsen. “A generalized space mapping tableau approach to device modeling. 337-361. Søndergaard (2004). PA.W. J. Zhang (2001). Cheng. J. J. Hailu. Bandler. Cheng. Chen. and Y. J. vol. “Space mapping: the state of the art. pp. Georgieva. vol.. pp. and M. Bandler.S.” 29th European Microwave Conf. TX. pp. Bandler. Biernacki. Bandler. S. Ismail (2004).K. N. Bandler. Mohamed. K. vol. Germany.S. and D. S.E.W. A. Rayas-Sánchez. pp. A. 231-234. D.. pp. RF and Microwave Computer-Aided Engineering. “Circuit optimization: the state of the art. J. Bandler. and K. and N.” IEEE Trans.A. Q.W. J. Microwave Theory Tech. S. vol. Cheng. pp. Madsen. Bandler. N.” IEEE MTT-S IMS Digest.J. “Implicit space mapping optimization exploiting preassigned parameters. J. pp. Chen (1988).W.W. K.K. “Design optimization of interdigital filters using aggressive space mapping and decomposition. J. Dakroury.” IEEE MTT-S IMS Digest. S.H. 45. frequency and output space mappings.M. pp. TX. Nikolova (2004).W. Gebre-Mariam. R. 54-70. “Space mapping interpolating surrogates for highly optimized EM-based design of microwave devices.” IEEE Trans. 133 . Georgieva. Hailu.” Int. Daijavad.A.F.S. 703-706. F. S. pp. vol.” IEEE MTT-S IMS Digest. and Q. Mohamed. and Q..S. Microwave Theory Tech.W.. Bandler. Cheng. “Efficient optimization with integrated gradient approximations. 49.

“A rigorous framework for optimization of expensive functions by surrogates. “Neuromodeling of microwave circuits exploiting space mapping technology.” Structural Optimization and Engineering. Verdeyme. “Next generation optimization methodologies for wireless and microwave circuit design.” Optimization and Engineering. A. Torczon. “EM-based optimization exploiting partial space mapping and exact sensitivities. M. Bandler. pp. Guillon (1998). Rayas-Sánchez.” IEEE Trans. Mohamed. Bandler and K. and J. Madsen (2001). Vancouver. 1771-1774.A. Madsen (1985). and J. RF and Microwave Computer-Aided Engineering. vol. J. J.J. vol. Biernacki (1990). 58.B.E. pp..E. pp. Q. “A unified theory for frequency-domain simulation and sensitivity analysis of linear and nonlinear circuits. “Neural inverse space mapping (NISM) optimization for EM-based microwave design.W. Zhang. Bila. Baltimore. vol.H. Bandler. Dennis. Zhang. Søndergaard (2002). “A superlinearly convergent minimax algorithm for microwave circuit design.W. A. 2417-2427. vol. 1-13.W. pp. Circuits and Systems—I. Microwave Theory Tech..J. Bandler and Q. “Editorial—surrogate modelling and space mapping for engineering optimization.M. “Expanded space mapping EM-based design framework exploiting preassigned parameters. J. 134 .” IEEE Trans. Topical Symp. 13. Madsen.J. J. Bandler.D. J. 2. pp. Ismail.W. and P. pp. 49. and R. Biernacki (1988). Jr. M.W. Microwave Theory Tech. Microwave Theory Tech. V. and Q. MD. 1701-1710. Song. K.E. 1661-1669.BIBLIOGRAPHY J. J. D. Kellermann.” IEEE MTT-S IMS Digest. 38.. Frank. Microwave Theory Tech.W. Booker. Zhang (1999).W. vol.” Int.” IEEE Trans. J.A.” IEEE Trans. J.” IEEE Trans. on Technologies for Wireless Applications. Microwave Theory Tech.J.S. MTT-33. 367-368. S. and K. and M. Ismail. Bakr. Bandler. Rayas-Sánchez.E. Rayas-Sánchez (2002). 50. 1519-1530. vol. J. J. and R. Trosset (1999). Zhang (2003). P. 2741-2750. D.” IEEE MTT-S Int.. 1833-1838. Bandler. Bandler. J. pp. J.J. “FAST gradient based yield optimization of nonlinear circuits. pp.. Q. 36. Zhang (1999). BC.A. J. pp. vol.M. vol. M. M.W.J.W. Bandler. vol.W. pp.” IEEE Trans. S. Ismail. Serafini.. 136-147. W. 17. Baillargeat. 47. pp. and Q. “Automated design of microwave devices using full em optimization method.

J.6. Trust—Region Methods. Zhou. Devabhaktuni. PA: SIAM and MPS. 77005-1892.-N. and space mapping. “CAD of integrated passives on printed circuit boards through utilization of multiple material domains. Computational and Applied Mathematics. H. Dennis. 19. Agilent Technologies. B.H. and P. V. Version 5.-R. Magnetics. M.” Journal of Lightwave Technology. 21. (2001. 1400 Fountaingrove Parkway.-N. 1822-1833. D. “A Summary of the Danish Technical University November 2000 Workshop. Park. J. Santa Rosa.” Dept.R.G. N.I. 2097-2100. 37. N. Yagoub. Rice University. “private discussions. vol. P. Sonnet em (2001). Empipe3D (2000). Computational and Applied Mathematics.E.G. pp. Toint (2000). Feng and W. Gentili. USA. Sonnet Software. 77005-1892. 577-593. “Space mapping technique for design optimization of antireflection coatings in photonic devices. I.. vol.-S. Seattle. and M. and W.” IEEE Trans. 171-174. Texas. N.0b. NY 13212. Zhang (2003).. Hahn (2001). 3627-3630. Microwave Theory Tech. vol.K. Dennis. Broyden (1965). pp. CA 95403-1799. WA.-P. 281-285. Agilent EESof EDA.” Journal of Lightwave Technology. 51. pp. vol. Conn. USA. Inc. vol.C. Jr.M. “A space-mapping technique for the design of comb filters. G. 2002). G. Chattaraj. Feng.L. “Advanced microwave modeling framework exploiting automatic model generation. Jr. 135 . Choi. G.” IEEE MTT-S IMS Digest. Draxler (2002). Munich.J. pp. “A class of methods for solving nonlinear simultaneous equations. Texas. Houston. (2000). Rice University. Huang (2003). Houston.. “A new design technique of magnetic systems using space mapping algorithm. 11.Y. Politi (2003). Gould.BIBLIOGRAPHY C. Comp. Huang (2003). 1562-1567. Kim. pp. “Modeling and simulation of photonic devices by generalized space mapping technique. North Syracuse.E. and S. Philadelphia. Germany. Macchiarella. pp. A. Version 7.H.” 33rd European Microwave Conference.E. pp. knowledge neural networks. 21.” Math. pp.” IEEE Trans.-P.” Dept. and Q. 100 Elwood Davis Road.

2751-2758. Inc. New York: McGraw-Hill. K. S. NY: John Wiley and Sons. Young. pp. E.” IEEE Trans. 5. Keane (2001).J.. and E. pp.A. 385-398. Natick.5.L. Microstrip Filters For RF/Microwave Applications. L. Y. pp. M. MA 01760-2098.. pp.K. Ph. Hong and M. S. First ed. Ismail (2001). Microwave Theory Tech. S. N. M.. Space Mapping Framework for Modeling and Design of Microwave Circuits. 50. ON. Bandler (2002a). Yu (2004). Department of Electrical and Computer Engineering. and J. Seattle. vol. and S. Søndergaard (2004).J. A. 2.H. and J.W. vol.” IEEE Trans. Microwave Filters. Georgieva. “EM-simulator based parameter extraction and optimization technique for microwave and millimeter wave filters. vol.” IEEE MTT-S IMS Digest. Waveguide Handbook.W. Bakr. Vahldieck.” Optimization and Engineering. Panariello. 52.. The MathWorks.” ACES'2002. and M.” Workshop on Microwave Component Design Using Space Mapping Methodologies. CA. N. 136 . Jones (1964). J. WA. Microwave Theory Tech.M. Harscher. vol. M. J. “EM-based design of large-scale dielectric-resonator filters and multiplexers by space mapping.J. IEEE MTT-S IMS. A. 386-392. Glavic. First ed. Bakr. Smith. Matthaei. Canada. M. Madsen and J. MATLAB (2002). Thesis. WA. P. Ismail. Glavic. Wang. 221.BIBLIOGRAPHY N. New York. Version 6. 145-156. Ofli.H. Bhaskar. G.T.. D. Marcuvitz (1951). “Space mapping in the design of cellular PA output matching circuits.-W. “Convergence of hybrid space mapping algorithms. Leary. Monterey. and A.A. 295-299. pp. Seattle. 3 Apple Hill Drive.-S.K. Lobeek (2002). Bandler (2002b). 195-201. R. “A constraint mapping approach to the structural optimization of an expensive model using surrogates.2. 1113-1116. “Feasible adjoint sensitivity technique for EM design optimization. pp. Georgieva. Lancaster (2001). Amari (2002). pp. New York: McGraw-Hill. pp.D. McMaster University. Hamilton. “Adjoint variable method for design sensitivity analysis with the method of moments.” Optimization and Engineering. Impedance-Matching Network and Coupling Structures.

Harrington (1987a). WA. pp. 137 . P. Pedersen (2001).O.” Workshop on Microwave Component Design Using Space Mapping Methodologies.” IEEE Trans. 403-419. “The optimization of complex multilayer microwave circuits using companion models and space mapping techniques. 35. now Agilent EEsof EDA. Informatics and Mathematical Modelling (IMM).” IEEE MTT-S IMS Digest. pp. Anaheim. NV. J. A. Bandler. 2nd Edition: New York: AddisonWesley. and B.” IEEE MTT-S IMS Digest.C.” Workshop on Microwave Component Design Using Space Mapping Methodologies. Santa Rosa. A. Gimeno (2003). vol. Lyngby.C. F.M. Ontario. Rautio and R. Technical University of Denmark (DTU).” IEEE Trans. Programming Windows: Microsoft Press.W. P. Las Vegas. CA. J. formerly Optimization Systems Associates Inc. Cogollos. “Automated design of waveguide filters using aggressive space mapping with a segmentation strategy and hybrid optimization techniques. Morro.F. pp. Microwave Theory Tech. Box 8083. Boria. D.M. “Adjoint techniques for sensitivity analysis in high-frequency structure CAD. Estes.V. Canada L9H 5E7. D.BIBLIOGRAPHY J.F. Harrington (1987b). H. V.H. 1215-1218. 295-298. “An electromagnetic time-harmonic analysis of shielded microstrip circuits. Esteban. Microwave Theory Tech. S. 1400 Fountaingrove Parkway. Nikolova. Soto.. Bakr (2004). WA. Seattle. and M. and L. “An efficient electromagnetic analysis of arbitrary microstrip circuits. J. C. IEEE MTT-S IMS. Pavio (1999). “The electromagnetic optimization of microwave circuits using companion models. Denmark.” Workshop on Novel Methodologies for Device Modeling and Circuit CAD. IEEE MTT-S IMS. OSA90/hope (1997). J. Philadelphia. Petzold (1990). Masters Thesis. Pozar (1998). Seattle. vol. Zhao (2002). Pavio. Bachiller. Advances on the Space Mapping Optimization Method. CA 95403-1799. Microwave Engineering.E. pp. 726-730. Rautio and R.M. pp. C. Pelz (2002). PA. “Coupled resonator filter realization by 3D-EM analysis and space mapping.. USA. Dundas. 162-163. 52. N.K.. IEEE MTT-S IMS.

” European Congress on Computational Methods in Applied Science And Engineering. Swanson. “Using space mapping and surrogate models to optimize vehicle crashworthiness design.E. pp.D. Department of Electrical and Computer Engineering. Lyngby. and A. Masters Thesis.” Workshop on Statistical Design and Modeling Techniques For Microwave CAD. Systems. “Automated design of inductivity coupled rectangular waveguide filters using space mapping optimization. Neural Space Mapping Methods for Modeling and Design of Microwave Circuits. J.Combline Design Version 3. Linköping University. Simulation Based Design-Structural Optimization at the Early Design Stages. R. Lehmensiek. Denmark. Steyn.” Workshop on Microwave Component Design Using Space Mapping Methodologies. Snel (2001). Bandler. “Space mapping models for RF components. J. Canada. and H. GA. and P. Informatics and Mathematical Modelling (IMM). Ph. Optimization Using Surrogate Models-by the Space Mapping Technique. Master Thesis. Technical University of Denmark (DTU). 996-1005. IEEE MTT-S IMS. Barcelona. Søndergaard (1999). Borji (2002). Steer. McMaster University.F. pp.” IEEE Trans. J.E. P. WA. Division of Solid Mechanics. Redhe (2001). Nilsson (2002).G. A. J. vol. Damavandi. Phoenix. Snowden (2002).” 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization. J.W. W.M. 138 . 1163-1166. Denmark. Redhe and L. Seattle. Safavi-Naeini. Meyer (2001). “Integrated CAD procedure for IRIS design in a multi-mode waveguide environment. Dept.K. AZ. M. Informatics and Mathematical Modelling (IMM).0. AZ. N. V. S. Boria. Atlanta. Jr. Non-linear Optimization Using Space Mapping. Hamilton. M.D. Esteban (2000). M. Technical University of Denmark (DTU). S. Thesis. (2001) CLD . of Mechanical Engineering. Søndergaard (2003). Spain. Bartley R. 50. Ph.. AIAA-2002-5536. Soto.BIBLIOGRAPHY J. Bergner.B. Lyngby. Thesis. Rayas-Sánchez (2001). Chaudhuri. Microwave Theory Tech. Phoenix. Amesbury.L. and C. IEEE MTT-S IMS. “Computer-aided design of RF and microwave circuits and systems. MA. D.” IEEE MTT-S IMS Digest. ON. “A multilevel design optimization strategy for complex RF/microwave structures. Gomez.

sensitivities. and M. 52.N. pp.co.pt/~lnv/ talks/ima-sm2. “An explicit knowledgeembedded space mapping technique and its application to optimization of LTCC RF passive circuits. Wenzel (2001). Jr. Young and B. Mansour (1997). sensitivities. Temes and D. Canada. J. J. Zhao. G. vol.K. Minneapolis. pp. Wu. Philadelphia.” IEEE MTT-S IMS Digest.. “Space mapping: models.mat. K.cragsystems.C. 55.K.uc. and Q.N. and trust-regions methods. Wu. and applications. Q.J. 393-402. vol. M.N.G. R.” Proc. vol. pp. pp. Vicente (2003a). “A useful high-pass filter design. IEEE. Ye and R. S.M. “Computer-aided network optimization the state-of-the-art. vol. vol.” IEEE Trans. 159-175. Ehlert. Chapter 9. “Space mapping: models.R. Ding. PA. Fang (2003). Toronto.” Optimization and Engineering. L. L.” Microwave J. algorithms. R. UML Training in Object Oriented Analysis http://www. University of Minnesota. Y.E. L. Wang. 78-80. 139 . 11591162. Zhang. pp.-L.” IEEE MTT-S IMS Digest. AZ. and R. Zhang. Norwood. pp. and Design page at L.. 4. “An effective dynamic coarse model for optimization design of LTCC RF circuits with aggressive space mapping.. L. 45. Calahan (1967). K. Phoenix.J. 1832-1863. (2003). Yagoub. “Space mapping: models.J. pp. 173-176.pdf. 399-406. Schiffman (1963). “Fast analysis and optimization of combline filters using FEM.T.-G.C.J. MN http://www. Cheng (2004). M.” IEEE Trans. 780-786. MA: Artech House Publishers. 26.A. Xu. “An innovative CAD technique for microstrip filter design. and D. Microwave Theory Tech.-J.BIBLIOGRAPHY D.uk/uml_training_080. Neural Networks For RF and Microwave Design.” IEEE Trans. Vicente (2002). Components and Packaging Technologies. Zhang and K. Gupta (2000). vol. SIAM Conference on Optimization. pp.htm. Vicente (2003b). and trust-regions methods.” IMA. Zhang. Swanson.” Workshop on Optimization in Engineering Design.C. Microwave Theory Tech. “Neurospace mapping technique for nonlinear device modeling and large signal simulation.-L. 6.

BIBLIOGRAPHY 140 .

50. 84. 16. 27. 19. 55. 8. 3. 17. 28. 15. 47. 46. 107. 24. 72. 12. 39. 15. 6. 7. 107. 9. 69. 85. 10.AUTHOR INDEX A Alessandri Alexandrov Amari 25 27 94 B Baillargeat Bakr 19 2. 71. 16. 81. 3. 37. 25. 24. 57. 119 Bergner 97 141 . 67. 93. 47. 92. 23. 52. 38. 27. 10. 109. 63. 31. 11. 26. 40. 66. 38. 11. 6. 111. 54. 40. 29. 19. 8. 13. 64. 93. 23. 76. 22. 109 Bandler 2. 85. 20. 14. 53. 25. 12. 20. 50. 76. 63.

64. 27. 8. 3.K. 18. 29. 25. 46. 26. 12. 57. 9. 112. 123 C Calahan Chattaraj Chaudhuri Chen 1 93 95 2. 10. 84. 119 Cheng. 111. 119 Bila Booker Boria Borji Broyden 19 107 97 95 4. 3. 55. 66. 55. 50. 47. 67. 25. 16. Choi Conn 95 97 12 142 . 40. 23. 85. 27. M. 81. 16. 28. 93. 9. 17. 40. Q. 47. 10. 11. 22. 22. 13. 54. 50. 37.S. 40. 37. 69. 67. 85. 46. 19. 120. 17. 72. 57. 23. 101. 35. 92. 13. 10. 20.AUTHOR INDEX Bhaskar Biernacki 98 2. 19. 26.M. 71. 67. 64. 92. 37. 3. 81. 107 Cheng. 24. 2. 7. 63. 17. 84. 24.

10. 28. 65. 63 95 27. 8. 3. 29. 40.AUTHOR INDEX D Daijavad Dakroury Damavandi Dennis Devabhaktuni Ding Draxler 19 2. 107 93 97 96 E Ehlert Esteban Estes 95 97 94 F Fang Feng Frank 95 98 107 G Gebre-Mariam Gentili (see Hailu) 97 143 .

F. 84. 119 Hong Huang.-P. 16. 67. Y. 9. 3. 13. 46. 55. 3. W. 71. 67.AUTHOR INDEX Georgieva Getsinger Glavic Gomez Gould Grobelny (see Nikolova) 55. 16. 24. 22. 26. 40. 57. 17. 23 144 . 37. Huang. 28. 13. 22. 37. 12 98 10. 40. 26. 112 Guillon Gupta 19 11. 72 1 94 2. 84. 9. 57. 55. 64. 40. 112 25 97 12 2. 47. 12 H Hahn Hailu (formerly Gebre-Mariam) Harrington Harscher Hemmers 97 3.

96. 37. 11. 53. 7. 47. 15. 31. 93. 12. 67. 52. 27. 12. 16. 69. 107 J Jones 71. 6. 10. 76. 23. 145 . 8. 11. 17. 9. 63. 3. 40. 81. 38. 26.AUTHOR INDEX I Ismail 2. 39. 3. 93 K Keane Kellermann Kim 98 7 97 L Lancaster Leary Lehmensiek Lewis Lobeek 12 98 96 27 95 M Macchiarella Madsen 97 2. 19. 25. 92.

71. 63. 15. 67. 93 96 2. 93 71. 69. 63 25 55. 94 77. 57. 76. 37. 19. 63. 26. 112 N Nikolova (formerly Georgieva) 2. 46. 8. 23. 64. 23 P Panariello 96 146 . 38. 23. 40. 29. 19. 119 Mansour Marcuvitz Matthaei Meyer Mohamed Mongiardo Moskowitz 26. 10. 20. 11.AUTHOR INDEX 20. 3. 25. 40. 27. 72. 93. 47. 67. 10. 76. 93 Nilsson 98 O Ofli Omeragic 94 10. 107. 66. 38. 55. 109. 25. 76. 28. 25. 24. 37. 84. 3.

6. 93. 11. 53. 71. 23. 72 96 108 97 45. 65. 50. 107. 47. 76. 37. 25. 63. 28. 39. 47. 72. 109 Redhe 98 S Søndergaard 2. 8. 40. 24. 87 R Rautio Rayas-Sánchez 1 2. 25. 94 3.AUTHOR INDEX Park Pavio Pedersen Pelz Petzold Politi Pozar 97 19. 93 107 96 147 . 12. 24. 109 Safavi-Naeini Schiffman Serafini Smith 95 71. 12. 19. 28. 38. 31. 11. 40. 10. 52. 66. 27. 6. 38. 26. 3. 93. 40. 16. 71. 12. 20. 107.

14 96 96 T Talisa Temes Toint Torczon Trosset 55. 57. 107 107 V Vahldieck Verdeyme Vicente 94 19 12. 7. 13 148 . 7. 112 1 12 27.AUTHOR INDEX Snel Snowden Song Sorrentino Soto Steer Steyn Swanson 94 2. 67. 14 25 25 97 2.

Q. 24. 11. 76. 31. L Zhao. 93 96 Z Zhang. 12. 95 94 95 149 . 47.AUTHOR INDEX W Wang. 93. 97 2. 38. 40. 25. 26. J Wang. 38. 97 Zhang. L Zhang. Y. 94 71. 97 26.-J.J. R Zhao. Y Wenzel Wu 95 96 96 95 X Xu 97 Y Yagoub Ye Young Yu 93.

AUTHOR INDEX Zhou 98 150 .

85. 57. 64. 104. 69. 82. 5. 30. 102. 17. 71. 4. 105. 78. 95. 103. 57. 120 Agilent 2. 56. 96. 91. 103. 37. 98. 105 ADS schematic design framework aggressive SM 82. 58. 86. 91. 40. 82. 105 4. 107. 59. 81. 17. 76. 84. 50. 92. 83. 78. 97. 35. 22. 86. 77. 39. 92. 83. 111. 56. 5. 94. 116 aggressive space mapping 9. 105 . 104. 84. 89. 102. 93.SUBJECT INDEX A ADS 4. 87. 71. 81. 119. 37. 84. 50. 64. 115. 47 2. 68. 60. 85. 113 ANN Ansoft 151 11. 94. 104. 81. 101. 70. 18. 46. 32. 10. 20. 61. 12. 60. 39. 6. 83.

55. 33. 119. 40. 7. 56. 87. 124. 126. 61. 120. 53.SUBJECT INDEX Artificial Neural Networks ASM 11. 28. 35. 37. 115. 124. 78. 127. 86. 120. 27. 41. 23. 81. 68. 49. 51. 72. 26. 50. 115. 74. 75. 69. 48. 47. 71. 18. 52. 82. 127. 85. 54. 94. 95. 128 comb filter 152 97 . 48. 43. 104. 31. 32. 95. 122. 60. 46. 92. 96. 51. 30. 89. 107. 64. 22. 47 9. 5. 15. 118. 12. 73. 9. 98. 119. 2. 122. 22. 8. 128 Automatic Model Generation 93 C CAD cheese-cutting 1. 73. 84. 74. 65. 10. 123. 118. 70. 11. 57. 34. 16. 42. 117. 128 coarse model 4. 79. 44. 114. 116. 112. 29. 20. 103. 59. 77. 5. 21. 49. 50. 101 4. 6. 83. 102. 45. 125. 88. 24. 120. 91. 30. 126. 13. 110. 72. 71. 37. 14. 93. 39. 67. 38. 121.

93 27. 92. 40. 82. 5. 46 153 . 14. 50 101 2. 26. 108 Empipe Empipe3D ESMDF expanded space mapping design framework explicit SM explicit space mapping 81. 59. 107 EM 1. 105. 102. 85. 93. 9. 4. 60. 11. 92. 84. 39. 47 39. 53 27 4. 81. 75. 37. 130 E electromagnetic electromagnetics em 1. 33. 60. 8. 63. 64. 5. 4. 103. 7. 125. 82. 83. 38. 24. 2. 83. 104. 61. 78. 56. 71. 37.SUBJECT INDEX combline filter computer-aided design coupled resonator filter 96 1 96 D dielectric resonator filter and multiplexer direct optimization 96 7. 102. 107 83. 11. 94. 37. 95. 3. 7. 93.

113.SUBJECT INDEX F FEM fine model 1. 127. 21. 15. 55. 81. 27. 75. 83. 41. 35. 126. 25. 96. 69. 105 1 154 . 107. 112. 35. 128. 33. 43. 26. 39. 31. 89. 84. 84. 59. 124. 117. 73. 83. 2. 130 finite element method 1 G gradient 10. 11. 65. 9. 23. 68. 4. 34. 93. 34. 120. 109 H HFSS high frequency structure simulator 2. 74. 114. 32. 88. 116. 10. 64. 5. 51. 13. 87. 17. 103. 60. 14. 85. 19. 44. 53. 71. 92. 86. 63. 96 3. 93. 22. 58. 79. 28. 50. 16. 102. 38. 45. 5. 118. 115. 78. 72. 37. 90. 8. 40. 29. 82. 71. 33. 98. 66. 67. 54. 56. 83. 30. 101. 48. 119. 78. 85. 19. 129. 102. 91.

67. 11. 68. 55. 39. 51. 56. 19. 79. 114 I implicit SM 3. 55. 42. 63. 69. 45. 25. 68. 6. 31. 128 implicit space mapping 4. 34. 52. 124. 11. 74. 33. 112. 80. 64. 40. 42. 126. 46. 63. 18. 64. 103 K key preassigned parameters 155 26 . 64. 37. 102. 4. 103. 37. 128 J Jacobian 15. 85. 76. 43. 127. 5. 78. 78. 54. 104. 49. 64. 76. 59. 44. 125. 48. 66. 113. 77. 111. 41. 58. 79. 124. 40. 71. 57. 61. 90. 6. 32. 64.SUBJECT INDEX H-plane 5. 101. 30. 84. 52. 94. 78. 60. 103. 35. 47. 91. 33. 102 inductively coupled filter integrated passive elements ISM 97 96 4. 38. 5. 37. 93. 70. 69. 70. 97. 54. 50. 103 HTS filter 4. 84. 78.

86. 107 5. 124. 91. 87. 2 4. 130 MoM Momentum 1. 82. 119. 74. 126. 37. 72. 113. 92. 60. 93. 107. 15. 125. 114 Momentum_Driver multiple cheese-cutting 81. 114.SUBJECT INDEX KPP 26. 24. 102. 84. 61. 81. 95 M magnetic systems mathematical motivation Matlab method of moments microstrip filter microstrip impedance transformer minimax 97 3. 118. 90. 78. 89. 75. 94. 70. 27 L Low Temperature Co-firing Ceramic (LTCC) 94. 73. 68. 127. 110. 85 13. 84. 120. 127. 88. 73. 87. 83. 93. 50. 5. 86. 128 156 . 6. 118. 78. 56. 85. 120. 126. 109. 78. 59. 103. 58. 102 50. 92. 10 81. 71. 92 1 12. 124.

115. 124. 32. 28. 88. 42. 102 P parameter extraction 4. 44. 116. 117. 102 81. 107 16. 122. 104 5. 31. 93. 63. 55. 126. 6. 103. 59. 101 33 97 11 O object-oriented original space mapping OSA90 OSM output SM output space mapping 93. 101. 111. 33. 32. 11. 81. 110. 33. 102. 49. 37. 120. 22. 9. 32. 18. 21. 23. 71. 93. 25. 101. 102. 64. 31. 48. 114 5. 52. 72. 35. 112. 46. 128 157 . 104 3. 84. 35. 89. 92. 16. 118. 69. 54. 119.SUBJECT INDEX N neural inverse SM neural space mapping NISM nonlinear device modeling NSM 11 4.

104. 63. 81. 34. 117. 118. 55. 47. 89. 85. 91. 25. 31. 79. 91. 71. 55. 78. 64. 9. 130 S SM-based optimization 158 4. 39. 128 Printed Circuit Board (PCB) 96 R response residual space mapping 5. 75. 92. 78. 122 power amplifier circuits preassigned parameters 94 3. 60. 64. 122. 63. 127 5. 61. 103. 74. 54. 18. 42. 67. 35 . 4. 102. 80. 83. 26. 102. 127. 71. 97. 116. 33. 66. 53.SUBJECT INDEX PE 9. 126. 69. 30. 10. 46. 124.84. 69. 56. 30. 23. 6. 71. 63. 72. 117. 128. 124. 91. 37. 122. 129. 75. 19. 17. 48. 45. 102. 38. 74. 21. 104. 20. 128 response residual SM RRSM 65. 52. 64. 84. 68. 41. 88. 77. 70. 22. 117. 76. 84. 16. 82. 78. 5. 33. 31. 71. 11. 35. 73. 69. 47.

77. 28. 61. 110. 66 10. 37. 60. 56.SUBJECT INDEX SMX 6. 59. 107. 35. 71. 101 V vehicle crashworthiness design 98 W waveguide filter 64. 78. 85. 114 Sonnet 2. 111. 76. 96. 81. 97 159 . 81. 12. 65. 103. 109. 27. 108. 102 96 28 T Taylor approximation Taylor model trust region 10. 84. 92. 82. 93. 5. 4. 12. 80. 83. 113. 102. 93. 63 4. 107 space mapping design framework Surface Mount Technology (SMT) Surrogate Modelling 27. 93.

- RED BRAND CANNERS
- KHASAB Cluster Optimization Report March 2014
- Planning
- M0ses Work
- Annie7
- Goal Programming
- Suspension System
- Computer Engineering Sample Sop
- Hindustan University M.tech MD Syllabus
- Maintenance Yong Naps
- Workforce Management Health Check
- Dynamic stiffness optimization using Radioss.pdf
- Simulated Annealing
- Optimum Coordination of Overcurrent Relays Using SADE Algorithm
- 05ME71104AL1 (1)
- Supply Chain Optimisation in the Coal Industry | BlendOpt
- Lecture_Notes_02.pdf
- De Lara (t)
- Optimal Shift Scheduling With a Global Service Level Constraint
- Structural analysis professional
- population based incremental learning
- joos_cacsd02
- Design Optimization
- 32Vol64No1
- ss2
- Modi Method
- 01338510
- Optimal Design and Analysis System of AC Solenoid Valve
- Optimal Design of a Wound Synchronous Generator Using Genetic Algorithm
- Total Prosim Rt Predictive Monitor

- Comparative study of Conventional and Traditional Controller for Two Equal Area interconnected power plant for Automatic Generation Control
- Emerging Trends of Ant Colony Optimization in Project Scheduling
- UT Dallas Syllabus for opre7330.001.07f taught by Milind Dawande (milind)
- A Review on Various Approach for Process Parameter Optimization of Burnishing Process and TAGUCHI Approach for Optimization
- Surveying, Planning and Scheduling For A Hill Road Work at Kalrayan Hills by Adopting ISS Method
- Weight Optimization of Composite Beams by Finite Element Method
- UT Dallas Syllabus for eco5302.501.07f taught by Chetan Dave (cxd053000)
- Determining the shortest path for Travelling Salesman Problem using Nearest Neighbor Algorithm
- UT Dallas Syllabus for opre7313.001.11s taught by Milind Dawande (milind)
- The Optimizing Multiple Travelling Salesman Problem Using Genetic Algorithm
- Comparative Analysis of Optimization Algorithms Based on Hybrid Soft Computing Algorithm
- tmpBFF4
- Analysis of CAS Using Improved PSO with Fuzzy Based System
- Optimization of Water Distribution Network for Dharampeth Area
- Quantitative Management for Analysis
- Combined Economic and Emission Load Dispatch using PPF using PSO
- Classification of Lung Diseases Using Optimization Techniques
- tmp5056.tmp
- Optimization Approach for Capacitated Vehicle Routing Problem Using Genetic Algorithm
- tmp5DD5.tmp
- Planning of Supply when lead time uncertain
- Mathematics-III
- UT Dallas Syllabus for math6321.521 06u taught by Janos Turi (turi)
- tmpA166.tmp
- A Review on Image Segmentation using Clustering and Swarm Optimization Techniques
- UT Dallas Syllabus for opre7320.001.09s taught by Suresh Sethi (sethi)
- Multi-objective Optimization for Ultrasonic Drilling of GFRP Using Grey Relational Analysis
- tmp82D3.tmp
- Recent Advancements in Scheduling Algorithms in Cloud Computing Environment
- Optimization of Shell and Tube Heat Exchanger

Sign up to vote on this title

UsefulNot usefulRead Free for 30 Days

Cancel anytime.

Close Dialog## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

Close Dialog## This title now requires a credit

Use one of your book credits to continue reading from where you left off, or restart the preview.

Loading