
**Curves and Surfaces in Geometric Modeling: Theory and Algorithms**

Jean Gallier
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104, USA
e-mail: jean@cis.upenn.edu
© Jean Gallier. Please, do not reproduce without permission of the author.
March 9, 2011


To my new daughter Mia, my wife Anne, my son Philippe, and my daughter Sylvie.


Contents

1 Introduction
  1.1 Geometric Methods in Engineering
  1.2 Examples of Problems Using Geometric Modeling

**Part I Basics of Affine Geometry**

2 Basics of Affine Geometry
  2.1 Affine Spaces
  2.2 Examples of Affine Spaces
  2.3 Chasles' Identity
  2.4 Affine Combinations, Barycenters
  2.5 Affine Subspaces
  2.6 Affine Independence and Affine Frames
  2.7 Affine Maps
  2.8 Affine Groups
  2.9 Affine Hyperplanes
  2.10 Problems


**Part II Polynomial Curves and Spline Curves**

3 Introduction to Polynomial Curves
  3.1 Why Parameterized Polynomial Curves?
  3.2 Polynomial Curves of Degree 1 and 2
  3.3 First Encounter with Polar Forms (Blossoming)
  3.4 First Encounter with the de Casteljau Algorithm
  3.5 Polynomial Curves of Degree 3
  3.6 Classification of the Polynomial Cubics
  3.7 Second Encounter with Polar Forms (Blossoming)
  3.8 Second Encounter with the de Casteljau Algorithm
  3.9 Examples of Cubics Defined by Control Points
  3.10 Problems


4 Multiaffine Maps and Polar Forms
  4.1 Multiaffine Maps
  4.2 Affine Polynomials and Polar Forms
  4.3 Polynomial Curves and Control Points
  4.4 Uniqueness of the Polar Form of an Affine Polynomial Map
  4.5 Polarizing Polynomials in One or Several Variables
  4.6 Problems

5 Polynomial Curves as Bézier Curves
  5.1 The de Casteljau Algorithm
  5.2 Subdivision Algorithms for Polynomial Curves
  5.3 The Progressive Version of the de Casteljau Algorithm
  5.4 Derivatives of Polynomial Curves
  5.5 Joining Affine Polynomial Functions
  5.6 Problems

6 B-Spline Curves
  6.1 Introduction: Knot Sequences, de Boor Control Points
  6.2 Infinite Knot Sequences, Open B-Spline Curves
  6.3 Finite Knot Sequences, Finite B-Spline Curves
  6.4 Cyclic Knot Sequences, Closed (Cyclic) B-Spline Curves
  6.5 The de Boor Algorithm
  6.6 The de Boor Algorithm and Knot Insertion
  6.7 Polar Forms of B-Splines
  6.8 Cubic Spline Interpolation
  6.9 Problems

**Part III Polynomial Surfaces and Spline Surfaces**

7 Polynomial Surfaces
  7.1 Polarizing Polynomial Surfaces
  7.2 Bipolynomial Surfaces in Polar Form
  7.3 The de Casteljau Algorithm; Rectangular Surface Patches
  7.4 Total Degree Surfaces in Polar Form
  7.5 The de Casteljau Algorithm for Triangular Surface Patches
  7.6 Directional Derivatives of Polynomial Surfaces
  7.7 Problems


8 Subdivision Algorithms for Polynomial Surfaces
  8.1 Subdivision Algorithms for Triangular Patches
  8.2 Subdivision Algorithms for Rectangular Patches
  8.3 Problems

9 Polynomial Spline Surfaces and Subdivision Surfaces
  9.1 Joining Polynomial Surfaces
  9.2 Spline Surfaces with Triangular Patches
  9.3 Spline Surfaces with Rectangular Patches
  9.4 Subdivision Surfaces
  9.5 Problems

10 Embedding an Affine Space in a Vector Space
  10.1 The "Hat Construction", or Homogenizing
  10.2 Affine Frames of E and Bases of Ê
  10.3 Extending Affine Maps to Linear Maps
  10.4 From Multiaffine Maps to Multilinear Maps
  10.5 Differentiating Affine Polynomial Functions
  10.6 Problems

11 Tensor Products and Symmetric Tensor Products
  11.1 Tensor Products
  11.2 Symmetric Tensor Products
  11.3 Affine Symmetric Tensor Products
  11.4 Properties of Symmetric Tensor Products
  11.5 Polar Forms Revisited
  11.6 Problems

**Part IV Appendices**

A Linear Algebra
  A.1 Vector Spaces
  A.2 Linear Maps
  A.3 Quotient Spaces
  A.4 Direct Sums
  A.5 Hyperplanes and Linear Forms


B Complements of Affine Geometry
  B.1 Affine and Multiaffine Maps
  B.2 Homogenizing Multiaffine Maps
  B.3 Intersection and Direct Sums of Affine Spaces
  B.4 Osculating Flats Revisited

C Topology
  C.1 Metric Spaces and Normed Vector Spaces
  C.2 Continuous Functions, Limits
  C.3 Normed Affine Spaces


D Differential Calculus
  D.1 Directional Derivatives, Total Derivatives
  D.2 Jacobian Matrices

Preface

This book is primarily an introduction to geometric concepts and tools needed for solving problems of a geometric nature with a computer. Our main goal is to provide an introduction to the mathematical concepts needed in tackling problems arising notably in computer graphics, geometric modeling, computer vision, and motion planning, just to mention some key areas. Many problems in the above areas require some geometric knowledge, but in our opinion, books dealing with the relevant geometric material are either too theoretical, or else rather specialized and application-oriented. This book is an attempt to fill this gap. We present a coherent view of geometric methods applicable to many engineering problems at a level that can be understood by a senior undergraduate with a good math background. Thus, this book should be of interest to a wide audience including computer scientists (both students and professionals), mathematicians, and engineers interested in geometric methods (for example, mechanical engineers). In brief, we are writing about Geometric Modeling Methods in Engineering. We refrained from using the expression "computational geometry" because it has a well established meaning which does not correspond to what we have in mind. Although we will touch some of the topics covered in computational geometry (for example, triangulations), we are more interested in dealing with curves and surfaces from an algorithmic point of view. In this respect, we are flirting with the intuitionist's ideal of doing mathematics from a "constructive" point of view. Such a point of view is of course very relevant to computer science.

This book consists of four parts.

• Part I provides an introduction to affine geometry. This ensures that readers are on firm grounds to proceed with the rest of the book, in particular the study of curves and surfaces. This is also useful to establish the notation and terminology. Readers proficient in geometry may omit this section, or use it by need. On the other hand, readers totally unfamiliar with this material will probably have a hard time with the rest of the book. These readers are advised to do some extra reading in order to assimilate some basic knowledge of geometry.

• Part II deals with an algorithmic treatment of polynomial curves (Bézier curves) and spline curves. This material provides the foundations for the algorithmic treatment of polynomial curves and surfaces, which is a main theme of this book.

• Part III deals with an algorithmic treatment of polynomial surfaces (Bézier rectangular or triangular surfaces), and spline surfaces. We also include a section on subdivision surfaces, an exciting and active area of research in geometric modeling and animation, as attested by several papers in SIGGRAPH'98, especially the paper by DeRose et al [24] on the animated character Geri, from the short movie Geri's game.

• Part IV consists of appendices consisting of basics of linear algebra, certain technical proofs that were omitted earlier, complements of affine geometry, analysis, and differential calculus. This part has been included to make the material of parts I–III self-contained. Our advice is to use it by need!

We present some of the main tools used in computer aided geometric design (CAGD), but our goal is not to write another text on the many specialized and practical CAGD methods, and we only provide a cursory coverage of such methods. Luckily, there are excellent texts on CAGD, including Bartels, Beatty, and Barsky [4], Farin [32, 31], Fiorot and Jeannin [35, 36], Riesler [68], Hoschek and Lasser [45], Boehm and Prautzsch [11], and Piegl and Tiller [62]. Similarly, although we cover affine geometry in some detail, we are far from giving a comprehensive treatment of these topics. For such a treatment, we highly recommend Berger [5, 6], Samuel [69], Pedoe [59], Sidler [76], Tisseron [83], Hilbert and Cohn-Vossen [42], Dieudonné [25], and Veblen and Young [85, 86], a great classic. Several sections of this book are inspired by the treatment in one or several of the above texts, and we are happy to thank the authors for providing such inspiration.

We have happily and irrevocably adopted the view that the most transparent manner for presenting much of the theory of polynomial curves and surfaces is to stress the multilinear nature (really multiaffine) of these curves and surfaces. This is in complete agreement with de Casteljau's original spirit, but, as Ramshaw, we are more explicit in our use of multilinear tools. Lyle Ramshaw's remarkably elegant and inspirational DEC-SRC Report, "Blossoming: A connect–the–dots approach to splines" [65], radically changed our perspective on polynomial curves and surfaces. As the reader will discover, much of the algorithmic theory of polynomial curves and surfaces is captured by the three words: Polarize, homogenize, tensorize! The approach (sometimes called blossoming) consists in multilinearizing everything in sight (getting polar forms), which leads very naturally to a presentation of polynomial curves and surfaces in terms of control points (Bézier curves and surfaces). As it turns out, one of the most spectacular applications of these concepts is the treatment of curves and surfaces in terms of control points, a tool extensively used in CAGD. This is why many pages are devoted to an algorithmic treatment of curves and surfaces.

We will be dealing primarily with the following kinds of problems:

• Approximating a shape (curve or surface). We will see how this can be done using polynomial curves or surfaces (also called Bézier curves or surfaces), or spline curves or surfaces.

• Interpolating a set of points, by a curve or a surface. Again, we will see how this can be done using spline curves or spline surfaces.

• Drawing a curve or a surface. The tools and techniques developed for solving the approximation problem will be very useful for solving the other two problems.

Generally, we are primarily interested in the representation and the implementation of concepts and tools used to solve geometric problems. In fact, it is often possible to reduce problems involving certain splines to solving systems of linear equations. Thus, it is very helpful to be aware of efficient methods for numerical matrix analysis. For further information on these topics, readers are referred to Strang [81] and Ciarlet [19]. Strang's beautiful book on applied mathematics is also highly recommended as a general reference [80]. The material presented in this book is related to the classical differential geometry of curves and surfaces; for this material, readers are referred to the excellent texts by Gray [39], do Carmo [26], and Berger and Gostiaux [7]. There are other interesting applications of geometry to computer vision, computer graphics, and solid modeling. Some great references are Koenderink [46] and Faugeras [33] for computer vision, Hoffman [43] for solid modeling, and Metaxas [53] for physics-based deformable models.

Novelties

As far as we know, there is no fully developed modern exposition integrating the basic concepts of affine geometry as well as a presentation of curves and surfaces from the algorithmic point of view in terms of control points (in the polynomial case). There is also no reasonably thorough textbook presentation of the main surface subdivision schemes (Doo-Sabin, Catmull-Clark, Loop), together with a technical discussion of convergence and smoothness.

New Treatment, New Results

This book provides an introduction to affine geometry. As a general rule, background material and rather technical proofs are relegated to appendices. We give an in-depth presentation of polynomial curves and surfaces from an algorithmic point of view. A clean and elegant presentation of control points is obtained by using a construction for embedding an affine space into a vector space (the so-called "hat construction", originating in Berger [5]). The continuity conditions for spline curves and spline surfaces are expressed in terms of polar forms, which yields both geometric and computational insights into the subtle interaction of knots and de Boor control points. We even include an optional chapter (chapter 11) covering tensors and symmetric tensors to provide an in-depth understanding of the foundations of blossoming and a more conceptual view of the computational material on curves and surfaces.

Subdivision surfaces are the topic of Chapter 9 (section 9.4). Subdivision surfaces form an active and promising area of research. They provide an attractive alternative to spline surfaces in modeling applications where the topology of surfaces is rather complex, and where the initial control polyhedron consists of various kinds of faces, not just triangles or rectangles. As far as we know, this is the first textbook presentation of three popular methods due to Doo and Sabin [27, 29, 28], Catmull and Clark [17], and Charles Loop [50]. We discuss Loop's convergence proof in some detail, and for this, we give a crash course on discrete Fourier transforms and (circular) discrete convolutions. A glimpse at subdivision surfaces is given in a new section added to Farin's fourth edition [32]. Subdivision surfaces are also briefly covered in Stollnitz, DeRose, and Salesin [79], but in the context of wavelets and multiresolution representation.

Many algorithms and their implementation

Although one of our main concerns is to be mathematically rigorous, which implies that we give precise definitions and prove almost all of the results in this book, we always keep the algorithmic nature of the mathematical objects under consideration in the forefront. Thus, we devote a great deal of effort to the development and implementation of algorithms to manipulate curves, surfaces, triangulations, etc. We present many algorithms for subdividing and drawing curves and surfaces, all implemented in Mathematica. As a matter of fact, we provide Mathematica code for most of the geometric algorithms presented in this book. These algorithms were used to prepare most of the illustrations of this book. We also urge the reader to write his own algorithms, and we propose many challenging programming projects. Many problems and programming projects are proposed (over 200). Some are routine, some are (very) difficult.

Open Problems

Not only do we present standard material (although sometimes from a fresh point of view), but whenever possible, we state some open problems, thus taking the reader to the cutting edge of the field. For example, we describe very clearly the problem of finding an efficient way to compute control points for C^k-continuous triangular surface splines. We also discuss some of the problems with the convergence and smoothness of subdivision surface methods.

What's not covered in this book

Since this book is already quite long, we have omitted rational curves and rational surfaces, and projective geometry. A good reference on these topics is [31]. We are also writing a text covering these topics rather extensively (and more). We also have omitted solid modeling techniques, methods for rendering implicit curves and surfaces, the finite element method, and wavelets. The first two topics are nicely covered in Hoffman [43], and the finite element method is the subject of so many books that we will not even attempt to mention any references. A remarkably clear presentation of wavelets is given in Stollnitz, DeRose, and Salesin [79], and a more mathematical presentation in Strang [82].

Acknowledgement

This book grew out of lecture notes that I wrote as I have been teaching CIS510, Introduction to Geometric Methods in Computer Science, for the past four years. I wish to thank some students and colleagues for their comments, including Doug DeCarlo, Jaydev Desai, Will Dickinson, Charles Erignac, Hany Farid, Steve Frye, Edith Haber, Andy Hicks, David Jelinek, Ioannis Kakadiaris, Hartmut Liefke, Dimitris Metaxas, Jeff Nimeroff, Rich Pito, Ken Shoemake, Bond-Jay Ting, Deepak Tolani, Dianna Xu, and most of all Raj Iyer, who screened half of the manuscript with a fine tooth comb. Also thanks to Norm Badler for triggering my interest in geometric modeling, and to Marcel Berger, Chris Croke, Ron Donagi, Gerald Farin, Herman Gluck, and David Harbater for sharing some of their geometric secrets with me. Finally, many thanks to Eugenio Calabi for teaching me what I know about differential geometry (and much more!).


Chapter 1

Introduction

1.1 Geometric Methods in Engineering

Geometry, what a glorious subject! For centuries, geometry has played a crucial role in the development of many scientific and engineering disciplines such as astronomy, geodesy, mechanics, ballistics, civil and mechanical engineering, ship building, architecture, and in this century, automobile and aircraft manufacturing, among others. What makes geometry a unique and particularly exciting branch of mathematics is that it is primarily visual. One might say that this is only true of geometry up to the end of the nineteenth century, but even when the objects are higher-dimensional and very abstract, the intuitions behind these fancy concepts almost always come from shapes that can somehow be visualized. On the other hand, it was discovered at the end of the nineteenth century that there was a danger in relying too much on visual intuition, and that this could lead to wrong results or fallacious arguments. What happened then is that mathematicians started using more algebra and analysis in geometry, in order to put it on firmer grounds and to obtain more rigorous proofs. The consequence of the striving for more rigor and the injection of more algebra into geometry is that mathematicians of the beginning of the twentieth century began suppressing geometric intuitions from their proofs. Geometry lost some of its charm and became a rather impenetrable discipline, except for the initiated. It is interesting to observe that most college textbooks of mathematics included a fair amount of geometry up to the forties. Beginning with the fifties, the amount of geometry decreased, to basically disappear in the seventies.

Paradoxically, there seems to be an interesting turn of events. After being neglected for decades, stimulated by computer science, geometry seems to be making a come-back as a fundamental tool used in manufacturing, computer graphics, computer vision, and motion planning, just to mention some key areas. Indeed, with the advent of faster computers, starting in the early sixties, automobile and plane manufacturers realized that it was possible to design cars and planes using computer-aided methods. These methods, pioneered by de Casteljau, Bézier, and Ferguson, used geometric methods. Although not very advanced, the type of geometry used is very elegant. Basically, it is a branch of affine geometry, and it is very useful from the point of view of applications.

We are convinced that geometry will play an important role in computer science and engineering in the years to come. The demand for technology using 3D graphics, virtual reality, animation techniques, etc., is increasing fast, and it is clear that storing and processing complex images and complex geometric models of shapes (face, limbs, organs, etc.) will be required. We will need to understand better how to discretize geometric objects such as curves, surfaces, and volumes. Thus, there are plenty of opportunities for applying these methods to real-world problems.

This book represents an attempt at presenting a coherent view of geometric methods used to tackle problems of a geometric nature with a computer. Our main focus is on curves and surfaces, but our point of view is algorithmic. We concentrate on methods for discretizing curves and surfaces in order to store them and display them efficiently. Thus, we focus on polynomial curves defined in terms of control points, since they are the most efficient class of curves and surfaces from the point of view of design and representation. However, in order to gain a deeper understanding of this theory of curves and surfaces, we present the underlying geometric concepts in some detail, in particular, affine geometry. Furthermore, since this material relies on some algebra and analysis (linear algebra, directional derivatives, etc.), in order to make the book entirely self-contained, we provide some appendices where this background material is presented. We believe that this can be a great way of learning about curves and surfaces, while having fun.

In the next section, we list some problems arising in computer graphics and computer vision that can be tackled using the geometric tools and concepts presented in this book.

1.2 Examples of Problems Using Geometric Modeling

The following is a nonexhaustive listing of several different areas in which geometric methods (using curves and surfaces) play a crucial role.

• Manufacturing
• Medical imaging
• Molecular modeling
• Computational fluid dynamics
• Physical simulation in applied mechanics
• Oceanography, virtual oceans
• Shape reconstruction
• Weather analysis

• Computer graphics (rendering smooth curved shapes)
• Computer animation
• Data compression
• Architecture
• Art (sculpture, 3D images, etc.)

A specific subproblem that often needs to be solved, for example in manufacturing problems or in medical imaging, is to fit a curve or a surface through a set of data points. For simplicity, let us discuss briefly a curve fitting problem.

Problem: Given N + 1 data points x0, ..., xN and a sequence of N + 1 reals u0, ..., uN, with ui < ui+1 for all i, find a C2-continuous curve F such that F(ui) = xi for all i, 0 ≤ i ≤ N.

As stated above, the problem is actually underdetermined. Indeed, there are many different types of curves that solve the above problem (defined by Fourier series, Lagrange interpolants, etc.), and we need to be more specific as to what kind of curve we would like to use. In most cases, efficiency is the dominant factor, and it turns out that piecewise polynomial curves are usually the best choice. Even then, the problem is still underdetermined. However, the problem is no longer underdetermined if we specify some "end conditions", for instance the tangents at x0 and xN. In this case, it can be shown that there is a unique B-spline curve solving the above problem (see section 6.8). What happens is that the interpolating B-spline curve is really determined by some sequence of points d−1, d0, ..., dN+1 called de Boor control points (with d−1 = x0 and dN+1 = xN). Instead of specifying the tangents at x0 and xN, we can specify the control points d0 and dN. Then, it turns out that d1, ..., dN−1 can be computed from x0, ..., xN (and d0, dN) by solving a system of linear equations of the form

    | 1                            | | d0   |   | r0   |
    | α1  β1  γ1                   | | d1   |   | r1   |
    |     α2  β2  γ2               | | d2   |   | r2   |
    |         ...                  | | ...  | = | ...  |
    |     αN−2  βN−2  γN−2         | | dN−2 |   | rN−2 |
    |         αN−1  βN−1  γN−1     | | dN−1 |   | rN−1 |
    |                        1     | | dN   |   | rN   |

where r0 and rN may be chosen arbitrarily, the coefficients αi, βi, γi are easily computed from the uj, and, for a uniform sequence of reals ui, ri = (ui+1 − ui−1) xi for 1 ≤ i ≤ N − 1 (see section 6.8). The next figure shows N + 1 = 8 data points and a C2-continuous spline curve F passing through these points. The other points d−1, ..., d8 are also shown.
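The tridiagonal system above is cheap to solve. The following sketch, in Python with NumPy rather than the book's Mathematica, computes the interior de Boor control points; as an assumption for illustration it uses the classical uniform-knot normalization d(i−1) + 4 d(i) + d(i+1) = 6 x(i) for the interior rows (the general coefficients αi, βi, γi are derived from the knots uj in section 6.8), and the function name is our own.

```python
import numpy as np

def de_boor_points(x, d0, dN):
    """Solve for the interior de Boor control points d1..d_{N-1} of an
    interpolating cubic B-spline, assuming uniform knots and the classical
    normalization d[i-1] + 4 d[i] + d[i+1] = 6 x[i] for 1 <= i <= N-1.

    x  : (N+1, 2) array of data points x0..xN
    d0, dN : the chosen end control points (the "end conditions")
    Returns the full control polygon [x0, d0, d1, ..., dN, xN],
    since d_{-1} = x0 and d_{N+1} = xN by convention.
    """
    x = np.asarray(x, dtype=float)
    N = len(x) - 1
    n = N - 1                      # number of unknowns d1..d_{N-1}
    A = np.zeros((n, n))
    r = 6.0 * x[1:N].copy()
    for i in range(n):
        A[i, i] = 4.0              # diagonal of the tridiagonal matrix
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    # fold the known end control points into the right-hand side
    r[0] -= d0
    r[-1] -= dN
    d_inner = np.linalg.solve(A, r)
    return np.vstack([x[0], d0, d_inner, dN, x[N]])
```

In practice one would exploit the banded structure (e.g. a Thomas-algorithm solve) instead of a dense `np.linalg.solve`, but the dense version keeps the sketch short.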

[Figure 1.1: A C2 interpolation spline curve passing through the points x0, x1, x2, x3, x4, x5, x6, x7, with de Boor control points d−1 = x0, d0, ..., d7, d8 = x7.]

The previous example suggests that curves can be defined in terms of control points. Indeed, specifying curves and surfaces in terms of control points is one of the major techniques used in geometric design. Many manufacturing problems involve fitting a surface through some data points. Let us mention automobile design, plane design (wings, fuselage, engine parts), ship hulls, ski boots, etc. For example, in medical imaging, one may want to find the contour of some organ, say the heart, given some discrete data. One may do this by fitting a B-spline curve through the data points. In computer animation, one may want to have a person move from one location to another, passing through some intermediate locations, in a smooth manner. Again, this problem can be solved using B-splines. We could go on and on with many other examples, but it is now time to review some basics of affine geometry!

Part I

Basics of Affine Geometry


Chapter 2

Basics of Affine Geometry

2.1 Affine Spaces

Geometrically, curves and surfaces are usually considered to be sets of points with some special properties, living in a space consisting of "points." Typically, one is also interested in geometric properties invariant under certain transformations, for example, translations, rotations, projections, etc. One could model the space of points as a vector space, but this is not very satisfactory for a number of reasons. One reason is that the point corresponding to the zero vector (0), called the origin, plays a special role, when there is really no reason to have a privileged origin. Another reason is that certain notions, such as parallelism, are handled in an awkward manner. But the deeper reason is that vector spaces and affine spaces really have different geometries. The geometric properties of a vector space are invariant under the group of bijective linear maps, whereas the geometric properties of an affine space are invariant under the group of bijective affine maps, and these two groups are not isomorphic. Roughly speaking, there are more affine maps than linear maps.

Affine spaces provide a better framework for doing geometry. In particular, it is possible to deal with points, curves, surfaces, etc., in an intrinsic manner, that is, independently of any specific choice of a coordinate system. As in physics, this is highly desirable to really understand what's going on. Of course, coordinate systems have to be chosen to finally carry out computations, but one should learn to resist the temptation to resort to coordinate systems until it is really necessary.

Affine spaces are the right framework for dealing with motions, trajectories, and physical forces, among other things. Thus, affine geometry is crucial to a clean presentation of kinematics, dynamics, and other parts of physics (for example, elasticity). After all, a rigid motion is an affine map, but not a linear map in general. Also, given an m × n matrix A and a vector b ∈ Rm, the set U = {x ∈ Rn | Ax = b} of solutions of the system Ax = b is an affine space, but not a vector space (linear space) in general.
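As a quick sanity check of the last claim, the following Python snippet (with an arbitrary illustrative choice of A, b, and a kernel vector) verifies that an affine combination of two solutions of Ax = b, i.e. one whose weights sum to 1, is again a solution, while their plain sum is not.

```python
import numpy as np

# The solution set U = {x in R^n | Ax = b} is an affine space: affine
# combinations of solutions stay in U, arbitrary linear combinations do not.
# A, b and the kernel vector below are illustrative choices, not from the text.
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  3.0]])
b = np.array([4.0, 5.0])

x1 = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution of Ax = b
x2 = x1 + np.array([7.0, -3.0, 1.0])        # add a vector in the kernel of A

# affine combination: weights 0.3 + 0.7 = 1, so A @ y = b again
y = 0.3 * x1 + 0.7 * x2
# plain sum: weights 1 + 1 = 2, so A @ z = 2b, which is not b
z = x1 + x2
```

The point is that U is closed under combinations with weights summing to 1 (barycenters), which is exactly the affine structure discussed below.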

CHAPTER 2. BASICS OF AFFINE GEOMETRY

Use coordinate systems only when needed!

This chapter proceeds as follows. We take advantage of the fact that almost every affine concept is the counterpart of some concept in linear algebra. We begin by defining affine spaces, stressing the physical interpretation of the definition in terms of points (particles) and vectors (forces). Corresponding to linear combinations of vectors, we define affine combinations of points (barycenters), realizing that we are forced to restrict our attention to families of scalars adding up to 1. Corresponding to linear subspaces, we introduce affine subspaces as subsets closed under affine combinations. Then, we characterize affine subspaces in terms of certain vector spaces called their directions. This allows us to define a clean notion of parallelism. Corresponding to linear independence and bases, we define affine independence and affine frames. We also define convexity. Corresponding to linear maps, we define affine maps as maps preserving affine combinations. We show that every affine map is completely defined by the image of one point and a linear map. We investigate briefly some simple affine maps, the translations and the central dilatations. Certain technical proofs and some complementary material on affine geometry are relegated to an appendix (see Chapter B). Our presentation of affine geometry is far from being comprehensive, and it is biased towards the algorithmic geometry of curves and surfaces. For more details, the reader is referred to Pedoe [59], Snapper and Troyer [77], Samuel [69], Tisseron [83], Berger [5, 6], and Hilbert and Cohn-Vossen [42].

Suppose we have a particle moving in 3-space and that we want to describe the trajectory of this particle. If one looks up a good textbook on dynamics, such as Greenwood [40], one finds out that the particle is modeled as a point, and that the position of this point x is determined with respect to a "frame" in R3 by a vector. Curiously, the notion of a frame is rarely defined precisely, but it is easy to infer that a frame is a pair (O, (e1, e2, e3)) consisting of an origin O (which is a point) together with a basis of three vectors (e1, e2, e3). For example, the standard frame in R3 has origin O = (0, 0, 0) and the basis of three vectors e1 = (1, 0, 0), e2 = (0, 1, 0), and e3 = (0, 0, 1). The position of a point x is then defined by the "unique vector" from O to x. But wait a minute, this definition seems to be defining frames and the position of a point without defining what a point is!

Well, let us identify points with elements of R3. Then, given any two points a = (a1, a2, a3) and b = (b1, b2, b3), there is a unique free vector, denoted ab, from a to b: the vector

ab = (b1 − a1, b2 − a2, b3 − a3).

Note that b = a + ab, addition being understood as addition in R3. Then, in the standard frame, given a point x = (x1, x2, x3), the position of x is the vector Ox = (x1, x2, x3), which coincides with the point itself. Thus, in the standard frame, points and vectors are identified.

2.1. AFFINE SPACES

Figure 2.1: Points and free vectors

What if we pick a frame with a different origin, say Ω = (ω1, ω2, ω3), but the same basis vectors (e1, e2, e3)? This time, the point x = (x1, x2, x3) is defined by two position vectors: Ox = (x1, x2, x3) in the frame (O, (e1, e2, e3)), and Ωx = (x1 − ω1, x2 − ω2, x3 − ω3) in the frame (Ω, (e1, e2, e3)). This is because

Ox = OΩ + Ωx and OΩ = (ω1, ω2, ω3).

We note that in the second frame (Ω, (e1, e2, e3)), points and position vectors are no longer identified. This gives us evidence that points are not vectors. It may be computationally convenient to deal with points using position vectors, but such a treatment is not frame invariant, which has undesirable effects. Inspired by physics, it is important to define points and properties of points that are frame invariant.

An undesirable side-effect of the present approach shows up if we attempt to define linear combinations of points. First, let us review the notion of linear combination of vectors. Given two vectors u and v of coordinates (u1, u2, u3) and (v1, v2, v3) with respect to the basis (e1, e2, e3), for any two scalars λ, µ, we can define the linear combination λu + µv as the vector of coordinates

(λu1 + µv1, λu2 + µv2, λu3 + µv3).

If we choose a different basis (e′1, e′2, e′3), and if the matrix P expressing the vectors (e′1, e′2, e′3) over the basis (e1, e2, e3) is

P = ( a1 b1 c1
      a2 b2 c2
      a3 b3 c3 ),

which means that the columns of P are the coordinates of the e′j over the basis (e1, e2, e3), then since

u1 e1 + u2 e2 + u3 e3 = u′1 e′1 + u′2 e′2 + u′3 e′3 and v1 e1 + v2 e2 + v3 e3 = v′1 e′1 + v′2 e′2 + v′3 e′3,

it is easy to see that the coordinates (u1, u2, u3) and (v1, v2, v3) of u and v with respect to the basis (e1, e2, e3) are given in terms of the coordinates (u′1, u′2, u′3) and (v′1, v′2, v′3) of u and v with respect to the basis (e′1, e′2, e′3) by the matrix equations

(u1, u2, u3)ᵀ = P (u′1, u′2, u′3)ᵀ and (v1, v2, v3)ᵀ = P (v′1, v′2, v′3)ᵀ.

From the above, we get

(u′1, u′2, u′3)ᵀ = P⁻¹ (u1, u2, u3)ᵀ and (v′1, v′2, v′3)ᵀ = P⁻¹ (v1, v2, v3)ᵀ,

and by linearity, the coordinates (λu′1 + µv′1, λu′2 + µv′2, λu′3 + µv′3) of λu + µv with respect to the basis (e′1, e′2, e′3) are given by

(λu′1 + µv′1, λu′2 + µv′2, λu′3 + µv′3)ᵀ = λ P⁻¹ (u1, u2, u3)ᵀ + µ P⁻¹ (v1, v2, v3)ᵀ = P⁻¹ (λu1 + µv1, λu2 + µv2, λu3 + µv3)ᵀ.

Everything worked out because the change of basis does not involve a change of origin. On the other hand, if we consider the change of frame from the frame (O, (e1, e2, e3)) to the frame (Ω, (e1, e2, e3)), where OΩ = (ω1, ω2, ω3), then given two points a and b of coordinates (a1, a2, a3) and (b1, b2, b3) with respect to the frame (O, (e1, e2, e3)), and of coordinates (a′1, a′2, a′3) and (b′1, b′2, b′3) with respect to the frame (Ω, (e1, e2, e3)), since

(a′1, a′2, a′3) = (a1 − ω1, a2 − ω2, a3 − ω3) and (b′1, b′2, b′3) = (b1 − ω1, b2 − ω2, b3 − ω3),

the coordinates of λa + µb with respect to the frame (O, (e1, e2, e3)) are

(λa1 + µb1, λa2 + µb2, λa3 + µb3),

but the coordinates (λa′1 + µb′1, λa′2 + µb′2, λa′3 + µb′3) of λa + µb with respect to the frame (Ω, (e1, e2, e3))
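The basis-independence of linear combinations of vectors can be checked numerically. The sketch below (with an arbitrary invertible P; all values are hypothetical) verifies that changing basis commutes with taking a linear combination:

```python
import numpy as np

# Coordinates change by u = P u', so u' = P^{-1} u.  A linear combination of
# vectors means the same thing in either basis because
# P^{-1}(λu + µv) = λ P^{-1} u + µ P^{-1} v.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # columns: e'_j expressed over (e1, e2, e3)
u = np.array([1.0, 2.0, 3.0])     # coordinates over (e1, e2, e3)
v = np.array([0.0, -1.0, 1.0])
lam, mu = 2.0, -3.0

Pinv = np.linalg.inv(P)
lhs = Pinv @ (lam * u + mu * v)             # combine, then change basis
rhs = lam * (Pinv @ u) + mu * (Pinv @ v)    # change basis, then combine
assert np.allclose(lhs, rhs)
```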

a3 + v3 ). where we forget the vector space structure. are all that is needed to deﬁne the abstract notion of aﬃne space (or aﬃne structure). 17 Thus. unless λ + µ = 1. A clean way to handle the problem of frame invariance and to deal with points in a more intrinsic manner is to make a clearer distinction between points and vectors. This action + : R3 × R3 → R3 satisﬁes some crucial properties.1. The eﬀect − → of applying a force (free vector) u ∈ E to a point a ∈ E is a translation. v we obtain the point → a + − = (a1 + v1 . We duplicate R3 into two copies. a2 . u v u v → − and for any two points a. − → a + 0 = a. . but the notion of linear combination of points is frame dependent. the ﬁrst copy corresponding to points. a3 ) and any vector − = (v1 . By this. b. it is natural that E is a vector space. Since − → translations can be composed.2. − . although trivial in the case of R3 . some restriction is needed: the scalar coeﬃcients must add up to 1. considered as physical particles. we discovered a major diﬀerence between vectors and points: the notion of linear combination of vectors is basis independent. AFFINE SPACES → → → of λa + µb with respect to the frame (Ω. (−1 . and the second copy corresponding to free vectors. we can think of the − → elements of E as forces moving the points in E. The basic idea is − → to consider two (distinct) sets E and E . Furthermore. we make explicit the important fact that the vector space R3 acts → on the set of points R3 : given any point a = (a1 . a2 + v2 . v → which can be thought of as the result of translating a to b using the vector − . we mean − → that for every force u ∈ E . In order to salvage the notion of linear combination of points. → → → → (a + − ) + − = a + (− + − ). λa2 + µb2 − (λ + µ)ω2. where E is a set of points (with no structure) and − → E is a vector space (of free vectors) acting on the set E. v3 ). λa2 + µb2 − ω2 . v2 . It turns out that the above properties. λa3 + µb3 − ω3 ). 
there is a unique free vector ab such that → − b = a + ab. −3 )) are e e2 e (λa1 + µb1 − (λ + µ)ω1 . λa3 + µb3 − (λ + µ)ω3 ) which are diﬀerent from (λa1 + µb1 − ω1 . We can v − is placed such that its origin coincides with a and that its tip coincides with → imagine that v b. For example. the action of the force u is to “move” every point a ∈ E to the point a + u ∈ E obtained by the translation corresponding to u viewed as a vector. Intuitively. where the vector space structure is important.
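The frame dependence described above is easy to reproduce numerically. In this hypothetical sketch, the same combination λa + µb is computed once in the frame at O and once in the frame at Ω; the two recipes agree only when λ + µ = 1:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])     # coordinates in the frame (O, (e1, e2, e3))
b = np.array([4.0, 0.0, 1.0])
omega = np.array([1.0, 1.0, 1.0]) # the vector OΩ

def combine_in_frame(lam, mu):
    """Compute λa + µb at origin O and at origin Ω; express both at O."""
    in_O = lam * a + mu * b
    in_Omega = lam * (a - omega) + mu * (b - omega)
    return in_O, in_Omega + omega

p, q = combine_in_frame(2.0, 3.0)      # λ+µ = 5: results disagree
assert not np.allclose(p, q)
p, q = combine_in_frame(0.25, 0.75)    # λ+µ = 1: results agree
assert np.allclose(p, q)
```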

Did you say "A fine space"?

Definition 2.1.1. An affine space is either the empty set, or a triple ⟨E, →E, +⟩ consisting of a nonempty set E (of points), a vector space →E (of translations, or free vectors), and an action + : E × →E → E, satisfying the following conditions.

(AF1) a + 0 = a, for every a ∈ E.

(AF2) (a + u) + v = a + (u + v), for every a ∈ E, and every u, v ∈ →E.

(AF3) For any two points a, b ∈ E, there is a unique u ∈ →E such that a + u = b.

The unique vector u ∈ →E such that a + u = b is denoted by ab, or sometimes by b − a. Thus, we also write

b = a + ab (or even b = a + (b − a)).

The dimension of the affine space ⟨E, →E, +⟩ is the dimension dim(→E) of the vector space →E; it is denoted as dim(E). Note that

a(a + v) = v

for all a ∈ E and all v ∈ →E, since a(a + v) is the unique vector such that a + v = a + a(a + v). Thus, b = a + v is equivalent to ab = v. For simplicity, it is assumed that all vector spaces under consideration are defined over the field R of real numbers, and that all families of vectors and scalars are finite.

The axioms defining an affine space ⟨E, →E, +⟩ can be interpreted intuitively as saying that E and →E are two different ways of looking at the same object, but wearing different sets of glasses, the second set of glasses depending on the choice of an "origin" in E. Indeed, we can choose to look at the points in E, forgetting that every pair (a, b) of points defines a unique vector ab in →E, or we can choose to look at the vectors u in →E, forgetting the points in E. Furthermore, if we also pick any point a in E, a point that can be viewed as an origin in E, then we can recover all the points in E as the translated points a + u for all u ∈ →E. This can be formalized by defining two maps between E and →E.
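A minimal model of Definition 2.1.1 can be sketched in code (hypothetical types, not from the book): points carry no algebraic structure of their own, and the only operation is the action of a vector on a point. The axioms can then be checked on sample values:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    """A point of E: no addition of points, only the action point + vector."""
    x: float
    y: float
    z: float
    def __add__(self, u):            # the action + : E x vec(E) -> E
        return Point(self.x + u[0], self.y + u[1], self.z + u[2])
    def to(self, other):             # the unique vector ab of axiom (AF3)
        return (other.x - self.x, other.y - self.y, other.z - self.z)

a = Point(1.0, 2.0, 3.0)
b = Point(0.0, -1.0, 5.0)
u, v = (1.0, 0.0, 2.0), (-2.0, 3.0, 0.0)

assert a + (0.0, 0.0, 0.0) == a                               # (AF1)
assert (a + u) + v == a + (u[0]+v[0], u[1]+v[1], u[2]+v[2])   # (AF2)
assert a + a.to(b) == b                                       # (AF3)
```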

The following diagram gives an intuitive picture of an affine space.

Figure 2.2: Intuitive picture of an affine space

For every a ∈ E, consider the mapping from →E to E:

u → a + u,

where u ∈ →E, and consider the mapping from E to →E:

b → ab,

where b ∈ E. The composition of the first mapping with the second is

u → a + u → a(a + u),

which, in view of (AF3), yields u. The composition of the second with the first mapping is

b → ab → a + ab,

which, in view of (AF3), yields b. Thus, these compositions are the identity from →E to →E and the identity from E to E, and the mappings are both bijections. When we identify E to →E via the mapping b → ab, we say that we consider E as the vector space obtained by taking a as the origin in E, and we denote it as Ea. Thus, an affine space ⟨E, →E, +⟩ is a way of defining a vector space structure on a set of points E, without making a commitment to a fixed origin in E. Nevertheless, as soon as we commit to an origin a in E, we can view E as the vector space Ea. However, we urge the reader to think of E as a physical set of points and of →E as a set of forces acting on E, rather than reducing E to some isomorphic copy of Rn. After all, points are points, and not vectors! The vector space →E is called the vector space associated with E. For notational simplicity, we will often denote an affine space ⟨E, →E, +⟩ as (E, →E), or even as E.

2.2 Examples of Affine Spaces

Any vector space →E has an affine space structure specified by choosing E = →E, and letting + be addition in the vector space →E. We will refer to this affine structure on a vector space as the canonical (or natural) affine structure on →E. In particular, the vector space Rn can be viewed as an affine space denoted as An. The affine space An is called the real affine space of dimension n. In order to distinguish between the double role played by members of Rn, points and vectors, we will denote points as row vectors, and vectors as column vectors. Thus, the action of the vector space Rn over the set Rn, simply viewed as a set of points, is given by

(a1, . . . , an) + (u1, . . . , un)ᵀ = (a1 + u1, . . . , an + un).

In most cases, we will consider n = 1, 2, 3.

Let us now give an example of an affine space which is not given as a vector space (at least, not in an obvious fashion). Consider the subset L of A2 consisting of all points (x, y) satisfying the equation

x + y − 1 = 0.

The set L is the line of slope −1 passing through the points (1, 0) and (0, 1). The line L can be made into an official affine space by defining the action + : L × R → L of R on L such that, for every point (x, 1 − x) on L and any u ∈ R,

(x, 1 − x) + u = (x + u, 1 − x − u).

It is immediately verified that this action makes L into an affine space. For example, for any two points a = (a1, 1 − a1) and b = (b1, 1 − b1) on L, the unique (vector) u ∈ R such that b = a + u is u = b1 − a1. Note that the vector space R is isomorphic to the line of equation x + y = 0 passing through the origin.

One should be careful about the overloading of the addition symbol +. Addition is well-defined on vectors, as in u + v, and the translate a + u of a point a ∈ E by a vector u ∈ →E is also well-defined, but addition of points a + b does not make sense. In this respect, the notation b − a for the unique vector u such that b = a + u is somewhat confusing, since it suggests that points can be subtracted (but not added!). However, we will see in section 10.1 that it is possible to make sense of linear combinations of points, and even mixed linear combinations of points and vectors.

2.2. EXAMPLES OF AFFINE SPACES

Figure 2.3: An affine space: the line of equation x + y − 1 = 0

Similarly, consider the subset H of A3 consisting of all points (x, y, z) satisfying the equation

x + y + z − 1 = 0.

The set H is the plane passing through the points (1, 0, 0), (0, 1, 0), and (0, 0, 1). The plane H can be made into an official affine space by defining the action + : H × R2 → H of R2 on H such that, for every point (x, y, 1 − x − y) on H and any (u, v)ᵀ ∈ R2,

(x, y, 1 − x − y) + (u, v)ᵀ = (x + u, y + v, 1 − x − u − y − v).

For a slightly wilder example, consider the subset P of A3 consisting of all points (x, y, z) satisfying the equation

x² + y² − z = 0.

The set P is a paraboloid of revolution, with axis Oz. The surface P can be made into an official affine space by defining the action + : P × R2 → P of R2 on P such that, for every point (x, y, x² + y²) on P and any (u, v)ᵀ ∈ R2,

(x, y, x² + y²) + (u, v)ᵀ = (x + u, y + v, (x + u)² + (y + v)²).

This should dispel any idea that affine spaces are dull. Affine spaces not already equipped with an obvious vector space structure arise in projective geometry.
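The paraboloid action above can be sketched directly (illustrative values only): a point of P is determined by its first two coordinates, and R2 acts by translating them and lifting back onto the surface:

```python
def act_P(p, w):
    """Action of (u, v) in R^2 on a point (x, y, x^2 + y^2) of the paraboloid P."""
    x, y, _ = p
    u, v = w
    return (x + u, y + v, (x + u)**2 + (y + v)**2)

p = (1.0, 2.0, 5.0)                              # on P, since 5 = 1^2 + 2^2
q = act_P(p, (0.5, -1.0))
assert abs(q[2] - (q[0]**2 + q[1]**2)) < 1e-12   # q is still on P
assert act_P(p, (0.0, 0.0)) == p                 # (AF1)

# (AF2): acting by w1 then w2 equals acting by w1 + w2
w1, w2 = (0.3, 0.7), (-1.1, 0.2)
r1 = act_P(act_P(p, w1), w2)
r2 = act_P(p, (w1[0] + w2[0], w1[1] + w2[1]))
assert all(abs(s - t) < 1e-12 for s, t in zip(r1, r2))
```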

Figure 2.4: Points and corresponding vectors in affine geometry

2.3 Chasles' Identity

Given any three points a, b, c ∈ E, since c = a + ac, b = a + ab, and c = b + bc, we get

c = b + bc = (a + ab) + bc = a + (ab + bc),

and thus, by (AF3),

ab + bc = ac,

which is known as Chasles' identity. Since a = a + aa and, by (AF1), a = a + 0, we get aa = 0 by (AF3). Thus, letting a = c in Chasles' identity, we get

ba = −ab.

Given any four points a, b, c, d ∈ E, since, by Chasles' identity,

ab + bc = ad + dc = ac,

we have ab = dc iff bc = ad (the parallelogram law).

2.4 Affine Combinations, Barycenters

A fundamental concept in linear algebra is that of a linear combination. The corresponding concept in affine geometry is that of an affine combination, also called a barycenter. However, there is a problem with the naive approach involving a coordinate system, as we saw in section 2.1. Since this problem is the reason for introducing affine combinations, at the risk of boring certain readers, we give another example showing what goes wrong if we are not careful in defining linear combinations of points.

2.4. AFFINE COMBINATIONS, BARYCENTERS

Figure 2.5: Two coordinate systems in R2

Consider R2 as an affine space, under its natural coordinate system with origin O = (0, 0) and basis vectors (1, 0)ᵀ and (0, 1)ᵀ. Given any two points a = (a1, a2) and b = (b1, b2), it is natural to define the affine combination λa + µb as the point of coordinates

(λa1 + µb1, λa2 + µb2).

Thus, when a = (−1, −1) and b = (2, 2), the point a + b is the point c = (1, 1). However, let us now consider the new coordinate system with respect to the origin c = (1, 1) (and the same basis vectors). This time, the coordinates of a are (−2, −2), the coordinates of b are (1, 1), and the point a + b is the point d of coordinates (−1, −1). However, it is clear that the point d is identical to the origin O = (0, 0) of the first coordinate system. Thus, a + b corresponds to two different points depending on which coordinate system is used for its computation!

Thus, some extra condition is needed in order for affine combinations to make sense. It turns out that if the scalars sum up to 1, the definition is intrinsic, as the following lemma shows.

Lemma 2.4.1. Given an affine space E, let (ai)i∈I be a family of points in E, and let (λi)i∈I be a family of scalars. For any two points a, b ∈ E, the following properties hold:

(1) If ∑i∈I λi = 1, then

a + ∑i∈I λi·aai = b + ∑i∈I λi·bai.

(2) If ∑i∈I λi = 0, then

∑i∈I λi·aai = ∑i∈I λi·bai.

Proof. (1) By Chasles' identity (see section 2.3), we have

a + ∑i∈I λi·aai = a + ∑i∈I λi·(ab + bai)
              = a + (∑i∈I λi)·ab + ∑i∈I λi·bai
              = a + ab + ∑i∈I λi·bai     since ∑i∈I λi = 1
              = b + ∑i∈I λi·bai          since b = a + ab.

(2) We also have

∑i∈I λi·aai = ∑i∈I λi·(ab + bai) = (∑i∈I λi)·ab + ∑i∈I λi·bai = ∑i∈I λi·bai,

since ∑i∈I λi = 0.

Thus, by lemma 2.4.1, for any family of points (ai)i∈I in E, for any family (λi)i∈I of scalars such that ∑i∈I λi = 1, the point

x = a + ∑i∈I λi·aai

is independent of the choice of the origin a ∈ E. The unique point x is called the barycenter (or barycentric combination, or affine combination) of the points ai assigned the weights λi, and it is denoted as ∑i∈I λi·ai. Note that the barycenter x of the family of weighted points ((ai, λi))i∈I is the unique point such that

ax = ∑i∈I λi·aai for every a ∈ E,

and setting a = x, the point x is the unique point such that

∑i∈I λi·xai = 0.

In physical terms, the barycenter is the center of mass of the family of weighted points ((ai, λi))i∈I (where the masses have been normalized, so that ∑i∈I λi = 1, and negative masses are allowed).

In dealing with barycenters, it is convenient to introduce the notion of a weighted point, which is just a pair (a, λ), where a ∈ E is a point, and λ ∈ R is a scalar. Then, given a family of weighted points ((ai, λi))i∈I, where ∑i∈I λi = 1, we also say that the point ∑i∈I λi·ai is the barycenter of the family of weighted points ((ai, λi))i∈I.

The figure below illustrates the geometric construction of the barycenters g1 and g2 of the weighted points (a, 1/4), (b, 1/4), and (c, 1/2), and of the weighted points (a, −1), (b, 1), and (c, 1).
The point

g1 = (1/4)a + (1/4)b + (1/2)c

can be constructed geometrically as the middle of the segment joining c to the middle (1/2)a + (1/2)b of the segment (a, b), since

g1 = (1/2)((1/2)a + (1/2)b) + (1/2)c.

The point

g2 = −a + b + c

can be constructed geometrically as the point such that the middle (1/2)b + (1/2)c of the segment (b, c) is the middle of the segment (a, g2), since

g2 = −a + 2((1/2)b + (1/2)c).

Remarks:

(1) For all m ≥ 2, it is easy to prove that the barycenter of m weighted points can be obtained by repeated computations of barycenters of two weighted points.

(2) When ∑i∈I λi = 0, the vector ∑i∈I λi·aai does not depend on the point a, and we may denote it as ∑i∈I λi·ai. This observation will be used in Chapter 10.1 to define a vector space in which linear combinations of both points and vectors make sense, regardless of the value of ∑i∈I λi.

Later on, we will see that a polynomial curve can be defined as a set of barycenters of a fixed number of points. For example, let (a, b, c, d) be a sequence of points in A2. Observe that

(1 − t)³ + 3t(1 − t)² + 3t²(1 − t) + t³ = 1,

since the sum on the left-hand side is obtained by expanding (t + (1 − t))³ = 1 using the binomial formula. Thus,

(1 − t)³ a + 3t(1 − t)² b + 3t²(1 − t) c + t³ d

is a well-defined affine combination. Then, we can define the curve F : A → A2 such that

F(t) = (1 − t)³ a + 3t(1 − t)² b + 3t²(1 − t) c + t³ d.

Such a curve is called a Bézier curve, and (a, b, c, d) are called its control points. Note that the curve passes through a and d, but generally not through b and c. We will see in the next chapter how any point F(t) on the curve can be constructed using an algorithm performing three affine interpolation steps (the de Casteljau algorithm).

Figure 2.6: Barycenters: g1 = (1/4)a + (1/4)b + (1/2)c, and g2 = −a + b + c
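The Bézier cubic F(t) can be evaluated both from its Bernstein form and by the repeated affine interpolations of the de Casteljau algorithm mentioned above; the sketch below (with hypothetical control points) checks that the two agree and that the curve passes through a and d:

```python
import numpy as np

a, b, c, d = (np.array(p) for p in [(0., 0.), (1., 2.), (3., 2.), (4., 0.)])

def bernstein(t):
    # F(t) = (1-t)^3 a + 3t(1-t)^2 b + 3t^2(1-t) c + t^3 d
    return (1-t)**3 * a + 3*t*(1-t)**2 * b + 3*t**2*(1-t) * c + t**3 * d

def de_casteljau(t):
    # Three rounds of affine interpolation between consecutive control points.
    pts = [a, b, c, d]
    while len(pts) > 1:
        pts = [(1-t)*p + t*q for p, q in zip(pts, pts[1:])]
    return pts[0]

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert np.allclose(bernstein(t), de_casteljau(t))
assert np.allclose(bernstein(0.0), a) and np.allclose(bernstein(1.0), d)
```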

2.5 Affine Subspaces

In linear algebra, a (linear) subspace can be characterized as a nonempty subset of a vector space closed under linear combinations. In affine spaces, the notion corresponding to the notion of (linear) subspace is the notion of affine subspace. It is natural to define an affine subspace as a subset of an affine space closed under affine combinations.

Definition 2.5.1. Given an affine space ⟨E, →E, +⟩, a subset V of E is an affine subspace if for every family of weighted points ((ai, λi))i∈I in V such that ∑i∈I λi = 1, the barycenter ∑i∈I λi·ai belongs to V.

An affine subspace is also called a flat by some authors. The empty set is trivially an affine subspace, and every intersection of affine subspaces is an affine subspace.

As an example, consider the subset U of A2 defined by

U = {(x, y) ∈ R2 | ax + by = c},

i.e., the set of solutions of the equation

ax + by = c,

where it is assumed that a ≠ 0 or b ≠ 0. Given any m points (xi, yi) ∈ U and any m scalars λi such that λ1 + · · · + λm = 1, we claim that

λ1(x1, y1) + · · · + λm(xm, ym) ∈ U.

Indeed, (xi, yi) ∈ U means that axi + byi = c, and if we multiply both sides of this equation by λi and add up the resulting m equations, we get

(λ1 ax1 + λ1 by1) + · · · + (λm axm + λm bym) = λ1 c + · · · + λm c = c,

since λ1 + · · · + λm = 1, which shows that

a(λ1 x1 + · · · + λm xm) + b(λ1 y1 + · · · + λm ym) = c,

i.e., that

λ1(x1, y1) + · · · + λm(xm, ym) = (λ1 x1 + · · · + λm xm, λ1 y1 + · · · + λm ym) ∈ U.

Thus, U is an affine subspace of A2. In fact, it is just a usual line in A2. It turns out that U is closely related to the subset of R2 defined by

→U = {(x, y) ∈ R2 | ax + by = 0},

i.e., the set of solutions of the homogeneous equation

ax + by = 0

obtained by setting the right-hand side of ax + by = c to zero. Indeed, for any m scalars λi, the same calculation as above yields that

λ1(x1, y1) + · · · + λm(xm, ym) ∈ →U,

this time without any restriction on the λi, since the right-hand side of the equation is null. Thus, →U is a subspace of R2. In fact, →U is one-dimensional, and it is just a usual line in R2. This line can be identified with a line passing through the origin of A2, a line which is parallel to the line U of equation ax + by = c.

Now, if (x0, y0) is any point in U, we claim that

U = (x0, y0) + →U,

where

(x0, y0) + →U = {(x0 + u1, y0 + u2) | (u1, u2) ∈ →U}.

First, (x0, y0) + →U ⊆ U, since ax0 + by0 = c and au1 + bu2 = 0 for all (u1, u2) ∈ →U. Second, if (x, y) ∈ U, then ax + by = c, and since we also have ax0 + by0 = c, by subtraction, we get

a(x − x0) + b(y − y0) = 0,

which shows that (x − x0, y − y0) ∈ →U, and thus (x, y) ∈ (x0, y0) + →U. Hence, we also have U ⊆ (x0, y0) + →U, and so U = (x0, y0) + →U.

The above example shows that the affine line U defined by the equation ax + by = c is obtained by "translating" the parallel line →U of equation ax + by = 0 passing through the origin. In fact, given any point (x0, y0) ∈ U, U = (x0, y0) + →U.
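The closure computation above can be replayed numerically (with the hypothetical line 2x + 3y = 6; note that the weights may be negative as long as they sum to 1):

```python
import numpy as np

# Points on the line 2x + 3y = 6 stay on the line under affine combinations.
a_, b_, c_ = 2.0, 3.0, 6.0
on_U = [np.array([3.0, 0.0]), np.array([0.0, 2.0]), np.array([-3.0, 4.0])]
lams = [0.5, -1.5, 2.0]                      # sum to 1, negative weights allowed

x = sum(l * p for l, p in zip(lams, on_U))   # an affine combination of the three
assert abs(a_ * x[0] + b_ * x[1] - c_) < 1e-12
```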

This is a general situation. Affine subspaces can be characterized in terms of subspaces of →E. Given any m × n matrix A and any vector c ∈ Rm, the subset U of An defined by

U = {x ∈ Rn | Ax = c}

is an affine subspace of An, and if we consider the corresponding homogeneous equation Ax = 0, the set

→U = {x ∈ Rn | Ax = 0}

is a subspace of Rn; furthermore, for any x0 ∈ U, we have

U = x0 + →U.

Figure 2.7: An affine line U and its direction →U

Before characterizing affine subspaces in general, observe that, for every x ∈ E, every point a ∈ E, and every family (λ1, . . . , λn) of scalars,

x = a + λ1·aa1 + · · · + λn·aan

iff x is the barycenter of the family of weighted points

((a1, λ1), . . . , (an, λn), (a, 1 − (λ1 + · · · + λn))),

since

λ1 + · · · + λn + (1 − (λ1 + · · · + λn)) = 1.

Given any point a ∈ E and any subspace →V of →E, let a + →V denote the following subset of E:

a + →V = {a + v | v ∈ →V}.

Then, it is easy to prove the following fact.

Lemma 2.5.2. Let ⟨E, →E, +⟩ be an affine space.

(1) A nonempty subset V of E is an affine subspace iff, for every point a ∈ V, the set

Va = {ax | x ∈ V}

is a subspace of →E. Consequently, V = a + Va. Furthermore,

→V = {xy | x, y ∈ V}

is a subspace of →E, and Va = →V for all a ∈ V. Thus, V = a + →V.

(2) For any subspace →V of →E and for any a ∈ E, the set V = a + →V is an affine subspace.

Proof. (1) If V is an affine subspace, then, for any a ∈ V, the observation above shows that a linear combination of vectors of Va corresponds to a barycenter of points of V, which belongs to V; since 0 = aa ∈ Va, it follows that Va is closed under linear combinations, and thus it is a subspace of →E. Conversely, if Va is a subspace of →E for some a ∈ V, then V = a + Va is closed under barycentric combinations, by the same observation, and thus V is an affine subspace. Since xy = ay − ax, it is clear that →V = {xy | x, y ∈ V} coincides with Va for every a ∈ V, and thus →V is a subspace of →E and V = a + →V.

(2) If V = a + →V, where →V is a subspace of →E, then, for every family of weighted points ((a + vi, λi))1≤i≤n, where vi ∈ →V and λ1 + · · · + λn = 1, the barycenter x is given by

x = a + λ1·a(a + v1) + · · · + λn·a(a + vn) = a + λ1 v1 + · · · + λn vn,

which is in V, since →V is a subspace of →E. Thus, V is an affine subspace.

Figure 2.8: An affine subspace V and its direction →V

In particular, when E is the natural affine space associated with a vector space →E, lemma 2.5.2 shows that every affine subspace of E is of the form u + →U, for a subspace →U of →E. The subspaces of →E are the affine subspaces of E that contain 0.

The subspace →V associated with an affine subspace V is called the direction of V. It is also clear that the map + : V × →V → V induced by + : E × →E → E confers to ⟨V, →V, +⟩ an affine structure. By the dimension of the subspace V, we mean the dimension of →V. An affine subspace of dimension 1 is called a line, and an affine subspace of dimension 2 is called a plane. An affine subspace of codimension 1 is called a hyperplane (recall that a subspace F of a vector space E has codimension 1 iff there is some subspace G of dimension 1 such that E = F ⊕ G, the direct sum of F and G; see Appendix A). Hyperplanes will be characterized a little later.

By lemma 2.5.2, a line is specified by a point a ∈ E and a nonzero vector v ∈ →E, i.e., a line is the set of all points of the form a + λv, for λ ∈ R. A plane is specified by a point a ∈ E and two linearly independent vectors u, v ∈ →E, i.e., a plane is the set of all points of the form a + λu + µv, for λ, µ ∈ R.

We say that two affine subspaces U and V are parallel if their directions are identical. Equivalently, since →U = →V, we have U = a + →U and V = b + →U for any a ∈ U and any b ∈ V, and thus V is obtained from U by the translation ab.

We say that three points a, b, c are collinear if the vectors ab and ac are linearly dependent. If two of the points a, b, c are distinct, say a ≠ b, then there is a unique λ ∈ R such that ac = λ·ab, and we define the ratio ac/ab = λ. We say that four points a, b, c, d are coplanar if the vectors ab, ac, and ad are linearly dependent.

Lemma 2.5.3. Given an affine space E, for any family (ai)i∈I of points in E, the set V of barycenters ∑i∈I λi·ai (where ∑i∈I λi = 1) is the smallest affine subspace containing (ai)i∈I.

Proof. If (ai)i∈I is empty, then V = ∅, because of the condition ∑i∈I λi = 1. If (ai)i∈I is nonempty, then the smallest affine subspace containing (ai)i∈I must contain the set V of barycenters ∑i∈I λi·ai, and thus, it is enough to show that V is closed under affine combinations, which is immediately verified.

Given a nonempty subset S of E, the smallest affine subspace of E generated by S is often denoted as ⟨S⟩. For example, a line specified by two distinct points a and b is denoted as ⟨a, b⟩, or even (a, b), and similarly for planes, etc.

Remark: Since it can be shown that the barycenter of n weighted points can be obtained by repeated computations of barycenters of two weighted points, a nonempty subset V of E is an affine subspace iff for every two points a, b ∈ V, the set V contains all barycentric combinations of a and b. If V contains at least two points, then V is an affine subspace iff for any two distinct points a, b ∈ V, the set V contains the line determined by a and b, that is, the set of all points (1 − λ)a + λb, λ ∈ R.

2.6 Affine Independence and Affine Frames

Corresponding to the notion of linear independence in vector spaces, we have the notion of affine independence. Given a family (ai)i∈I of points in an affine space E, we will reduce the notion of (affine) independence of these points to the (linear) independence of the families (aiaj)j∈(I−{i}) of vectors obtained by choosing any ai as an origin. First, the following lemma shows that it is sufficient to consider only one of these families.

Lemma 2.6.1. Given a family (ai)i∈I of points in an affine space E, if the family (aiaj)j∈(I−{i}) is linearly independent for some i ∈ I, then (aiaj)j∈(I−{i}) is linearly independent for every i ∈ I.

Proof. Assume that the family (aiaj)j∈(I−{i}) is linearly independent for some specific i ∈ I. Let k ∈ I with k ≠ i, and assume that there are some scalars (λj)j∈(I−{k}) such that

∑j∈(I−{k}) λj·akaj = 0.

Since akaj = akai + aiaj,

6.k}) = j∈(I−{k}) λj −→ + a− i ka λj −→ − a− j ia = j∈(I−{i. where m λi = 1. E . u − → Lemma 2.1. Recall that x= i=0 m i=0 m λi ai iﬀ − x = a→ 0 m λi −→. . E . . . . . which implies that λj = 0 for all j ∈ (I − {k}). A similar result holds for aﬃnely independent points. However. − → Deﬁnition 2. a− j ia λj −→. . then the λi are unique. + .6. λm ) such that −x = a→ 0 m λi −→ a− i 0a i=1 . a− j ia Deﬁnition 2.6. AFFINE INDEPENDENCE AND AFFINE FRAMES we have λj − → = a− j ka j∈(I−{k}) j∈(I−{k}) 33 λj −→ + a− i ka j∈(I−{k}) λj −→. + . . let (a0 . .2. Then.2. Given an aﬃne space E. . am ) be a family of m + 1 points in E. . . . we must have λj = 0 for all j ∈ a− j ia (I − {i.2 is reasonable. . Let x ∈ E. a− j ia j∈(I−{i. a family (ai )i∈I of points in E is aﬃnely independent if the family (−→)j∈(I−{i}) is linearly independent for some i ∈ I. a− k ia → λi −i u i=1 → of the −i .6. . .3. and assume that x = m λi ai . . the independence of the family a− j a (−→)j∈(I−{i}) does not depend on the choice of ai . Proof. .6. Given an aﬃne space E. . since by Lemma 2. λm ) such that x = i=0 λi ai is unique iﬀ the family (a0 a1 . . a− i 0a i=1 λi = 1.k}) Since the family (−→)j∈(I−{i}) is linearly independent. the family i=0 i=0 m −→ − −→ − (λ0 . . We deﬁne aﬃne independence as follows. it is a well known result of linear algebra that the family where (λ1 . a0 am ) is linearly independent.k}) j∈(I−{k}) and thus j∈(I−{i. k}) and j∈(I−{k}) λj = 0. A crucial property of linearly independent i → → → vectors (−1 . − ) is that if a vector − is a linear combination u um v − = → v m a− j λj −→ − ia j∈(I−{k}) − → λj −→ = 0 . a− k ia λj −→.

. am ) in E. . .10). . . see Chapter A. if (− →. (− →. . .3 suggests the notion of aﬃne frame. and since λ0 = 1 − m λi . . . . Let E. . . . Thus. The family (a0 . . λm ) such that i=0 −x = a→ 0 m λi −→. − am ) is a basis of E . .1. − am ) is linearly independent. (− →. . 1 ≤ i ≤ m. . . . am ) in − → E. where ai = a0 + ui . .34 E a2 CHAPTER 2. . such that − a0 → a− 1 x = a0 + x1 − → + · · · + xm − am . . . − am ) in E . since x = a0 + − x. . E . a− i 0a i=1 − and by lemma A. xm ) are coordinates with respect to (a0 . . + be a nonempty aﬃne space.10. λ0 is also unique. a− 1 a0 → 0a Lemma 2. the uniqueness of i=1 (λ0 . Since a− 1 a0 → 0a m x = a0 + i=1 xi −→ iﬀ x = (1 − a− i 0a m xi )a0 + i=1 i=1 xi ai . . . . . λm ) such that x = m λi ai implies the uniqueness of (λ1 . . . and a pair (a0 . . . − am ) is linearly independent (for a proof. (λ1 . .9: Aﬃne independence and linear independence − is unique iﬀ (− →. . . . . . for any m ≥ 1. − → − When (− →. it is equivalent to consider a family of m + 1 points (a0 . and let (a0 . for every x ∈ E. Conversely. . then.6. Lemma a− 1 a0 → 0a − A. . . (u1. . . there a− 1 a0 → a→ 0a 0 is a unique family (x1 . we obtain the family of m+ 1 points (a0 . . . Conversely. . Aﬃne frames are the aﬃne analogs − → of bases in vector spaces. . where the ui are vectors in E . . . − am ) is linearly independent. BASICS OF AFFINE GEOMETRY − → E a0 a2 a0 a1 a0 a1 Figure 2. . given a point a0 in E and a family of m vectors a− 1 a0 → 0a − → (u1 . − am )). . am ) be a family of m + 1 points in E. Thus. . . . . . . . 0a m − The scalars (x1 . um) in E . . . . λm ) a− 1 a0 → 0a is unique. . xm ) of scalars. . by lemma A.1. . . . . um )). am ) determines the family of m − → − vectors (− →.10 again. .1.

All this is summarized in the following definition.

Definition 2.6.4. Given an affine space $\langle E, \overrightarrow{E}, +\rangle$, an affine frame with origin $a_0$ is a family $(a_0, \ldots, a_m)$ of $m + 1$ points in $E$ such that $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m})$ is a basis of $\overrightarrow{E}$. The pair $(a_0, (\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}))$ is also called an affine frame with origin $a_0$. Then, every $x \in E$ can be expressed as
$$x = a_0 + x_1 \overrightarrow{a_0a_1} + \cdots + x_m \overrightarrow{a_0a_m}$$
for a unique family $(x_1, \ldots, x_m)$ of scalars, called the coordinates of $x$ w.r.t. the affine frame $(a_0, (\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}))$. Furthermore, every $x \in E$ can be written as
$$x = \lambda_0 a_0 + \cdots + \lambda_m a_m$$
for some unique family $(\lambda_0, \ldots, \lambda_m)$ of scalars such that $\lambda_0 + \cdots + \lambda_m = 1$, called the barycentric coordinates of $x$ with respect to the affine frame $(a_0, \ldots, a_m)$. The coordinates $(x_1, \ldots, x_m)$ and the barycentric coordinates $(\lambda_0, \ldots, \lambda_m)$ are related by the equations $\lambda_0 = 1 - \sum_{i=1}^{m} x_i$ and $\lambda_i = x_i$, for $1 \le i \le m$. An affine frame is called an affine basis by some authors.

A family of two points $(a, b)$ in $E$ is affinely independent iff $\overrightarrow{ab} \neq \overrightarrow{0}$, iff $a \neq b$. If $a \neq b$, the affine subspace generated by $a$ and $b$ is the set of all points $(1 - \lambda)a + \lambda b$, which is the unique line passing through $a$ and $b$. A family of three points $(a, b, c)$ in $E$ is affinely independent iff $\overrightarrow{ab}$ and $\overrightarrow{ac}$ are linearly independent, which means that $a$, $b$, and $c$ are not on a same line (they are not collinear). In this case, the affine subspace generated by $(a, b, c)$ is the set of all points $(1 - \lambda - \mu)a + \lambda b + \mu c$, which is the unique plane containing $a$, $b$, and $c$. A family of four points $(a, b, c, d)$ in $E$ is affinely independent iff $\overrightarrow{ab}$, $\overrightarrow{ac}$, and $\overrightarrow{ad}$ are linearly independent, which means that $a$, $b$, $c$, and $d$ are not in a same plane (they are not coplanar). In this case, $a$, $b$, $c$, and $d$ are the vertices of a tetrahedron.

Given $n + 1$ affinely independent points $(a_0, \ldots, a_n)$ in $E$, we can consider the set of points $\lambda_0 a_0 + \cdots + \lambda_n a_n$, where $\lambda_0 + \cdots + \lambda_n = 1$ and $\lambda_i \ge 0$ ($\lambda_i \in \mathbb{R}$). Such affine combinations are called convex combinations. This set is called the convex hull of $(a_0, \ldots, a_n)$ (or $n$-simplex spanned by $(a_0, \ldots, a_n)$). When $n = 1$, we get the line segment between $a_0$ and $a_1$, including $a_0$ and $a_1$. When $n = 2$, we get the interior of the triangle whose vertices are $a_0$, $a_1$, $a_2$, including boundary points (the edges).
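The conversion between the coordinates $(x_1, \ldots, x_m)$ and the barycentric coordinates $(\lambda_0, \ldots, \lambda_m)$ above is entirely mechanical. The following Python sketch (our own helper names, not from the text) implements both directions for points of $\mathbb{R}^m$:

```python
def to_barycentric(x_coords):
    """Coordinates (x1..xm) w.r.t. (a0, (a0a1, ..., a0am)) -> barycentric (l0..lm),
    using l0 = 1 - sum(xi) and li = xi."""
    return [1 - sum(x_coords)] + list(x_coords)

def from_barycentric(lambdas):
    """Barycentric coordinates (l0..lm), summing to 1 -> coordinates (x1..xm)."""
    assert abs(sum(lambdas) - 1.0) < 1e-12
    return list(lambdas[1:])

def point_from_barycentric(points, lambdas):
    """Evaluate the barycenter sum(li * ai) coordinate-wise for points of R^n."""
    n = len(points[0])
    return tuple(sum(l * p[k] for l, p in zip(lambdas, points)) for k in range(n))
```

For the standard frame of the plane, the point with coordinates $(0.2, 0.3)$ has barycentric coordinates $(0.5, 0.2, 0.3)$, and evaluating that combination on the frame points recovers the point itself.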

The figure below shows affine frames for $|I| = 0, 1, 2, 3$.

Figure 2.10: Examples of affine frames.

When $n = 3$, we get the interior of the tetrahedron whose vertices are $a_0$, $a_1$, $a_2$, $a_3$, including boundary points (faces and edges). The set
$$\{a_0 + \lambda_1 \overrightarrow{a_0a_1} + \cdots + \lambda_n \overrightarrow{a_0a_n} \mid 0 \le \lambda_i \le 1\ (\lambda_i \in \mathbb{R})\}$$
is called the parallelotope spanned by $(a_0, \ldots, a_n)$. When $E$ has dimension 2, a parallelotope is also called a parallelogram, and when $E$ has dimension 3, a parallelepiped. A parallelotope is shown in Figure 2.11: it consists of the points inside of the parallelogram $(a_0, a_1, a_2, d)$, including its boundary.

Figure 2.11: A parallelotope, including boundary points (the edges).

More generally, we say that a subset $V$ of $E$ is convex if for any two points $a, b \in V$, we have $c \in V$ for every point $c = (1 - \lambda)a + \lambda b$, with $0 \le \lambda \le 1$ ($\lambda \in \mathbb{R}$).

Points are not vectors! The following example illustrates why treating points as vectors may cause problems. Let $a$, $b$, $c$ be three affinely independent points in $\mathbb{A}^3$. Any point $x$ in the plane $(a, b, c)$ can be expressed as
$$x = \lambda_0 a + \lambda_1 b + \lambda_2 c,$$
where $\lambda_0 + \lambda_1 + \lambda_2 = 1$. How can we compute $\lambda_0$, $\lambda_1$, $\lambda_2$? Letting $a = (a_1, a_2, a_3)$, $b = (b_1, b_2, b_3)$, $c = (c_1, c_2, c_3)$, and $x = (x_1, x_2, x_3)$ be the coordinates of $a$, $b$, $c$, $x$ in the standard frame of $\mathbb{A}^3$, it is tempting to solve the system of equations
$$\begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{pmatrix} \begin{pmatrix} \lambda_0 \\ \lambda_1 \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$
However, there is a problem when the origin of the coordinate system belongs to the plane $(a, b, c)$, since in this case, the matrix is not invertible! What we should really be doing is to solve the system
$$\lambda_0 \overrightarrow{Oa} + \lambda_1 \overrightarrow{Ob} + \lambda_2 \overrightarrow{Oc} = \overrightarrow{Ox},$$
where $O$ is any point not in the plane $(a, b, c)$. An alternative is to use certain well chosen cross products. It can be shown that barycentric coordinates correspond to various ratios of areas and volumes; see the Problems.
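The cross-product alternative just mentioned is easy to make concrete in the planar case $\mathbb{A}^2$ (the text itself works in $\mathbb{A}^3$; the function names below are ours). Each barycentric coordinate is a ratio of signed areas, so the computation never depends on where the origin sits:

```python
def det2(u, v):
    # 2x2 determinant |u v|, i.e. the z-component of the cross product
    return u[0] * v[1] - u[1] * v[0]

def barycentric2d(a, b, c, x):
    """Barycentric coordinates of x w.r.t. the affine frame (a, b, c) of A^2,
    computed as ratios of signed areas; origin-independent by construction."""
    d = det2((b[0] - a[0], b[1] - a[1]), (c[0] - a[0], c[1] - a[1]))
    l1 = det2((x[0] - a[0], x[1] - a[1]), (c[0] - a[0], c[1] - a[1])) / d
    l2 = det2((b[0] - a[0], b[1] - a[1]), (x[0] - a[0], x[1] - a[1])) / d
    return (1 - l1 - l2, l1, l2)
```

For the standard triangle $(0,0)$, $(1,0)$, $(0,1)$, the point $(1/4, 1/4)$ gets barycentric coordinates $(1/2, 1/4, 1/4)$.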

2.7 Affine Maps

Corresponding to linear maps, we have the notion of an affine map. An affine map is defined as a map preserving affine combinations.

Definition 2.7.1. Given two affine spaces $\langle E, \overrightarrow{E}, +\rangle$ and $\langle E', \overrightarrow{E'}, +'\rangle$, a function $f\colon E \to E'$ is an affine map iff for every family $((a_i, \lambda_i))_{i \in I}$ of weighted points in $E$ such that $\sum_{i \in I} \lambda_i = 1$, we have
$$f\Big(\sum_{i \in I} \lambda_i a_i\Big) = \sum_{i \in I} \lambda_i f(a_i).$$
In other words, $f$ preserves barycenters.

For simplicity of notation, the same symbol $+$ is used for both affine spaces (instead of using both $+$ and $+'$).

Affine maps can be obtained from linear maps as follows. Given any point $a \in E$, any point $b \in E'$, and any linear map $h\colon \overrightarrow{E} \to \overrightarrow{E'}$, we claim that the map $f\colon E \to E'$ defined such that
$$f(a + v) = b + h(v)$$
is an affine map. Indeed, for any family $(\lambda_i)_{i \in I}$ of scalars such that $\sum_{i \in I} \lambda_i = 1$, and any family $(v_i)_{i \in I}$ of vectors, since
$$\sum_{i \in I} \lambda_i (a + v_i) = a + \sum_{i \in I} \lambda_i \overrightarrow{a(a + v_i)} = a + \sum_{i \in I} \lambda_i v_i$$
and
$$\sum_{i \in I} \lambda_i (b + h(v_i)) = b + \sum_{i \in I} \lambda_i \overrightarrow{b(b + h(v_i))} = b + \sum_{i \in I} \lambda_i h(v_i),$$
we have
$$f\Big(\sum_{i \in I} \lambda_i (a + v_i)\Big) = f\Big(a + \sum_{i \in I} \lambda_i v_i\Big) = b + h\Big(\sum_{i \in I} \lambda_i v_i\Big) = b + \sum_{i \in I} \lambda_i h(v_i) = \sum_{i \in I} \lambda_i (b + h(v_i)) = \sum_{i \in I} \lambda_i f(a + v_i).$$
Note that the condition $\sum_{i \in I} \lambda_i = 1$ was implicitly used (in a hidden call to Lemma 2.4.1) in deriving that $\sum_{i \in I} \lambda_i (a + v_i) = a + \sum_{i \in I} \lambda_i v_i$ and $\sum_{i \in I} \lambda_i (b + h(v_i)) = b + \sum_{i \in I} \lambda_i h(v_i)$.

As a more concrete example, the map
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 3 \\ 1 \end{pmatrix}$$
defines an affine map in $\mathbb{A}^2$. It is a "shear" followed by a translation. The effect of this shear on the square $(a, b, c, d)$ is shown in Figure 2.12. The image of the square $(a, b, c, d)$ is the parallelogram $(a', b', c', d')$.

Figure 2.12: The effect of a shear.
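The effect of the shear on the square can be checked directly by mapping the four vertices of the unit square (a minimal Python sketch; the `affine_map` helper is our own naming, not from the text):

```python
def affine_map(A, b):
    """Return the map x -> A x + b for a 2x2 matrix A and a 2-vector b."""
    def f(x):
        return (A[0][0] * x[0] + A[0][1] * x[1] + b[0],
                A[1][0] * x[0] + A[1][1] * x[1] + b[1])
    return f

shear = affine_map([[1, 2], [0, 1]], [3, 1])

# The unit square maps to a parallelogram: each vertex is sheared, then translated.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [shear(v) for v in square]
```

The image vertices $(3,1)$, $(4,1)$, $(6,2)$, $(5,2)$ are indeed the vertices of a parallelogram: opposite edge vectors agree.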

Let us consider one more example. The map
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 3 \\ 0 \end{pmatrix}$$
is an affine map. Since we can write
$$\begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix} = \sqrt{2} \begin{pmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix},$$
this affine map is the composition of a shear, followed by a rotation of angle $\pi/4$, followed by a magnification of ratio $\sqrt{2}$, followed by a translation. The effect of this map on the square $(a, b, c, d)$ is shown in Figure 2.13. The image of the square $(a, b, c, d)$ is the parallelogram $(a', b', c', d')$.

Figure 2.13: The effect of an affine map.

The following lemma shows the converse of what we just showed. Every affine map is determined by the image of any point and a linear map.

Lemma 2.7.2. Given an affine map $f\colon E \to E'$, there is a unique linear map $\overrightarrow{f}\colon \overrightarrow{E} \to \overrightarrow{E'}$ such that
$$f(a + v) = f(a) + \overrightarrow{f}(v),$$
for every $a \in E$ and every $v \in \overrightarrow{E}$.

Proof. We simply sketch the main ideas; the proof is not difficult, but very technical, and can be found in Appendix B, Section B.1. Let $a$ be any point in $E$. If a linear map $\overrightarrow{f}$ satisfying the condition of the lemma exists, this map must satisfy the equation
$$\overrightarrow{f}(v) = \overrightarrow{f(a)f(a + v)}$$
for every $v \in \overrightarrow{E}$. We then have to check that the map defined in such a way is linear and that its definition does not depend on the choice of $a \in E$.

The unique linear map $\overrightarrow{f}\colon \overrightarrow{E} \to \overrightarrow{E'}$ given by Lemma 2.7.2 is called the linear map associated with the affine map $f$.
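The matrix factorization claimed in the example above is easy to verify numerically (a short Python sketch with our own helper names):

```python
import math

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rotation = [[math.cos(math.pi / 4), -math.sin(math.pi / 4)],
            [math.sin(math.pi / 4),  math.cos(math.pi / 4)]]
# sqrt(2) * R(pi/4) * shear: should reproduce [[1, 1], [1, 3]]
scaled = [[math.sqrt(2) * e for e in row] for row in rotation]
M = matmul2(scaled, [[1, 2], [0, 1]])
```

Up to floating-point rounding, `M` equals the matrix $\begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix}$ of the example.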

Figure 2.14: An affine map $f$ and its associated linear map $\overrightarrow{f}$.

Note that the condition
$$f(a + v) = f(a) + \overrightarrow{f}(v),$$
for every $a \in E$ and every $v \in \overrightarrow{E}$, can be stated equivalently as
$$f(x) = f(a) + \overrightarrow{f}(\overrightarrow{ax}), \quad\text{or}\quad \overrightarrow{f(a)f(x)} = \overrightarrow{f}(\overrightarrow{ax}),$$
for all $a, x \in E$. Lemma 2.7.2 shows that for any affine map $f\colon E \to E'$, there are points $a \in E$ and $b \in E'$, and a unique linear map $\overrightarrow{f}\colon \overrightarrow{E} \to \overrightarrow{E'}$, such that
$$f(a + v) = b + \overrightarrow{f}(v)$$
for all $v \in \overrightarrow{E}$ (just let $b = f(a)$, for any $a \in E$). Affine maps for which $\overrightarrow{f}$ is the identity map are called translations. Indeed, if $\overrightarrow{f} = \mathrm{id}$,
$$f(x) = f(a) + \overrightarrow{f}(\overrightarrow{ax}) = f(a) + \overrightarrow{ax} = x + \overrightarrow{xa} + \overrightarrow{af(a)} + \overrightarrow{ax} = x + \overrightarrow{af(a)},$$
and so
$$\overrightarrow{xf(x)} = \overrightarrow{af(a)},$$
which shows that $f$ is the translation induced by the vector $\overrightarrow{af(a)}$ (which does not depend on $a$).

It is easily verified that the composition of two affine maps is an affine map. Indeed, given affine maps $f\colon E \to E'$ and $g\colon E' \to E''$, we have
$$g(f(a + v)) = g\big(f(a) + \overrightarrow{f}(v)\big) = g(f(a)) + \overrightarrow{g}\big(\overrightarrow{f}(v)\big),$$
which shows that $\overrightarrow{g \circ f} = \overrightarrow{g} \circ \overrightarrow{f}$. It is easy to show that an affine map $f\colon E \to E'$ is injective iff $\overrightarrow{f}\colon \overrightarrow{E} \to \overrightarrow{E'}$ is injective, and that $f\colon E \to E'$ is surjective iff $\overrightarrow{f}\colon \overrightarrow{E} \to \overrightarrow{E'}$ is surjective. An affine map $f\colon E \to E'$ is constant iff $\overrightarrow{f}\colon \overrightarrow{E} \to \overrightarrow{E'}$ is the null (constant) linear map equal to $\overrightarrow{0}$ for all $v \in \overrightarrow{E}$.

Since an affine map preserves barycenters, and since an affine subspace $V$ is closed under barycentric combinations, the image $f(V)$ of $V$ is an affine subspace in $E'$. So, for example, the image of a line is a point or a line, and the image of a plane is either a point, a line, or a plane.

If $E$ is an affine space of dimension $m$ and $(a_0, \ldots, a_m)$ is an affine frame for $E$, then for any other affine space $F$, and for any sequence $(b_0, \ldots, b_m)$ of $m + 1$ points in $F$, there is a unique affine map $f\colon E \to F$ such that $f(a_i) = b_i$, for $0 \le i \le m$. Indeed, $f$ must be such that
$$f(\lambda_0 a_0 + \cdots + \lambda_m a_m) = \lambda_0 b_0 + \cdots + \lambda_m b_m,$$
where $\lambda_0 + \cdots + \lambda_m = 1$, and this defines a unique affine map on the entire $E$, since $(a_0, \ldots, a_m)$ is an affine frame for $E$. The following diagram illustrates the above result when $m = 2$.

Figure 2.15: An affine map mapping $a_0, a_1, a_2$ to $b_0, b_1, b_2$.
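The uniqueness result above is constructive: to evaluate the map determined by frame images, compute barycentric coordinates with respect to the source frame and take the same combination of the target points. A planar Python sketch (our own naming; signed areas are used to get the barycentric coordinates):

```python
def frame_map2(a, b, c, fa, fb, fc):
    """Unique affine map of A^2 sending the affine frame (a, b, c) to (fa, fb, fc):
    f(l0*a + l1*b + l2*c) = l0*fa + l1*fb + l2*fc with l0 + l1 + l2 = 1."""
    def det2(u, v):
        return u[0] * v[1] - u[1] * v[0]
    d = det2((b[0] - a[0], b[1] - a[1]), (c[0] - a[0], c[1] - a[1]))
    def f(x):
        l1 = det2((x[0] - a[0], x[1] - a[1]), (c[0] - a[0], c[1] - a[1])) / d
        l2 = det2((b[0] - a[0], b[1] - a[1]), (x[0] - a[0], x[1] - a[1])) / d
        l0 = 1 - l1 - l2
        return (l0 * fa[0] + l1 * fb[0] + l2 * fc[0],
                l0 * fa[1] + l1 * fb[1] + l2 * fc[1])
    return f
```

Feeding it the vertex images of the earlier shear example reproduces that shear on every point of the plane, as uniqueness demands.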

Using affine frames, affine maps can be represented in terms of matrices. We explain how an affine map $f\colon E \to E$ is represented with respect to a frame $(a_0, \ldots, a_n)$ in $E$, the more general case where an affine map $f\colon E \to F$ is represented with respect to two affine frames $(a_0, \ldots, a_n)$ in $E$ and $(b_0, \ldots, b_m)$ in $F$ being analogous. Since
$$f(a_0 + x) = f(a_0) + \overrightarrow{f}(x)$$
for all $x \in \overrightarrow{E}$, we have
$$\overrightarrow{a_0 f(a_0 + x)} = \overrightarrow{a_0 f(a_0)} + \overrightarrow{f}(x).$$
Since $x$, $\overrightarrow{a_0 f(a_0)}$, and $\overrightarrow{a_0 f(a_0 + x)}$ can be expressed as
$$x = x_1 \overrightarrow{a_0a_1} + \cdots + x_n \overrightarrow{a_0a_n},$$
$$\overrightarrow{a_0 f(a_0)} = b_1 \overrightarrow{a_0a_1} + \cdots + b_n \overrightarrow{a_0a_n},$$
$$\overrightarrow{a_0 f(a_0 + x)} = y_1 \overrightarrow{a_0a_1} + \cdots + y_n \overrightarrow{a_0a_n},$$
if $A = (a_{ij})$ is the $n \times n$ matrix of the linear map $\overrightarrow{f}$ over the basis $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_n})$, letting $x$, $y$, and $b$ denote the column vectors of components $(x_1, \ldots, x_n)$, $(y_1, \ldots, y_n)$, and $(b_1, \ldots, b_n)$, the equation $\overrightarrow{a_0 f(a_0 + x)} = \overrightarrow{a_0 f(a_0)} + \overrightarrow{f}(x)$ is equivalent to
$$y = Ax + b.$$
Note that $b \neq 0$ unless $f(a_0) = a_0$. Thus, $f$ is generally not a linear transformation, unless it has a fixed point, i.e., there is a point $a_0$ such that $f(a_0) = a_0$. The vector $b$ is the "translation part" of the affine map.

Affine maps do not always have a fixed point. Obviously, nonnull translations have no fixed point. A less trivial example is given by the map
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
This map is a reflection about the $x$-axis followed by a translation along the $x$-axis. The affine map
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & -\sqrt{3} \\ \frac{\sqrt{3}}{4} & \frac{1}{4} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
can also be written as
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \mapsto \begin{pmatrix} 2 & 0 \\ 0 & \frac{1}{2} \end{pmatrix} \begin{pmatrix} \frac{1}{2} & -\frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix},$$
which shows that it is the composition of a rotation of angle $\pi/3$, followed by a stretch (by a factor of 2 along the $x$-axis, and by a factor of $\frac{1}{2}$ along the $y$-axis), followed by a translation.
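In the representation $y = Ax + b$, composing two affine maps multiplies the matrices and accumulates the translation parts: applying $(A_1, b_1)$ and then $(A_2, b_2)$ gives $x \mapsto A_2A_1x + (A_2b_1 + b_2)$. A Python sketch for the $2 \times 2$ case (our own naming):

```python
def compose(A2, b2, A1, b1):
    """(A2, b2) after (A1, b1): x -> A2 (A1 x + b1) + b2 = (A2 A1) x + (A2 b1 + b2)."""
    A = [[sum(A2[i][k] * A1[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    b = [sum(A2[i][k] * b1[k] for k in range(2)) + b2[i] for i in range(2)]
    return A, b
```

For instance, composing the reflection-plus-translation example with itself gives the identity matrix and the translation $(2, 0)$: the reflection cancels, and only a translation remains.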

It is easy to show that this affine map has a unique fixed point. On the other hand, the affine map
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \mapsto \begin{pmatrix} \frac{8}{5} & -\frac{6}{5} \\ \frac{3}{10} & \frac{2}{5} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
has no fixed point, even though
$$\begin{pmatrix} \frac{8}{5} & -\frac{6}{5} \\ \frac{3}{10} & \frac{2}{5} \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & \frac{1}{2} \end{pmatrix} \begin{pmatrix} \frac{4}{5} & -\frac{3}{5} \\ \frac{3}{5} & \frac{4}{5} \end{pmatrix},$$
and the second matrix is a rotation of angle $\theta$ such that $\cos\theta = \frac{4}{5}$ and $\sin\theta = \frac{3}{5}$. For more on fixed points of affine maps, see the Problems.

There is a useful trick to convert the equation $y = Ax + b$ into what looks like a linear equation. The trick is to consider an $(n+1) \times (n+1)$ matrix. We add 1 as the $(n+1)$th component to the vectors $x$, $y$, and $b$, and form the $(n+1) \times (n+1)$ matrix
$$\begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix},$$
so that $y = Ax + b$ is equivalent to
$$\begin{pmatrix} y \\ 1 \end{pmatrix} = \begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ 1 \end{pmatrix}.$$
This trick is very useful in kinematics and dynamics, where $A$ is a rotation matrix. Such affine maps are called rigid motions.

Given three collinear points $a$, $b$, $c$, where $a \neq b$, we have $b = (1 - \beta)a + \beta c$ for some unique $\beta$, and we define the ratio of the sequence $a, b, c$ as
$$\mathrm{ratio}(a, b, c) = \frac{\beta}{1 - \beta} = \frac{\overrightarrow{ab}}{\overrightarrow{bc}},$$
provided that $\beta \neq 1$, i.e., that $b \neq c$. When $b = c$, we agree that $\mathrm{ratio}(a, b, c) = \infty$. We warn our readers that other authors define the ratio of $a, b, c$ as $-\mathrm{ratio}(a, b, c) = \frac{\overrightarrow{ba}}{\overrightarrow{bc}}$. Since affine maps preserve barycenters, it is clear that affine maps preserve the ratio of three points.

There is a converse to this property, which is simpler to state when the ground field is $K = \mathbb{R}$. If $f\colon E \to E'$ is a bijective affine map, given any three collinear points $a$, $b$, $c$ in $E$, with $a \neq b$, where $c = (1 - \lambda)a + \lambda b$, since $f$ preserves barycenters, we have $f(c) = (1 - \lambda)f(a) + \lambda f(b)$, which shows that $f(a)$, $f(b)$, $f(c)$ are collinear in $E'$. The converse states that given any bijective function $f\colon E \to E'$ between two real affine spaces of the same dimension $n \ge 2$, if $f$ maps any three collinear points to collinear points, then $f$ is affine. The proof is rather long (see Berger [5] or Samuel [69]).
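The $(n+1) \times (n+1)$ trick is exactly what graphics and robotics code does with homogeneous coordinates. A minimal Python sketch for the planar case (our own naming): append 1 to the vectors and absorb the translation into the matrix.

```python
def homogenize(A, b):
    """Embed x -> A x + b (2D) as a single 3x3 matrix acting on (x1, x2, 1)."""
    return [[A[0][0], A[0][1], b[0]],
            [A[1][0], A[1][1], b[1]],
            [0, 0, 1]]

def apply_h(H, x):
    """Apply the homogenized map: multiply and drop the trailing 1."""
    v = (x[0], x[1], 1)
    w = [sum(H[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (w[0], w[1])  # w[2] is always 1 by construction
```

With this encoding, composing affine maps reduces to an ordinary matrix product, which is why the trick is so convenient in kinematics.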

Lemma 2. The kernel of this map is the set of translations on E. Since f is aﬃne. it is clear that Ha. Ha.λ (a) = a.λµ . Given an aﬃne space E. Samuel uses homothety for a central dilatation. When λ = 0.1. c) is magniﬁed to the triangle (a′ . if f = λ idE .8. The elements of DIL(E) are called aﬃne dilatations. The subset of all linear maps of the form λ idE . Then. the map f → f deﬁnes a group homomorphism L : GA(E) → GL(E). Given any aﬃne space E.8 Aﬃne Groups We now take a quick look at the bijective aﬃne maps.16 shows the eﬀect of a central dilatation of center d.λ (x) is on the line deﬁned by → a and x. for every x ∈ E. The triangle (a.λ (x) = a + λ− ax. a direct translation of the French “homoth´tie” [69]. Recall that the group of bijective linear maps of the vector space E is denoted as GL(E). and any scalar λ ∈ R. b′ . then there is a unique point c ∈ E such that f = Hc. Remark: The terminology does not seem to be universally agreed upon. Ha.λ = λ idE .λ ◦ Ha. and is obtained by “scaling” − by λ. ax Figure 2. called the aﬃne group of E. for some λ ∈ R∗ with λ = 1. Snapper and Troyer use the term dilation for an aﬃne dilatation and magniﬁcation for a central dilatation [77]. perhaps we use use that! Observe that Ha. we have → − → − f (b) = f (a + ab) = f (a) + λ ab . Note that Ha. Choose some origin a in E. Since e dilation is shorter than dilatation and somewhat easier to pronounce. and denoted as GA(E).λ deﬁned such that → Ha. for any aﬃne bijection f ∈ GA(E). c′ ). and is denoted as R∗ idE (where λ idE (u) = λu. and R∗ = R − {0}). It turns out that it is the disjoint union of the translations and of the dilatations of ratio λ = 1. Given any point a ∈ E. and when λ = 0 and x = a. When λ = 1.λ. The subgroup DIL(E) = L−1 (R∗ idE ) of GA(E) is particularly interesting. where λ ∈ R − {0}. Proof.1 is the identity. The terms aﬃne dilatation and central dilatation are used by Pedoe [59].λ is an aﬃne bijection. 
Note how every line is mapped to a parallel line.44 CHAPTER 2. It is immediately veriﬁed that Ha. the set of aﬃne bijections f : E → E is clearly a group.µ = Ha. We have the following useful result. BASICS OF AFFINE GEOMETRY 2. b. a dilatation (or central dilatation or homothety) of center a and ratio λ is a map Ha. is a subgroup of GL(E).
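A central dilatation is one line of code once points live in $\mathbb{R}^n$. The sketch below (our own naming) implements $H_{a,\lambda}$ and lets one check the composition law $H_{a,\lambda} \circ H_{a,\mu} = H_{a,\lambda\mu}$ on sample points:

```python
def dilatation(a, lam):
    """H_{a, lam}: x -> a + lam * (x - a), the central dilatation of
    center a and ratio lam, for points given as tuples of R^n."""
    def H(x):
        return tuple(ai + lam * (xi - ai) for ai, xi in zip(a, x))
    return H
```

The center is fixed, every other point moves along the line through the center, and composing two dilatations with the same center multiplies the ratios.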

Figure 2.16: The effect of a central dilatation.

Lemma 2.8.1. Given any affine space $E$, for any affine bijection $f \in \mathbf{GA}(E)$, if $\overrightarrow{f} = \lambda\,\mathrm{id}_{\overrightarrow{E}}$, for some $\lambda \in \mathbb{R}^*$ with $\lambda \neq 1$, then there is a unique point $c \in E$ such that $f = H_{c,\lambda}$.

Proof. Choose some origin $a$ in $E$. Since $f$ is affine, we have
$$f(b) = f(a + \overrightarrow{ab}) = f(a) + \lambda \overrightarrow{ab}$$
for all $b \in E$. Thus, there is a point $c \in E$ such that $f(c) = c$ iff
$$c = f(a) + \lambda \overrightarrow{ac},$$
iff $\overrightarrow{ac} = \overrightarrow{af(a)} + \lambda \overrightarrow{ac}$, iff $(1 - \lambda)\overrightarrow{ac} = \overrightarrow{af(a)}$, iff
$$\overrightarrow{ac} = \frac{1}{1 - \lambda} \overrightarrow{af(a)},$$
that is,
$$c = \frac{1}{1 - \lambda} f(a) - \frac{\lambda}{1 - \lambda} a.$$
Taking this unique point $c$ as the origin, we have $f(b) = c + \lambda \overrightarrow{cb}$ for all $b \in E$, which shows that $f = H_{c,\lambda}$.

Clearly, if $\overrightarrow{f} = \mathrm{id}_{\overrightarrow{E}}$, the affine map $f$ is a translation. Thus, the group of affine dilatations $\mathbf{DIL}(E)$ is the disjoint union of the translations and of the dilatations of ratio $\lambda \neq 0, 1$. Affine dilatations can be given a purely geometric characterization.

Another point worth mentioning is that affine bijections preserve the ratio of volumes of parallelotopes. Indeed, given any basis $B = (\overrightarrow{u_1}, \ldots, \overrightarrow{u_m})$ of the vector space $\overrightarrow{E}$ associated with the affine space $E$, given any $m + 1$ affinely independent points $(a_0, \ldots, a_m)$, we can compute the determinant $\det_B(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m})$ w.r.t. the basis $B$.

For any bijective affine map $f\colon E \to E$, since
$$\det{}_B\big(\overrightarrow{f}(\overrightarrow{a_0a_1}), \ldots, \overrightarrow{f}(\overrightarrow{a_0a_m})\big) = \det(\overrightarrow{f})\,\det{}_B(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}),$$
and the determinant of a linear map is intrinsic (i.e., only depends on $\overrightarrow{f}$, and not on the particular basis $B$), we conclude that the ratio
$$\frac{\det_B\big(\overrightarrow{f}(\overrightarrow{a_0a_1}), \ldots, \overrightarrow{f}(\overrightarrow{a_0a_m})\big)}{\det_B(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m})} = \det(\overrightarrow{f})$$
is independent of the basis $B$. Since $\det_B(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m})$ is the volume of the parallelotope spanned by $(a_0, \ldots, a_m)$, where the parallelotope spanned by any point $a$ and the vectors $(\overrightarrow{u_1}, \ldots, \overrightarrow{u_m})$ has unit volume (see Berger [5], Section 9.12), we see that affine bijections preserve the ratio of volumes of parallelotopes. In fact, this ratio is independent of the choice of the parallelotopes of unit volume. In particular, the affine bijections $f \in \mathbf{GA}(E)$ such that $\det(\overrightarrow{f}) = 1$ preserve volumes. These affine maps form a subgroup $\mathbf{SA}(E)$ of $\mathbf{GA}(E)$ called the special affine group of $E$.

2.9 Affine Hyperplanes

We now consider affine forms and affine hyperplanes. In Section 2.5, we observed that the set $L$ of solutions of an equation
$$ax + by = c$$
is an affine subspace of $\mathbb{A}^2$ of dimension 1, in fact a line (provided that $a$ and $b$ are not both null). It would be equally easy to show that the set $P$ of solutions of an equation
$$ax + by + cz = d$$
is an affine subspace of $\mathbb{A}^3$ of dimension 2, in fact a plane (provided that $a$, $b$, $c$ are not all null). More generally, the set $H$ of solutions of an equation
$$\lambda_1 x_1 + \cdots + \lambda_m x_m = \mu$$
is an affine subspace of $\mathbb{A}^m$, and if $\lambda_1, \ldots, \lambda_m$ are not all null, it turns out that it is a subspace of dimension $m - 1$ called a hyperplane.

We can interpret the equation $\lambda_1 x_1 + \cdots + \lambda_m x_m = \mu$ in terms of the map $f\colon \mathbb{R}^m \to \mathbb{R}$ defined such that
$$f(x_1, \ldots, x_m) = \lambda_1 x_1 + \cdots + \lambda_m x_m - \mu$$
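The claim that affine bijections scale all volumes by the single factor $\det(\overrightarrow{f})$ can be spot-checked in the plane. The sketch below (our own naming) computes the ratio of signed areas before and after applying $x \mapsto Ax + b$; the translation part drops out, and the ratio always equals $\det(A)$:

```python
def cross2(u, v):
    # 2x2 determinant |u v|
    return u[0] * v[1] - u[1] * v[0]

def area_ratio_under(A, p0, p1, p2):
    """Ratio (area of image parallelogram) / (area of original) under
    x -> A x + b; independent of b and of the chosen parallelogram."""
    u = (p1[0] - p0[0], p1[1] - p0[1])
    v = (p2[0] - p0[0], p2[1] - p0[1])
    Au = (A[0][0] * u[0] + A[0][1] * u[1], A[1][0] * u[0] + A[1][1] * u[1])
    Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
    return cross2(Au, Av) / cross2(u, v)
```

For the matrix $\begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix}$ of the earlier example, the ratio is $\det = 2$ no matter which parallelogram one starts from.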

for all $(x_1, \ldots, x_m) \in \mathbb{R}^m$. It is immediately verified that this map is affine, and the set $H$ of solutions of the equation $\lambda_1 x_1 + \cdots + \lambda_m x_m = \mu$ is the null set, or kernel, of the affine map $f\colon \mathbb{A}^m \to \mathbb{R}$, in the sense that
$$H = f^{-1}(0) = \{x \in \mathbb{A}^m \mid f(x) = 0\},$$
where $x = (x_1, \ldots, x_m)$.

Thus, it is interesting to consider affine forms, which are just affine maps $f\colon E \to \mathbb{R}$ from an affine space to $\mathbb{R}$. Unlike linear forms $f^*$, for which $\mathrm{Ker}\,f^*$ is never empty (since it always contains the vector $0$), it is possible that $f^{-1}(0) = \emptyset$ for an affine form $f$. Given an affine map $f\colon E \to \mathbb{R}$, we also denote $f^{-1}(0)$ as $\mathrm{Ker}\,f$, and we call it the kernel of $f$. Recall that an (affine) hyperplane is an affine subspace of codimension 1. The relationship between affine hyperplanes and affine forms is given by the following lemma.

Lemma 2.9.1. Let $E$ be an affine space. The following properties hold:

(a) Given any nonconstant affine form $f\colon E \to \mathbb{R}$, its kernel $H = \mathrm{Ker}\,f$ is a hyperplane.

(b) For any hyperplane $H$ in $E$, there is a nonconstant affine form $f\colon E \to \mathbb{R}$ such that $H = \mathrm{Ker}\,f$. For any other affine form $g\colon E \to \mathbb{R}$ such that $H = \mathrm{Ker}\,g$, there is some $\lambda \in \mathbb{R}$ such that $g = \lambda f$ (with $\lambda \neq 0$).

(c) Given any hyperplane $H$ in $E$ and any (nonconstant) affine form $f\colon E \to \mathbb{R}$ such that $H = \mathrm{Ker}\,f$, every hyperplane $H'$ parallel to $H$ is defined by a nonconstant affine form $g$ such that $g(a) = f(a) - \lambda$, for all $a \in E$, for some $\lambda \in \mathbb{R}$.

Proof. (a) Since $f\colon E \to \mathbb{R}$ is nonconstant, its associated linear form $\overrightarrow{f}\colon \overrightarrow{E} \to \mathbb{R}$ is not identically null, and since a nonnull linear form is surjective, $f$ is surjective; in particular, there is some $a \in E$ such that $f(a) = 0$. By Lemma A.5.1, $\overrightarrow{H} = \mathrm{Ker}\,\overrightarrow{f}$ is a hyperplane in $\overrightarrow{E}$. Since $f(a + v) = f(a) + \overrightarrow{f}(v) = \overrightarrow{f}(v)$ for all $v \in \overrightarrow{E}$, we have $f(a + v) = 0$ iff $\overrightarrow{f}(v) = 0$, and thus $H = f^{-1}(0) = a + \overrightarrow{H}$ is a hyperplane in $E$.

(b) If $H$ is an affine hyperplane, then for any $a \in H$, we have $H = a + \overrightarrow{H}$, where $\overrightarrow{H}$ is a hyperplane of $\overrightarrow{E}$. By Lemma A.5.1, $\overrightarrow{H} = \mathrm{Ker}\,f^*$ for some nonnull linear form $f^*$. Letting $f\colon E \to \mathbb{R}$ be the affine form defined such that $f(a + v) = f^*(v)$, clearly $H = f^{-1}(0)$, and $f$ is nonconstant, since $f^*$ is nonnull. The second part of (b) is as in Lemma A.5.1 (c).

(c) This follows easily from the proof of (b), and is left as an exercise.
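As a concrete illustration of the kernel viewpoint (a Python sketch with our own helper names), an affine form of $\mathbb{A}^m$ and a membership test for its hyperplane are a few lines:

```python
def affine_form(lambdas, mu):
    """f(x) = l1*x1 + ... + lm*xm - mu; its kernel Ker f is the
    hyperplane of equation l1*x1 + ... + lm*xm = mu."""
    def f(x):
        return sum(l * xi for l, xi in zip(lambdas, x)) - mu
    return f

def on_hyperplane(f, x, eps=1e-9):
    """Does x belong to Ker f (up to floating-point tolerance)?"""
    return abs(f(x)) < eps
```

Note that, just as the text warns, the kernel of an affine form may be empty as a set of solutions with side constraints, whereas the kernel of a nonnull linear form always contains $0$.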

When $E$ is of dimension $n$, given an affine frame $(a_0, (u_1, \ldots, u_n))$ of $E$ with origin $a_0$, recall from Definition 2.6.4 that every point of $E$ can be expressed uniquely as $x = a_0 + x_1 u_1 + \cdots + x_n u_n$, where $(x_1, \ldots, x_n)$ are the coordinates of $x$ with respect to the affine frame $(a_0, (u_1, \ldots, u_n))$. Also recall that every linear form $f^*$ is such that $f^*(x) = \lambda_1 x_1 + \cdots + \lambda_n x_n$, for every $x = x_1 u_1 + \cdots + x_n u_n$, for some $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$. Since an affine form $f\colon E \to \mathbb{R}$ satisfies the property $f(a_0 + x) = f(a_0) + \overrightarrow{f}(x)$, denoting $f(a_0 + x)$ as $f(x_1, \ldots, x_n)$, we see that we have
$$f(x_1, \ldots, x_n) = \lambda_1 x_1 + \cdots + \lambda_n x_n + \mu,$$
where $\mu = f(a_0) \in \mathbb{R}$ and $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$. Thus, a hyperplane is the set of points whose coordinates $(x_1, \ldots, x_n)$ satisfy the (affine) equation
$$\lambda_1 x_1 + \cdots + \lambda_n x_n + \mu = 0.$$

We are now ready to study polynomial curves.

2.10 Problems

Problem 1 (10 pts). Given a triangle $(a, b, c)$, give a geometric construction of the barycenter of the weighted points $(a, 1/4)$, $(b, 1/4)$, and $(c, 1/2)$. Give a geometric construction of the barycenter of the weighted points $(a, 3/2)$, $(b, 3/2)$, and $(c, -2)$.

Problem 2 (20 pts). (a) Given a tetrahedron $(a, b, c, d)$, given any two distinct points $x, y \in \{a, b, c, d\}$, let $m_{x,y}$ be the middle of the edge $(x, y)$. Prove that the barycenter $g$ of the weighted points $(a, 1/4)$, $(b, 1/4)$, $(c, 1/4)$, and $(d, 1/4)$ is the common intersection of the line segments $(m_{a,b}, m_{c,d})$, $(m_{a,c}, m_{b,d})$, and $(m_{a,d}, m_{b,c})$. Show that if $g_d$ is the barycenter of the weighted points $(a, 1/3)$, $(b, 1/3)$, $(c, 1/3)$, then $g$ is the barycenter of $(d, 1/4)$ and $(g_d, 3/4)$.

Problem 3 (20 pts). Let $E$ be a nonempty set, and $\overrightarrow{E}$ a vector space, and assume that there is a function $\Phi\colon E \times E \to \overrightarrow{E}$, such that if we denote $\Phi(a, b)$ as $\overrightarrow{ab}$, the following properties hold:

(1) $\overrightarrow{ab} + \overrightarrow{bc} = \overrightarrow{ac}$, for all $a, b, c \in E$;

(2) For every $a \in E$, the map $\Phi_a\colon E \to \overrightarrow{E}$ defined such that for every $b \in E$, $\Phi_a(b) = \overrightarrow{ab}$, is a bijection.

Let $\Psi_a\colon \overrightarrow{E} \to E$ be the inverse of $\Phi_a\colon E \to \overrightarrow{E}$. Prove that the function $+\colon E \times \overrightarrow{E} \to E$ defined such that
$$a + u = \Psi_a(u)$$
for all $a \in E$ and all $u \in \overrightarrow{E}$, makes $(E, \overrightarrow{E}, +)$ into an affine space.

Note: We showed in the text that an affine space $(E, \overrightarrow{E}, +)$ satisfies the properties stated above. Thus, we obtain an equivalent characterization of affine spaces.

Problem 4 (20 pts). Given any three points $a, b, c$ in the affine plane $\mathbb{A}^2$, letting $(a_1, a_2)$, $(b_1, b_2)$, $(c_1, c_2)$ be the coordinates of $a, b, c$, with respect to the standard affine frame for $\mathbb{A}^2$, prove that $a, b, c$ are collinear iff
$$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ 1 & 1 & 1 \end{vmatrix} = 0,$$
i.e., the determinant is null.

Letting $(a_0, a_1, a_2)$, $(b_0, b_1, b_2)$, $(c_0, c_1, c_2)$ be the barycentric coordinates of $a, b, c$ with respect to the standard affine frame for $\mathbb{A}^2$, prove that $a, b, c$ are collinear iff
$$\begin{vmatrix} a_0 & b_0 & c_0 \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix} = 0.$$

Given any four points $a, b, c, d$ in the affine space $\mathbb{A}^3$, letting $(a_1, a_2, a_3)$, $(b_1, b_2, b_3)$, $(c_1, c_2, c_3)$, $(d_1, d_2, d_3)$ be the coordinates of $a, b, c, d$, with respect to the standard affine frame for $\mathbb{A}^3$, prove that $a, b, c, d$ are coplanar iff
$$\begin{vmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \\ 1 & 1 & 1 & 1 \end{vmatrix} = 0,$$
i.e., the determinant is null.

Letting $(a_0, a_1, a_2, a_3)$, $(b_0, b_1, b_2, b_3)$, $(c_0, c_1, c_2, c_3)$, $(d_0, d_1, d_2, d_3)$ be the barycentric coordinates of $a, b, c, d$, with respect to the standard affine frame for $\mathbb{A}^3$, prove that $a, b, c, d$ are coplanar iff
$$\begin{vmatrix} a_0 & b_0 & c_0 & d_0 \\ a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \end{vmatrix} = 0.$$

Problem 5 (10 pts). The function $f\colon \mathbb{A} \to \mathbb{A}^3$, given by
$$t \mapsto (t, t^2, t^3),$$
defines what is called a twisted cubic curve. Given any four pairwise distinct values $t_1, t_2, t_3, t_4$, prove that the points $f(t_1)$, $f(t_2)$, $f(t_3)$, and $f(t_4)$ are not coplanar.
Hint. Have you heard of the Vandermonde determinant?
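The determinant tests of Problem 4 are easy to implement and experiment with (a Python sketch, our own naming; the coplanarity test below uses the equivalent condition obtained by subtracting the first column of the bordered $4 \times 4$ determinant from the others):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def collinear(a, b, c):
    """Points of A^2 are collinear iff the bordered determinant vanishes."""
    return det3([[a[0], b[0], c[0]],
                 [a[1], b[1], c[1]],
                 [1, 1, 1]]) == 0

def coplanar(a, b, c, d):
    """Points of A^3 are coplanar iff det(b - a, c - a, d - a) = 0."""
    rows = [[b[i] - a[i], c[i] - a[i], d[i] - a[i]] for i in range(3)]
    return det3(rows) == 0
```

Sampling four distinct parameters on the twisted cubic $t \mapsto (t, t^2, t^3)$ and feeding them to `coplanar` always returns false, as the Vandermonde hint predicts.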

Problem 6 (20 pts). Given any two distinct points $a, b$ in $\mathbb{A}^2$ of barycentric coordinates $(a_0, a_1, a_2)$ and $(b_0, b_1, b_2)$ with respect to any given affine frame $(O, i, j)$, show that the equation of the line $\langle a, b\rangle$ determined by $a$ and $b$ is
$$\begin{vmatrix} a_0 & b_0 & x \\ a_1 & b_1 & y \\ a_2 & b_2 & z \end{vmatrix} = 0,$$
or equivalently
$$(a_1b_2 - a_2b_1)x + (a_2b_0 - a_0b_2)y + (a_0b_1 - a_1b_0)z = 0,$$
where $(x, y, z)$ are the barycentric coordinates of the generic point on the line $\langle a, b\rangle$.

Prove that the equation of a line in barycentric coordinates is of the form
$$ux + vy + wz = 0,$$
where $u \neq v$, or $v \neq w$, or $u \neq w$. Show that two equations $ux + vy + wz = 0$ and $u'x + v'y + w'z = 0$ represent the same line in barycentric coordinates iff $(u', v', w') = \lambda(u, v, w)$ for some $\lambda \in \mathbb{R}$ (with $\lambda \neq 0$).

A triple $(u, v, w)$, where $u \neq v$, or $v \neq w$, or $u \neq w$, is called a system of tangential coordinates of the line defined by the equation $ux + vy + wz = 0$.

Problem 7 (30 pts). Given two lines $D$ and $D'$ in $\mathbb{A}^2$ defined by tangential coordinates $(u, v, w)$ and $(u', v', w')$ (as defined in Problem 6), let
$$d = \begin{vmatrix} u & v & w \\ u' & v' & w' \\ 1 & 1 & 1 \end{vmatrix} = vw' - wv' + wu' - uw' + uv' - vu'.$$

(a) Prove that $D$ and $D'$ have a unique intersection point iff $d \neq 0$, and that when it exists, the barycentric coordinates of this intersection point are
$$\frac{1}{d}(vw' - wv',\ wu' - uw',\ uv' - vu').$$

(b) Letting $(O, i, j)$ be any affine frame for $\mathbb{A}^2$, recall that when $x + y + z = 0$, for any point $a$, the vector
$$x\overrightarrow{aO} + y\overrightarrow{ai} + z\overrightarrow{aj}$$
is independent of $a$ and equal to
$$y\overrightarrow{Oi} + z\overrightarrow{Oj} = (y, z).$$
The triple $(x, y, z)$ such that $x + y + z = 0$ is called the barycentric coordinates of the vector $y\overrightarrow{Oi} + z\overrightarrow{Oj}$ w.r.t. the affine frame $(O, i, j)$.

Given any affine frame $(O, i, j)$, the line of equation $ux + vy + wz = 0$ in barycentric coordinates $(x, y, z)$ (where $x + y + z = 1$) has for direction the set of vectors of barycentric coordinates $(x, y, z)$ such that $ux + vy + wz = 0$ (where $x + y + z = 0$). Prove that $D$ and $D'$ are parallel iff $d = 0$. In this case, if $D \neq D'$, show that the common direction of $D$ and $D'$ is defined by the vector of barycentric coordinates
$$(vw' - wv',\ wu' - uw',\ uv' - vu').$$

(c) Given three lines $D$, $D'$, and $D''$, at least two of which are distinct, and defined by tangential coordinates $(u, v, w)$, $(u', v', w')$, and $(u'', v'', w'')$, prove that $D$, $D'$, and $D''$ are parallel or have a unique intersection point iff
$$\begin{vmatrix} u & v & w \\ u' & v' & w' \\ u'' & v'' & w'' \end{vmatrix} = 0.$$

Problem 8 (30 pts). Let $(A, B, C)$ be a triangle in $\mathbb{A}^2$. Let $M, N, P$ be three points respectively on the lines $BC$, $CA$, and $AB$, of barycentric coordinates $(0, m', m'')$, $(n, 0, n'')$, and $(p, p', 0)$, w.r.t. the affine frame $(A, B, C)$.

(a) Assuming that $M \neq C$, $N \neq A$, and $P \neq B$, i.e., $m'n''p \neq 0$, show that
$$\frac{\overrightarrow{MB}}{\overrightarrow{MC}} \cdot \frac{\overrightarrow{NC}}{\overrightarrow{NA}} \cdot \frac{\overrightarrow{PA}}{\overrightarrow{PB}} = -\frac{m''np'}{m'n''p}.$$

(b) Prove Menelaus' theorem: the points $M, N, P$ are collinear iff
$$m''np' + m'n''p = 0.$$
When $M \neq C$, $N \neq A$, and $P \neq B$, this is equivalent to
$$\frac{\overrightarrow{MB}}{\overrightarrow{MC}} \cdot \frac{\overrightarrow{NC}}{\overrightarrow{NA}} \cdot \frac{\overrightarrow{PA}}{\overrightarrow{PB}} = 1.$$
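Menelaus' criterion is easy to sanity-check numerically. The sketch below (plain Python; the function names and the sample transversal are our own choices) computes the signed ratios of part (a) for the triangle $A = (0,0)$, $B = (1,0)$, $C = (0,1)$ cut by the line $y = x - \tfrac{1}{2}$, and checks that their product is 1:

```python
def signed_ratio(p, q, r):
    """Signed ratio pq/pr for collinear points p, q, r: the scalar t
    with q - p = t (r - p), read off the dominant coordinate."""
    dq = (q[0] - p[0], q[1] - p[1])
    dr = (r[0] - p[0], r[1] - p[1])
    i = 0 if abs(dr[0]) >= abs(dr[1]) else 1
    return dq[i] / dr[i]

def menelaus_product(A, B, C, M, N, P):
    """(MB/MC) * (NC/NA) * (PA/PB) for M on BC, N on CA, P on AB."""
    return (signed_ratio(M, B, C)
            * signed_ratio(N, C, A)
            * signed_ratio(P, A, B))
```

The transversal meets line $BC$ at $(3/4, 1/4)$, line $CA$ at $(0, -1/2)$, and line $AB$ at $(1/2, 0)$; the product of signed ratios comes out to 1, as the theorem requires for collinear $M, N, P$.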

(c) Prove Ceva's theorem: the lines $AM$, $BN$, $CP$ have a unique intersection point or are parallel iff
$$m''np' - m'n''p = 0.$$
When $M \neq C$, $N \neq A$, and $P \neq B$, this is equivalent to
$$\frac{\overrightarrow{MB}}{\overrightarrow{MC}} \cdot \frac{\overrightarrow{NC}}{\overrightarrow{NA}} \cdot \frac{\overrightarrow{PA}}{\overrightarrow{PB}} = -1.$$

Problem 9 (20 pts). This problem uses notions and results from Problems 6, 7, and 8. In view of (a) and (b) of Problem 7, it is natural to extend the notion of barycentric coordinates of a point in $\mathbb{A}^2$ as follows. Given any affine frame $(a, b, c)$ in $\mathbb{A}^2$, we will say that the barycentric coordinates $(x, y, z)$ of a point $M$, where $x + y + z = 1$, are the normalized barycentric coordinates of $M$. Then, any triple $(x, y, z)$ such that $x + y + z \neq 0$ is also called a system of barycentric coordinates for the point of normalized barycentric coordinates
$$\frac{1}{x + y + z}(x, y, z).$$
With this convention, the intersection of the two lines $D$ and $D'$ is either a point or a vector, in both cases of barycentric coordinates
$$(vw' - wv',\ wu' - uw',\ uv' - vu').$$
When the above is a vector, we can think of it as a point at infinity (in the direction of the line defined by that vector).

Let $(D_0, D_0')$, $(D_1, D_1')$, and $(D_2, D_2')$ be three pairs of six distinct lines, such that the four lines belonging to any union of two of the above pairs are neither parallel nor concurrent (have a common intersection point). If $D_0$ and $D_0'$ have a unique intersection point, let $M$ be this point, and if $D_0$ and $D_0'$ are parallel, let $M$ denote a nonnull vector defining the common direction of $D_0$ and $D_0'$. In either case, let $(m, m', m'')$ be the barycentric coordinates of $M$, as explained at the beginning of the problem. We call $M$ the intersection of $D_0$ and $D_0'$. Similarly, define $N = (n, n', n'')$ as the intersection of $D_1$ and $D_1'$, and $P = (p, p', p'')$ as the intersection of $D_2$ and $D_2'$. Prove that
$$\begin{vmatrix} m & n & p \\ m' & n' & p' \\ m'' & n'' & p'' \end{vmatrix} = 0$$

iff either

(i) $(D_0, D_0')$, $(D_1, D_1')$, and $(D_2, D_2')$ are pairs of parallel lines; or

(ii) the lines of some pair $(D_i, D_i')$ are parallel, each pair $(D_j, D_j')$ (with $j \neq i$) has a unique intersection point, and these two intersection points are distinct and determine a line parallel to the lines of the pair $(D_i, D_i')$; or

(iii) each pair $(D_i, D_i')$ ($i = 0, 1, 2$) has a unique intersection point, and these points $M, N, P$ are distinct and collinear.

Problem 10 (20 pts). Prove the following version of Pappus' Theorem. Let $D$ and $D'$ be distinct lines, and let $A, B, C$ and $A', B', C'$ be distinct points respectively on $D$ and $D'$. If these points are all distinct from the intersection of $D$ and $D'$ (if it exists), then the intersection points (in the sense of Problem 7) of the pairs of lines $(BC', CB')$, $(CA', AC')$, and $(AB', BA')$ are collinear in the sense of Problem 9.

Problem 11 (20 pts). Prove the following version of Desargues' Theorem. Let $A, B, C, A', B', C'$ be six distinct points of $\mathbb{A}^2$. If no three of the points $A, B, C, A', B', C'$ are collinear, then the lines $AA'$, $BB'$, and $CC'$ are parallel or concurrent iff the intersection points $M, N, P$ (in the sense of Problem 7) of the pairs of lines $(BC, B'C')$, $(CA, C'A')$, and $(AB, A'B')$ are collinear in the sense of Problem 9.

Problem 12 (30 pts). The purpose of this problem is to prove Pascal's Theorem for the nondegenerate conics. In the affine plane $\mathbb{A}^2$, a conic is the set of points of coordinates $(x, y)$ such that
$$\alpha x^2 + \beta y^2 + 2\gamma xy + 2\delta x + 2\lambda y + \mu = 0,$$
where $\alpha \neq 0$, or $\beta \neq 0$, or $\gamma \neq 0$. We can write the equation of the conic as
$$(x, y, 1)\begin{pmatrix} \alpha & \gamma & \delta \\ \gamma & \beta & \lambda \\ \delta & \lambda & \mu \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = 0.$$
If we now use barycentric coordinates $(x, y, z)$ (where $x + y + z = 1$), we can write
$$\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}.$$
Let
$$B = \begin{pmatrix} \alpha & \gamma & \delta \\ \gamma & \beta & \lambda \\ \delta & \lambda & \mu \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}.$$

(a) Letting A = C⊤BC, prove that the equation of the conic becomes

X⊤AX = 0.

Prove that A is symmetric, that det(A) = det(B), and that X⊤AX is homogeneous of degree 2. The equation X⊤AX = 0 is called the homogeneous equation of the conic. We say that a conic of homogeneous equation X⊤AX = 0 is nondegenerate if det(A) ≠ 0, and degenerate if det(A) = 0. Show that this condition does not depend on the choice of the affine frame.

(b) Given any affine frame (A, B, C), prove that any conic passing through A, B, C has an equation of the form

ayz + bxz + cxy = 0.

Prove that a conic containing more than one point is degenerate iff it contains three distinct collinear points. In this case, the conic is the union of two lines.

(c) Prove Pascal's Theorem. Given any six distinct points A, B, C, A′, B′, C′, if no three of the above points are collinear, then a nondegenerate conic passes through these six points iff the intersection points M, N, P (in the sense of problem 7) of the pairs of lines (BC′, CB′), (CA′, AC′), and (AB′, BA′) are collinear in the sense of problem 9. Hint: Use the affine frame (A, B, C), let (a, a′, a′′), (b, b′, b′′), (c, c′, c′′) be the barycentric coordinates of A′, B′, C′ respectively, and show that M, N, P have barycentric coordinates (bc, cb′, c′′b), (c′a, c′a′, c′′a′), (ab′′, a′′b′, a′′b′′).

Problem 13 (10 pts). If an affine map takes the vertices of triangle ∆1 = {(0, 0), (6, 0), (0, 9)} to the vertices of triangle ∆2 = {(1, 1), (5, 4), (3, 1)}, does it also take the centroid of ∆1 to the centroid of ∆2? Justify your answer. The centroid of a triangle (a, b, c) is the barycenter of (a, 1/3), (b, 1/3), (c, 1/3).

Problem 14 (20 pts). Let E be an affine space over R, and let (a1, . . . , an) be any n ≥ 3 points in E. Let (λ1, . . . , λn) be any n scalars in R, with λ1 + · · · + λn = 1. Show that there must be some i, 1 ≤ i ≤ n, such that λi ≠ 1. To simplify the notation, assume that λ1 ≠ 1. Show that the barycenter λ1a1 + · · · + λnan can be obtained by first determining the barycenter b of the n − 1 points a2, . . . , an assigned some appropriate weights, and then the barycenter of a1 and b assigned the weights λ1 and λ2 + · · · + λn. From this, show that the barycenter of any n ≥ 3 points can be determined by repeated computations of barycenters of two points. Deduce from the above that a nonempty subset V of E is an affine subspace iff whenever V contains any two points x, y ∈ V, then V contains the entire line (1 − λ)x + λy, λ ∈ R.

Problem 15 Extra Credit (20 pts). Assume that K is a field such that 2 = 1 + 1 ≠ 0, and let E be an affine space over K. In the case where λ1 + · · · + λn = 1 and λi = 1, for 1 ≤ i ≤ n and n ≥ 3, show that the barycenter a1 + a2 + · · · + an can still be computed by repeated computations of barycenters of two points. Hint: Since there is some µ ∈ K such that µ ≠ 0 and µ ≠ 1,

note that µ⁻¹ and (1 − µ)⁻¹ both exist, and use the fact that

−µ/(1 − µ) + 1/(1 − µ) = 1,

and that the problem reduces to computing the barycenter of three points in two steps involving two barycenters.

Finally, assume that the field K contains at least three elements (thus, there is some µ ∈ K such that µ ≠ 0 and µ ≠ 1, but 2 = 1 + 1 = 0 is possible). Prove that the barycenter of any n ≥ 3 points can be determined by repeated computations of barycenters of two points. When 2 = 0, show that n must be odd. Prove that a nonempty subset V of E is an affine subspace iff whenever V contains any two points x, y ∈ V, then V contains the entire line (1 − λ)x + λy, λ ∈ K.

Problem 16 (20 pts).

(i) Let (a, b, c) be three points in A², and assume that (a, b, c) are not collinear. For any point x ∈ A², if x = λ0a + λ1b + λ2c, where (λ0, λ1, λ2) are the barycentric coordinates of x with respect to (a, b, c), show that

λ0 = det(xb, bc)/det(ab, ac),   λ1 = det(ax, ac)/det(ab, ac),   λ2 = det(ab, ax)/det(ab, ac),

where pq denotes the vector from p to q. Conclude that λ0, λ1, λ2 are certain signed ratios of the areas of the triangles (a, b, c), (x, b, c), (a, x, c), and (a, b, x).

(ii) Let (a, b, c) be three points in A³, and assume that (a, b, c) are not collinear. For any point x in the plane determined by (a, b, c), if x = λ0a + λ1b + λ2c, where (λ0, λ1, λ2) are the barycentric coordinates of x with respect to (a, b, c), show that

λ0 = (xb × bc)/(ab × ac),   λ1 = (ax × ac)/(ab × ac),   λ2 = (ab × ax)/(ab × ac).

Given any point O not in the plane of the triangle (a, b, c), prove that

λ0 = det(Ox, Ob, Oc)/det(Oa, Ob, Oc),   λ1 = det(Oa, Ox, Oc)/det(Oa, Ob, Oc),   λ2 = det(Oa, Ob, Ox)/det(Oa, Ob, Oc).

(iii) Let (a, b, c, d) be four points in A³, and assume that (a, b, c, d) are not coplanar. For any point x ∈ A³, if x = λ0a + λ1b + λ2c + λ3d, where (λ0, λ1, λ2, λ3) are the barycentric coordinates of x with respect to (a, b, c, d), show that

λ0 = det(xb, bc, bd)/det(ab, ac, ad),   λ1 = det(ax, ac, ad)/det(ab, ac, ad),
λ2 = det(ab, ax, ad)/det(ab, ac, ad),   λ3 = det(ab, ac, ax)/det(ab, ac, ad).
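The determinant formulas of Problem 16(i) are easy to sanity-check numerically. The following Python sketch (not part of the text; the function names are ours) computes barycentric coordinates in A² exactly this way, with `det2` the 2×2 determinant of two plane vectors:

```python
# Barycentric coordinates in A^2 via the signed-area (determinant) formulas
# of Problem 16(i): lambda_0 = det(xb, bc)/det(ab, ac), and so on.

def det2(u, v):
    # 2x2 determinant of the column vectors u, v
    return u[0] * v[1] - u[1] * v[0]

def vec(p, q):
    # vector from point p to point q
    return (q[0] - p[0], q[1] - p[1])

def barycentric(a, b, c, x):
    d = det2(vec(a, b), vec(a, c))        # det(ab, ac); nonzero iff a, b, c not collinear
    l0 = det2(vec(x, b), vec(b, c)) / d   # det(xb, bc)/det(ab, ac)
    l1 = det2(vec(a, x), vec(a, c)) / d   # det(ax, ac)/det(ab, ac)
    l2 = det2(vec(a, b), vec(a, x)) / d   # det(ab, ax)/det(ab, ac)
    return l0, l1, l2

a, b, c = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
x = (0.25, 0.5)
l0, l1, l2 = barycentric(a, b, c, x)
# the coordinates sum to 1, and l0*a + l1*b + l2*c recovers x
```

For a point inside the triangle, all three coordinates are positive, which matches the signed-area interpretation.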

Conclude that λ0, λ1, λ2, λ3 are certain signed ratios of the volumes of the five tetrahedra (a, b, c, d), (x, b, c, d), (a, x, c, d), (a, b, x, d), and (a, b, c, x).

(iv) Let (a0, . . . , am) be m + 1 points in Am, and assume that they are affinely independent. For any point x ∈ Am, if x = λ0a0 + · · · + λmam, where (λ0, . . . , λm) are the barycentric coordinates of x with respect to (a0, . . . , am), show that

λi = det(a0a1, . . . , a0ai−1, a0x, a0ai+1, . . . , a0am)/det(a0a1, . . . , a0ai, . . . , a0am)

for every i, 1 ≤ i ≤ m, and

λ0 = det(xa1, a1a2, . . . , a1am)/det(a0a1, . . . , a0ai, . . . , a0am).

Conclude that λi is the signed ratio of the volumes of the simplexes (a0, . . . , x, . . . , am) and (a0, . . . , ai, . . . , am), where 0 ≤ i ≤ m.

Problem 17 (30 pts). With respect to the standard affine frame for the plane A², consider the three geometric transformations f1, f2, f3 defined by

f1:  x′ = −(1/4)x − (√3/4)y + 3/4,    y′ = (√3/4)x − (1/4)y + √3/4,
f2:  x′ = −(1/4)x + (√3/4)y − 3/4,    y′ = −(√3/4)x − (1/4)y + √3/4,
f3:  x′ = (1/2)x,                     y′ = (1/2)y + √3/2.

(a) Prove that these maps are affine. Can you describe geometrically what their action is (rotation, translation, scaling)?

(b) Given any polygonal line L, define the following sequence of polygonal lines:

S0 = L,
Sn+1 = f1(Sn) ∪ f2(Sn) ∪ f3(Sn).
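The iteration in Problem 17(b) is easy to program. The sketch below (ours, not the book's; the coefficients of f1, f2, f3 are taken from the problem statement as reconstructed above, so treat them as an assumption) keeps each Sn as a list of polygonal lines and applies all three maps at every step:

```python
import math

# Iterating S_0 = L, S_{n+1} = f1(S_n) U f2(S_n) U f3(S_n),
# where each S_n is a list of polygonal lines (lists of points).

S3 = math.sqrt(3.0)

def f1(p):
    x, y = p
    return (-x / 4 - S3 * y / 4 + 3 / 4, S3 * x / 4 - y / 4 + S3 / 4)

def f2(p):
    x, y = p
    return (-x / 4 + S3 * y / 4 - 3 / 4, -S3 * x / 4 - y / 4 + S3 / 4)

def f3(p):
    x, y = p
    return (x / 2, y / 2 + S3 / 2)

def step(S):
    # one iteration: apply each map to every polygonal line of S
    return [[f(p) for p in line] for f in (f1, f2, f3) for line in S]

L = [(-1.0, 0.0), (1.0, 0.0)]
S = [L]
for _ in range(3):
    S = step(S)
# after n steps, S consists of 3**n segments, each shrunk by the factor 2**n
```

Each map is a similarity of ratio 1/2 (f1 and f2 also rotate by ±120°), so the segments shrink geometrically; plotting successive Sn suggests what the limit looks like.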

Construct S1 starting from the line segment L = ((−1, 0), (1, 0)). Can you figure out what Sn looks like in general? (You may want to write a computer program.) Do you think that Sn has a limit?

Problem 18 (20 pts). In the plane A², with respect to the standard affine frame, a point of coordinates (x, y) can be represented as the complex number z = x + iy. Consider the set of geometric transformations of the form

z → az + b,

where a, b are complex numbers such that a ≠ 0.

(a) Prove that these maps are affine. Describe what these maps do geometrically.

(b) Prove that the above set of maps is a group under composition.

(c) Consider the set of geometric transformations of the form

z → az + b   or   z → az̄ + b,

where a, b are complex numbers such that a ≠ 0, and where z̄ = x − iy if z = x + iy. Describe what these maps do geometrically. Prove that these maps are affine and that this set of maps is a group under composition.

Problem 19 (20 pts). The purpose of this problem is to study certain affine maps of A².

(1) Consider affine maps of the form

| x1 |     | cos θ  − sin θ | | x1 |   | b1 |
| x2 |  →  | sin θ    cos θ | | x2 | + | b2 |.

Prove that such maps have a unique fixed point c if θ ≠ 2kπ, for all integers k. Show that these are rotations of center c, which means that with respect to a frame with origin c (the unique fixed point), these affine maps are represented by rotation matrices.

(2) Consider affine maps of the form

| x1 |     | λ cos θ  −λ sin θ | | x1 |   | b1 |
| x2 |  →  | µ sin θ   µ cos θ | | x2 | + | b2 |.

Prove that such maps have a unique fixed point iff (λ + µ) cos θ ≠ 1 + λµ. Prove that if λµ = 1 and λ > 0, then there is some angle θ for which either there is no fixed point, or there are infinitely many fixed points.

(3) Prove that the affine map

| x1 |     | 8/5   −6/5 | | x1 |   | 1 |
| x2 |  →  | 3/10   2/5 | | x2 | + | 1 |
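For affine maps x → Ax + b of the plane, a fixed point solves (A − I)x = −b, so existence and uniqueness reduce to the invertibility of A − I. A small numerical sketch (ours, not the book's solution), using Cramer's rule for the 2×2 system:

```python
# Fixed points of a planar affine map x -> A x + b: a fixed point solves
# (A - I) x = -b, so there is a unique one exactly when det(A - I) != 0.

def fixed_point(A, b, eps=1e-9):
    (a1, a2), (a3, a4) = A
    det = (a1 - 1.0) * (a4 - 1.0) - a2 * a3   # det(A - I)
    if abs(det) < eps:
        return None                            # no fixed point, or infinitely many
    # Cramer's rule applied to (A - I) x = -b
    x = (-b[0] * (a4 - 1.0) - a2 * (-b[1])) / det
    y = ((a1 - 1.0) * (-b[1]) - (-b[0]) * a3) / det
    return (x, y)

# the map of part (3): here det(A - I) = (3/5)(-3/5) + (6/5)(3/10) = 0,
# so there is no unique fixed point
no_fix = fixed_point(((8 / 5, -6 / 5), (3 / 10, 2 / 5)), (1.0, 1.0))

# a quarter-turn rotation translated by (1, 0): unique fixed point (1/2, 1/2)
center = fixed_point(((0.0, -1.0), (1.0, 0.0)), (1.0, 0.0))
```

The second example illustrates part (1): a rotation by θ ≠ 2kπ composed with a translation is again a rotation, about the fixed point the solver returns.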

has no fixed point.

(4) Prove that an arbitrary affine map

| x1 |     | a1  a2 | | x1 |   | b1 |
| x2 |  →  | a3  a4 | | x2 | + | b2 |

has a unique fixed point iff the matrix

| a1 − 1    a2   |
|   a3    a4 − 1 |

is invertible.

Problem 20 (20 pts). Let E be any affine space of finite dimension. For every affine map f : E → E, let Fix(f) = {a ∈ E | f(a) = a} be the set of fixed points of f.

(i) Prove that if Fix(f) ≠ ∅, then Fix(f) is an affine subspace of E such that for every b ∈ Fix(f),

Fix(f) = b + Ker (f − id).

(ii) Prove that Fix(f) contains a unique fixed point iff Ker (f − id) = {0}, i.e., f(u) = u iff u = 0.

Hint: Show that

Ωf(a) − Ωa = Ωf(Ω) + f(Ωa) − Ωa

for any two points Ω, a ∈ E.

Problem 21 (20 pts). Let (c1, . . . , cn) be n ≥ 3 points in Am (where m ≥ 2). Investigate whether there is a closed polygon with n vertices (a1, . . . , an) such that ci is the middle of the edge (ai, ai+1) for every i with 1 ≤ i ≤ n − 1, and cn is the middle of the edge (an, a1). Hint: The parity (odd or even) of n plays an important role. When n is odd, there is a unique solution, and when n is even, there are no solutions or infinitely many solutions. Clarify under which conditions there are infinitely many solutions.

Problem 22 (20 pts). Let a, b, c be any distinct points in A3, and assume that they are not collinear. Let H be the plane of equation

αx + βy + γz + δ = 0.

(i) What is the intersection of the plane H and of the solid triangle determined by a, b, c (the convex hull of a, b, c)?

(ii) Give an algorithm to find the intersection of the plane H and of the triangle determined by a, b, c.

(iii) (extra credit 20 pts) Implement the above algorithm so that the intersection can be visualized (you may use, for example, Mathematica, Maple, Matlab, etc.).
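One natural way to organize the computation asked for in (ii) is to evaluate the affine form h(p) = αx + βy + γz + δ at the three vertices and clip each edge where h changes sign. The Python sketch below is one possible approach, not the book's solution; the degenerate case where the whole triangle lies inside H is not handled:

```python
# Plane-triangle intersection: evaluate h at the vertices, keep vertices
# lying on H, and interpolate along edges whose endpoints straddle H.
# Returns the empty set, a single point, or a segment (two points).

def plane_triangle_intersection(plane, tri, eps=1e-12):
    alpha, beta, gamma, delta = plane
    h = lambda p: alpha * p[0] + beta * p[1] + gamma * p[2] + delta
    pts = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        hp, hq = h(p), h(q)
        if abs(hp) < eps:                  # vertex p lies on H
            pts.append(p)
        elif hp * hq < 0:                  # edge (p, q) crosses H: interpolate
            t = hp / (hp - hq)
            pts.append(tuple(p[k] + t * (q[k] - p[k]) for k in range(3)))
    # remove duplicates (a vertex on H is seen by two edges)
    out = []
    for p in pts:
        if all(max(abs(p[k] - q[k]) for k in range(3)) > eps for q in out):
            out.append(p)
    return out

tri = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0))
seg = plane_triangle_intersection((1.0, 0.0, 0.0, -1.0), tri)  # plane x = 1
```

Here the plane x = 1 cuts the triangle in the segment from (1, 0, 0) to (1, 1, 0), which is exactly the kind of answer part (i) asks you to characterize.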


Part II Polynomial Curves and Spline Curves


Chapter 3

Introduction to the Algorithmic Geometry of Polynomial Curves

3.1 Why Parameterized Polynomial Curves?

In order to be able to design, manipulate, and render curves and surfaces, we have to choose some mathematical model for defining curves and surfaces. The kind of model that we will adopt consists of specifying a curve or surface S in terms of a certain system of equations, which is used to determine which points lie on S and which do not. Oversimplifying a bit, the shape S lives in some object space E of some dimension n (typically, the affine plane A², or the 3D affine space A³), and the shape has some dimension s ≤ n (s = 1 for a curve, s = 2 for a surface). Given that we adopt such a model, we face a number of questions which are listed below.

A single piece, or several pieces joined together? We can model the entire shape S via a single system of equations. This is usually mathematically simpler, and it works well for very simple shapes. But for complex shapes, possibly composite, a combinatorial explosion usually occurs which makes this approach impractical. Furthermore, such an approach is not modular, and unsuitable to tackle problems for which it may be necessary to modify certain parts and leave the other parts unchanged. Such design problems are usually handled by breaking the curve or surface into simpler pieces, and by specifying how these pieces join together, with some degree of smoothness. In CAGD jargon, we model composite shapes with splines. Nevertheless, since composite shapes are decomposed into simpler pieces, it is very important to know how to deal with these simpler building blocks effectively, and we will now concentrate on modelling a single curve or surface. Later on, we will come back to splines.

Parametric or Implicit Model? Mathematically, we view these equations as defining a certain function F, and the shape S is viewed either as the range of this function, or as the zero locus of this function (i.e., the set of points that are mapped to "zero" under F). The two models just mentioned define S either in terms of

a function F : P → E, where P is another space called the parameter space, or a function F : E → P. In the first case where F : P → E, we say that we have a parametric model, and in the second case where F : E → P, we say that we have an implicit model. Let us examine each model.

In the parametric model, a shape S of dimension s specified by a function F : P → E, where E is of dimension n ≥ s, is defined as the range F(P) of the function F. Thus, the parameter space P also has dimension s ≤ n (typically, P = A for a curve, where A denotes the affine line, and P = A² for a surface, where A² denotes the affine plane). Every point lying on the shape S is represented as F(a), for some parameter value a ∈ P (possibly many parameter values).

For example, the function F : A → A² defined such that

F1(t) = 2t + 1,
F2(t) = t − 1,

represents a straight line in the affine plane. The function F : A → A² defined such that

F1(t) = 2t,
F2(t) = t²,

represents a parabola in the affine plane.

Figure 3.1: A parabola
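A quick way to see the parametric model "providing points directly" is to sample the parameter. The Python sketch below (ours) traces the two examples above, with the pairing of a straight line with F2(t) = t − 1 and of the parabola with F2(t) = t² as in the text:

```python
# Sampling the two parametric examples: the line F(t) = (2t + 1, t - 1)
# and the parabola F(t) = (2t, t^2).

def line(t):
    return (2 * t + 1, t - 1)

def parabola(t):
    return (2 * t, t * t)

samples = [i / 10 for i in range(-10, 11)]       # t in [-1, 1]
line_pts = [line(t) for t in samples]
parab_pts = [parabola(t) for t in samples]
# every sampled parabola point satisfies y = x^2 / 4
```

Every parameter value yields a point of the trace with no equation solving, which is exactly the advantage of the parametric model discussed later in this section.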

Figure 3.2: A hyperbolic paraboloid

For an example of a surface, the function F : A² → A³ defined such that

F1(u, v) = u,
F2(u, v) = v,
F3(u, v) = u² − v²,

represents what is known as a hyperbolic paraboloid. Roughly speaking, it looks like a saddle (an infinite one!). The function F : A² → A³ defined such that

F1(u, v) = u,
F2(u, v) = v,
F3(u, v) = 2u² + v²,

represents what is known as an elliptic paraboloid.

For a fancier example of a space curve, the function F : A → A³ defined such that

F1(t) = t,
F2(t) = t²,
F3(t) = t³,

represents a curve known as a twisted cubic, in the 3D affine space A³.

Figure 3.3: An elliptic paraboloid

For a more exotic example, the function F : A² → A³ defined such that

F1(u, v) = u,
F2(u, v) = v,
F3(u, v) = u³ − 3v²u,

represents what is known as the monkey saddle.

In the implicit model, a shape S of dimension s specified by a function F : E → P, where E is of dimension n ≥ s, is defined as the zero locus F⁻¹(0) of the function F, that is, as the set of zeros of the function F:

S = F⁻¹(0) = {a | a ∈ E, F(a) = 0}.

In this case, the space P is a vector space, and it has some dimension d ≥ n − s. Of course, it is possible that F⁻¹(0) = ∅. For example, if F : A² → A is the function defined such that

F(x, y) = x² + y² + 1,

F defines the empty curve, since the equation x² + y² = −1 has no real solutions. In order to avoid such situations, we could assume that our spaces are defined over the field of complex numbers (or more generally, an algebraically closed field). This would have certain mathematical advantages, but does not help us much for visualizing the shape, since we are primarily interested in the real part of shapes (curves and surfaces). Thus, we will make the working assumption that we are only considering functions that define nonempty shapes.

Figure 3.4: A monkey saddle

There are some serious problems with such an assumption. Although this is not entirely obvious, it is not clear that it can be decided whether an arbitrary algebraic equation has real solutions or not. These complications are one of the motivations for paying more attention to the parametric model. However, the implicit model is more natural to deal with certain classes of problems, where implicit equations of shapes are directly available.

As a simple example of a curve defined implicitly, the function F : A² → A defined such that

F(x, y) = 2y − x + 3,

for all x, y ∈ A, defines a straight line (in fact, the same line as defined above parametrically). The function F : A² → A defined such that

F(x, y) = 4y − x²,

for all x, y ∈ A, defines the same parabola as the above parametric definition, since we immediately see that y = x²/4 for every point on this parabola. The function F : A² → A defined such that

F(x, y) = 2x² + y² − 1,

for all x, y ∈ A, defines an ellipse in the affine plane. The twisted cubic is defined implicitly by the function F : A³ → A², defined such that

F1(x, y, z) = y − x²,
F2(x, y, z) = z − xy.
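Membership testing is where the implicit model shines: a candidate point is simply plugged into the equations. A Python sketch for the twisted cubic (the function names are ours):

```python
# A point lies on the twisted cubic iff both implicit equations hold:
# y - x^2 = 0 and z - x*y = 0.

def on_twisted_cubic(p, eps=1e-12):
    x, y, z = p
    return abs(y - x * x) < eps and abs(z - x * y) < eps

# points produced by the parametric form (t, t^2, t^3) must pass the test
param_pts = [(t, t * t, t ** 3) for t in (-2.0, -0.5, 0.0, 1.0, 3.0)]
ok = all(on_twisted_cubic(p) for p in param_pts)
off = on_twisted_cubic((1.0, 2.0, 3.0))   # not on the curve
```

Conversely, producing points from the implicit equations alone would require solving the system, which illustrates the trade-off between the two models discussed below.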

The unit sphere is defined implicitly by the function F : A³ → A, defined such that

F(x, y, z) = x² + y² + z² − 1.

The hyperbolic paraboloid discussed in the parametric model is defined by the function

F(x, y, z) = z − x² + y².

The elliptic paraboloid discussed in the parametric model is defined by the function

F(x, y, z) = z − 2x² − y².

If P has dimension d ≥ n − s, the function F corresponds to d scalar-valued functions (F1, . . . , Fd), and instead of saying that S is the zero locus of the function F : E → P, we usually say that S is defined by the set of equations

F1 = 0,  F2 = 0,  . . . ,  Fd = 0,

or that S is the zero locus of the above system of equations.

For another familiar example, the unit circle is defined implicitly by the equation

x² + y² − 1 = 0,

and it has the parametric representation

x = cos θ,  y = sin θ.

This last example leads us to another major question:

What Class of Functions for F? Although trigonometric functions are perfectly fine, for computational (algorithmic) reasons, we may want to use a simpler class of functions. Certainly, we should only consider continuous functions which are sufficiently differentiable, to yield shapes that are reasonably smooth. The class of functions definable by polynomials is very well-behaved, and turns out to be sufficient for most purposes. When polynomials are insufficient, we can turn to the class of functions definable by rational functions (fractions of polynomials), which is sufficient for most computational applications. From a practical point of view, using polynomials is rarely a restriction (continuous functions on reasonable domains can

be approximated by polynomial functions), and polynomials are differentiable at any order. Also, dealing with rational fractions turns out to be largely reduced to dealing with polynomials.

In general, studying shapes S defined by systems of polynomial equations is essentially the prime concern of algebraic geometry. This is a fascinating, venerable, and very difficult subject, way beyond the scope of this course. Bold readers are urged to consult Fulton [37] or Harris [41]. Similarly, studying shapes S defined by systems of parametric polynomials (or rational fractions) is in some sense subsumed by differential geometry, another venerable subject. We will make a very modest use of elementary notions of differential geometry, and an even more modest use of very elementary notions of algebraic geometry; as we shall see very soon, what we use primarily is some elements of multilinear algebra.

Because the study of shapes modelled as zero loci of polynomial equations is quite difficult, we will focus primarily on the study of shapes modelled parametrically. There are other advantages in dealing with the parametric model. In the parametric model, the function F : P → E provides immediate access to points on the shape S: every parameter value a ∈ P yields a point F(a) lying on the shape S. This is not so in the implicit model, where the function F : E → P does not provide immediate access to points in S. Given a point b ∈ E, to determine whether b ∈ S usually requires solving a system of algebraic equations, which may be very difficult. Also, it is easier to "carve out" portions of the shape, by simply considering subsets of the parameter space.

Nevertheless, given a parametric definition F : P → E for a shape S, it is often useful to find an implicit definition G : E → P for S. This is called implicitization,¹ and is done using the algebraic concept of a resultant. It should be noted that it is not always possible to go from an implicit algebraic definition G : E → P for S to a parametric algebraic definition F : P → E for S. The discrepancy shows up for m = 2 for polynomial functions, and for m = 3 for rational functions.

Having decided that we will use polynomial (or rational) functions, there is one more question: What Degree for F? In most practical applications involving many small pieces joined together (splines), the degree m = 2 or m = 3 is sufficient for the pieces. However, for a single piece, the degree could be quite high; in general, the choice of degree depends on how many pieces are involved and how complex each piece is.

It should also be noted that if we consider parametric curves F : A → E defined by functions that are more general than polynomial or rational functions, for example functions that are only continuous, then some unexpected objects show up as the traces of such curves.

¹ Actually, it can be shown that it is always possible to find an algebraic implicit definition G for S, but there are some subtleties regarding over which field we view the curves or surfaces as being defined.

Peano and Hilbert (among others) showed that there are space filling curves, i.e., continuous curves F : [0, 1] → A² whose trace is the entire unit square (including its interior)! See the problems. This is a good motivation for assuming that the functions F are sufficiently differentiable. Polynomials certainly qualify!

Before we launch into our study of parametric curves, we believe that it may be useful to review briefly what kind of plane curves are defined in the implicit model, that is, what sort of curves are defined by polynomial equations

f(x, y) = 0,

where f(x, y) is a polynomial in x, y, with real coefficients, of total degree m ≤ 2.

For m = 1, we have the equation

ax + by + c = 0,

which defines a straight line. It is very easy to see that straight lines are also definable in the parametric model.

The general curve of degree 2 is defined by an equation of the form

ax² + bxy + cy² + dx + ey + f = 0.

Such an equation defines a (plane) conic (because it is the curve obtained by intersecting a circular cone with a plane). Except for some degenerate cases, the conics can be classified in three types (after some suitable change of coordinates):

Parabolas, of equation  y² = 4ax.

Ellipses, of equation  x²/a² + y²/b² = 1.  (Without loss of generality, we can assume a ≥ b.)

Hyperbolas, of equation  x²/a² − y²/b² = 1.

The above definitions are algebraic. It is possible to give more geometric characterizations of the conics, and we briefly mention the classical bifocal definition.²

Let us begin with the parabola, of equation y² = 4ax. The point F of coordinates (a, 0) is called the focus of the parabola, and the line D of equation x = −a (parallel to the y-axis) is called the directrix of the parabola. For any point M in the plane, if H is the intersection of the perpendicular through M to D, then the parabola of equation y² = 4ax is the set of points in the plane such that MH = MF.

² The above definition of the conics assumes a Euclidean structure on the affine plane, and AB denotes the Euclidean length of the line segment AB.
and AB denotes the Euclidean length of the line segment AB. with a plane).70 CHAPTER 3. and we brieﬂy mention the classical bifocal deﬁnition.

one can draw the ellipse.1. . Let F and F ′ be the points of coordinates (−c. since we are assuming that a ≥ b. make two knots at distance 2a from each other. using a stick. let c = a2 − b2 (so that a2 = b2 + c2 ). Then. Indeed. 0). on the ground.3. WHY PARAMETERIZED POLYNOMIAL CURVES? D H M 71 F Figure 3.5: A Parabola For example. Each of F. 0). and drive a nail through each knot in such a way that the nails are positioned on the foci F and F ′ . Then. F ′ is a focus of the ellipse. 0) and (c. This is the “gardener’s deﬁnition” of an ellipse. one can take a string. and directrix D of equation x = −1: √ In the case of an ellipse. the parabola of equation y 2 = 4x has focus F = (1. by moving the stick along the string (perpendicular to the ground) in such a way that the string is kept stiﬀ. the ellipse deﬁned by x2 y 2 + 2 =1 a2 b is the set of points in the plane such that MF + MF ′ = 2a (note that c ≤ a).

that is. 0). 0). Let F and F ′ be the points of coordinates (−c. and the foci are F = √ √ (− 2. INTRODUCTION TO POLYNOMIAL CURVES M F F′ Figure 3. corresponds to a circle. and F ′ = (4. 0) and (c. F ′ is a focus of the hyperbola.72 CHAPTER 3. a = b. or hyperbola (where c > 0). parabola. √ In the case of an hyperbola. c = a2 + b2 = 1 + 1 = 2. b = 3. 0): In the case of an ellipse. the hyperbola deﬁned by x2 y 2 − 2 =1 a2 b is the set of points in the plane such that | MF − MF ′ | = 2a (note that c > a). we let c = a2 + b2 (so that c2 = a2 + b2 ). Each of F. and thus. For example. and thus c = a2 − b2 = 25 − 9 = 16 = 4. the constant c e= >0 a . The foci are F = (−4. the hyperbola x2 − y 2 = 1 √ √ √ is such that a = b = 1. 0). Then. the ellipse x2 y 2 + =1 25 9 √ √ √ is such that a = 5. 0): The case where c = 0.6: An Ellipse For example. and F ′ = ( 2.

Figure 3.7: A Hyperbola

is called the eccentricity of the conic. There is also a monofocal definition of ellipses, parabolas, and hyperbolas, involving the eccentricity. For more details, the reader is referred to standard geometry texts, for example, Berger or Pedoe.

We will see very soon that only the parabolas can be defined parametrically using polynomials. On the other hand, all the conics can be defined parametrically using rational fractions.

For m = 3, things become a lot tougher! The general curve of degree 3 is defined by an equation of the form

ϕ3(x, y) + ϕ2(x, y) + ϕ1(x, y) + ϕ0 = 0,

where each ϕi(x, y) is a homogeneous polynomial of total degree i, with i = 1, 2, 3, and ϕ0 is a scalar. The curve defined by such an equation is called a (plane) cubic. In general, a plane cubic cannot be defined parametrically using rational functions (and a fortiori, using polynomials). We will also characterize those cubics that can be defined parametrically in terms of polynomials. Since we assumed that we are only considering nonempty curves, we can pick any point of the cubic as the origin. In this case ϕ0 = 0, and the equation of the cubic becomes

ϕ3(x, y) + ϕ2(x, y) + ϕ1(x, y) = 0.

It can be shown that the cubics that have rational representations are characterized by the fact that ϕ1(x, y) is the null polynomial.

After these general considerations, it is now time to focus on the parametric model, and we begin with a rigorous definition. On the way, we will introduce a major technique of CAGD, blossoming.

Definition 3.1.1. Given any affine space E of finite dimension n and any affine frame (a0, (e1, . . . , en)) for E, a (parameterized) polynomial curve of degree at most m is a map F : A → E, such that for every t ∈ A,

F(t) = a0 + F1(t)e1 + · · · + Fn(t)en,

where t = (1 − t)0 + t1, and every Fi(X) is a polynomial in R[X] of degree ≤ m, 1 ≤ i ≤ n. The set of points F(A) in E is called the trace of the polynomial curve F, sometimes called the geometric curve associated with F : A → E. Given any affine frame (r, s) for A with r < s, a (parameterized) polynomial curve segment F[r, s] of degree (at most) m is the restriction F : [r, s] → E of a polynomial curve F : A → E of degree at most m. The set of points F([r, s]) in E is called the trace of the polynomial curve segment F[r, s].

Intuitively, a polynomial curve is obtained by bending and twisting the affine line A using a polynomial map. It should be noted that if d is the maximum degree of the polynomials F1, . . . , Fn defining a polynomial curve F of degree m, it is only required that d ≤ m, although d = m is possible. For reasons that will become clear later on, we will need to join curve segments of possibly different degrees, and it will be convenient to "raise" the degree of some of these curve segments to a common degree. Also, many different parametric representations may yield the same trace, and in most cases, we are more interested in the trace of a curve than in its actual parametric representation.

Recall that every point t ∈ A is expressed as t = (1 − t)0 + t1 in terms of the affine basis (0, 1). For reasons that will become clear later on, it is preferable to consider the parameter space R as the affine line A. This decision, which may seem unusual, will in fact be convenient later on for CAGD applications. For simplicity of notation, we view t ∈ A as the real number t ∈ R, and write F(t) instead of F((1 − t)0 + t1).

We will now try to gain some insight into polynomial curves by determining the shape of the traces of plane polynomial curves (curves living in E = A²) of degree m ≤ 3.

3.2 Polynomial Curves of degree 1 and 2

We begin with m = 1. A polynomial curve F of degree ≤ 1 is of the form

x(t) = F1(t) = a1t + a0,
y(t) = F2(t) = b1t + b0.

If both a1 = b1 = 0, the trace of F reduces to the single point (a0, b0). Otherwise, a1 ≠ 0 or b1 ≠ 0, and we can eliminate t between x and y, getting the implicit equation

a1y − b1x + a0b1 − a1b0 = 0,

which is the equation of a straight line.

Let us now consider m = 2. A polynomial curve F of degree ≤ 2 is of the form

x(t) = F1(t) = a2t² + a1t + a0,
y(t) = F2(t) = b2t² + b1t + b0.

Since we already considered the case where a2 = b2 = 0, let us assume that a2 ≠ 0 or b2 ≠ 0. We first show that by a change of coordinates (amounting to a rotation), we can always assume that either a2 = 0 (or b2 = 0). Let ρ = √(a2² + b2²), and consider the matrix R given below:

R = | b2/ρ   −a2/ρ |
    | a2/ρ    b2/ρ |

Under the change of coordinates

| x1 |     | x |
| y1 | = R | y |,

we get

x1(t) = ((a1b2 − a2b1)/ρ) t + (a0b2 − a2b0)/ρ,
y1(t) = ρt² + ((a1a2 + b1b2)/ρ) t + (a0a2 + b0b2)/ρ.

The effect of this rotation is that the curve now "stands straight up" (since ρ > 0). If a1b2 = a2b1, then we have a degenerate case where x1(t) is equal to a constant and y1(t) ≥ y1(t0), with t0 = −(a1a2 + b1b2)/(2ρ²), which corresponds to a half line. Note that the implicit equation

x1 = (a0b2 − a2b0)/ρ

gives too much! The above equation yields the entire line, whereas the parametric representation yields the upper half line. Thus, there is a mismatch between the implicit representation and the parametric representation. We will come back to this point later on.

If a1b2 − a2b1 ≠ 0, then we can eliminate t between x1 and y1. However, before doing so, it is convenient to make a change of parameter, to suppress the term of degree 1 in t in y1(t), by letting

t = u − (a1a2 + b1b2)/(2ρ²).

Then, we have a parametric representation of the form

x1(u − µ) = au + a′0,
y1(u − µ) = bu² + b′0,

and we can change coordinates again (a translation), letting

X(u) = x1(u − µ) − a′0,
Y(u) = y1(u − µ) − b′0,

and we get a parametric representation of the form

X(u) = au,  Y(u) = bu²,

with b > 0. The corresponding implicit equation is

Y = (b/a²) X².

This is a parabola, passing through the origin, and having the Y-axis as axis of symmetry. Intuitively, the previous degenerate case corresponds to b/a² = ∞.

In summary, the nondegenerate polynomial curves of true degree 2 are the parabolas. Conversely, since by an appropriate change of coordinates, every parabola is defined by the implicit equation Y = aX², every parabola can be defined as the parametric polynomial curve

X(u) = u,  Y(u) = au².

On the other hand, ellipses and hyperbolas are not definable as polynomial curves of degree 2. In fact, it can be shown easily that ellipses and hyperbolas are not definable by polynomial curves of any degree. Finally, it will be shown later that the conics (parabolas, ellipses, hyperbolas) are precisely the curves definable as rational curves of degree 2 (parametric definitions involving rational fractions of quadratic polynomials).

The diagram below shows the parabola defined by the following parametric equations:

F1(t) = 2t,  F2(t) = t².
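The rotation step above can be checked numerically: after applying R, the t² coefficient of x1 vanishes and the t² coefficient of y1 equals ρ. A Python sketch (ours, with the coefficient bookkeeping spelled out):

```python
import math

# With rho = sqrt(a2^2 + b2^2) and R = (1/rho) * [[b2, -a2], [a2, b2]],
# the rotated curve (x1, y1) = R (x, y) has x1 of degree <= 1 in t
# and y1 with leading coefficient rho.

def rotate_quadratic(a2, a1, a0, b2, b1, b0):
    rho = math.hypot(a2, b2)
    # coefficients (t^2, t, 1) of x1(t) = (b2*x(t) - a2*y(t))/rho
    x1 = ((b2 * a2 - a2 * b2) / rho,      # t^2 coefficient: always 0
          (b2 * a1 - a2 * b1) / rho,      # (a1 b2 - a2 b1)/rho
          (b2 * a0 - a2 * b0) / rho)
    # coefficients (t^2, t, 1) of y1(t) = (a2*x(t) + b2*y(t))/rho
    y1 = ((a2 * a2 + b2 * b2) / rho,      # = rho
          (a2 * a1 + b2 * b1) / rho,
          (a2 * a0 + b2 * b0) / rho)
    return x1, y1

x1, y1 = rotate_quadratic(3.0, 1.0, 0.0, 4.0, 2.0, 5.0)
# here rho = 5, so x1 has zero t^2 coefficient and y1 has leading coefficient 5
```

With the t² term gone from x1, eliminating the parameter (after the change of parameter in the text) gives the parabola in normal form.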

Figure 3.8: A parabola

Remark: In the degenerate case of a half line, the mismatch between the parametric representation and the implicit representation can be resolved, if we view our curves as the real trace of complex curves. In this case, since every polynomial has roots (possibly complex numbers), the mismatch goes away.

3.3 First Encounter with Polar Forms (Blossoming)

We now show that there is another way of specifying quadratic polynomial curves which yields a very nice geometric algorithm for constructing points on these curves. The general philosophy is to linearize (or more exactly, multilinearize) polynomials. As a warm up, let us begin with straight lines. The parametric definition of a straight line F: A → A3 is of the form

x1(t) = a1 t + b1,
x2(t) = a2 t + b2,
x3(t) = a3 t + b3.

Observe that each function t → ai t + bi is affine, where i = 1, 2, 3. Thus, F: A → A3 is itself an affine map. Given any affine frame (r, s) for A, where r ≠ s, every t ∈ A can be written uniquely as t = (1 − λ)r + λs. Since we must have t = (1 − λ)r + λs = r + λ(s − r),

we get λ = (t − r)/(s − r) (and 1 − λ = (s − t)/(s − r)). Now, since F is affine, we have

F(t) = F((1 − λ)r + λs) = (1 − λ) F(r) + λ F(s),

that is, F(t) is the barycenter of F(r) and F(s) assigned the weights 1 − λ and λ. Indeed, for every point b, since F is affine, we know that

−→bF(t) = (1 − λ) −→bF(r) + λ −→bF(s),

and picking b = F(r), we have

−→F(r)F(t) = λ −→F(r)F(s).

Substituting the value of λ in the above, we have

−→F(r)F(t) = ((t − r)/(s − r)) −→F(r)F(s),

which shows that F(t) is on the line determined by F(r) and F(s), at “(t − r)/(s − r) of the way from F(r)” (in particular, when r ≤ t ≤ s, F(t) is indeed between F(r) and F(s)). For example, if r = 0, s = 1, and t = 0.25, then F(0.25) is a fourth of the way from F(0), between F(0) and F(1). Thus, every point F(t) on the line defined by F is obtained by a single interpolation step

F(t) = ((s − t)/(s − r)) F(r) + ((t − r)/(s − r)) F(s).

This means that F(t) is completely determined by the two points F(r) and F(s), and the ratio of interpolation λ. We would like to generalize the idea of determining the point F(t) on the line defined by F(r) and F(s) by an interpolation step, to determining the point F(t) on a polynomial curve F, by several interpolation steps from some (finite) set of given points related to the curve F, as illustrated in the following diagram. For this, it is first necessary to turn the polynomials involved in the definition of F into multiaffine maps, that is, maps that are affine in each of their arguments. We now show how to turn a quadratic polynomial into a biaffine map.

Figure 3.9: Linear Interpolation

As an example, consider the polynomial

F(X) = X^2 + 2X − 3.

Observe that the function of two variables

f1(x1, x2) = x1 x2 + 2x1 − 3

gives us back the polynomial F(X) on the diagonal, in the sense that F(X) = f1(X, X), for all X ∈ R. It would be tempting to say that f1 is linear in each of x1 and x2, but this is not true, due to the presence of the term 2x1 and of the constant −3; f1 is only biaffine, that is, affine in each of x1 and x2. Note that

f2(x1, x2) = x1 x2 + 2x2 − 3

is also biaffine, and F(X) = f2(X, X), for all X ∈ R. It would be nicer if we could find a unique biaffine function f such that F(X) = f(X, X), for all X ∈ R, and of course, such a function should satisfy some additional property. It turns out that requiring f to be symmetric is just what's needed. We say that a function f of two arguments is symmetric iff

f(x1, x2) = f(x2, x1), for all x1, x2.

To make f1 (and f2) symmetric, simply form

f(x1, x2) = (f1(x1, x2) + f1(x2, x1))/2 = x1 x2 + x1 + x2 − 3.

The symmetric biaffine function f(x1, x2) = x1 x2 + x1 + x2 − 3 is called the (affine) blossom, or polar form, of F. For an arbitrary polynomial

F(X) = aX^2 + bX + c

of degree ≤ 2, we obtain a unique symmetric biaffine map

f(x1, x2) = a x1 x2 + b (x1 + x2)/2 + c

such that F(X) = f(X, X), for all X ∈ R, called the polar form, or blossom, of F. Note that the fact that f is symmetric allows us to view the arguments of f as a multiset (the order of the arguments x1, x2 is irrelevant). This is all fine, but what have we gained? What we have gained is that using the fact that a polar form is symmetric and biaffine, we can show that every quadratic curve is completely determined by three points, called control points, and that furthermore, there is a nice algorithm for determining any point on the curve from these control points, simply using three linear interpolation steps. For this, assume for simplicity that we have a quadratic curve F: A → A3, given by three quadratic polynomials F1, F2, F3. We can compute their polar forms f1, f2, f3 as we just explained, and we get a symmetric biaffine map f: A2 → A3. Let us pick an affine frame for A, that is, two distinct points r, s ∈ A (we can if we wish assume that r < s, but this is not necessary). Every t ∈ A can be expressed uniquely as a barycentric combination of r and s, say t = (1 − λ)r + λs, where λ ∈ R. Let us compute f(t1, t2) = f((1 − λ1)r + λ1 s, (1 − λ2)r + λ2 s). Since f, being biaffine, preserves barycentric combinations in each of its arguments, we get

f(t1, t2) = f((1 − λ1)r + λ1 s, (1 − λ2)r + λ2 s)
= (1 − λ1) f(r, (1 − λ2)r + λ2 s) + λ1 f(s, (1 − λ2)r + λ2 s)
= (1 − λ1)(1 − λ2) f(r, r) + ((1 − λ1)λ2 + λ1(1 − λ2)) f(r, s) + λ1 λ2 f(s, s).

The coefficients of f(r, r), f(r, s) and f(s, s) are obviously symmetric biaffine functions, and they add up to 1, as it is easily verified by expanding the product

(1 − λ1 + λ1)(1 − λ2 + λ2) = 1.

This had to be expected. Since λi = (ti − r)/(s − r), for i = 1, 2, we get

f(t1, t2) = ((s − t1)(s − t2)/(s − r)^2) f(r, r)
+ (((s − t1)(t2 − r) + (t1 − r)(s − t2))/(s − r)^2) f(r, s)
+ ((t1 − r)(t2 − r)/(s − r)^2) f(s, s).
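The expansion just derived is easy to check numerically. The sketch below (plain Python, with hypothetical coefficient values) builds the polar form f(x1, x2) = a x1 x2 + b (x1 + x2)/2 + c of a quadratic F(X) = aX^2 + bX + c, and verifies the diagonal property, symmetry, and the evaluation of f(t1, t2) from the three polar values f(r, r), f(r, s), f(s, s) alone:

```python
# Polar form (blossom) of a quadratic F(X) = a X^2 + b X + c, and its
# evaluation from the three polar values f(r,r), f(r,s), f(s,s).

def blossom(a, b, c):
    """Return the symmetric biaffine polar form f(x1, x2)."""
    return lambda x1, x2: a * x1 * x2 + b * (x1 + x2) / 2 + c

def polar_from_control(f, r, s, t1, t2):
    """Recompute f(t1, t2) using only f(r,r), f(r,s), f(s,s)."""
    l1 = (t1 - r) / (s - r)   # barycentric coordinate of t1 w.r.t. (r, s)
    l2 = (t2 - r) / (s - r)
    return ((1 - l1) * (1 - l2) * f(r, r)
            + ((1 - l1) * l2 + l1 * (1 - l2)) * f(r, s)
            + l1 * l2 * f(s, s))

f = blossom(1, 2, -3)                # blossom of F(X) = X^2 + 2X - 3
assert f(5, 5) == 5**2 + 2 * 5 - 3   # diagonal gives back F
assert f(2, 7) == f(7, 2)            # symmetry
assert polar_from_control(f, 0, 1, 2, 7) == f(2, 7)
```

Here the affine frame r = 0, s = 1 was chosen for simplicity; any r ≠ s works the same way.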

Thus, the polar form f is completely determined by the three points f(r, r), f(r, s) and f(s, s) in A3. Conversely, it is clear that given any sequence of three points a, b, c ∈ A3, the map

(t1, t2) → ((s − t1)(s − t2)/(s − r)^2) a
+ (((s − t1)(t2 − r) + (t1 − r)(s − t2))/(s − r)^2) b
+ ((t1 − r)(t2 − r)/(s − r)^2) c

is symmetric biaffine, and satisfies f(r, r) = a, f(r, s) = b, and f(s, s) = c. Thus, we showed that every symmetric biaffine map f: A2 → A3 is completely determined by the sequence of three points f(r, r), f(r, s) and f(s, s), where r ≠ s are elements of A. The points f(r, r), f(r, s) and f(s, s) are called control points, or Bézier control points, and as we shall see, they play a major role in the de Casteljau algorithm and its extensions. Incidentally, this also shows that a quadratic curve is necessarily contained in a plane, the plane determined by the control points b0 = f(r, r), b1 = f(r, s) and b2 = f(s, s). If we let r = 0 and s = 1, then t1 = λ1 and t2 = λ2, and thus, F(t) is also determined by the control points f(0, 0), f(0, 1) and f(1, 1), with

F(t) = f(t, t) = (1 − t)^2 f(0, 0) + 2(1 − t)t f(0, 1) + t^2 f(1, 1),

the polynomial function corresponding to f(t1, t2) being obtained by letting t1 = t2 = t. The polynomials (1 − t)^2, 2(1 − t)t, t^2 are known as the Bernstein polynomials of degree 2. However, it is better to observe that the computation of f(t1, t2) that we performed above can be turned into an algorithm, known as the de Casteljau algorithm, which constructs geometrically the point F(t) = f(t, t) on the polynomial curve F. Given any t ∈ A, let t = (1 − λ)r + λs. Then, f(t, t) is computed as follows:

0           1           2

f(r, r)
            f(t, r)
f(r, s)                 f(t, t)
            f(t, s)
f(s, s)

The algorithm consists of two stages. During the first stage, we compute the two points f(t, r) and f(t, s), by linear interpolation, the ratio of interpolation being

λ = (t − r)/(s − r).

Since by symmetry,

f(t, r) = f(r, t) = f(r, (1 − λ)r + λs) = (1 − λ) f(r, r) + λ f(r, s),
f(t, s) = f((1 − λ)r + λs, s) = (1 − λ) f(r, s) + λ f(s, s),

f(t, r) is computed from the two control points f(r, r) and f(r, s), and f(t, s) is computed from the two control points f(r, s) and f(s, s). During the second stage, we compute the point

f(t, t) = f(t, (1 − λ)r + λs) = (1 − λ) f(t, r) + λ f(t, s),

from the points f(t, r) and f(t, s) computed during the first stage, the ratio of interpolation also being λ = (t − r)/(s − r). Thus, by three linear interpolation steps, we obtain the point F(t) on the curve. Note that the two control points f(r, r) = F(r) and f(s, s) = F(s) are on the curve, but f(r, s) is not. We will give a geometric interpretation of the polar value f(r, s) in a moment. If r ≤ t ≤ s, then only convex combinations are constructed. Geometrically, the algorithm consists of a diagram consisting of two polylines, the first one consisting of the two line segments (f(r, r), f(r, s)) and (f(r, s), f(s, s)), and the second one of the single line segment (f(t, r), f(t, s)).
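The two-stage computation just described can be sketched as a short program; the version below uses the control points (0, 0), (1, 0), (2, 1) of the worked parabola F(t) = (2t, t^2) over [0, 1] as sample data:

```python
# Two-stage de Casteljau algorithm for a quadratic curve given by
# control points b0 = f(r,r), b1 = f(r,s), b2 = f(s,s).

def lerp(p, q, l):
    """Linear interpolation (1 - l) p + l q, coordinatewise."""
    return tuple((1 - l) * pi + l * qi for pi, qi in zip(p, q))

def de_casteljau2(b0, b1, b2, r, s, t):
    l = (t - r) / (s - r)
    ftr = lerp(b0, b1, l)      # f(t, r), first stage
    fts = lerp(b1, b2, l)      # f(t, s), first stage
    return lerp(ftr, fts, l)   # f(t, t) = F(t), second stage

# Control points of the parabola F(t) = (2t, t^2), from the polar
# forms f1(t1, t2) = t1 + t2 and f2(t1, t2) = t1 t2:
b0, b1, b2 = (0.0, 0.0), (1.0, 0.0), (2.0, 1.0)
assert de_casteljau2(b0, b1, b2, 0.0, 1.0, 0.5) == (1.0, 0.25)  # F(1/2)
```

The same routine also works for t outside [r, s], in which case the combinations are no longer convex.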

Figure 3.10: A de Casteljau diagram

The first polyline is also called a control polygon of the curve. Each polyline given by the algorithm is called a shell, and the resulting diagram is called a de Casteljau diagram. Note that the shells are nested nicely, the last one joining f(t, r) and f(t, s), with the desired point f(t, t) determined by λ. Actually, when t is outside [r, s], we still obtain two polylines and a de Casteljau diagram, but the shells are not nicely nested. The example below shows the construction of the point F(t) on the curve F corresponding to t = 1/2, for r = 0, s = 1. The parabola of the previous example is actually given by the parametric equations

F1(t) = 2t,   F2(t) = −t^2.

The polar forms are

f1(t1, t2) = t1 + t2,   f2(t1, t2) = −t1 t2.

This example also shows the construction of another point on the curve, assuming different control points. The de Casteljau algorithm can also be applied to compute any polar value f(t1, t2):

Figure 3.11: The de Casteljau algorithm

The only difference is that we use different λ's during each of the two stages. During the first stage, we use the scalar λ1 such that t1 = (1 − λ1)r + λ1 s, to compute the two points

f(t1, r) = f((1 − λ1)r + λ1 s, r) = (1 − λ1) f(r, r) + λ1 f(r, s),
f(t1, s) = f((1 − λ1)r + λ1 s, s) = (1 − λ1) f(r, s) + λ1 f(s, s),

where f(t1, r) = f(r, t1) is computed from the two control points f(r, r) and f(r, s), and f(t1, s) is computed from the two control points f(r, s) and f(s, s), the ratio of interpolation being

λ1 = (t1 − r)/(s − r).

During the second stage, we use the scalar λ2 such that t2 = (1 − λ2)r + λ2 s, to compute the point

f(t1, t2) = f(t1, (1 − λ2)r + λ2 s) = (1 − λ2) f(t1, r) + λ2 f(t1, s),

from the points f(t1, r) and f(t1, s) computed during the first stage, the ratio of interpolation being

λ2 = (t2 − r)/(s − r).

Thus, the polar values f(t1, t2), also called blossom values, can be viewed as meaningful labels for the nodes of de Casteljau diagrams. It is in this sense that the term blossom is used: by forming the blossom of the polynomial function F, some hidden geometric information is revealed. We recommend reading de Casteljau's original presentation in de Casteljau [23]. A nice geometric interpretation of the polar value f(t1, t2) can be obtained. For this, we need to look closely at the intersection of two tangents to a parabola. Let us consider the parabola given by

x(t) = a t,
y(t) = b t^2.

The equation of the tangent to the parabola at (x(t), y(t)) is

x′(t)(y − y(t)) − y′(t)(x − x(t)) = 0,

that is,

a(y − b t^2) − 2bt(x − at) = 0,

or

ay − 2btx + abt^2 = 0.

To find the intersection of the two tangents to the parabola corresponding to t = t1 and t = t2, we solve the system of linear equations

ay − 2bt1 x + abt1^2 = 0,
ay − 2bt2 x + abt2^2 = 0,

and we easily find that

x = a (t1 + t2)/2,   y = b t1 t2.

Thus, the coordinates of the point of intersection of any two tangents to the parabola are given by the polar forms of the polynomials expressing the coordinates of the parabola. Turning this property around, we can say that the polar form f(t1, t2) of the polynomial function defining a parabola gives precisely the intersection point of the two tangents at F(t1) and F(t2) to the parabola. There is a natural generalization of this nice geometric interpretation of polar forms to cubic curves, but unfortunately, it does not work in general.
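The small linear system above can be checked numerically. The sketch below (with arbitrary sample values for a, b, t1, t2) solves it by eliminating y and confirms that the intersection point is (a(t1 + t2)/2, b t1 t2), the pair of polar values:

```python
# Intersection of the tangents at t1 and t2 to the parabola
# x(t) = a t, y(t) = b t^2, solving
#   a y - 2 b t1 x + a b t1^2 = 0
#   a y - 2 b t2 x + a b t2^2 = 0   (t1 != t2, a != 0).

def tangent_intersection(a, b, t1, t2):
    # Subtracting the equations gives 2 b (t2 - t1) x = a b (t2^2 - t1^2),
    # hence x = a (t1 + t2) / 2; then y follows from the first equation.
    x = a * (t1 + t2) / 2
    y = (2 * b * t1 * x - a * b * t1**2) / a
    return x, y

a, b, t1, t2 = 3.0, 2.0, 0.5, 1.5
x, y = tangent_intersection(a, b, t1, t2)
assert abs(x - a * (t1 + t2) / 2) < 1e-12
assert abs(y - b * t1 * t2) < 1e-12   # matches the polar form of b t^2
```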

When it does work, it only applies to curves not contained in a plane (it involves intersecting the osculating planes at three points on the curve). The de Casteljau algorithm, in addition to having the nice property that the line determined by F(r) and f(r, s) is the tangent at F(r), and that the line determined by F(s) and f(r, s) is the tangent at F(s), also has the property that the line determined by f(r, t) and f(s, t) is the tangent at F(t) (this will be shown in section 5.4). Let us give an example of the computation of the control points from the parametric definition of a quadratic curve.

Example. Consider the parabola given by

F1(t) = 2t,   F2(t) = t^2.

The polar forms of F1 and F2 are

f1(t1, t2) = t1 + t2,   f2(t1, t2) = t1 t2.

The control points b0 = f(0, 0), b1 = f(0, 1), and b2 = f(1, 1) have coordinates:

b0 = (0, 0),   b1 = (1, 0),   b2 = (2, 1).

For t = 1/2, the point F(1/2) = (1, 1/4) on the parabola is the middle of the line segment joining the middle of the segment (b0, b1) to the middle of the segment (b1, b2).

3.5 Polynomial Curves of Degree 3

Let us now consider m = 3, that is, cubic curves. A polynomial curve F of degree ≤ 3 is of the form

x(t) = F1(t) = a3 t^3 + a2 t^2 + a1 t + a0,
y(t) = F2(t) = b3 t^3 + b2 t^2 + b1 t + b0.

Since we already considered the case where a3 = b3 = 0, let us assume that a3 ≠ 0 or b3 ≠ 0. As in the case of quadratic curves, we first show that by a change of coordinates

Figure 3.12: A parabola

(amounting to a rotation), we can always assume that either a3 = 0 or b3 = 0. If a3 ≠ 0 and b3 ≠ 0, let ρ = sqrt(a3^2 + b3^2), and consider the matrix R given below:

R = ( b3/ρ   −a3/ρ )
    ( a3/ρ    b3/ρ )

Under the change of coordinates

(x1, y1)^T = R (x, y)^T,

we get

x1(t) = ((a2 b3 − a3 b2)/ρ) t^2 + ((a1 b3 − a3 b1)/ρ) t + (a0 b3 − a3 b0)/ρ,
y1(t) = ρ t^3 + ((a2 a3 + b2 b3)/ρ) t^2 + ((a1 a3 + b1 b3)/ρ) t + (a0 a3 + b0 b3)/ρ.

The effect of this rotation is that the curve now “stands straight up” (since ρ > 0), its leading term being ρ t^3.

Case 1. a2 b3 = a3 b2. Then we have a degenerate case where x1(t) is equal to a linear function. If a1 b3 = a3 b1 also holds, then x1(t) is a constant and y1(t) can be arbitrary, and we get the straight line

X = (a0 b3 − a3 b0)/ρ.

Otherwise,

let us assume that a1 b3 − a3 b1 > 0. and Y (X) is strictly increasing with X. we can eliminate t between x1 (t) and y1 (t). since Y ′′ = 6aX. and Y ′′ (0) = 0. which means that the origin is an inﬂexion point. and increasing again when X varies from X2 to +∞. If b = 0. If b < 0. INTRODUCTION TO POLYNOMIAL CURVES If a1 b3 − a3 b1 = 0. In all three cases. The curve still has a ﬂat S-shape. the other case being similar. Y (X) is increasing when X varies from −∞ to X1 . Case 2. then Y ′ (X) has two roots. with a > 0. The following diagram shows the cubic of implicit equation y = 3x3 − 3x. where Y = y − c. then Y ′ (0) = 0. and we get an implicit equation of the form y = a′ x3 + b′ x2 + c′ x + d′ . This is the reason why we get a parametric representation. Then. the origin is an inﬂexion point. X1 = + −b . . the slope b of the tangent at the origin being positive. we can suppress the term b′ X 2 by the change of coordinates b′ x = X − ′. The curve has an S-shape. 3a X2 = − −b . decreasing when X varies from X1 to X2 . 3a We get an implicit equation of the form y = aX 3 + bX + c. It has a ﬂat S-shape. As in the case of quadratic curves. then Y ′ (X) is always strictly positive. and 0 is a double root of Y ′ . If b > 0. with a > 0. a2 b3 − a3 b2 = 0.88 CHAPTER 3. the slope b of the tangent at the origin being negative. we get the implicit equation Y = aX 3 + bX. This curve is symmetric with respect to the origin. note that a line parallel to the Y -axis intersects the curve in a single point. Its shape will depend on the variations of sign of its derivative Y ′ = 3aX 2 + b. By one more change of coordinates. with a′ > 0. 3a Then. Also. and the tangent at the origin is the X-axis.

which yields parametric equations of the form x(t) = F1 (t) = a2 t2 . POLYNOMIAL CURVES OF DEGREE 3 89 Figure 3. getting the implicit equation (y1 ) = a2 x1 2 1 0 −b2 a2 =S x . We can now eliminate t between x1 (t) and y1 (t) as follows: ﬁrst. we can make the constant terms disapear. y(t) = F2 (t) = b3 t3 + b2 t2 + b1 t. with b3 > 0. we say that we have a nondegenerate cubic (recall that ρ > 0). y1 (t) = a2 t(b3 t2 + b1 ).3. 2 and express t2 in terms of x1 (t) from x1 (t) = a2 t2 . and by a change of coordinates. We now apply a bijective aﬃne transformation that will suppress the term b2 t2 in y(t).5. square y1 (t). y b3 x1 + b1 a2 2 . getting (y1(t))2 = a2 t2 (b3 t2 + b1 )2 . with b3 > 0. by a change of parameter. we can suppress the term of degree 1 in t in x1 (t).13: “S-shaped” Cubic In this case. Consider the matrix S below S= and the change of coordinates x1 y1 We get x1 (t) = a2 t2 . . First. as in the quadratic case.

is equal to the trace of the polynomial curve F . b3 and we get the implicit equation a2 b2 a2 Y − X b3 b3 2 + b1 a2 2 X = X 3. It is straightforward and not very informative. . Then. We leave it as an exercise. INTRODUCTION TO POLYNOMIAL CURVES In terms of the original coordinates x. where b3 > 0. b3 b1 b2 y=Y − . if b1 ≤ 0. excluding the origin (X. The origin (X. Lemma 3. Proof. we can show the following lemma. b3 Furthermore.e. and when b1 > 0. b3 b1 b2 y=Y − .. 0). y(t) = F2 (t) = b3 t3 + b2 t2 + b1 t.5. b3 the trace of F satisﬁes the implicit equation a2 b2 a2 Y − X b3 b3 2 + b1 a2 2 X = X 3. as it will be clearer in a moment. Y ) = (0. Given any nondegenerate cubic polynomial curve F . Y ) = (0. after the translation of the origin given by x=X− b1 a2 . In fact.90 CHAPTER 3. any polynomial curve of the form x(t) = F1 (t) = a2 t2 . we have the implicit equation (a2 y − b2 x)2 = a2 x b3 x + b1 a2 2 . y. it is preferable to make the change of coordinates (translation) x=X− b1 a2 . i. then the curve deﬁned by the above implicit equation is equal to the trace of the polynomial curve F .1. 0) is called a singular point of the curve deﬁned by the implicit equation. b3 with b3 > 0. the curve deﬁned by the above implicit equation.

Again. a of the trace of F (also on the axis of symmetry) is vertical. Lemma 3. The reason for choosing the origin at the singular point is that if we intersect the trace of the polynomial curve with a line of slope m passing through the singular point.2. this mismatch can be resolved if we treat these curves as complex curves. Y ) = (0. POLYNOMIAL CURVES OF DEGREE 3 91 Thus. such that F and G have the same trace. 0) intersects the trace of the polynomial curve F in a single point other than the singular point (X.5. Lemma 3. unless it is a tangent at the origin to the trace of the polynomial curve F (which only happens when d ≤ 0). Y ) = (0. 0) must be excluded from the trace of the polynomial curve. a (X.1 shows that every nondegenerate polynomial cubic is deﬁned by some implicit equation of the form c(aY − bX)2 + cdX 2 = X 3 . lemma 3. 0).1 shows that every nondegenerate polynomial cubic is deﬁned by some implicit equation of the form c(aY − bX)2 + cdX 2 = X 3 . The tangent at the point bcd (X.3. in the sense that for any two points (X. excluding the origin (X. Furthermore. there is some parametric deﬁnition G of the form X(m) = c(a m − b)2 + cd. we discover a nice parametric representation of the polynomial curve in terms of the parameter m. Y1 ) belongs to the trace of F iﬀ (X. For every nondegenerate cubic polynomial curve F . 0). Y2 ) such that 2b Y1 + Y2 = X. the singular point (X. The line aY −bX = 0 is an axis of symmetry for the curve. Y (m) = m(c(a m − b)2 + cd). Since every line of slope m through the origin has . Proof. Y ) = (0. every line of slope m passing through the origin (X.5. Y ) = (0. Y1 ) and (X. the singular point (X. when d > 0. which is also the set of points on the curve deﬁned by the implicit equation c(aY − bX)2 + cdX 2 = X 3 . with the exception that when d > 0. Y ) = (0. The case where d > 0 is another illustration of the mismatch between the implicit and the explicit representation of curves.5. Y ) = cd. 
0) must be excluded from the trace of the polynomial curve. with the exception that when d > 0.5. Y2 ) belongs to the trace of F .

Y (m) = m(c(a m − b)2 + cd). to see if it has roots. Y ) = on the trace of F .92 CHAPTER 3. This value of m corresponds to the point (X. to ﬁnd the intersection of this line with the trace of the polynomial curve F . or c(a m − b)2 + cd − X = 0. we obtain the following classiﬁcation for the nondegenerate polynomial cubic curves deﬁned by X(m) = c(a m − b)2 + cd. the roots of the equation (a m − b)2 + d = 0 give us the slopes of the tangent to the trace of F at the origin. Since X(m) = c(a m − b)2 + cd. cd. the slope of the axis of symmetry. distinct from the origin. INTRODUCTION TO POLYNOMIAL CURVES the equation Y = mX.e (a m − b)2 + d = 0. on the trace of F . b we have X ′ (m) = 2ac(a m − b). we have c(a m − b)2 + cd = 0. We have ∆ = 16a2 b2 c2 − 12a2 c(b2 c + cd) = 4a2 c2 (b2 − 3d). Then. and thus. by studying the changes of sign of the derivative of Y (m). If X = 0. Thus. we get a unique point X(m) = c(a m − b)2 + cd. i. iﬀ d ≤ 0. which is a double root. In this case. we have Y ′ (m) = 3a2 c m2 − 4abc m + (b2 c + cd). Now. then X = c(a m − b)2 + cd. X ′ (m) = 0 for m = a . we substitute mX for Y in the implicit equation. bcd a . Y ′ (m) has roots iﬀ b2 − 3d ≥ 0. Let us compute the discriminant ∆ of the polynomial Y ′ (m). Thus. We can now specify more precisely what is the shape of the trace of F . either X = 0. getting X 2 (c(a m − b)2 + cd − X) = 0. Since Y (m) = m(c(a m − b)2 + cd) = a2 c m3 − 2abc m2 + (b2 c + cd) m. Y (m) = m(c(a m − b)2 + cd). Otherwise. The fact that the line aY − bX = 0 is an axis of symmetry for the curve results from a trivial computation.
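The derivation above can be sanity-checked numerically: for any m, the point (X(m), Y(m)) = (c(a m − b)^2 + cd, m X(m)) should satisfy the implicit equation c(aY − bX)^2 + cdX^2 = X^3. A minimal sketch, with arbitrary sample values for a, b, c, d:

```python
# Check that the rational parameterization X(m) = c (a m - b)^2 + c d,
# Y(m) = m X(m) lands on the cubic c (a Y - b X)^2 + c d X^2 = X^3.

def on_cubic(a, b, c, d, m, eps=1e-9):
    X = c * (a * m - b) ** 2 + c * d
    Y = m * X
    return abs(c * (a * Y - b * X) ** 2 + c * d * X ** 2 - X ** 3) < eps

a, b, c, d = 1.0, 2.0, 3.0, -1.0      # d < 0: a nodal cubic
assert all(on_cubic(a, b, c, d, m) for m in (-2.0, -0.5, 0.0, 1.0, 2.5))
```

Indeed, substituting Y = mX gives aY − bX = X(am − b), so the left-hand side is X^2 (c(am − b)^2 + cd) = X^2 · X, identically.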

and only intersecting the X-axis for m = 0. tangent to the vertical line X = cd at the intersection of this line with the axis of symmetry aY − bX = 0. The cubic of equation 3(Y − 2X)2 + 3X 2 = X 3 is shown below: Case 3: d = 0 (a cuspidal cubic). In this case.3. and only intersecting the X-axis for m = 0. that is. but between the two vertical lines X = cd and X = cd + cb2 . we have b2 − 3d > 0. We also get a kind of “humpy” curve. the case c < 0 being similar. then Y (m) increases when m varies from −∞ to m1 . which means that the singular point (X. the polynomial Y ′ (m) is always positive. decreases when m varies from m1 to m2 . that is. the polynomial Y ′ (m) has two roots m1 . which means that Y (m) is strictly increasing. a . since d > 0. the singular point (X. Y ) = (0. for X = cd + cb2 . In this case. and there is an inﬂexion point for m = m0 . 3a b m2 = . we must have d > 0. In this case. and Y ′ (m) has two roots. and since we assumed c > 0. tangent to the vertical line X = cd at the intersection of this line with the axis of symmetry aY − bX = 0. Y ′ (m) has a double root m0 = 3a .6. Case 1: 3d > b2 . When b2 − 3d < 0. Thus. 0) is not on the trace of the cubic. the curve makes two turns. the polynomial Y ′ (m) has no roots. and Y ′ (m) is positive except for m = m0 . The cubic of equation 3(Y − X)2 + 6X 2 = X 3 is shown below: Case 2: b2 ≥ 3d > 0. When b2 − 3d > 0. m2 . which are easily computed: m1 = b . 2b When b2 − 3d = 0. We also get a kind of “humpy” curve. Y (m) increases when m varies from −∞ to ∞. and increases when m varies from m2 to +∞. for X = cd + cb2 . We get a kind of “humpy” curve. 0) is not on the trace of the cubic either.6 Classiﬁcation of the Polynomial Cubics We treat the case where c > 0. CLASSIFICATION OF THE POLYNOMIAL CUBICS 93 3. and since we assumed c > 0. Y ) = (0.

Figure 3.14: “Humpy” Cubic (3d > b^2)

Figure 3.15: “Humpy” Cubic (b^2 ≥ 3d > 0)

Figure 3.16: Cuspidal Cubic (d = 0)

For m2, we note that X(m2) = 0 and Y(m2) = 0, and thus X′(m2) = 0 and Y′(m2) = 0, which means that the origin is a cusp. Y(m) increases as m varies from −∞ to m1, then Y(m) decreases as m varies from m1 to m2, reaching the cusp point, and finally Y(m) increases as m varies from m2 to +∞. The curve is tangential to the axis aY − bX = 0 at the origin (the cusp), and intersects the X-axis only at X = b^2 c. The cubic of equation

3(Y − X)^2 = X^3

is shown below:

Case 4: d < 0 (a nodal cubic). In this case, since d < 0, the polynomial X(m) = c(a m − b)^2 + cd has two distinct roots, and since we assumed c > 0, the singular point (X, Y) = (0, 0) belongs to the trace of the cubic; thus, the cubic is self-intersecting at the singular point (X, Y) = (0, 0). Furthermore, since d < 0, we have b^2 − 3d > 0, and Y′(m) has two roots m1 and m2.

Figure 3.17: Nodal Cubic (d < 0)

Y(m) increases when m varies from −∞ to m1, decreases when m varies from m1 to m2, and increases when m varies from m2 to +∞. The curve also intersects the X-axis for m = 0, that is, for X = cd + cb^2. The trace of the cubic is a kind of “loopy curve” in the shape of an α, having the origin as a double point, tangent to the vertical line X = cd at the intersection of this line with the axis of symmetry aY − bX = 0. The cubic of equation

(3/4)(Y − X)^2 − 3X^2 = X^3

is shown below. One will observe the progression of the shape of the curve, from “humpy” to “loopy”, through “cuspy”.

Remark: The implicit equation

c(aY − bX)^2 + cdX^2 = X^3

of a nondegenerate polynomial cubic (with the exception of the singular point) is of the form ϕ2(X, Y) = X^3, where ϕ2(X, Y) is a homogeneous polynomial in X and Y of total degree 2 (in the case of a degenerate cubic of equation y = aX^3 + bX^2 + cX + d, the singular point is at infinity; to make this statement precise, projective geometry is needed). Using some algebraic geometry, it can be shown that the (nondegenerate) cubics that can be represented by parametric rational curves of degree 3 (i.e., fractions of polynomials of degree ≤ 3) are exactly those cubics whose implicit equation is of the form ϕ2(X, Y) = ϕ3(X, Y), where ϕ2(X, Y) and ϕ3(X, Y) are homogeneous polynomials in X and Y of total degree respectively 2 and 3. Note that X^3 itself is a homogeneous polynomial in X and Y of degree 3, and thus, the polynomial case is obtained in the special case where ϕ3(X, Y) = X^3. These cubics have a singular point at the origin. Furthermore, there are some cubics that cannot be represented even as rational curves. For example, the cubics defined by the implicit equation

Y^2 = X(X − 1)(X − λ),

where λ ≠ 0, 1, cannot be parameterized rationally. Such cubics are elliptic curves. Elliptic curves are a venerable and fascinating topic, but definitely beyond the scope of this course!

3.7 Second Encounter with Polar Forms (Blossoming)

Returning to polynomial cubics, inspired by our treatment of quadratic polynomials, we would like to extend blossoming to polynomials of degree 3. First, we need to define the polar form (or blossom) of a polynomial of degree 3. Given any polynomial of degree ≤ 3,

F(X) = aX^3 + bX^2 + cX + d,

the polar form of F is a symmetric triaffine function f: A3 → A, that is, a function which is affine in each argument and which takes the same value for all permutations of x1, x2, x3, i.e.,

f(x1, x2, x3) = f(x2, x1, x3) = f(x1, x3, x2) = f(x2, x3, x1) = f(x3, x1, x2) = f(x3, x2, x1),

and such that F(X) = f(X, X, X), for all X ∈ R. We easily verify that f must be given by

f(x1, x2, x3) = a x1 x2 x3 + b (x1 x2 + x1 x3 + x2 x3)/3 + c (x1 + x2 + x3)/3 + d.
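The polar form just defined is easy to verify numerically; the sketch below (with arbitrary sample coefficients) builds f from a cubic's coefficients and checks the diagonal and symmetry properties:

```python
# Polar form (blossom) of a cubic F(X) = a X^3 + b X^2 + c X + d.

def blossom3(a, b, c, d):
    def f(x1, x2, x3):
        return (a * x1 * x2 * x3
                + b * (x1 * x2 + x1 * x3 + x2 * x3) / 3
                + c * (x1 + x2 + x3) / 3
                + d)
    return f

f = blossom3(2.0, -1.0, 4.0, 5.0)
F = lambda X: 2.0 * X**3 - X**2 + 4.0 * X + 5.0
assert abs(f(3.0, 3.0, 3.0) - F(3.0)) < 1e-12   # diagonal gives back F
assert f(1.0, 2.0, 3.0) == f(3.0, 1.0, 2.0)     # symmetry
```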

Then, given a polynomial cubic curve F: A → A3, determined by three polynomials F1, F2, F3 of degree ≤ 3, we can determine their polar forms f1, f2, f3, and we obtain a symmetric triaffine map f: A3 → A3, such that F(X) = f(X, X, X), for all X ∈ A. Again, let us pick an affine basis (r, s) in A, with r ≠ s, and let us compute

f(t1, t2, t3) = f((1 − λ1)r + λ1 s, (1 − λ2)r + λ2 s, (1 − λ3)r + λ3 s).

Since f is symmetric and triaffine, we get

f(t1, t2, t3) = (1 − λ1)(1 − λ2)(1 − λ3) f(r, r, r)
+ ((1 − λ1)(1 − λ2)λ3 + (1 − λ1)λ2(1 − λ3) + λ1(1 − λ2)(1 − λ3)) f(r, r, s)
+ ((1 − λ1)λ2 λ3 + λ1(1 − λ2)λ3 + λ1 λ2(1 − λ3)) f(r, s, s)
+ λ1 λ2 λ3 f(s, s, s).

The coefficients of f(r, r, r), f(r, r, s), f(r, s, s) and f(s, s, s) are obviously symmetric triaffine functions, and they add up to 1, as it is easily verified by expanding the product

(1 − λ1 + λ1)(1 − λ2 + λ2)(1 − λ3 + λ3) = 1.

Since λi = (ti − r)/(s − r), for i = 1, 2, 3, we get

f(t1, t2, t3) = ((s − t1)(s − t2)(s − t3)/(s − r)^3) f(r, r, r)
+ (((s − t1)(s − t2)(t3 − r) + (s − t1)(t2 − r)(s − t3) + (t1 − r)(s − t2)(s − t3))/(s − r)^3) f(r, r, s)
+ (((s − t1)(t2 − r)(t3 − r) + (t1 − r)(s − t2)(t3 − r) + (t1 − r)(t2 − r)(s − t3))/(s − r)^3) f(r, s, s)
+ ((t1 − r)(t2 − r)(t3 − r)/(s − r)^3) f(s, s, s).

Thus, we showed that every symmetric triaffine map f: A3 → A3 is completely determined by the sequence of four points f(r, r, r), f(r, r, s), f(r, s, s) and f(s, s, s) in A3, where r ≠ s are elements of A.

Conversely, it is clear that given any sequence of four points a, b, c, d ∈ A^3, the map

(t1, t2, t3) →
    [(s − t1)(s − t2)(s − t3)/(s − r)^3] a
  + [((s − t1)(s − t2)(t3 − r) + (s − t1)(t2 − r)(s − t3) + (t1 − r)(s − t2)(s − t3))/(s − r)^3] b
  + [((s − t1)(t2 − r)(t3 − r) + (t1 − r)(s − t2)(t3 − r) + (t1 − r)(t2 − r)(s − t3))/(s − r)^3] c
  + [(t1 − r)(t2 − r)(t3 − r)/(s − r)^3] d

is symmetric triaffine, and that it satisfies f(r, r, r) = a, f(r, r, s) = b, f(r, s, s) = c, and f(s, s, s) = d. Furthermore, the polynomial function associated with f(t1, t2, t3) is obtained by letting t1 = t2 = t3 = t, and we get F(t) = f(t, t, t).

It is immediately verified that the above arguments do not depend on the fact that the affine space in which the curves live is A^3, and thus, from now on, we will assume that the curves live in any affine space E of dimension ≥ 2. Summarizing what we have done, we have shown the following result.

Lemma 3.7.1. Given any sequence of four points a, b, c, d in E, there is a unique polynomial curve F : A → E of degree 3, whose polar form f : A^3 → E satisfies the conditions

f(r, r, r) = a,  f(r, r, s) = b,  f(r, s, s) = c,  f(s, s, s) = d

(where r, s ∈ A, r ≠ s).

The points f(r, r, r), f(r, r, s), f(r, s, s), and f(s, s, s) are called control points, or Bézier control points. They play a major role in the de Casteljau algorithm and its extensions. Note that the polynomial curve defined by f passes through the two points f(r, r, r) and f(s, s, s), but not through the other control points.

If we let r = 0 and s = 1, so that λ1 = t1, λ2 = t2, and λ3 = t3, we get

F(t) = f(t, t, t) = (1 − t)^3 f(0, 0, 0) + 3(1 − t)^2 t f(0, 0, 1) + 3(1 − t)t^2 f(0, 1, 1) + t^3 f(1, 1, 1).

The polynomials

(1 − t)^3,  3(1 − t)^2 t,  3(1 − t)t^2,  t^3

are the Bernstein polynomials of degree 3. They form a basis of the vector space of polynomials of degree ≤ 3. Thus, the point F(t) on the curve can be expressed in terms of the control points f(r, r, r), f(r, r, s), f(r, s, s), f(s, s, s), and the Bernstein polynomials. However, it is more useful to extend the de Casteljau algorithm.
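The Bernstein-basis expression for a cubic is easy to try out numerically. The following is a small sketch in Python (the function names are mine, not the book's); the control points are arbitrary sample values chosen only for illustration.

```python
# Evaluating a cubic from its four control points in the Bernstein basis.

def bernstein3(t):
    # The four Bernstein polynomials of degree 3 over [0, 1].
    u = 1.0 - t
    return [u**3, 3*u**2*t, 3*u*t**2, t**3]

def cubic_point(b, t):
    # b: four control points as tuples; returns F(t) as a tuple.
    w = bernstein3(t)
    dim = len(b[0])
    return tuple(sum(w[k] * b[k][i] for k in range(4)) for i in range(dim))

b = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(cubic_point(b, 0.0))   # the curve passes through b0
print(cubic_point(b, 1.0))   # ... and through b3
print(sum(bernstein3(0.3)))  # the four weights always sum to 1
```

Note how the curve interpolates only the first and last control points, exactly as stated above.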

3.8 Second Encounter with the de Casteljau Algorithm

It is easy to generalize the de Casteljau algorithm to polynomial cubic curves. Let us assume that the cubic curve F is specified by the control points

b0 = f(r, r, r),  b1 = f(r, r, s),  b2 = f(r, s, s),  b3 = f(s, s, s)

(where r, s ∈ A, r < s). Given any t ∈ [r, s], the computation of F(t) can be arranged in a triangular array, as shown below, consisting of three stages:

0            1           2           3
f(r, r, r)
             f(t, r, r)
f(r, r, s)               f(t, t, r)
             f(t, r, s)              f(t, t, t)
f(r, s, s)               f(t, t, s)
             f(t, s, s)
f(s, s, s)

The above computation is usually performed for t ∈ [r, s], but it works just as well for any t ∈ A, even outside [r, s]. When t is outside [r, s], we usually say that F(t) = f(t, t, t) is computed by extrapolation. Let us go through the stages of the computation, writing t = (1 − λ)r + λs, with λ = (t − r)/(s − r).

During the first stage, since by symmetry f(s, r, r) = f(r, r, s) and f(s, r, s) = f(r, s, s), we compute the three points

f(t, r, r) = f((1 − λ)r + λs, r, r) = (1 − λ) f(r, r, r) + λ f(r, r, s),
f(t, r, s) = f((1 − λ)r + λs, r, s) = (1 − λ) f(r, r, s) + λ f(r, s, s),
f(t, s, s) = f((1 − λ)r + λs, s, s) = (1 − λ) f(r, s, s) + λ f(s, s, s),

from f(r, r, r) and f(r, r, s), from f(r, r, s) and f(r, s, s), and from f(r, s, s) and f(s, s, s), the ratio of interpolation being λ = (t − r)/(s − r).

During the second stage, we compute the two points

f(t, t, r) = f(t, (1 − λ)r + λs, r) = (1 − λ) f(t, r, r) + λ f(t, r, s),
f(t, t, s) = f(t, (1 − λ)r + λs, s) = (1 − λ) f(t, r, s) + λ f(t, s, s),

from f(t, r, r) and f(t, r, s), and from f(t, r, s) and f(t, s, s), the ratio of interpolation also being λ.

During the third stage, we compute the point

f(t, t, t) = f(t, t, (1 − λ)r + λs) = (1 − λ) f(t, t, r) + λ f(t, t, s),

from f(t, t, r) and f(t, t, s), the ratio of interpolation still being λ. We have F(t) = f(t, t, t).

In order to describe the above computation more conveniently as an algorithm, let us denote the control points b0 = f(r, r, r), b1 = f(r, r, s), b2 = f(r, s, s), and b3 = f(s, s, s) as b0,0, b1,0, b2,0, and b3,0, the intermediate points f(t, r, r), f(t, r, s), f(t, s, s) as b0,1, b1,1, b2,1, the points f(t, t, r), f(t, t, s) as b0,2, b1,2,

and with the point f(t, t, t) as b0,3. Note that in bi,j, the index j denotes the stage of the computation, and F(t) = b0,3. Then the triangle representing the computation is as follows:

0           1       2       3
b0 = b0,0
            b0,1
b1 = b1,0           b0,2
            b1,1            b0,3
b2 = b2,0           b1,2
            b2,1
b3 = b3,0

Then, we have the following inductive formula for computing bi,j:

bi,j = ((s − t)/(s − r)) bi,j−1 + ((t − r)/(s − r)) bi+1,j−1,

where 1 ≤ j ≤ 3, and 0 ≤ i ≤ 3 − j. As we shall see in section 5.1, the above formula generalizes to any degree m.

When r ≤ t ≤ s, each interpolation step computes a convex combination, and bi,j lies between bi,j−1 and bi+1,j−1. In this case, geometrically, the algorithm constructs the three polylines

(b0, b1), (b1, b2), (b2, b3)
(b0,1, b1,1), (b1,1, b2,1)
(b0,2, b1,2)

called shells, and with the point b0,3, they form the de Casteljau diagram. Note that the shells are nested nicely. The polyline (b0, b1), (b1, b2), (b2, b3) is also called a control polygon of the curve. When t is outside [r, s], we still obtain three shells and a de Casteljau diagram, but the shells are not nicely nested.

The following diagram illustrates the de Casteljau algorithm for computing the point F(t) on a cubic, where r = 0 and s = 6. The example shows the construction of the point F(3), corresponding to t = 3, on the curve F.
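The inductive interpolation step is short enough to sketch directly in code. Here is a minimal Python version (names are mine, not the book's); since the loop simply shrinks the list of points by one at each stage, the same function works for control polygons of any degree, not only cubics. The sample control points are arbitrary illustrative values.

```python
# One de Casteljau step replaces each pair of adjacent points by the
# point dividing them with ratio lam = (t - r)/(s - r); three steps
# take the four control points of a cubic down to the single point F(t).

def de_casteljau(b, r, s, t):
    lam = (t - r) / (s - r)
    pts = [tuple(p) for p in b]
    while len(pts) > 1:
        pts = [tuple((1 - lam) * p[i] + lam * q[i] for i in range(len(p)))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

b = [(0.0, 0.0), (2.0, 4.0), (4.0, 4.0), (6.0, 0.0)]
print(de_casteljau(b, 0.0, 6.0, 3.0))  # → (3.0, 3.0)
```

Nothing in the code restricts t to [r, s]; taking t outside that interval performs the extrapolation case mentioned above.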

Figure 3.18: A de Casteljau diagram (the control polygon b0, b1, b2, b3, the shells, and the point b0,3 on the curve).

Figure 3.19: The de Casteljau algorithm for t = 3, with r = 0 and s = 6 (the control points f(0, 0, 0), f(0, 0, 6), f(0, 6, 6), f(6, 6, 6), the intermediate polar values, and the curve points F(0) = f(0, 0, 0), F(3) = f(3, 3, 3), F(6) = f(6, 6, 6)).

The de Casteljau algorithm also gives some information about some of the tangents to the curve. It will be shown in section 5.4 that the tangent at b0 is the line (b0, b1), that the tangent at b3 is the line (b2, b3), and that the tangent at F(t) is the line (b0,2, b1,2), where b0,2 and b1,2 are computed during the second stage of the de Casteljau algorithm.

Remark: The above statements only make sense when b0 ≠ b1, b2 ≠ b3, and b0,2 ≠ b1,2. It is possible for some (even all!) of the control points to coincide. The algorithm still computes f(t, t, t) correctly, but the tangents may not be computed as easily as above.

As in the quadratic case, the de Casteljau algorithm can also be used to compute any polar value f(t1, t2, t3) (which is not generally on the curve). All we have to do is to use a different ratio of interpolation λj during phase j, given by

λj = (tj − r)/(s − r).

The computation can also be represented as a triangle:

0            1             2                3
f(r, r, r)
             f(t1, r, r)
f(r, r, s)                 f(t1, t2, r)
             f(t1, r, s)                   f(t1, t2, t3)
f(r, s, s)                 f(t1, t2, s)
             f(t1, s, s)
f(s, s, s)

As above, it is convenient to denote the intermediate points f(t1, r, r), f(t1, r, s), f(t1, s, s) as b0,1, b1,1, b2,1, the points f(t1, t2, r), f(t1, t2, s) as b0,2, b1,2, and the point f(t1, t2, t3) as b0,3. Note that in bi,j, the index j denotes the stage of the computation, and f(t1, t2, t3) = b0,3. Then the triangle representing the computation is as follows:

0           1       2       3
b0 = b0,0
            b0,1
b1 = b1,0           b0,2
            b1,1            b0,3
b2 = b2,0           b1,2
            b2,1
b3 = b3,0

We also have the following inductive formula for computing bi,j:

bi,j = ((s − tj)/(s − r)) bi,j−1 + ((tj − r)/(s − r)) bi+1,j−1,

where 1 ≤ j ≤ 3, and 0 ≤ i ≤ 3 − j. Thus, there is very little difference between this more general version of the de Casteljau algorithm computing polar values and the version computing the point F(t) on the curve: just use a new ratio of interpolation at each stage. The de Casteljau algorithm enjoys many nice properties that will be studied in chapter 5 (section 5.1).

3.9 Examples of Cubics Defined by Control Points

We begin with examples of plane cubics. Let us give a few examples of the computation of polar forms associated with parametric representations of cubics, and computations of the coordinates of control points.

Example 1. Consider the plane cubic defined as follows:

F1(t) = 3t,
F2(t) = 3t^3 − 3t.

The polar forms of F1(t) and F2(t) are:

f1(t1, t2, t3) = t1 + t2 + t3,
f2(t1, t2, t3) = 3 t1 t2 t3 − (t1 + t2 + t3).
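Control-point computations of the kind carried out in these examples amount to evaluating the polar forms at the triples (r, r, r), (r, r, s), (r, s, s), (s, s, s). A small Python sketch of this "blossoming" step for Example 1 (function names are mine):

```python
# Control points by blossoming: evaluate the polar forms of Example 1
# at the argument triples (r,r,r), (r,r,s), (r,s,s), (s,s,s).

def f1(t1, t2, t3):
    return t1 + t2 + t3

def f2(t1, t2, t3):
    return 3 * t1 * t2 * t3 - (t1 + t2 + t3)

def control_points(r, s):
    triples = [(r, r, r), (r, r, s), (r, s, s), (s, s, s)]
    return [(f1(*u), f2(*u)) for u in triples]

print(control_points(-1, 1))  # → [(-3, 0), (-1, 4), (1, -4), (3, 0)]
```

Changing the frame (r, s) changes the control points but not the underlying curve, which is what the next examples exploit.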

Figure 3.20: Bezier Cubic 1

With respect to the affine frame r = −1, s = 1, the coordinates of the control points are:

b0 = (−3, 0)
b1 = (−1, 4)
b2 = (1, −4)
b3 = (3, 0).

The curve has the following shape. The above cubic is an example of a degenerate "S-shaped" cubic.

Example 2. Consider the plane cubic defined as follows:

F1(t) = 3(t − 1)^2 + 6,
F2(t) = 3t(t − 1)^2 + 6t.

Since

F1(t) = 3t^2 − 6t + 9,
F2(t) = 3t^3 − 6t^2 + 9t,

we get the polar forms

f1(t1, t2, t3) = (t1 t2 + t1 t3 + t2 t3) − 2(t1 + t2 + t3) + 9,
f2(t1, t2, t3) = 3 t1 t2 t3 − 2(t1 t2 + t1 t3 + t2 t3) + 3(t1 + t2 + t3).

Figure 3.21: Bezier Cubic 2

With respect to the affine frame r = 0, s = 1, the coordinates of the control points are:

b0 = (9, 0)
b1 = (7, 3)
b2 = (6, 4)
b3 = (6, 6).

The curve has the following shape. The axis of symmetry is y = x. We leave as an exercise to verify that this cubic corresponds to case 1, where 3d > b^2.

Example 3. Consider the plane cubic defined as follows:

F1(t) = 3(t − 2)^2 + 3,
F2(t) = 3t(t − 2)^2 + 3t.

Since

F1(t) = 3t^2 − 12t + 15,
F2(t) = 3t^3 − 12t^2 + 15t,

we get the polar forms

f1(t1, t2, t3) = (t1 t2 + t1 t3 + t2 t3) − 4(t1 + t2 + t3) + 15,
f2(t1, t2, t3) = 3 t1 t2 t3 − 4(t1 t2 + t1 t3 + t2 t3) + 5(t1 + t2 + t3).

Figure 3.22: Bezier Cubic 3

With respect to the affine frame r = 0, s = 2, the coordinates of the control points are:

b0 = (15, 0)
b1 = (7, 10)
b2 = (3, 4)
b3 = (3, 6).

The curve has the following shape. The axis of symmetry is y = 2x. We leave as an exercise to verify that this cubic corresponds to case 2, where b^2 ≥ 3d > 0.

It is interesting to see which control points are obtained with respect to the affine frame r = 0, s = 1:

b′0 = (15, 0)
b′1 = (11, 5)
b′2 = (8, 6)
b′3 = (6, 6).

The second "hump" of the curve is outside the convex hull of this new control polygon.

This shows that it is far from obvious, just by looking at some of the control points, to predict what the shape of the entire curve will be!

Example 4. Consider the plane cubic defined as follows:

F1(t) = 3(t − 1)^2,
F2(t) = 3t(t − 1)^2.

Since

F1(t) = 3t^2 − 6t + 3,
F2(t) = 3t^3 − 6t^2 + 3t,

we get the polar forms

f1(t1, t2, t3) = (t1 t2 + t1 t3 + t2 t3) − 2(t1 + t2 + t3) + 3,
f2(t1, t2, t3) = 3 t1 t2 t3 − 2(t1 t2 + t1 t3 + t2 t3) + (t1 + t2 + t3).

With respect to the affine frame r = 0, s = 2, the coordinates of the control points are:

b0 = (3, 0)
b1 = (−1, 2)
b2 = (−1, −4)
b3 = (3, 6).

The curve has the following shape. The axis of symmetry is y = x. We leave as an exercise to verify that this cubic corresponds to case 3, where d = 0, a cubic with a cusp at the origin.

It is interesting to see which control points are obtained with respect to the affine frame r = 0, s = 1:

b′0 = (3, 0)
b′1 = (1, 1)
b′2 = (0, 0)
b′3 = (0, 0).

Note that b′2 = b′3. This indicates that there is a cusp at the origin.

Example 5. Consider the plane cubic defined as follows:

F1(t) = (3/4)(t − 1)^2 − 3,
F2(t) = (3/4)t(t − 1)^2 − 3t.
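The frame-dependence of the control points in Example 4 is easy to reproduce numerically. The sketch below (in Python, with names of my own choosing) evaluates the polar forms of Example 4 at the triples built from two different frames; with r = 0, s = 1 the last two control points coincide, the algebraic symptom of the cusp.

```python
# Example 4 by blossoming: the same cubic gives different control
# points for different affine frames (r, s); with r = 0, s = 1 the
# last two coincide, which signals the cusp at the origin.

def f1(t1, t2, t3):
    s2 = t1*t2 + t1*t3 + t2*t3
    return s2 - 2*(t1 + t2 + t3) + 3

def f2(t1, t2, t3):
    s2 = t1*t2 + t1*t3 + t2*t3
    return 3*t1*t2*t3 - 2*s2 + (t1 + t2 + t3)

def control_points(r, s):
    return [(f1(*u), f2(*u))
            for u in [(r, r, r), (r, r, s), (r, s, s), (s, s, s)]]

print(control_points(0, 2))  # → [(3, 0), (-1, 2), (-1, -4), (3, 6)]
print(control_points(0, 1))  # → [(3, 0), (1, 1), (0, 0), (0, 0)]
```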

Figure 3.23: Bezier Cubic 4

Since

F1(t) = (3/4)t^2 − (3/2)t − 9/4,
F2(t) = (3/4)t^3 − (3/2)t^2 − (9/4)t,

we get the polar forms

f1(t1, t2, t3) = (1/4)(t1 t2 + t1 t3 + t2 t3) − (1/2)(t1 + t2 + t3) − 9/4,
f2(t1, t2, t3) = (3/4) t1 t2 t3 − (1/2)(t1 t2 + t1 t3 + t2 t3) − (3/4)(t1 + t2 + t3).

With respect to the affine frame r = −1, s = 3, the coordinates of the control points are:

b0 = (0, 0)
b1 = (−4, 4)
b2 = (−4, −12)
b3 = (0, 0).

The curve has the following shape. Note that b0 = b3. The axis of symmetry is y = x. We leave as an exercise to verify that this cubic corresponds to case 4, where d < 0, a cubic with a node at the origin. The two tangents at the origin are y = −x and y = 3x (this explains the choice of r = −1 and s = 3).

Here is a more global view of the same cubic. It is interesting to see which control points are obtained with respect to the affine frame r = 0, s = 1:

b′0 = (−9/4, 0)
b′1 = (−11/4, −3/4)
b′2 = (−3, −2)
b′3 = (−3, −3).

As in example 3, this example shows that it is far from obvious, just by looking at some of the control points, to predict what the shape of the entire curve will be!

The above examples suggest that it may be interesting, and even fun, to investigate which properties of the shape of the control polygon (b0, b1, b2, b3) determine the nature of the plane cubic that it defines. Try it!

Figure 3.24: Bezier Cubic 5

Figure 3.25: Nodal Cubic (d < 0)

Challenge: Given a planar control polygon (b0, b1, b2, b3), is it possible to find the singular point geometrically? Is it possible to find the axis of symmetry geometrically?

Let us consider one more example, this time of a space curve.

Example 6. Consider the cubic, known as a twisted cubic (it is not a plane curve), defined as follows:

F1(t) = t,
F2(t) = t^2,
F3(t) = t^3.

We get the polar forms

f1(t1, t2, t3) = (1/3)(t1 + t2 + t3),
f2(t1, t2, t3) = (1/3)(t1 t2 + t1 t3 + t2 t3),
f3(t1, t2, t3) = t1 t2 t3.

With respect to the affine frame r = 0, s = 1, the coordinates of the control points are:

b0 = (0, 0, 0)
b1 = (1/3, 0, 0)
b2 = (2/3, 1/3, 0)
b3 = (1, 1, 1).

The reader should apply the de Casteljau algorithm to find the point on the twisted cubic corresponding to t = 1/2. This curve has some very interesting algebraic properties. For example, it is the zero locus (the set of common zeros) of the two polynomials

y − x^2 = 0,
z − xy = 0.

It can also be shown that any four points on the twisted cubic are affinely independent.

We would like to extend polar forms and the de Casteljau algorithm to polynomial curves of arbitrary degrees (not only m = 2, 3), and later on, to rational curves and also to polynomial and rational surfaces. In order to do so, we need to investigate some of the basic properties of multiaffine maps. This is the object of the next two chapters.
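Once you have worked the twisted-cubic exercise by hand, a machine check is reassuring. The sketch below (my own helper names, exact rational arithmetic via `fractions`) runs the de Casteljau recurrence on the control points above; since F(t) = (t, t^2, t^3), the result at t = 1/2 must be (1/2, 1/4, 1/8).

```python
# Sanity check on the twisted cubic: de Casteljau at t = 1/2 on the
# control points above must return (F1(1/2), F2(1/2), F3(1/2)).

from fractions import Fraction as Fr

def de_casteljau(b, lam):
    pts = b
    while len(pts) > 1:
        pts = [tuple((1 - lam) * p[i] + lam * q[i] for i in range(3))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

b = [(Fr(0), Fr(0), Fr(0)), (Fr(1, 3), Fr(0), Fr(0)),
     (Fr(2, 3), Fr(1, 3), Fr(0)), (Fr(1), Fr(1), Fr(1))]
assert de_casteljau(b, Fr(1, 2)) == (Fr(1, 2), Fr(1, 4), Fr(1, 8))
print("F(1/2) =", de_casteljau(b, Fr(1, 2)))
```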

3.10 Problems

Problem 1 (40 pts). Write a computer program implementing the de Casteljau algorithm for cubic curves, over some interval [r, s]. You may use Mathematica, or any other available software in which graphics primitives are available.

Problem 2 (20 pts). Consider (in the plane) the cubic curve F defined by the following 4 control points, assuming that r = 0 and s = 1:

b0 = (−4, 0)
b1 = (−1, 10)
b2 = (1, −10)
b3 = (4, 0).

Use the de Casteljau algorithm to find the coordinates of the points F(1/3), F(1/2), and F(2/3). Plot the cubic as well as you can.

Problem 3 (30 pts). Consider the cubic defined by the equations:

x(t) = 9p(3t^2 − 1),
y(t) = 9pt(3t^2 − 1),

where p is any scalar.
(1) Find the polar forms of x(t) and y(t).
(2) What are the slopes of the tangents at the origin? Plot the curve as well as possible (choose some convenient value for p).
Remark: This cubic is known as the "Tchirnhausen cubic".

Problem 4 (20 pts). Consider (in the plane) the cubic curve F defined by the following 4 control points, assuming that r = 0 and s = 1:

b0 = (6, −6)
b1 = (−6, 6)
b2 = (6, 6)
b3 = (−6, −6).

Find the control points with respect to the affine frame r = −1, s = 1. Plot the cubic as well as you can. Give a geometric construction of the tangent to the cubic for t = 0.

Give a geometric construction of the point F(1/2).

Problem 5 (25 pts). Explain how to treat a parabola as a degenerate cubic. More specifically, given three control points b0, b1, b2 specifying a parabola, determine control points b′0, b′1, b′2, b′3 yielding the same parabola, viewed as a cubic. Construct the points F(1/4) and F(3/4).

Problem 6 (30 pts). (The "four tangents theorem")

(i) In any affine space E, given any sequence (a, b, c, d) of four distinct points on a line, we define the cross-ratio [a, b, c, d] of these points as

[a, b, c, d] = (ca/cb) / (da/db),

where ca, cb, da, db denote the algebraic measures of the corresponding vectors on the line. Given any sequence (a, b, c, d) of four distinct points, such that c = (1 − α)a + αb and d = (1 − β)a + βb, show that

[a, b, c, d] = α(1 − β) / ((1 − α)β),

and that [a, b, c, d] = −1 iff 1/α + 1/β = 2. In this case, we say that (a, b, c, d) forms a harmonic division.

(ii) Given any parabola F defined by some bilinear symmetric affine map f : A^2 → A^2, given any four distinct values t1, t2, t3, t4, consider, for any t, the four collinear points (f(t, t1), f(t, t2), f(t, t3), f(t, t4)). Prove that the cross-ratio [f(t, t1), f(t, t2), f(t, t3), f(t, t4)] only depends on t1, t2, t3, t4. In particular, given t1, t2, find the values of t3, t4 such that [f(t, t1), f(t, t2), f(t, t3), f(t, t4)] = −1.

Problem 7 (20 pts). Prove lemma 3.5.1.

Problem 8 (30 pts). Given a plane cubic specified by its control points (b0, b1, b2, b3), consider the following questions: Does the degree of the curve drop?

Does the curve have a cusp at some control point? Does the curve have an inflexion point at some control point? Assuming that A^2 is equipped with its usual Euclidean inner product, does the curve have points where the curvature is null? What happens when three control points are identical? What if the four control points are identical? You may want to use Mathematica to play around with these curves.

Problem 9 (20 pts). Plot the Bernstein polynomials Bi3(t), 0 ≤ i ≤ 3, over [0, 1].

Problem 10 (20 pts). What is the maximum number of intersection points of two plane cubics? Give the control points of two plane cubics that intersect in the maximum number of points.

Problem 11 (80 pts) Challenge. Consider the cubic of problem 3 (except that a new origin is chosen), defined by the equations:

x(t) = 27pt^2,
y(t) = 9pt(3t^2 − 1),

where p is any scalar.

Consider the following construction: for every point M on the parabola y^2 = 4px, let ω be the intersection of the normal at M to the parabola with the mediatrix of the line segment FM, where F is the point of coordinates (p, 0), and let N be the symmetric of F with respect to ω. Show that when M varies on the parabola, N varies on the cubic, and that MN is tangent to the cubic at N.

Hint. It is best to use polar coordinates (with pole F = (p, 0)). You can check that you are right if you find that the cubic is defined by:

x(θ) = 3p + (6p cos θ)/(1 − cos θ),
y(θ) = (2p sin θ(1 − 2 cos θ))/(1 − cos θ)^2.

Any line D through F intersects the cubic in three points N1, N2, N3.

(i) Let Hi be the foot of the perpendicular from F to the tangent to the cubic at Ni. Prove that H1, H2, H3 belong to the parabola of equation y^2 = 4px.

(ii) Prove that the tangents to the cubic at N1, N2, N3 intersect in three points forming an equilateral triangle.

(iii) What is the locus of the center of gravity of this triangle?

Hint. Use the representation in polar coordinates.

Chapter 4

Multiaffine Maps and Polar Forms

4.1 Multiaffine Maps

In this chapter, we discuss the representation of certain polynomial maps in terms of multiaffine maps. This has applications to curve and surface design, and more specifically to Bézier curves, splines, and the de Casteljau algorithm.

The chapter proceeds as follows. After a quick review of the binomial and multinomial coefficients, multilinear and multiaffine maps are defined. Next, we prove a generalization of lemma 2.7.2, characterizing multiaffine maps in terms of multilinear maps. Affine polynomial functions and their polar forms are defined. Affine polynomial functions h : E → F are described explicitly in the case where E has finite dimension, showing that they subsume the usual multivariate polynomial functions. Polynomial curves in polar form are defined, and their characterization in terms of control points and Bernstein polynomials is shown. It is shown how polynomials in one or several variables are polarized, and the equivalence between polynomials and symmetric multiaffine maps is established. The uniqueness of the polar form of an affine polynomial function is proved next, and so is the fact that the Bernstein polynomials of degree ≤ m form a basis of the vector space of polynomials of degree ≤ m. We conclude by showing that the definition of a polynomial curve in polar form is equivalent to the more traditional definition (definition 3.1.1), in the case of polynomial maps between vector spaces. This result is quite technical in nature, but it plays a crucial role in section 10.1; the proof can be omitted at first reading.

A presentation of the polarization of polynomials can be found in Hermann Weyl's The Classical Groups [87] (Chapter I, pages 4-6), which first appeared in 1939. An equivalent form of polarization, in a form usually called "blossoming", is discussed quite extensively in Cartan [16] (Chapter I, Section 6). This material is quite old, going back as early as 1879, and some discussion of the affine case can be found in Berger [5] (Chapter 3, Section 3).
It should be pointed out that de Casteljau pioneered the approach to polynomial curves and surfaces in terms of polar forms (see de Casteljau [23]). This material is quite old, and it is not very easily accessible. In fact, there is little doubt that the polar approach led him to the discovery of the beautiful algorithm known as the "de Casteljau algorithm".

We first review quickly some elementary combinatorial facts. For every n ∈ N, we define n! (read "n factorial") as follows:

0! = 1,
(n + 1)! = (n + 1) n!.

It is well known that n! is the number of permutations on n elements. For n ∈ N, and k ∈ Z, we define (n choose k) (read "n choose k") as follows:

(0 choose 0) = 1,
(n choose k) = 0, if k ∉ {0, ..., n},
(n choose k) = (n−1 choose k) + (n−1 choose k−1), if n ≥ 1 and k ∈ {0, ..., n}.

We can prove by induction that (n choose k) is the number of subsets of {1, ..., n} consisting of k elements. Indeed, when n = 0, we have the empty set, which has only one subset, namely itself. When n ≥ 1, there are two kinds of subsets of {1, ..., n} having k elements: those containing 1, and those not containing 1. Now, there are as many subsets of k elements from {1, ..., n} containing 1 as there are subsets of k − 1 elements from {2, ..., n}, namely (n−1 choose k−1), and there are as many subsets of k elements from {1, ..., n} not containing 1 as there are subsets of k elements from {2, ..., n}, namely (n−1 choose k). Thus, the number of subsets of {1, ..., n} consisting of k elements is (n−1 choose k) + (n−1 choose k−1), which is equal to (n choose k).

It is immediately shown by induction on n that

(n choose k) = n! / (k!(n − k)!),

for 0 ≤ k ≤ n. It is easy to see that

(n choose k) = (n choose n − k).

The numbers (n choose k) are also called binomial coefficients, because they arise in the expansion of the binomial expression (a + b)^n. The binomial coefficients can be computed inductively by forming what is usually called

Pascal's triangle, which is based on the recurrence for (n choose k):

n = 0:  1
n = 1:  1  1
n = 2:  1  2  1
n = 3:  1  3  3  1
n = 4:  1  4  6  4  1
n = 5:  1  5  10 10 5  1
n = 6:  1  6  15 20 15 6  1
n = 7:  1  7  21 35 35 21 7  1

For any a, b ∈ R, the following identity holds for all n ≥ 0:

(a + b)^n = Σ_{k=0}^{n} (n choose k) a^k b^{n−k}.

The proof is a simple induction. More generally, for any a1, ..., am ∈ R, where m ≥ 2, and n ≥ 0, we have the identity

(a1 + · · · + am)^n = Σ_{k1+···+km=n, 0≤ki≤n} (n!/(k1! · · · km!)) a1^{k1} · · · am^{km}.

Again, the proof is a simple induction. The coefficients n!/(k1! · · · km!), where k1 + · · · + km = n, are called multinomial coefficients, and they are also denoted as (n choose k1, ..., km).

We now proceed with multiaffine maps. For the reader's convenience, we recall the definition of a multilinear map. Let E1, ..., Em, and F, be vector spaces over R, where m ≥ 1.

Definition 4.1.1. A function f : E1 × · · · × Em → F is a multilinear map (or an m-linear map) iff it is linear in each argument, holding the others fixed. More explicitly, for every i, 1 ≤ i ≤ m, for all x1 ∈ E1, ..., x_{i−1} ∈ E_{i−1}, x_{i+1} ∈ E_{i+1}, ..., xm ∈ Em, for every family (y_j)_{j∈J} of vectors in Ei, and for every family (λ_j)_{j∈J} of scalars, we have

f(x1, ..., x_{i−1}, Σ_{j∈J} λ_j y_j, x_{i+1}, ..., xm) = Σ_{j∈J} λ_j f(x1, ..., x_{i−1}, y_j, x_{i+1}, ..., xm).
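The triangle above is itself a tiny algorithm: each row comes from the previous one by the recurrence. A quick Python sketch (helper names are mine), cross-checked against the closed form n!/(k!(n − k)!):

```python
# Pascal's triangle from the recurrence
# (n choose k) = (n-1 choose k) + (n-1 choose k-1),
# cross-checked against the closed form n!/(k!(n-k)!).

from math import factorial

def pascal_rows(n_max):
    rows = [[1]]
    for _ in range(n_max):
        prev = rows[-1]
        rows.append([1] + [a + b for a, b in zip(prev, prev[1:])] + [1])
    return rows

rows = pascal_rows(7)
print(rows[7])  # → [1, 7, 21, 35, 35, 21, 7, 1]
assert all(rows[n][k] == factorial(n) // (factorial(k) * factorial(n - k))
           for n in range(8) for k in range(n + 1))
```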

Having reviewed the definition of a multilinear map, we define multiaffine maps. Let E1, ..., Em, and F, be affine spaces over R, where m ≥ 1.

Definition 4.1.2. A function f : E1 × · · · × Em → F is a multiaffine map (or an m-affine map) iff it is affine in each argument, that is, iff it preserves barycentric combinations. More explicitly, for every i, 1 ≤ i ≤ m, for all a1 ∈ E1, ..., a_{i−1} ∈ E_{i−1}, a_{i+1} ∈ E_{i+1}, ..., am ∈ Em, the map

a → f(a1, ..., a_{i−1}, a, a_{i+1}, ..., am)

is an affine map, i.e., for every family (b_j)_{j∈J} of points in Ei, and for every family (λ_j)_{j∈J} of scalars such that Σ_{j∈J} λ_j = 1, we have

f(a1, ..., a_{i−1}, Σ_{j∈J} λ_j b_j, a_{i+1}, ..., am) = Σ_{j∈J} λ_j f(a1, ..., a_{i−1}, b_j, a_{i+1}, ..., am).

An arbitrary function f : E^m → F is symmetric (where E and F are arbitrary sets, not just vector spaces or affine spaces) iff

f(x_{π(1)}, ..., x_{π(m)}) = f(x1, ..., xm),

for every permutation π : {1, ..., m} → {1, ..., m}. It is immediately verified that a multilinear map is also a multiaffine map (viewing a vector space as an affine space).

Let us try to gain some intuition for what multilinear maps and multiaffine maps are, in the simple case where E = A and F = A, the affine line associated with R. Since R is of dimension 1, every linear form f : R → R must be of the form x → λx, for some λ ∈ R. An affine form f : A → A must be of the form x → λ1 x + λ2, for some λ1, λ2 ∈ R. A bilinear form f : R^2 → R must be of the form (x1, x2) → λ x1 x2, for some λ ∈ R, and a little thinking shows that a biaffine form f : A^2 → A must be of the form

(x1, x2) → λ1 x1 x2 + λ2 x1 + λ3 x2 + λ4,

for some λ1, λ2, λ3, λ4 ∈ R. For any n ≥ 2, an n-linear form f : R^n → R must be of the form (x1, ..., xn) → λ x1 · · · xn, for some λ ∈ R. What about an n-affine form f : A^n → A?

Thus, we see that the main difference between multilinear forms and multiaffine forms is that multilinear forms are homogeneous in their arguments, whereas multiaffine forms are not, but they are sums of homogeneous forms. The next lemma will show that an n-affine form can be expressed as the sum of 2^n − 1 k-linear forms, where 1 ≤ k ≤ n, plus a constant. A good example of n-affine forms is the elementary symmetric functions. Given n

variables x1, ..., xn, we define, for each k, 0 ≤ k ≤ n, the k-th elementary symmetric function σk(x1, ..., xn), for short, σk, as follows:

σ0 = 1,
σ1 = x1 + · · · + xn,
σ2 = x1 x2 + x1 x3 + · · · + x1 xn + x2 x3 + · · · + x_{n−1} xn,
σk = Σ_{1 ≤ i1 < ... < ik ≤ n} x_{i1} · · · x_{ik},
σn = x1 x2 · · · xn.

A concise way to express σk is as follows:

σk = Σ_{I ⊆ {1,...,n}, |I| = k}  Π_{i ∈ I} x_i.

Note that σk consists of a sum of (n choose k) = n!/(k!(n − k)!) terms of the form x_{i1} · · · x_{ik}. Clearly, each σk is symmetric, and as a consequence,

σk(x, ..., x) = (n choose k) x^k.

We will prove a generalization of lemma 2.7.2, characterizing multiaffine maps in terms of multilinear maps. The proof is more complicated than might be expected, but luckily, an adaptation of Cartan's use of "successive differences" allows us to overcome the complications.

In order to understand where the proof of the next lemma comes from, let us consider the special case of a biaffine map f : E^2 → F, where F is a vector space. Because f is biaffine, note that

g(v1, a2 + v2) = f(a1 + v1, a2 + v2) − f(a1, a2 + v2)

is a linear map in v1 (as a difference of maps affine in a1 + v1, vanishing for v1 = 0), and it is affine in a2 + v2. Thus, we have

g(v1, a2 + v2) = g(v1, a2) + h1(v1, v2),

where h1(v1, v2) = g(v1, a2 + v2) − g(v1, a2) is linear in v2.
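The elementary symmetric functions, and the identity σk(x, ..., x) = (n choose k) x^k, are easy to check by machine. A small Python sketch (naming is mine), computing each σk as a sum of products over k-element subsets:

```python
# The elementary symmetric functions sigma_k, computed as sums of
# products over all k-element subsets of {x1, ..., xn}.

from itertools import combinations
from math import comb, prod

def sigma(k, xs):
    # sigma_0 = 1: the sum over the single empty product.
    return sum(prod(c) for c in combinations(xs, k))

print([sigma(k, [1, 2, 3, 4]) for k in range(5)])  # → [1, 10, 35, 50, 24]

# On equal arguments, sigma_k(x, ..., x) = (n choose k) x^k.
x, n = 3, 4
assert all(sigma(k, [x] * n) == comb(n, k) * x**k for k in range(n + 1))
```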

Since

g(v1, a2 + v2) = f(a1 + v1, a2 + v2) − f(a1, a2 + v2)

and g(v1, a2) = f(a1 + v1, a2) − f(a1, a2), and since

f(a1, a2 + v2) = f(a1, a2) + (f(a1, a2 + v2) − f(a1, a2)),

where f(a1, a2 + v2) − f(a1, a2) is linear in v2, we can write

f(a1 + v1, a2 + v2) = f(a1, a2) + h1(v1, v2) + h2(v1) + h3(v2),

where h1 is bilinear, and h2 and h3 are linear. The uniqueness of h1 is clear, and since we already know that h2(v1) = f(a1 + v1, a2) − f(a1, a2) is linear in v1, and that h3(v2) = f(a1, a2 + v2) − f(a1, a2) is linear in v2, the uniqueness of h2 and h3 follows easily. The above argument uses the crucial fact that the expression

f(a1 + v1, a2 + v2) − f(a1 + v1, a2) − f(a1, a2 + v2) + f(a1, a2)

is bilinear. Thus, we are led to consider differences of the form

Δv1 f(a1, a2) = f(a1 + v1, a2) − f(a1, a2).

The slight trick is that if we compute the difference

Δv2 Δv1 f(a1, a2) = Δv1 f(a1, a2 + v2) − Δv1 f(a1, a2),

where we incremented the second argument instead of the first argument as in the previous step, we get

Δv2 Δv1 f(a1, a2) = f(a1 + v1, a2 + v2) − f(a1, a2 + v2) − f(a1 + v1, a2) + f(a1, a2),

which is precisely the bilinear map h1(v1, v2). This idea of using successive differences (where at each step, we move from argument k to argument k + 1) will be central to the proof of the next lemma.

Lemma 4.1.3. For every m-affine map f : E^m → F, there are 2^m − 1 unique multilinear maps fS : (→E)^k → →F, where S ⊆ {1, ..., m}, S = {i1, ..., ik}, S ≠ ∅, k = |S|, 1 ≤ k ≤ m, such that

f(a1 + v1, ..., am + vm) = f(a1, ..., am) + Σ_{S ⊆ {1,...,m}, S = {i1,...,ik}, k ≥ 1} fS(v_{i1}, ..., v_{ik}),

for all a1, ..., am ∈ E, and all v1, ..., vm ∈ →E.

Proof. The proof is extremely technical, and can be found in Chapter B, Section B.1. It can be omitted at first reading.

When f : E^m → F is a symmetric m-affine map, we can obtain a more precise characterization in terms of m symmetric k-linear maps.

Lemma 4.1.4. For every symmetric m-affine map f : E^m → F, there are m unique symmetric multilinear maps fk : (→E)^k → →F, 1 ≤ k ≤ m, such that

f(a1 + v1, ..., am + vm) = f(a1, ..., am) + Σ_{k=1}^{m} Σ_{1 ≤ i1 < ... < ik ≤ m} fk(v_{i1}, ..., v_{ik}),

for all a1, ..., am ∈ E, and all v1, ..., vm ∈ →E.

Proof. By lemma 4.1.3, there are 2^m − 1 unique multilinear maps fS such that

f(a1 + v1, ..., am + vm) = f(a1, ..., am) + Σ_{S ⊆ {1,...,m}, S = {i1,...,ik}, k ≥ 1} fS(v_{i1}, ..., v_{ik}),

for all a1, ..., am ∈ E, and all v1, ..., vm ∈ →E. Since f is symmetric, for every k, 1 ≤ k ≤ m, and for every pair of sequences i1, ..., ik and j1, ..., jk such that 1 ≤ i1 < ... < ik ≤ m and 1 ≤ j1 < ... < jk ≤ m, there is a permutation π such that π(i1) = j1, ..., π(ik) = jk, and since f(x_{π(1)}, ..., x_{π(m)}) = f(x1, ..., xm), we have

f{i1,...,ik}(v1, ..., vk) = f{j1,...,jk}(v1, ..., vk),

which shows that, by the uniqueness of the sum given by lemma 4.1.3, we must have f{i1,...,ik} = f{j1,...,jk}. Thus, letting fk = f{1,...,k}, for every k, 1 ≤ k ≤ m, we get the sum of the lemma, and one then verifies that each f{i1,...,ik} is symmetric.

The above lemma shows that it is equivalent to deal with symmetric m-affine maps, or with symmetrized sums fm + fm−1 + · · · + f1 of symmetric k-linear maps. Thus, a symmetric m-affine map is obtained by making the sum fm + fm−1 + · · · + f1 of m symmetric k-linear maps symmetric in v1, ..., vm. The added benefit is that we achieve a "multilinearization". In the next section, we will use multiaffine maps to define generalized polynomial functions from an affine space to another affine space.

4.2 Affine Polynomials and Polar Forms

The beauty and usefulness of symmetric affine maps lies in the fact that these maps can be used to define the notion of a polynomial function from an affine (vector) space E of any dimension to an affine (vector) space F of any dimension. In the special case where E = A^n and F = A, this notion is actually equivalent to the notion of polynomial function induced by a polynomial in n variables.

Another benefit of this approach, sometimes called "blossoming" (a term introduced by Ramshaw, who was among the first to introduce it in the context of curve and surface representation), is that it leads to an elegant and effective presentation of the main algorithms used in CAGD, in particular splines, and also that we can define (parameterized) "polynomial curves" in a very elegant and convenient manner.

Definition 4.2.2. Given two affine spaces $E$ and $F$, an affine polynomial function of polar degree $m$, or for short an affine polynomial of polar degree $m$, is a map $h\colon E \to F$, such that there is some symmetric $m$-affine map $f\colon E^m \to F$, called the $m$-polar form of $h$, with
$$h(a) = f(a, \ldots, a),$$
for all $a \in E$. Given two vector spaces $\vec{E}$ and $\vec{F}$, a homogeneous polynomial function of degree $m$ is a map $h\colon \vec{E} \to \vec{F}$, such that there is some nonnull symmetric $m$-linear map $f\colon \vec{E}^{\,m} \to \vec{F}$, called the polar form of $h$, with
$$h(\vec{v}) = f(\vec{v}, \ldots, \vec{v}),$$
for all $\vec{v} \in \vec{E}$. A polynomial function of polar degree $m$ is a map $h\colon \vec{E} \to \vec{F}$, such that there are $m$ symmetric $k$-linear maps $f_k\colon \vec{E}^{\,k} \to \vec{F}$, $1 \le k \le m$, and some $f_0 \in \vec{F}$, with
$$h(\vec{v}) = f_m(\vec{v}, \ldots, \vec{v}) + f_{m-1}(\vec{v}, \ldots, \vec{v}) + \cdots + f_1(\vec{v}) + f_0,$$
for all $\vec{v} \in \vec{E}$.

The definition of a homogeneous polynomial function of degree $m$ given in definition 4.2.2 is the definition given by Cartan [16]. The definition of a polynomial function of polar degree $m$ given in definition 4.2.2 is almost the definition given by Cartan [16], except that we allow any of the multilinear maps $f_i$ to be null, whereas Cartan does not. Thus, instead of defining polynomial maps of degree exactly $m$ (as Cartan does), we define polynomial maps of degree at most $m$.

For example, if $\vec{E} = \mathbb{R}^n$ and $\vec{F} = \mathbb{R}$, we have the symmetric bilinear map $f\colon (\mathbb{R}^n)^2 \to \mathbb{R}$ (called inner product), defined such that
$$f((x_1, \ldots, x_n), (y_1, \ldots, y_n)) = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n.$$
The corresponding polynomial $h\colon \mathbb{R}^n \to \mathbb{R}$, such that
$$h(x_1, \ldots, x_n) = x_1^2 + x_2^2 + \cdots + x_n^2,$$
is a polynomial of total degree 2 in $n$ variables.

We adopt the more inclusive definition of a polynomial function (of polar degree $m$) because it is more convenient algorithmically, and because it makes the correspondence with multiaffine polar forms nicer. From the point of view of terminology, it is cumbersome to constantly have to say "polar degree" rather than "degree", although "degree" is confusing (and wrong, from the point of view of algebraic geometry). Nevertheless, we will usually allow ourselves this abuse of language. (Readers who are nervous, having been sufficiently forewarned, may assume for simplicity that $\vec{F} = \mathbb{R}$.)

However, the polar degree and the degree of the induced polynomial may differ. The triaffine map $f\colon \mathbb{R}^3 \to \mathbb{R}$ defined such that
$$f(x, y, z) = xy + yz + xz$$
induces the polynomial $h\colon \mathbb{R} \to \mathbb{R}$ such that $h(x) = 3x^2$, which is of polar degree 3 but a polynomial of degree 2 in $x$.

Clearly, if a map $h\colon \vec{E} \to \vec{F}$ is a polynomial function of polar degree $m$ defined by $m$ symmetric $k$-linear maps $f_k$ and some $f_0 \in \vec{F}$, then $h$ is also defined by the symmetric $m$-affine map
$$f(\vec{v}_1, \ldots, \vec{v}_m) = \sum_{k=1}^{m} \binom{m}{k}^{-1} \sum_{1 \le i_1 < \cdots < i_k \le m} f_k(\vec{v}_{i_1}, \ldots, \vec{v}_{i_k}) + f_0.$$
Conversely, if a map $h\colon \vec{E} \to \vec{F}$ is defined by some symmetric $m$-affine map $f\colon \vec{E}^{\,m} \to \vec{F}$, with $h(\vec{v}) = f(\vec{v}, \ldots, \vec{v})$ for all $\vec{v} \in \vec{E}$, then $h$ is a polynomial function of polar degree $m$, since $h$ is also defined by the $m$ symmetric $k$-linear maps $g_k = \binom{m}{k} f_k$ and $g_0 = f(\vec{0}, \ldots, \vec{0})$, where $f_m, \ldots, f_1$ are the unique (symmetric) multilinear maps associated with $f$.

Let us see what homogeneous polynomials of degree $m$ are, when $\vec{E}$ is a vector space of finite dimension $n$. Let $(\vec{e}_1, \ldots, \vec{e}_n)$ be a basis of $\vec{E}$.

Lemma 4.2.2. Given any vector space $\vec{E}$ of finite dimension $n$, and any vector space $\vec{F}$, for any symmetric multilinear map $f\colon \vec{E}^{\,m} \to \vec{F}$, for any basis $(\vec{e}_1, \ldots, \vec{e}_n)$ of $\vec{E}$, and for any vector $\vec{v} = v_1 \vec{e}_1 + \cdots + v_n \vec{e}_n \in \vec{E}$, the homogeneous polynomial function $h$ associated with $f$ is given by
$$h(\vec{v}) = \sum_{\substack{k_1 + \cdots + k_n = m \\ 0 \le k_i,\ 1 \le i \le n}} \binom{m}{k_1, \ldots, k_n}\, v_1^{k_1} \cdots v_n^{k_n}\, f(\underbrace{\vec{e}_1, \ldots, \vec{e}_1}_{k_1}, \ldots, \underbrace{\vec{e}_n, \ldots, \vec{e}_n}_{k_n}).$$
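The gap between polar degree and ordinary degree in the triaffine example above is easy to check numerically. The following sketch is illustrative Python (not part of the text); the function names are ours:

```python
from itertools import permutations

# The triaffine map f(x, y, z) = xy + yz + xz from the example above.
def f(x, y, z):
    return x * y + y * z + x * z

# f is symmetric: permuting the arguments does not change the value.
assert all(f(*p) == f(1.0, 2.0, 3.0) for p in permutations((1.0, 2.0, 3.0)))

# On the diagonal, f induces h(x) = f(x, x, x) = 3x^2: a map of polar
# degree 3 whose induced polynomial has degree only 2 in x.
def h(x):
    return f(x, x, x)

assert h(2.0) == 3 * 2.0 ** 2
```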

Proof. By multilinearity of $f$, for any $m$ vectors $\vec{v}_j = v_{1,j}\vec{e}_1 + \cdots + v_{n,j}\vec{e}_n \in \vec{E}$, we have
$$f(\vec{v}_1, \ldots, \vec{v}_m) = \sum_{(i_1, \ldots, i_m) \in \{1,\ldots,n\}^m} v_{i_1,1} \cdots v_{i_m,m}\, f(\vec{e}_{i_1}, \ldots, \vec{e}_{i_m}).$$
Since $f$ is symmetric, we can reorder the basis vectors appearing as arguments of $f$: each term is determined by choosing $n$ disjoint sets $I_1, \ldots, I_n$ such that $I_1 \cup \cdots \cup I_n = \{1, \ldots, m\}$, where each $I_j$ specifies which arguments of $f$ are the basis vector $\vec{e}_j$. Thus, we have
$$f(\vec{v}_1, \ldots, \vec{v}_m) = \sum_{\substack{I_1 \cup \cdots \cup I_n = \{1,\ldots,m\} \\ I_i \cap I_j = \emptyset,\ i \ne j}} \Bigl(\prod_{i_1 \in I_1} v_{1,i_1}\Bigr) \cdots \Bigl(\prod_{i_n \in I_n} v_{n,i_n}\Bigr)\, f(\underbrace{\vec{e}_1, \ldots, \vec{e}_1}_{|I_1|}, \ldots, \underbrace{\vec{e}_n, \ldots, \vec{e}_n}_{|I_n|}).$$
When we calculate $h(\vec{v}) = f(\vec{v}, \ldots, \vec{v})$, we get the same product $v_1^{k_1} \cdots v_n^{k_n}$ a multiple number of times, namely the number of ways of choosing $n$ disjoint sets $I_j$, each of cardinality $k_j$, whose union is $\{1, \ldots, m\}$, which is precisely
$$\binom{m}{k_1, \ldots, k_n} = \frac{m!}{k_1! \cdots k_n!},$$
which explains the formula of the lemma. $\square$

Lemma 4.2.2 shows that we can write $h(\vec{v})$ as
$$h(\vec{v}) = \sum_{\substack{k_1 + \cdots + k_n = m \\ 0 \le k_i,\ 1 \le i \le n}} v_1^{k_1} \cdots v_n^{k_n}\, c_{k_1,\ldots,k_n},$$
for some "coefficients" $c_{k_1,\ldots,k_n} \in \vec{F}$, which are vectors. When $\vec{F} = \mathbb{R}$, the homogeneous polynomial function $h$ of degree $m$ in the $n$ arguments $v_1, \ldots, v_n$ agrees with the notion of polynomial function defined by a homogeneous polynomial.

Indeed, when $\vec{F} = \mathbb{R}$, $h$ is the homogeneous polynomial function induced by the homogeneous polynomial
$$\sum_{\substack{k_1 + \cdots + k_n = m \\ k_j \ge 0}} c_{k_1,\ldots,k_n} X_1^{k_1} \cdots X_n^{k_n}$$
of degree $m$ in the variables $X_1, \ldots, X_n$. Thus, lemma 4.2.2 shows the crucial role played by homogeneous polynomials.

Using the characterization of symmetric multiaffine maps given by lemma 4.2.1, and lemma 4.2.2, we obtain the following useful characterization of multiaffine maps $f\colon E^m \to F$, when $E$ is of finite dimension.

Lemma 4.2.3. Given any affine space $E$ of finite dimension $n$, and any affine space $F$, for any symmetric multiaffine map $f\colon E^m \to F$, for any basis $(\vec{e}_1, \ldots, \vec{e}_n)$ of $\vec{E}$, for any $m$ vectors $\vec{v}_j = v_{1,j}\vec{e}_1 + \cdots + v_{n,j}\vec{e}_n \in \vec{E}$, and for any points $a_1, \ldots, a_m \in E$, we have
$$f(a_1 + \vec{v}_1, \ldots, a_m + \vec{v}_m) = b + \sum_{1 \le p \le m}\ \sum_{\substack{I_1 \cup \cdots \cup I_n \subseteq \{1,\ldots,m\} \\ I_i \cap I_j = \emptyset,\ i \ne j \\ |I_1| + \cdots + |I_n| = p}} \Bigl(\prod_{i_1 \in I_1} v_{1,i_1}\Bigr) \cdots \Bigl(\prod_{i_n \in I_n} v_{n,i_n}\Bigr)\, \vec{w}_{|I_1|,\ldots,|I_n|},$$
for some $b \in F$, and some $\vec{w}_{|I_1|,\ldots,|I_n|} \in \vec{F}$. In particular, for any $a \in E$ and any $\vec{v} = v_1\vec{e}_1 + \cdots + v_n\vec{e}_n \in \vec{E}$, the affine polynomial function $h$ associated with $f$ is given by
$$h(a + \vec{v}) = b + \sum_{1 \le p \le m}\ \sum_{\substack{k_1 + \cdots + k_n = p \\ 0 \le k_i,\ 1 \le i \le n}} v_1^{k_1} \cdots v_n^{k_n}\, \vec{w}_{k_1,\ldots,k_n},$$
for some $b \in F$, and some $\vec{w}_{k_1,\ldots,k_n} \in \vec{F}$.

We could have taken the form of an affine map given by this lemma as a definition, when $E$ is of finite dimension. Thus, the notion of (affine) polynomial of polar degree $m$ in $n$ arguments, when $E$ is of finite dimension, agrees with the notion of polynomial function induced by a polynomial of degree $\le m$ in $n$ variables $(X_1, \ldots, X_n)$, when $E = \mathbb{R}^n$ and $F = \mathbb{R}$.

One should not confuse such polynomial functions with the polynomials defined as usual. The standard approach is to define formal polynomials whose coefficients belong to a (commutative) ring, say as in Lang [47], Artin [1], or Mac Lane and Birkhoff [52]. Then, it is shown how a polynomial defines a polynomial function. In the present approach, we define directly certain functions that behave like generalized polynomial functions.

Another major difference between the polynomial functions of definition 4.2.2 and formal polynomials is that formal polynomials can be added and multiplied. Although we can make sense of addition as affine combination in the case of polynomial functions with range an affine space, when $\vec{F}$ is a vector space of dimension greater than one, multiplication does not make any sense. Nevertheless, this generalization of the notion of a polynomial function is very fruitful. Indeed, we can define (parameterized) polynomial curves in polar form. Recall that the canonical affine space associated with the field $\mathbb{R}$ is denoted as $\mathbb{A}$, the real affine line.

Definition 4.2.4. A (parameterized) polynomial curve in polar form of degree $m$ is an affine polynomial map $F\colon \mathbb{A} \to E$ of polar degree $m$, defined by its $m$-polar form, which is some symmetric $m$-affine map $f\colon \mathbb{A}^m \to E$, where $E$ is any affine space (of dimension at least 2). Given any $r, s \in \mathbb{A}$, with $r < s$, a (parameterized) polynomial curve segment $F([r, s])$ in polar form of degree $m$ is the restriction $F\colon [r, s] \to E$ of an affine polynomial curve $F\colon \mathbb{A} \to E$ in polar form of degree $m$. We define the trace of $F$ as $F(\mathbb{A})$, and the trace of $F[r, s]$ as $F([r, s])$.

Remark: When defining polynomial curves, it is convenient to denote the polynomial map defining the curve by an upper-case letter, such as $F\colon \mathbb{A} \to E$, and the polar form of $F$ by the same, but lower-case, letter, $f$. Typically, the affine space in which the curve lives is the real affine space $\mathbb{A}^3$ of dimension 3. It would then be confusing to denote the affine space which is the range of the maps $F$ and $f$ also as $F$, and thus we denote it as $E$ (or at least, unless confusions arise, we use a letter different from the letter used to denote the polynomial map defining the curve).

Also note that we defined a polynomial curve in polar form of degree at most $m$, rather than a polynomial curve in polar form of degree exactly $m$. The reason is that an affine polynomial map $f$ of polar degree $m$ may end up being degenerate, in the sense that it could be equivalent to a polynomial map of lower polar degree (the symmetric multilinear maps $f_m, f_{m-1}, \ldots, f_{m-k+1}$ involved in the unique decomposition of $f$ as a sum of multilinear maps may be identically null, in which case $f$ is also defined by a polynomial map of polar degree $m - k$), as the next example will show. For convenience, we will allow ourselves the abuse of language where we abbreviate "polynomial curve in polar form" to "polynomial curve".

Definition 4.2.4 is not the standard definition of a parameterized polynomial curve, as given in section 3.1 (see definition 3.1.1). However, definition 4.2.4 turns out to be more general and equivalent to definition 3.1.1 when $E$ is of finite dimension, and it is also more convenient for the purpose of designing curves satisfying some simple geometric constraints.

. (1 − λm )r + λm s). . where F is any aﬃne space. . . s−r for 1 ≤ i ≤ m. |J|=k f (t1 .m} i∈I I∩J=∅. . we showed that every symmetric m-aﬃne map f : Am → F is completely determined by the sequence of m + 1 points f (r. tm ) m = k=0 I∪J={1.3. . . r. . Let us now consider the general case of a symmetric m-aﬃne map f : Am → F . . . . . . . . s). . . s) in A. m−k k where r = s are elements of A. tm ) = I∪J={1. . . . . t2 . t2 . . s) m−k k is obviously a symmetric m-aﬃne function. . . .4. .. . tm ) = f ((1 − λ1 )r + λ1 s. . .1. r . . Thus.. . . . by a simple induction.. tm ) = (1 − λ1 ) f (r. . tm ) + λ1 f (s.. . s.. . . . . .. . s). . .3 Polynomial Curves and Control Points In section 3. tm ) = f ((1 − λ1 )r + λ1 s.m} i∈I I∩J=∅. . and let us compute f (t1 .. . . . as it is easily veriﬁed by expanding the product i=m i=1 s − ti ti − r + s−r s−r = 1. |J|=k (1 − λi ) λj f (r. s)... |J|=k s − ti s−r j∈J tj − r s−r of f (r. Let us pick an aﬃne basis (r. s. . . we get f (t1 .. . . and these functions add up to 1. s. with r = s. tm ) = s − ti s−r j∈J tj − r s−r f (r. and we saw a pattern emerge. . . . t2 . .m} i∈I I∩J=∅. j∈J m−k k and since λi = ti −r . .. . . we considered the case of symmetric biaﬃne functions f : A2 → A3 and symmetric triaﬃne functions f : A3 → A3 . we get m k=0 I∪J={1. s. . . . . . . r. m−k k The coeﬃcient pk (t1 .. . r . POLYNOMIAL CURVES AND CONTROL POINTS 131 4. Since f is symmetric and m-aﬃne.

e they play a major role in the de Casteljau algorithm and its extensions. . . .. . . given any sequence of m + 1 points a0 . . . or B´zier control points. . . and the polynomial function h associated with f is given by m h(t) = k=0 m (1 − t)m−k tk f (0. |J|=k s − ti s−r j∈J tj − r s−r ak is symmetric m-aﬃne. . r. . . . The polynomial function associated with f is given by h(t) = f (t. .132 CHAPTER 4. and we have f (r. r . the map m (t1 . s. 1). . . s). m−k k The points ak = f (r. m−k k The polynomials m Bk [r. . . . . . . . k m−k k . . . 0.. m h(t) = k=0 m k s−t s−r m−k t−r s−r k f (r.m} i∈I I∩J=∅. am ∈ F . that is. . .. . s]. . . . some simpliﬁcations take place. For r = 0 and s = 1. tm ) → k=0 I∪J={1. . . r. . s) = ak . since ti = λi . . . s](t) = m k s−t s−r m−k t−r s−r k are the Bernstein polynomials of degree m over [r. k=0 This is shown using the binomial expansion formula: m m Bk [r. 1. s](t) k=0 m = k=0 m k s−t s−r m−k t−r s−r k = t−r s−t + s−r s−r m = 1. . . . . s) m−k k are called control points.. s](t) = 1. These polynomials form a partition of unity: m m Bk [r. t). . s. . . . MULTIAFFINE MAPS AND POLAR FORMS Conversely. s. and as we have already said several times.
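The Bernstein polynomials over $[r, s]$ and their partition-of-unity property are easy to check numerically. The following sketch is illustrative Python (the book's own programs are in Mathematica); the function name is ours:

```python
from math import comb

def bernstein(m, k, r, s, t):
    """B_k^m[r, s](t) = C(m, k) ((s - t)/(s - r))^(m - k) ((t - r)/(s - r))^k."""
    u = (t - r) / (s - r)
    # Note that (s - t)/(s - r) = 1 - u.
    return comb(m, k) * (1 - u) ** (m - k) * u ** k

# Partition of unity: the m + 1 Bernstein polynomials of degree m sum to 1
# for every t, even outside [r, s].
for t in (-1.0, 0.0, 2.5, 6.0, 9.0):
    total = sum(bernstein(3, k, 0.0, 6.0, t) for k in range(4))
    assert abs(total - 1.0) < 1e-12
```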

The polynomials
$$B_k^m(t) = \binom{m}{k} (1 - t)^{m-k}\, t^{k}$$
are the Bernstein polynomials of degree $m$ (over $[0, 1]$). Clearly, we have
$$B_k^m[r, s](t) = B_k^m\Bigl(\frac{t - r}{s - r}\Bigr).$$
It is not hard to show that the Bernstein polynomials form a basis for the vector space of polynomials of degree $\le m$.

Summarizing the above considerations, we showed the following lemma.

Lemma 4.3.1. Given any sequence of $m + 1$ points $a_0, \ldots, a_m$ in some affine space $E$, there is a unique polynomial curve $F\colon \mathbb{A} \to E$ of degree $m$, whose polar form $f\colon \mathbb{A}^m \to E$ satisfies the conditions
$$f(\underbrace{r, \ldots, r}_{m-k}, \underbrace{s, \ldots, s}_{k}) = a_k$$
(where $r, s \in \mathbb{A}$, $r \ne s$). Furthermore, the polar form $f$ of $F$ is given by the formula
$$f(t_1, \ldots, t_m) = \sum_{k=0}^{m}\ \sum_{\substack{I \cup J = \{1,\ldots,m\} \\ I \cap J = \emptyset,\ |J| = k}} \prod_{i \in I} \frac{s - t_i}{s - r} \prod_{j \in J} \frac{t_j - r}{s - r}\ a_k,$$
and $F(t)$ is given by the formula
$$F(t) = \sum_{k=0}^{m} B_k^m[r, s](t)\, a_k,$$
where the polynomials
$$B_k^m[r, s](t) = \binom{m}{k} \Bigl(\frac{s - t}{s - r}\Bigr)^{m-k} \Bigl(\frac{t - r}{s - r}\Bigr)^{k}$$
are the Bernstein polynomials of degree $m$ over $[r, s]$.

Of course, in the next chapter, we will come back to polynomial curves and the de Casteljau algorithm and its generalizations, but we hope that the above considerations are striking motivations for using polar forms.

4.4 Uniqueness of the Polar Form of an Affine Polynomial Map

Remarkably, we can prove in full generality that the polar form $f$ defining an affine polynomial $h$ of degree $m$ is unique. We could use Cartan's proof in the case of polynomial functions, but we can give a slightly more direct proof which yields an explicit expression for $f$ in terms of $h$.
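The explicit polar-form formula of lemma 4.3.1 can be evaluated directly by summing over the subsets $J$. The following sketch is illustrative Python (the function name is ours), checked on the curve $h(t) = t^3$, whose polar form is $f(t_1, t_2, t_3) = t_1 t_2 t_3$:

```python
from itertools import combinations
from math import prod

def polar_from_control_points(a, r, s, ts):
    """f(t_1, ..., t_m) for the symmetric m-affine map with control points
    a[k] = f(r, ..., r, s, ..., s) (m - k copies of r, k copies of s),
    following the subset-sum formula of lemma 4.3.1."""
    m = len(a) - 1
    assert len(ts) == m
    total = 0.0
    for k in range(m + 1):
        for J in combinations(range(m), k):
            I = [i for i in range(m) if i not in J]
            w = prod((s - ts[i]) / (s - r) for i in I) * \
                prod((ts[j] - r) / (s - r) for j in J)
            total += w * a[k]
    return total

# Control points of h(t) = t^3 over [0, 1]: f(t1, t2, t3) = t1 t2 t3,
# so a_k = f(0^{3-k} 1^k) = (0, 0, 0, 1).
a = [0.0, 0.0, 0.0, 1.0]
assert abs(polar_from_control_points(a, 0.0, 1.0, (2.0, 3.0, 4.0)) - 24.0) < 1e-9
# On the diagonal we recover the curve point F(1/2) = 1/8.
assert abs(polar_from_control_points(a, 0.0, 1.0, (0.5, 0.5, 0.5)) - 0.125) < 1e-9
```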

Lemma 4.4.1. Given two affine spaces $E$ and $F$, for any polynomial function $h$ of polar degree $m$, the polar form $f\colon E^m \to F$ of $h$ is unique, and is given by the following expression:
$$f(a_1, \ldots, a_m) = \frac{1}{m!} \sum_{\substack{H \subseteq \{1,\ldots,m\} \\ k = |H|,\ k \ge 1}} (-1)^{m-k}\, k^m\, h\Bigl(\frac{1}{k}\sum_{i \in H} a_i\Bigr).$$

It should be noted that lemma 4.4.1 is very general, since it applies to arbitrary affine spaces, even of infinite dimension (for example, Hilbert spaces). All the ingredients to prove this result are in Bourbaki [14] (chapter A.IV, section §5, proposition 3), and [15] (chapter A.I, section §8, proposition 2), but they are deeply buried! The proof is quite technical, and can be found in Chapter B. It should also be noted that the expression of lemma 4.4.1 is far from being economical, since it contains $2^m - 1$ terms. In particular cases, it is often possible to reduce the number of terms.

Before plunging into the proof, you may want to verify that for a polynomial $h(X)$ of degree 2, the polar form is given by the identity
$$f(x_1, x_2) = \frac{1}{2}\Bigl(4 h\Bigl(\frac{x_1 + x_2}{2}\Bigr) - h(x_1) - h(x_2)\Bigr).$$
Note that when $h(X)$ is a homogeneous polynomial of degree 2, the above identity reduces to the (perhaps more familiar) identity
$$f(x_1, x_2) = \frac{1}{2}\bigl[h(x_1 + x_2) - h(x_1) - h(x_2)\bigr],$$
used for passing from a quadratic form to a bilinear form. You may also want to try working out on your own a formula giving the polar form for a polynomial $h(X)$ of degree 3.

4.5 Polarizing Polynomials in One or Several Variables

We now use lemma 4.4.1 to show that polynomials in one or several variables are uniquely defined by polar forms which are multiaffine maps. We first show the following simple lemma.

Lemma 4.5.1.
(1) For every polynomial $p(X) \in \mathbb{R}[X]$ of degree $\le m$, there is a symmetric $m$-affine form $f\colon \mathbb{R}^m \to \mathbb{R}$, such that $p(x) = f(x, \ldots, x)$ for all $x \in \mathbb{R}$. If $p(X) \in \mathbb{R}[X]$ is a homogeneous polynomial of degree exactly $m$, then the symmetric $m$-affine form $f$ is multilinear.
(2) For every polynomial $p(X_1, \ldots, X_n) \in \mathbb{R}[X_1, \ldots, X_n]$ of total degree $\le m$, there is a symmetric $m$-affine form $f\colon (\mathbb{R}^n)^m \to \mathbb{R}$, such that $p(x_1, \ldots, x_n) = f(x, \ldots, x)$ for all $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$. If $p(X_1, \ldots, X_n) \in \mathbb{R}[X_1, \ldots, X_n]$ is a homogeneous polynomial of total degree exactly $m$, then $f$ is a symmetric multilinear map $f\colon (\mathbb{R}^n)^m \to \mathbb{R}$.
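The $2^m - 1$ term polarization formula of lemma 4.4.1 can be checked numerically. The following sketch is illustrative Python (the function name is ours); it recovers $x_1 x_2$ from $h(x) = x^2$ and $x_1 x_2 x_3$ from $h(x) = x^3$:

```python
from itertools import combinations
from math import factorial

def polar_form(h, args):
    """Polar form of a polynomial function h of polar degree m = len(args):
    f(a_1, ..., a_m) = (1/m!) * sum over nonempty H of
    (-1)^(m - |H|) |H|^m h((sum over H of a_i) / |H|)."""
    m = len(args)
    total = 0.0
    for k in range(1, m + 1):
        for H in combinations(args, k):
            total += (-1) ** (m - k) * k ** m * h(sum(H) / k)
    return total / factorial(m)

# Degree 2: polar form of h(x) = x^2 is f(x1, x2) = x1 x2, matching the
# identity f(x1, x2) = (1/2)(4 h((x1 + x2)/2) - h(x1) - h(x2)).
assert abs(polar_form(lambda x: x * x, (2.0, 5.0)) - 10.0) < 1e-9
# Degree 3: polar form of h(x) = x^3 is f(x1, x2, x3) = x1 x2 x3.
assert abs(polar_form(lambda x: x ** 3, (1.0, 2.0, 3.0)) - 6.0) < 1e-9
```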

. . . . xm . . such that p(x1 . . x) = xk1 · · · xkn . Clearly. There are m! = k1 ! · · · kn !(m − d)! m k1 . which consists of = k!(m−k)! terms). we get a multilinear k map. In . . x). . . xn. . m − d such families of n disjoint sets. . i=j. (x1. m} consisting of d ≤ m elements into n disjoint subsets I1 . . and that f (x. . . where xj = (x1. 1 . . . Xn ] is a homogeneous polynomial of total degree exactly m. . .. . . k ≤ m.. . . . . . The idea is to split any subset of {1. and then k2 m − (k1 + · · · + kn−1 ) choices for the last subset In of size kn . is a symmetric m-aﬃne form satisfying the lemma (where σk is the k-th elementary symmetric m m! function. . kn . . k1 . k k (2) It is enough to prove it for a homogeneous monomial of the form X1 1 · · · Xnn . . of total degree ≤ m. . . . . . . . x. in in ∈In . . . when d = m.. .m} Ii ∩Ij =∅. .4. kn . . xm ) = k!(m − k)! σk m! (2) For every polynomial p(X1 . m and m − d elements. 1 n m for all x = (x1 . . Let f be deﬁned such that f ((x1. kn . . for all x = (x1 . . and k1 + · · · + kn = d ≤ m. of size k2 . . . . Xn ) ∈ R[X1 . . Indeed. . .. . . . xn ) = f (x. j . m} consisting respectively of k1 . . where k1 + · · · + kn = d. 1 ). . (1) It is enough to prove it for a monomial of the form X k . this is the number of ways of choosing n + 1 disjoint subsets of {1. m )) = k1 ! · · · kn !(m − d)! m! x1. . the number of such choices is indeed m! = k1 ! · · · kn !(m − d)! m . . . . . xn ) ∈ R . xn..∪In ⊆{1. i1 I1 ∪. m − d It is clear that f is symmetric m-aﬃne in x1 . . . . . . . . kn After some simple arithmetic. . . Also. it is easy to see that f is multilinear. POLARIZING POLYNOMIALS IN ONE OR SEVERAL VARIABLES 135 Proof. Xn ]. . . and ﬁnally. . f (x1 . . where Ij is of size kj (and with k1 + · · · + kn = d). . there is a symmetric m-aﬃne form f : (Rn )m → R. . m . n . . . then f is a symmetric multilinear map f : (Rn )m → R. xn ) ∈ Rn .. where ki ≥ 0.5. . . |Ij |=kj i1 ∈I1 ··· xn. Xn ) ∈ R[X1 . . . . . 
Proof. (1) It is enough to prove it for a monomial of the form $X^k$, $k \le m$. Clearly,
$$f(x_1, \ldots, x_m) = \frac{k!\,(m-k)!}{m!}\ \sigma_k$$
is a symmetric $m$-affine form satisfying the lemma, where $\sigma_k$ is the $k$-th elementary symmetric function in $x_1, \ldots, x_m$ (which consists of $\binom{m}{k} = \frac{m!}{k!(m-k)!}$ terms). Also, when $k = m$, we get a multilinear map.

(2) It is enough to prove it for a homogeneous monomial of the form $X_1^{k_1} \cdots X_n^{k_n}$, where $k_i \ge 0$ and $k_1 + \cdots + k_n = d \le m$. The idea is to split any subset of $\{1, \ldots, m\}$ consisting of $d \le m$ elements into $n$ disjoint subsets $I_1, \ldots, I_n$, where $I_j$ is of size $k_j$ (and with $k_1 + \cdots + k_n = d$). Writing $x_j = (x_{1,j}, \ldots, x_{n,j})$, let $f$ be defined such that
$$f((x_{1,1}, \ldots, x_{n,1}), \ldots, (x_{1,m}, \ldots, x_{n,m})) = \frac{k_1! \cdots k_n!\,(m-d)!}{m!} \sum_{\substack{I_1 \cup \cdots \cup I_n \subseteq \{1,\ldots,m\} \\ I_i \cap I_j = \emptyset,\ i \ne j \\ |I_j| = k_j}} \prod_{i_1 \in I_1} x_{1,i_1} \cdots \prod_{i_n \in I_n} x_{n,i_n}.$$
There are
$$\frac{m!}{k_1! \cdots k_n!\,(m-d)!} = \binom{m}{k_1, \ldots, k_n, m - d}$$
such families of $n$ disjoint sets: indeed, this is the number of ways of choosing $n + 1$ disjoint subsets of $\{1, \ldots, m\}$ consisting respectively of $k_1, \ldots, k_n$ and $m - d$ elements. It is clear that $f$ is symmetric $m$-affine in $x_1, \ldots, x_m$, and that $f(x, \ldots, x) = x_1^{k_1} \cdots x_n^{k_n}$ for all $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$. Also, when $d = m$, it is easy to see that $f$ is multilinear. $\square$

One can also argue as follows: there are $\binom{m}{k_1}$ choices for the first subset $I_1$ of size $k_1$, then $\binom{m - k_1}{k_2}$ choices for the second subset $I_2$ of size $k_2$, etc., and finally, $\binom{m - (k_1 + \cdots + k_{n-1})}{k_n}$ choices for the last subset $I_n$ of size $k_n$. After some simple arithmetic, the number of such choices is indeed
$$\binom{m}{k_1}\binom{m - k_1}{k_2}\cdots\binom{m - (k_1 + \cdots + k_{n-1})}{k_n} = \frac{m!}{k_1! \cdots k_n!\,(m-d)!}.$$
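The elementary-symmetric-function construction in part (1) of the proof is easy to test. The following sketch is illustrative Python (the function name is ours); it polarizes $p(X) = X^k$ into $\sigma_k / \binom{m}{k}$:

```python
from itertools import combinations
from math import comb, prod

def monomial_polar_form(k, args):
    """Symmetric m-affine polar form of p(X) = X^k with m = len(args):
    f(x_1, ..., x_m) = sigma_k(x_1, ..., x_m) / C(m, k), where sigma_k is
    the k-th elementary symmetric function (the factor k!(m-k)!/m! equals
    1 / C(m, k))."""
    m = len(args)
    sigma_k = sum(prod(c) for c in combinations(args, k))
    return sigma_k / comb(m, k)

# For m = 3, k = 2: f(x1, x2, x3) = (x1 x2 + x1 x3 + x2 x3) / 3,
# and on the diagonal f(x, x, x) = x^2.
assert abs(monomial_polar_form(2, (4.0, 4.0, 4.0)) - 16.0) < 1e-9
# For m = 3, k = 1: f(x1, x2, x3) = (x1 + x2 + x3) / 3.
assert abs(monomial_polar_form(1, (1.0, 2.0, 3.0)) - 2.0) < 1e-9
```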

As an example, for the polynomial
$$p(X) = X^3 + 3X^2 + 5X - 1,$$
we get the symmetric $3$-affine polar form
$$f(x_1, x_2, x_3) = x_1 x_2 x_3 + (x_1 x_2 + x_1 x_3 + x_2 x_3) + \frac{5}{3}(x_1 + x_2 + x_3) - 1.$$

When $n = 2$, which corresponds to the case of surfaces, we can give an expression which is easier to understand. Writing $U = X_1$ and $V = X_2$, to minimize the number of subscripts, given the monomial $U^h V^k$, with $h + k = d \le m$, we get
$$f((u_1, v_1), \ldots, (u_m, v_m)) = \frac{h!\,k!\,(m - (h + k))!}{m!} \sum_{\substack{I \cup J \subseteq \{1,\ldots,m\} \\ I \cap J = \emptyset \\ |I| = h,\ |J| = k}} \prod_{i \in I} u_i \prod_{j \in J} v_j.$$
For a concrete example involving two variables, if
$$p(U, V) = UV + U^2 + V^2,$$
we get
$$f((u_1, v_1), (u_2, v_2)) = \frac{u_1 v_2 + u_2 v_1}{2} + u_1 u_2 + v_1 v_2.$$

We can now prove the following theorem showing a certain equivalence between polynomials and multiaffine maps.

Theorem 4.5.2. There is an equivalence between polynomials in $\mathbb{R}[X_1, \ldots, X_n]$ of total degree $\le m$ and symmetric $m$-affine maps $f\colon (\mathbb{R}^n)^m \to \mathbb{R}$, in the following sense:
(1) If $f\colon (\mathbb{R}^n)^m \to \mathbb{R}$ is a symmetric $m$-affine map, then the function $p\colon \mathbb{R}^n \to \mathbb{R}$ defined such that $p(x_1, \ldots, x_n) = f(x, \ldots, x)$ for all $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$, is a polynomial (function) corresponding to a unique polynomial $p(X_1, \ldots, X_n) \in \mathbb{R}[X_1, \ldots, X_n]$ of total degree $\le m$.
(2) For every polynomial $p(X_1, \ldots, X_n) \in \mathbb{R}[X_1, \ldots, X_n]$ of total degree $\le m$, there is a unique symmetric $m$-affine map $f\colon (\mathbb{R}^n)^m \to \mathbb{R}$, such that $p(x_1, \ldots, x_n) = f(x, \ldots, x)$ for all $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$. Furthermore, if $p(X_1, \ldots, X_n)$ is a homogeneous polynomial of total degree exactly $m$, then $f$ is a symmetric multilinear map $f\colon (\mathbb{R}^n)^m \to \mathbb{R}$, and conversely.

Proof. Part (1) is trivial. To prove part (2), observe that the existence of some symmetric $m$-affine map $f\colon (\mathbb{R}^n)^m \to \mathbb{R}$ with the desired property is given by lemma 4.5.1, and the uniqueness of such an $f$ is given by lemma 4.4.1. $\square$
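The worked polar form of $p(X) = X^3 + 3X^2 + 5X - 1$ can be verified numerically. The following sketch is illustrative Python (function names are ours):

```python
# Polarizing each monomial of p(X) = X^3 + 3X^2 + 5X - 1 termwise gives
# f(x1, x2, x3) = x1 x2 x3 + (x1 x2 + x1 x3 + x2 x3)
#                 + (5/3)(x1 + x2 + x3) - 1.
def f(x1, x2, x3):
    return (x1 * x2 * x3
            + (x1 * x2 + x1 * x3 + x2 * x3)
            + (5.0 / 3.0) * (x1 + x2 + x3)
            - 1.0)

def p(x):
    return x ** 3 + 3 * x ** 2 + 5 * x - 1

# f is symmetric and agrees with p on the diagonal, so it is the 3-polar
# form of p.
for x in (-2.0, 0.0, 1.0, 3.5):
    assert abs(f(x, x, x) - p(x)) < 1e-9
assert abs(f(1.0, 2.0, 3.0) - f(3.0, 1.0, 2.0)) < 1e-12
```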

We can now relate the definition of a (parameterized) polynomial curve in polar form of degree (at most) $m$ given in definition 4.2.4 to the more standard definition (definition 3.1.1) given in section 3.1.

First, we show that an affine polynomial curve of polar degree $m$, in the sense of definition 4.2.4, is a standard polynomial curve of degree at most $m$, in the sense of definition 3.1.1, when $E$ is of finite dimension $n$. Assume that the affine polynomial curve $F$ of polar degree $m$ is defined by a polar form $f\colon \mathbb{A}^m \to E$. Let $r, s \in \mathbb{A}$, with $r < s$. Let us introduce the following abbreviation: we denote
$$f(\underbrace{r, \ldots, r}_{m-k}, \underbrace{s, \ldots, s}_{k})$$
as $f(r^{m-k} s^k)$. (This abbreviation can be justified rigorously in terms of symmetric tensor products; see section 11.1.) In section 4.3, we showed that
$$f(t_1, \ldots, t_m) = \sum_{k=0}^{m} p_k(t_1, \ldots, t_m)\, f(r^{m-k} s^k),$$
where the coefficient
$$p_k(t_1, \ldots, t_m) = \sum_{\substack{I \cup J = \{1,\ldots,m\} \\ I \cap J = \emptyset,\ |J| = k}} \prod_{i \in I} \frac{s - t_i}{s - r} \prod_{j \in J} \frac{t_j - r}{s - r}$$
of $f(r^{m-k} s^k)$ is a symmetric $m$-affine function. Thus, the curve $F$ is completely determined by the $m + 1$ points $b_k = f(r^{m-k} s^k)$, $0 \le k \le m$. If we let $(b_{k,1}, \ldots, b_{k,n})$ denote the coordinates of the point $b_k = f(r^{m-k} s^k)$ over an affine frame $(a_0, (\vec{e}_1, \ldots, \vec{e}_n))$ of $E$, we have
$$F(t) = f(t, \ldots, t) = a_0 + \sum_{i=1}^{n} \Bigl(\sum_{k=0}^{m} p_k(t, \ldots, t)\, b_{k,i}\Bigr) \vec{e}_i,$$
and since the $p_k(t, \ldots, t)$ are polynomials in $t$ of degree $\le m$, this shows that $F$ is a standard polynomial curve of degree at most $m$, in the sense of definition 3.1.1.

Conversely, using lemma 4.5.1, we show that a standard polynomial curve of degree at most $m$, in the sense of definition 3.1.1, is an affine polynomial curve of polar degree $m$, in the sense of definition 4.2.4. Indeed, if the curve is given by the polynomials $F_1(t), \ldots, F_n(t)$, every polynomial $F_i(t)$ corresponds to a unique $m$-polar form $f_i\colon \mathbb{R}^m \to \mathbb{R}$, and the map $f\colon \mathbb{A}^m \to E$, defined such that
$$f(t_1, \ldots, t_m) = a_0 + f_1(t_1, \ldots, t_m)\, \vec{e}_1 + \cdots + f_n(t_1, \ldots, t_m)\, \vec{e}_n,$$
is clearly a symmetric $m$-affine map such that $F(t) = f(t, \ldots, t)$. Thus, definition 4.2.4 is indeed equivalent to definition 3.1.1 when $E$ is of finite dimension.
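Computing the control points $b_k = f(r^{m-k} s^k)$ coordinatewise is mechanical once the polar forms of the coordinate polynomials are known. The following sketch is illustrative Python, for the parabola $F(t) = (t, t^2)$ on $[0, 1]$ (a hypothetical example of ours, not from the text): the coordinate polar forms are $f_1(t_1, t_2) = (t_1 + t_2)/2$ and $f_2(t_1, t_2) = t_1 t_2$.

```python
# Coordinatewise polar form of the parabola F(t) = (t, t^2), polar degree 2:
#   first coordinate t      -> f1(t1, t2) = (t1 + t2) / 2
#   second coordinate t^2   -> f2(t1, t2) = t1 * t2
def polar(t1, t2):
    return ((t1 + t2) / 2, t1 * t2)

# Control points b_k = f(r^{2-k} s^k) over [r, s] = [0, 1].
r, s = 0.0, 1.0
b0 = polar(r, r)  # f(r, r)
b1 = polar(r, s)  # f(r, s)
b2 = polar(s, s)  # f(s, s)
assert (b0, b1, b2) == ((0.0, 0.0), (0.5, 0.0), (1.0, 1.0))
```

Note that $b_1 = (1/2, 0)$ is not on the curve, consistent with the fact that a Bézier curve interpolates only its end control points.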

. m−i (1 − t)m−i−j tj . 1]). we get m−i ti = j=0 m−i However. in m terms of the Bernstein polynomials Bj (t) (over [0. . e m m We conclude this chapter by proving that the Bernstein polynomials B0 (t). how polar forms of polynomials simplify quite considerably the treatment of B´zier curves and splines. t) are polynomials of degree ≤ m. . we express each t . (m − i)!i! (m − (i + j))!j! (m − (i + j))!(i + j)! j!i! . where 0 ≤ i ≤ m. among other things. we just have to prove that i+j m i m−i j = m i+j i+j . j m−i ti Bj (t) = ti m−i (1 − t)m−i−j tj = j m Since Bi+j (t) = m (1 − t)m−(i+j) ti+j . MULTIAFFINE MAPS AND POLAR FORMS and since the pk (t. j=0 Multiplying both side by ti . Bm (t) i also form a basis of the polynomials of degree ≤ m . i to conclude that m−i (1 − t)m−(i+j) ti+j = j i+j i m i m−i ti Bj (t) = m Bi+j (t). . we know that Bj (t) = m−i ti Bj (t). . j=0 Using this identity for m − i (instead of m). We will see later on. 0 ≤ i ≤ m. this shows that F is a standard polynomial curve of degree at most m. j m−i (1 − t)m−(i+j) ti+j . . . and thus. .138 CHAPTER 4. we have m−i m−i Bj (t) = 1. For this. The veriﬁcation consists in showing that m! (m − i)! m! (i + j)! = . Recall that the Bernstein polynomials m Bj (t) form a partition of unity: m m Bj (t) = 1.

. j=0 we get m−i ti = j=0 i+j i m i m Bi+j (t). . t. Prove that for any a. Bm (t) also form a basis of the polynomials of degree ≤ m (since they generate this space. . . Since 1. vi vi Problem 3 (10 pts). . . . Prove the following statement assumed in the proof of lemma 4. and there are m + 1 of them). the m + 1 Bernstein m m polynomials B0 (t).4. ai1 + −1 . . . m ≥ 2..<ik ≤m → → f (a1 . am ∈ R. substituting i+j i m i 139 m−i ti Bj (t) = m Bi+j (t) in ti = m−i m−i ti Bj (t). . . aik + −k . 4. PROBLEMS which is indeed true! Thus. Show that the Bernstein polynomial m Bk (t) = m (1 − t)m−k tk k .3: m ∆vm · · · ∆v1 f (a) = (−1)m−k k=0 1≤i1 <. .. . . . am ). . . . t2 . for any a1 . b ∈ R.6 Problems Problem 1 (20 pts). k More generally.6. and n ≥ 0. . . . .1. the following identity holds for all n ≥ 0: n (a + b) = k=0 n n k n−k a b . tm form a basis of the polynomials of degree ≤ m. . prove the identity (a1 + · · · + am )n = k1 +···+km 0≤ki ≤n n! ak1 · · · akm . m k1 ! · · · km ! 1 =n Problem 2 (20 pts). .

where 0 ≤ i ≤ m. w) = 1 27F 24 u+v+w 3 − F (u + v − w) − F (u + w − v) − F (v + w − u) Problem 5 (20 pts).. where 1 ≤ k ≤ m.140 reaches a maximum for t = CHAPTER 4. . MULTIAFFINE MAPS AND POLAR FORMS k . Prove that the polar form f : A3 → E of F can be expressed as f (u.m} |I|=k i∈I . You may use Mathematica of any other language with symbolic facilities. Prove that the polar form f : A3 → E of F can be expressed as f (u. if k = 0 and m ≥ 0. otherwise. m and σk = m k m fk . Prove the following recurrence equations: m−1 m−1 σk + tm σk−1 1 0 m σk = if 1 ≤ k ≤ m. v. m Alternatively.. v. (i) Write a computer program taking a polynomial F (X) of degree n (in one variable X). 1]. Let F : A → E be a polynomial cubic curve. Plot the Bernstein polynomials Bi4 (t). over [0. 1i ). Let F : A → E be a polynomial cubic curve. Problem 7 (20 pts). and returning its m-polar form f .. 3(w − u)(u − v) 3(w − v)(v − u) 3(v − w)(w − u) Advice: There is no need to do horrendous calculations. m m Can you improve the complexity of the algorithm of question (i) if you just want to compute the polar values f (0m−i . 0 ≤ i ≤ 4. where n ≤ m. show that fk can be computed directly using the recurrence formula m fk = (m − k) m−1 k m−1 fk + tm fk−1 . Think hard! Problem 6 (50 pts). (ii) Let m fk = 1 m k ti I⊆{1. w) = (w − v)2 F (u) (w − u)2 F (v) (v − u)2F (w) + + . Estimate the complexity of your algorithm.. m Problem 4 (20 pts).

Problem 8 (20 pts). Give the matrix expressing the polynomials $1, t, t^2, t^3$ in terms of the Bernstein polynomials $B_0^3(t), B_1^3(t), B_2^3(t), B_3^3(t)$. Show that in general,
$$B_i^m(t) = \sum_{j=i}^{m} (-1)^{j-i} \binom{m}{j} \binom{j}{i}\, t^j.$$


Chapter 5

Polynomial Curves as Bézier Curves

5.1 The de Casteljau Algorithm

In this chapter, the de Casteljau algorithm presented earlier for curves of degree 3 in section 3.8 is generalized to polynomial curves of arbitrary degree $m$. In preparation for the discussion of spline curves, the de Boor algorithm is also introduced. The powerful method of subdivision is also discussed extensively. Some programs written in Mathematica illustrate concretely the de Casteljau algorithm and its version using subdivision. Finally, the formulae giving the derivatives of polynomial curves are given, and the conditions for joining polynomial curve segments with $C^k$-continuity are shown.

We saw in section 4.3, lemma 4.3.1, that an affine polynomial curve $F\colon \mathbb{A} \to E$ of degree $m$ defined by its $m$-polar form $f\colon \mathbb{A}^m \to E$ is completely determined by the sequence of $m + 1$ points $b_k = f(r^{m-k} s^k)$, $0 \le k \le m$, where $r, s \in \mathbb{A}$, $r \ne s$, and we showed that
$$f(t_1, \ldots, t_m) = \sum_{k=0}^{m} p_k(t_1, \ldots, t_m)\, f(r^{m-k} s^k),$$
where the coefficient
$$p_k(t_1, \ldots, t_m) = \sum_{\substack{I \cup J = \{1,\ldots,m\} \\ I \cap J = \emptyset,\ |J| = k}} \prod_{i \in I} \frac{s - t_i}{s - r} \prod_{j \in J} \frac{t_j - r}{s - r}$$
of $f(r^{m-k} s^k)$ is a symmetric $m$-affine function.

Typically, we are interested in a polynomial curve segment $F([r, s])$, specified by the sequence of its $m + 1$ control points, or Bézier control points, $b_0, \ldots, b_m$, where $b_k = f(r^{m-k} s^k)$. The de Casteljau algorithm gives a geometric iterative method for determining any point $F(t) = f(t, \ldots, t)$ on the curve segment $F([r, s])$ specified by these control points. Actually, the de Casteljau algorithm can be used to calculate any point $F(t) = f(t, \ldots, t)$ on the curve $F$ specified by the sequence of control points $b_0, b_1, \ldots, b_m$, for $t \in \mathbb{A}$ not necessarily in $[r, s]$.

The essence of the de Casteljau algorithm is to compute $f(t, t, \ldots, t)$ by repeated linear interpolations. The computation consists of $m$ phases, where in phase $j$, $m + 1 - j$ interpolations are performed. In phase $m$ (the last phase), only one interpolation is performed, and the result is the point $F(t)$ on the curve, corresponding to the parameter value $t \in \mathbb{A}$. What's good about the algorithm is that it does not assume any prior knowledge of the curve. All that is given is the sequence $b_0, b_1, \ldots, b_m$ of $m + 1$ control points, and the idea is to approximate the shape of the polygonal line consisting of the $m$ line segments $(b_0, b_1), (b_1, b_2), \ldots, (b_{m-1}, b_m)$.¹ The curve goes through the two end points $b_0$ and $b_m$, but not through the other control points. We strongly advise our readers to look at de Casteljau's original presentation in de Casteljau [23].

¹ Actually, in the more general case, some of the points $b_k$ and $b_{k+1}$ may coincide, but the algorithm still works fine.

We already discussed the de Casteljau algorithm in some detail in section 3.8. Let us review this case (polynomial cubic curves). Since $f$ is symmetric, we can think of its $m$ arguments as a multiset rather than a sequence, and thus we can write the arguments in any order we please. Using this fact, the computation of the point $F(t)$ on a polynomial cubic curve $F$ can be arranged in a triangular array, as shown below, where column $j$ corresponds to phase $j$ of the algorithm:

    f(r,r,r)
              f(r,r,t)
    f(r,r,s)            f(r,t,t)
              f(r,t,s)            f(t,t,t)
    f(r,s,s)            f(t,t,s)
              f(t,s,s)
    f(s,s,s)

The above computation is usually performed for $t \in [r, s]$, but it works just as well for any $t \in \mathbb{A}$, even outside $[r, s]$. When $t$ is outside $[r, s]$, we usually say that $F(t) = f(t, t, t)$ is computed by extrapolation.
s) f (s. s. even outside [r. b1 . All that is given is the sequence b0 . t2 .1 The curve goes through the two end points b0 and bm . . and t. s]. s) The above computation is usually performed for t ∈ [r. s. we usually say that F (t) = f (t. . . r. All we have to do is to use tj during phase j. b1 . and the result is the point F (t) on the curve. the computation of the point F (t) on a polynomial cubic curve F can be arranged in a triangular array. t. b1 ). some of the points bk and bk+1 may coincide. s) f (r. t) by repeated linear interpolations. Let us review this case (polynomial cubic curves). as shown 1 Actually. . . but the algorithm still works ﬁne. and the idea is to approximate the shape of the polygonal line consisting of the m line segments (b0 . bm . s) f (r. t) f (r. (b1 . are arbitrary: f (t. . Since f is symmetric. The essence of the de Casteljau algorithm is to compute f (t. r. as shown below: 1 f (r. s) f (t. t) 2 3 The de Casteljau algorithm can also be used to compute any polar value f (t1 . As we observed. t3 ) (which is not generally on the curve). t. such as the fact that the curve segment F ([r. t. . . s]. bm of m + 1 control points. m + 1 − j interpolations are performed. . only one interpolation is performed. but not through the other control points. The following diagram shows an example of the de Casteljau algorithm for computing the point F (t) on a cubic. s]. We already discussed the de Casteljau algorithm in some detail in section 3. bm ). but it works just as well for any t ∈ A. and thus. s. . t.8. . s. r) f (r. b2 ). . s]) may not hold for larger curve segments of F . When t is outside [r. where r. t) is computed by extrapolation. POLYNOMIAL CURVES AS BEZIER CURVES in the more general case. r) f (t. s]) is contained in the convex hull of the control polygon (or control polyline) determined by the control points b0 .. r.144 ´ CHAPTER 5. We strongly advise our readers to look at de Casteljau’s original presentation in de Casteljau [23]. 
. . In phase m (the last phase). we can think of its m arguments as a multiset rather than a sequence. What’s good about the algorithm is that it does not assume any prior knowledge of the curve. (bm−1 . . t. we can write the arguments in any order we please. The computation consists of m phases. using the fact that f is symmetric and m-aﬃne. s) f (t. certain properties holding for the curve segment F ([r. corresponding to the parameter value t ∈ A. where in phase j. .

f (0. 6). t2 ). s). t2 . s) f (r. s) Figure 5. s) f (r. t. r. . and f (6. There are several possible computations.1. r. . The diagram below shows the computation of the polar values f (0. . s. s. s) f (r. t) f (t. t) f (r. t. f (t3 . . r. f (0. t) F (t) = f (t. t3 . . t3 ): The general case for computing the point F (t) on the curve F determined by the sequence of control points b0 . s) f (r. This turns out to be very useful. s. 6). f (0. 0. . . t3 ). t2 . s) f (s. s) 145 F (r) = f (r. the de Casteljau algorithm can also be used for computing other control points. . 6. t2 . t2 ): The polar value f (t1 . The following diagram shows an example of the de Casteljau algorithm for computing the polar value f (t1 . t2 . THE DE CASTELJAU ALGORITHM f (r. . 6. t1 . t1 . 0. t2 . r. . t2. r) F (s) = f (s. t3 ) on a cubic. r. i j k 2 3 f (t1 . r. . 6). r. r) f (r. . bm . f (0. where r = 0. t) f (s. s) . We will abbreviate f (t. . t. where bk = f (r m−k sk ). 6). t1 ). f (t1 . s. and t3 = 4. t3). t1 . t1 . s) f (r. t2 .1: The de Casteljau algorithm below in the case where m = 3: 1 f (r. s) f (t1 . t. . s. t2 = 3. t1 ) f (r. is shown below. . s) Since control points are polar values.5. . t3 ) is also obtained by computing the polar values f (0. r. s. t1 = 2. and f (6. s. s = 6. t. r) f (t1 . t3 ) f (t1 .

6) Figure 5. 6) f (0. 6. 0) F (6) = f (6. 6.146 ´ CHAPTER 5. for t1 = 2. t2 ) f (t1 . t1 . 6) f (0. 0. t2 . t1. 0) F (6) = f (6. 6. t2 = 3. t3 = 4 f (0. 0.2: The de Casteljau algorithm. 6) f (t1 . t3 = 4 . t2 = 3. t2 . t2 ) f (0. 6) f (0. for t1 = 2. 6. 6. t3 ) f (0. 0. t3 ) f (t1 . 6) f (0. 6) F (0) = f (0. t1 . t3. 6. 0. t3 ) f (6. 6) f (0. t2 . t3 ) f (t3 . 0. 0. 6) f (6. t3) f (0. POLYNOMIAL CURVES AS BEZIER CURVES f (0. 6) Figure 5. t2 .3: The de Casteljau algorithm. t1) F (0) = f (0.

. THE DE CASTELJAU ALGORITHM 147 as f (ti r j sk ). by the interpolation step f (tj r m−i−j si ) = 0 f (r m ) f (r m−1 s) . f (tj−1 r m−j s) . f (tj r m−i−j si ) if 1 ≤ j ≤ m. f (tj r m−i−j si ) . f (tm−1 s) m−j f (tm ) rs ) f (tj sm−j ) f (tj−1 sm−j+1 ) . where i + j + k = m.j = s−t s−r bi. j−1 m−i−j+1 i f (t r s) f (tj−1r m−i−j si+1 ) f (tm−1 r) .. Then... 0 ≤ i ≤ m − j. let us deﬁne the following points bi.. . f (t j−1 s−t s−r f (tj−1r m−i−j+1 si ) + j −1 t−r s−r j f (tj−1r m−i−j si+1 ). for 1 ≤ j ≤ m. 0 ≤ i ≤ m − j..j = bi if j = 0. . ..j−1 + t−r s−r bi+1.... we have the following equations: bi.. used during the computation: bi.1. 0 ≤ i ≤ m. f (tj−1 r m−j+1) f (tj r m−j ) ..j−1 .5..j ... f (rsm−1 ) f (tsm−1 ) f (sm ) In order to make the above triangular array a bit more readable... m−1 m 1 f (tr m−1 ) 2 .. The point f (tj r m−i−j si ) is obtained at step i of phase j.
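This triangular computation is easy to program. The book's implementation below is in Mathematica; purely as an illustration, here is the same recurrence bi,j = ((s − t)/(s − r)) bi,j−1 + ((t − r)/(s − r)) bi+1,j−1 as a Python sketch (the function names are ours, not the book's):

```python
def lerp(p, q, r, s, t):
    """Affine interpolation of points p and q with respect to the frame (r, s)."""
    a = (s - t) / (s - r)
    b = (t - r) / (s - r)
    return tuple(a * pc + b * qc for pc, qc in zip(p, q))

def de_casteljau(controls, r, s, t):
    """Compute F(t) = b_{0,m} from the control points b_k = f(r^{m-k} s^k)."""
    b = [tuple(p) for p in controls]
    m = len(b) - 1
    for j in range(1, m + 1):            # phase j performs m + 1 - j interpolations
        b = [lerp(b[i], b[i + 1], r, s, t) for i in range(m + 1 - j)]
    return b[0]

# A cubic over [0, 1]; endpoint interpolation gives F(0) = b0 and F(1) = b3.
pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(pts, 0.0, 1.0, 0.0))   # (0.0, 0.0)
print(de_casteljau(pts, 0.0, 1.0, 0.5))   # (2.0, 1.5)
```

Since only the frame values r and s enter the weights, the same code evaluates F(t) for t outside [r, s] as well, by extrapolation.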

(b2. .0 bm−1.m−2 . . (bm−1 .j .j−1 bm−k−1. b2 ). .1 b1.j . b3.0 ).0 b0. bm−k−j. Note that the shells are nested nicely. (b2 .0 When r ≤ t ≤ s. bm−2. . .m−k . bi.0 ). b1 ). . (b1.j−1 bi+1. . . (b3. .0 ) (b0. bm−j. (b1. bm−1. j −1 j .m .1 ). bm−j+1.j 1 . the algorithm consists of a diagram consisting of the m polylines (b0. bm−k−j+1. bk. b2. m . (b1.j−1 . b1.1 . POLYNOMIAL CURVES AS BEZIER CURVES Such a computation can be conveniently represented in the following triangular form: 0 b0. . . b2. each interpolation step computes a convex combination. . geometrically. b3.j−1 .1 . . b1.0 . b1. they form the de Casteljau diagram. b4. b2. b0.0 ).j lies between bi. b4 ). . . (b1 .m−k b0. .1 . . b1.j−1 .j . ...m−1 ) called shells. (bm−3. we still obtain m shells and a de Casteljau diagram. . . and bi.2 ) .m−2 ) (b0. m− k .2 ). b2.1 ). but the shells are not nicely nested.148 ´ CHAPTER 5.0 .. .0 .1 ) (b0. . .j−1 b0.. s]. . . .2 . . .0 . b0.2 . .1 ).j−1 and bi+1.0 . bm−1.2 ).. (bm−1..m−1 .0 . .0 ).. In this case. and with the point b0. (bm−2. When t is outside [r.m−2 .m . (b1. (b0.1 bm−k. (b2. .1 . bi. . bm ) is also called a control polygon of the curve.. bm... . . .m−2 ).1 bm. (b3 .2 .0 . The polyline (b0 . . . b3 ). . b1.

1. affine basis [r. . where the polynomials m Bk [r. s]. on the unique polynomial curve F : A → E of degree m determined by the sequence of control points b0 .3. . .r) p + (t . . . without actually using the Bernstein polynomials. s]). Note that the parameter t can take any value in A. We will sometimes denote the B´zier curve determined by the sequence of control points b0 . s]. . . and we give below several versions in Mathematica. .r) q. s]). with respect to the aﬃne frame [r. . the ﬁrst one being the shells of the de Casteljau diagram. . . . . THE DE CASTELJAU ALGORITHM 149 By lemma 4. or B[r. s].r. and deﬁned with respect e to the interval [r. and the second one being F (t) itself. s]. s](t) = m k s−t s−r m−k t−r s−r k are the Bernstein polynomials of degree m over [r. and the point e corresponding to the parameter value t as B b0 .q_List. s](t) bk . *) . The function decas computes the point F (t) on a polynomial curve F speciﬁed by a control polygon cpoly (over [r. The function badecas simply computes the point F (t) on a polynomial curve F speciﬁed by a control polygon cpoly (over [r. [r. . in the sense that f (r. r .s_. for a value t of the parameter. we are assuming that t ∈ [r. and not necessarily only in [r. s. bm . but also the shells of the de Casteljau diagram. q (* w. . . The result is the point F (t).t)/(s .t. [r. . s].r_. . bm . . s] . s]. . The output is a list consisting of two sublists. The de Casteljau algorithm is very easy to implement. . s) = bi .t_] := (s . the de Casteljau algorithm provides an iterative method for computing F (t). bm . and interpolating value t *) lerp[p_List.1. bm . The point F (t) is given by the formula m F (t) = k=0 m Bk [r. . . These functions all use a simple function lerp performing aﬃne interpolation between two points p1 and p2. . m−i i where f is the polar form of the B´zier curve. This can be advantageous for numerical stability. but when we refer to the B´zier e curve segment over [r. 
we have an explicit formula giving any point F (t) associated with a parameter t ∈ A. s]. (* Performs general affine interpolation between two points p. s].5.r)/(s . as B b0 . s] (t). Thus.
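The Bernstein form can also be evaluated directly. As an illustration (ours, not the book's Mathematica code), a Python sketch computing F(t) as the Bernstein-weighted combination of the control points:

```python
from math import comb

def bernstein(m, k, r, s, t):
    """Bernstein polynomial B_k^m over [r, s], evaluated at t."""
    u = (t - r) / (s - r)
    return comb(m, k) * (1 - u) ** (m - k) * u ** k

def bezier_point(controls, r, s, t):
    """F(t) = sum_k B_k^m[r,s](t) b_k."""
    m = len(controls) - 1
    w = [bernstein(m, k, r, s, t) for k in range(m + 1)]
    dim = len(controls[0])
    return tuple(sum(wk * p[d] for wk, p in zip(w, controls)) for d in range(dim))

pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(bezier_point(pts, 0.0, 1.0, 0.5))   # (2.0, 1.5), matching de Casteljau
```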

b = {}. j}. r. r. (m = Length[bb] . lerp[bb[[i]].1]]. res := Append[lseg. lseg = {}. If[i > 1. s_. b = {}. t_] := Block[ {bb = {cpoly}. s. i.j + 1} ]. m} ]. s. i. {j. 1. bb[[i+1]]. r_. (* computes the point F(t) and the line segments involved in computing F(t) using the de Casteljau algorithm *) decas[{cpoly__}. b = {}. res ) ]. t]]. j. POLYNOMIAL CURVES AS BEZIER CURVES (* (* (* computes a point F(t) on a curve using the de Casteljau algorithm *) this is the simplest version of de Casteljau *) the auxiliary points involved in the algorithm are not computed *) badecas[{cpoly__}. 1.1. {j. 1. {b[[i . res}. {i. The following function pdecas creates a list consisting of Mathematica line segments and of the point of the curve. {i. b = {}. bb[[1]] ) ]. r_. (m = Length[bb] . m. m} ]. t]]. 1. Do[ Do[ b = Append[b. t_] := Block[ {bb = {cpoly}. m. m . bb = b. Do[ Do[ b = Append[b. b[[i]]}]] . . lerp[bb[[i]]. lseg = Append[lseg. m .j + 1} ]. bb[[1]]]. ready for display. bb = b. s_.1.150 ´ CHAPTER 5. bb[[i+1]].

tj−1 r m−j s) .. tm ) is shown below. . ll = {}. .. tj−1 rsm−j ) f (t1 .. tj sm−j ) f (t1 .0]. tj−1r m−j+1 ) f (t1 .. .. . . l1.1. r. . -1]. . THE DE CASTELJAU ALGORITHM (* this function calls decas. s_. res = Append[ll. ll. Do[ edge = res[[i]]. tj−1 r m−i−j si+1 ) . . . f (t1 . f (t1 .. l1} ]. ll = Append[ll.01]. 0 f (r m ) f (r m−1 s) . . f (t1 . tm ) ) . . pt = Last[res].. tj r m−j ) . 1. t_] := Block[ {bb = {cpoly}. f (rsm−1 ) f (t1 sm−1 ) f (sm ) m−j+1 151 1 f (t1 r m−1 ) j−1 j m . s. The general computation of the polar value f (t1 . ... f (t1 . with colors *) pdecas[{cpoly__}. {RGBColor[1. . i. .. edge}. Point[pt]}]. . PointSize[0.5. tj−1 r m−i−j+1 si ) f (t1 . . l1 = Length[res].0. res. . f (t1 .. t]. ... tj−1 s .. {i. . . pt. f (t1 . res ]. and computes the line segments in Mathematica. . Line[edge]]. . res = decas[bb. res = Drop[res. . r_. . tj r m−i−j si ) .

b = {}. .j = bi if j = 0. 0 ≤ i ≤ m − j. for 1 ≤ j ≤ m. bb[[i+1]]. The point f (t1 . . t[[j]]]]. tj r m−i−j si ) is obtained at step i of phase j. . . tj r m−i−j si ) = s − tj s−r f (t1 . f (t1 . s). s. . . lseg = {}. . . . m−i i as f (r m−i si ). tj r m−i−j si ) if 1 ≤ j ≤ m. {tt__}. j m−i−j i as f (t1 ..j−1 + tj − r s−r bi+1. tm) *) the input is a list tt of m numbers *) gdecas[{cpoly__}. . . m. . r_. 0 ≤ i ≤ m. and 0 ≤ i ≤ m − j. r. . .152 We abbreviate ´ CHAPTER 5. (* (* (* computes a polar value using the de Casteljau algorithm *) and the line segments involved in computing f(t1. . we abbreviate f (r. If[i > 1. . . tj−1 r m−i−j si+1 ). . {b[[i . tj−1 r m−i−j+1 si ) + tj − r s−r f (t1 . s. . . t = {tt}. r. . r. . res}.. i.j used during the computation as bi. deﬁning the points bi. and in the case where j = 0. . The computation of polar values is implemented in Mathematica as follows.. where 0 ≤ i ≤ m.j = s − tj s−r bi. . Do[ Do[ b = Append[b. where 1 ≤ j ≤ m. j. r. . . by the interpolation step f (t1 . .1]]. . tj .j−1 . s_] := Block[ {bb = {cpoly}. we have the following more readable equations: bi. s. . .1. lerp[bb[[i]]. tj r m−i−j si ). (m = Length[bb] . . . s). 0 ≤ i ≤ m − j. . Again. POLYNOMIAL CURVES AS BEZIER CURVES f (t1 . b[[i]]}]] . lseg = Append[lseg.
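The same triangular computation yields arbitrary polar values when phase j interpolates at tj instead of at a single t, as in the Mathematica function gdecas. A Python sketch of the idea (names ours):

```python
def blossom(controls, r, s, ts):
    """Polar value f(t1, ..., tm) from the control points b_k = f(r^{m-k} s^k).
    Phase j interpolates at t_j = ts[j - 1]."""
    b = [tuple(p) for p in controls]
    m = len(b) - 1
    assert len(ts) == m
    for j in range(1, m + 1):
        t = ts[j - 1]
        a, c = (s - t) / (s - r), (t - r) / (s - r)
        b = [tuple(a * pc + c * qc for pc, qc in zip(b[i], b[i + 1]))
             for i in range(m + 1 - j)]
    return b[0]

pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
# Symmetry of the polar form: permuting the arguments gives the same point.
print(blossom(pts, 0.0, 1.0, [0.2, 0.5, 0.7]))
print(blossom(pts, 0.0, 1.0, [0.7, 0.2, 0.5]))   # agrees with the previous point, up to roundoff
```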

. and then with respect to µ. which is obvious. since both of these points coincide with the point R = (1 − λ)(1 − µ)A + (λ + µ − 2λµ)B + λµC. . and that aﬃne maps preserve aﬃne combinations. . 1. C be three points. . . it is clear that it yields a multiaﬃne map. . h(bm ). Pµ = (1 − µ)A + µB. . The above result is closely related to a standard result known as Menela¨ s’s theorem. B. . bm . . s] ) = B [h(b0 ). . . m . and let Pλ = (1 − λ)A + λB. . we claim that Qλ = (1 − λ)B + λC. This means that if an aﬃne map h is applied to a B´zier curve F e speciﬁed by the sequence of control points b0 . is a symmetric multiaﬃne map. res := Append[lseg. This can be expressed as h(B b0 . . . is identical to the curve h(F ). and then with respect to λ. This is because. bm . tm ) remains invariant if we exchange any two arguments. res ) ].1. and also Then. obtaining a curve h(F ). . . we only need to show that f (t1 . yields the same result a interpolating ﬁrst with respect to µ. . s]] . as follows. . bm . . Since every permutation is a product of transpositions. Thus. . [r. {j. Let A. . . . (1 − µ)Pλ + µQλ = (1 − λ)Pµ + λQµ .5. interpolating ﬁrst with respect to λ. . . . bb = b. images e of the original control points b0 . . Qµ = (1 − µ)B + µC. . then the B´zier curve F ′ determined by the sequence of control points h(b0 ). {i. 153 Remark: The version of the de Casteljau algorithm for computing polar values provides a geometric argument for proving that the function f (t1 . e Aﬃne Invariance.j + 1} ]. . tm ) that it computes. 1. . . b = {}. [r. What is not entirely obvious. is symmetry. u We now consider a number of remarkable properties of B´zier curves. We can establish this fact. THE DE CASTELJAU ALGORITHM . points on a B´zier curve speciﬁed by a sequence of control points e b0 . . m} ]. bb[[1]]]. bm are obtained by computing aﬃne combinations. . Since the algorithm proceeds by aﬃne interpolation steps. h(bm ).

. This fact is basically obvious. . bm . . . . . and then contruct the new B´zier curve. . . If we want to move a curve using an aﬃne map. bm . rather than applying the aﬃne map to all the points computed on the original curve. bm . .4: Symmetry of biaﬃne interpolation C This property can be used to save a lot of computation time. [r. This can be expressed as B b0 . bm . . . s] (t) = B bm . . over the interval [r. Invariance under aﬃne parameter change. .154 ´ CHAPTER 5. This is because. with the same sequence of control points b0 . and left to the reader. . . is contained within the convex hull of the control points b0 . . . . 1]. we simply apply the aﬃne map to the control points. . [0. . The B´zier curve F (t) speciﬁed by the see quence of control points b0 . The segment of B´zier curve F (t) speciﬁed by the sequence of e control points b0 . bm . . . 1] t−r r−s . . . . . . bm . [r. . bm and deﬁned over the interval [r. h] (t − r). and B b0 . . . . b0 . is the same as the B´zier e t−r curve F r−s . [s. . . s]. POLYNOMIAL CURVES AS BEZIER CURVES B Pµ R Pλ Qλ Qµ A Figure 5. . [r. r] (t). Convex Hull Property. over [0. . when r ≤ t ≤ s. r + h] (t) = B b0 . who can also verify the following two properties: B b0 . [0. bm . . s] (t) = B b0 . This is usually a lot cheaper than moving e every point around. . . s] (r < s). all aﬃne combinations involved in computing a point F (t) on the curve segment are convex barycentric combinations. bm . . . .
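Affine invariance is easy to confirm numerically: applying an affine map h to the control points and then evaluating gives the same point as evaluating first and then applying h. A small Python check (ours; the map h is an arbitrary example):

```python
def de_casteljau(controls, r, s, t):
    """Point F(t) on the curve determined by the control points, over [r, s]."""
    b = [tuple(p) for p in controls]
    while len(b) > 1:
        b = [tuple(((s - t) * pc + (t - r) * qc) / (s - r)
                   for pc, qc in zip(b[i], b[i + 1]))
             for i in range(len(b) - 1)]
    return b[0]

def h(p):
    """An arbitrary affine map of the plane: a linear part plus a translation."""
    x, y = p
    return (2.0 * x + y + 1.0, x - y + 3.0)

pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
t = 0.37
p1 = h(de_casteljau(pts, 0.0, 1.0, t))               # map the point on the curve
p2 = de_casteljau([h(p) for p in pts], 0.0, 1.0, t)  # map the control points first
print(p1, p2)   # the same point, up to floating-point roundoff
```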

. . . . . s] (r + s − t) = B bm . r) and b1. the tangent to the B´zier curve at the point b0 is the line determined by b0 e and b1 . where f is the polar form of the original curve F (t) speciﬁed by the sequence of control points b0 . the tangent at the point bm is the line determined by bm−1 and bm (provided that these points are distinct). the tangent at the current point F (t) determined by the parameter t. s]. . This is because it can be shown i that the Bernstein polynomial Bin reaches its maximum at t = n . . that when b0 and b1 are distinct. r. r. . Determination of tangents. . bm . . Similarly. . over the interval [r. . This can be seen easily using the symmetry of the Bernstein polynomials. s]. . . s) = f (s. m−i i m−i i and since this curve is unique. s). . passes trough the points b0 and bm . s] (t). . . bm . . It will be shown in section 5. whose polar form is g(t1 . . . . . [r. . . . . Furthermore. satisﬁes the conditions g(r. . bm is that same line. The B´zier curve F (r + s − t) speciﬁed by the sequence of control points e b0 . . . . . Symmetry. . bm . . . is equal to the B´zier curve F ′ (t) speciﬁed by the e sequence of control points bm . . . .4. is determined by the two points b0. t.1. . over the interval [r. . bm . m−1 m−1 given by the de Casteljau algorithm. . e This is obvious since the points on the curve are obtained by aﬃne combinations. it is indeed F ′ (t). This can be expressed as B b0 . b0 . then the B´zier curve determined by the sequence of control points b0 . . . . . . . . . . . tm ) = f (r + s − t1 . Given a B´zier curve F speciﬁed by a sequence of control points e b0 . the curve is most aﬀected i around the points whose parameter value is close to n . m−1 = f (t. is to observe that the curve F (r + s − t). . over the interval [r. If the points b0 . Endpoint Interpolation. t. m−1 = f (t. THE DE CASTELJAU ALGORITHM 155 This property can be used to determine whether two B´zier curve segments intersect e each other. b0 . . . . .5. . s. 
. Linear Precision. . . . . . . Another way to see it. . . . [r. . . Pseudo-local control. s]. r + s − tm ). The B´zier curve F (t) speciﬁed by the sequence of control points e b0 . r) = bm−i . bm . if some control point bi is moved a little bit. s. bm are collinear (belong to the same line). .

. and the computation can be conveniently represented in the following triangular form: 0 b0. . subdivision can be used to approximate a curve using a polygon. . Observe that the two diagonals b0. ..m . .j . As a consequence. .j−1 .0 Let us now assume that r < t < s. . . bm . .m .. b0. As we will see. . . .0 b0. for every hyperplane H. s] and the hyperplane H. the number of intersections between the B´zier curve segment F [r. is less than or equal e to the number of intersections between the control polygon determined by b0 . b0.. and an interval [r.m on the B´zier e curve.1 . bm−1. .2 Subdivision Algorithms for Polynomial Curves We now consider the subdivision method. .. bi. the subdivision property. .j 1 . . . . over [r. 5. bm−j+1. POLYNOMIAL CURVES AS BEZIER CURVES Variation Diminishing Property.1 bm−k. we saw how the de Casteljau algorithm gives a way of computing the point B b0 . bm−k−j.j−1 bm−k−1. . b0. . and the hyperplane H. bm−j.m−k . . .j−1 bi+1. bi. ..0 . . . bm−k−j+1. . Given a B´zier curve F speciﬁed by a sequence of e control points b0 . m . . s]. b0. .j . and the convergence is very fast.j−1 b0.0 bm−1. . . . . . Given a sequence of control points b0 . .156 ´ CHAPTER 5. . .j−1 . bm . bk. . . bm .. .j . . . bm .1 b1. b0.0 .j .1 bm. .. m− k .0 .m−k b0. We will prove the variation diminishing property as a consequence of another property. a convex control polygon corresponds to a convex curve.. . . s] (t) = b0. . [r. j −1 j . for every t ∈ A. . s].

. (i + 1)/m]. . b0. Assuming for 1 1 simplicity that r = 0 and s = 1. g is the polar form associated with the B´zier curve e speciﬁed by the sequence of control points b0. For the n following lemma. s].1. .0 .0 . since f and g agree on the sequence of m + 1 points b0. . bm .0 . . Also. Clearly.j . we can in turn subdivide each of the two segments. [t. .m over [r. . t] and B b0. b0. . s] (u) = B b0. . . if we ﬁrst choose t = 2 . 0 ≤ k ≤ 2n .j . 157 Indeed. and h is the polar form associated with the B´zier curve speciﬁed by the sequence of control points e b0. SUBDIVISION ALGORITHMS FOR POLYNOMIAL CURVES and b0. . we have 2n B´zier segments. It seems intuitively clear that the polygon obtained by repeated subdivision converges to the original curve segment. . . bm.0 over [t. . bm. .0 . s] (u). b0. . b0. . . This is indeed the case.m . . . . . . . . . . . . . given a control polygon Π with m sides. bm. . we assume that Π is viewed as the piecewise linear curve deﬁned such that over [i/m.0 . For t ∈ [r. there are 2n + 1 points on the original curve. [r. .m . .j . and f and h agree on the sequence of m + 1 points b0. .m . .0 . .j . . bm−j. Among these points. . . each consist of m + 1 points. bm. bm−j. and 2n control subpolygons. . whose union e forms a polygon Πn with m2n + 1 nodes. form a subdivision of the curve segment B b0 . bm−j. we assume that E is a normed aﬃne space. [t. bm . . . [r. Π(u) = (i + 1 − m u)ai + (m u − i)ai+1 . . if f is the polar form associated with the B´zier curve speciﬁed by the sequence e of control points b0 .j . . by lemma 4. with 0 ≤ i ≤ m − 1. . . s] as above. .j . . for all u ∈ A. [r. . . . . and then t = 4 and t = 3 . . s] . . s] . bm. . .j . . t] (u) = B b0. . bm−j. .j .m .0 . . . . bm−j. we say that the two curve segments B b0. . .m−1 . . . .5. the points corresponding to parameter values t = 2k . . and so 4 on. . bm over [r. [r. .3. . . . . t]. . b1. .2. . . . .m . b0. . and continue recursively in this manner. 
We claim that we have B b0 . and the convergence is in fact very fast.m . .0 . s]. b0. b0.m . b0. at the n-th step. .m .j . . . we have f = g = h.

. 0≤u≤1 2 for some constant C > 0 independent of n. . which proves the lemma. bm . . . bm. b1. . . 1] .j . . the polygon obtained by subdivision becomes indistinguishable from the curve.|Π|=m 2 max 0≤u≤1 where Π is any subpolygon of Πn consisting of m sides. The following Mathematica function returns a pair consisting of B[r. . for a B´zier curve segment F .0 . . whose endpoints are on the curve F . b0. b0. [0. . . . . this yields a fast method for plotting the curve segment! The subdivision method can easily be implemented. . the distance between the B´zier curve e segment F and the control polygon satisﬁes the inequality 0≤u≤1 −− −→ −−− max Π(u)F (u) ≤ max 0≤i. The above lemma is the key to eﬃcient methods for rendering polynomial (and rational) curves. bm . . bm . and since only convex combinations are involved. . for every t ∈ A. Let M = max bi bi+1 . −− −→ by the triangle inequality. by the triangle inequality. from above. . converges uniformly to the curve segment F = B b0 .2. . and as B[t.1. the polygon Πn obtained after n steps of subdivision. due to the screen resolution. . we denote as B[r. Then. 0≤u≤1 Π⊆Πn .j≤m − → bi bj ≤ m max 0≤i≤m−1 −− −→ bi bi+1 . . .1 . Proof. bm ) over an aﬃne frame [r.158 ´ CHAPTER 5. bm−j. and thus. since F e is contained in the convex hull of the control polygon.t] and B[t. e over the interval [0. . s].m . . . the maximum length of the sides of M the polygon Πn is 2n . from an input control polygon cpoly. Given a polynomial curve F deﬁned by a control polygon B = (b0 . .m . Since we are subdividing in a binary fashion.t] the control polygon b0. for the original control polygon with control points b0 . . . . . 1]. .s] the control polygon b0.s] . b0. . we have −− − − − − −→ max Πn (u)F (u) ≤ −− −→ −−− mM max Π(u)F (u) ≤ n . if we subdivide in a binary fashion. After a certain number of steps.j . . If Π is any control polygon with m + 1 nodes. . Given a B´zier curve speciﬁed by a sequence of m+1 control points b0 . 
in the sense that −− − − − − −→ C max Πn (u)F (u) ≤ n .0 . POLYNOMIAL CURVES AS BEZIER CURVES Lemma 5.m−1 .

r..j + 1]]]. the function makedgelis makes a list of Mathematica line segments from a list of control polygons.1.. t)) and (f(t. Do[ lpoly = Join[lpoly. b[[m .. t = (r + s)/2. res ) ]. s. SUBDIVISION ALGORITHMS FOR POLYNOMIAL CURVES (* (* (* Performs a single subdivision step using the de Casteljau algorithm Returns the control poly (f(r. l.2. s_] := Block[ {cpoly = {poly}.r. In order to approximate the curve segment over [r. t_] := Block[ {bb = {cpoly}. (m = Length[bb] . t]]. r_. s_. i}. . bb = b. The function subdiv performs n calls to subdivstep. res}. Do[ Do[ b = Append[b. s. in order to display the resulting curve... ld = Prepend[ld. m . res := Join[{ud}..{ld}]. s)) 159 *) *) *) subdecas[{cpoly__}. ld = {bb[[m + 1]]}. ud = Append[ud. j. l} ]. lpoly ) *) . t. bb[[i+1]]. lerp[bb[[i]]. . lpoly = {}. m. ld = {}.. The function subdivstep subdivides each control polygon in a list of control polygons. s].j + 1} ].. t. 1. (l = Length[cpoly]. {i. . ud = {}. we recursively apply subdivision to a list consisting originally of a single control polygon. 1.. r. subdecas[cpoly[[i]]. Uses t = (r + s)/2 *) subdivstep[{poly__}. ud = {bb[[1]]}.5. 1.. i. r_. Finally. b[[1]]]... t]] . {j. b = {}. (* subdivides each control polygon in a list of control polygons (* using subdecas. b = {}. {i. m} ]. t. s..

l1. ´ CHAPTER 5. (0. POLYNOMIAL CURVES AS BEZIER CURVES (* calls subdivstep n times *) subdiv[{poly__}. res = Append[res. 30). sl[[j]]}]]]. The subdivision method is illustrated by the following example of a curve of degree 4 given by the control polygon cpoly = ((0. (10. Do[If[j > 1. newsl = {poly}. s].160 ]. sl. Do[ sl = newsl[[i]]. −20). r. 5. l2 = Length[sl]. (5. 6. −4). (* To create a list of line segments from a list of control polygons *) makedgelis[{poly__}] := Block[ {res. Line[{sl[[j-1]]. for n = 1. {j. The following 6 pictures show polygonal approximations of the curve segment over [0. . 3. {i. (l1 = Length[newsl]. s_. newp ) ]. res = {}. (10. n} ]. −4)). n_] := Block[ {pol1 = {poly}. j. res ) ]. l2} ]. 4. 1. l1} ]. l2}. i. ( newp = {pol1}. newp = {}. r_. 1. i}. 1. 30). {i. 2. Do[ newp = subdivstep[newp. 1] using subdiv.
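A single subdivision step keeps the two boundary diagonals of the de Casteljau triangle, which are precisely the control polygons of F over [r, t] and over [t, s]. For illustration, here is one step as a Python sketch (ours; the book's subdecas is the Mathematica counterpart):

```python
def subdivide(controls, r, s, t):
    """One de Casteljau pass at t; return the control polygons of F
    over [r, t] (left diagonal) and over [t, s] (right diagonal)."""
    b = [tuple(p) for p in controls]
    m = len(b) - 1
    a, c = (s - t) / (s - r), (t - r) / (s - r)
    left, right = [b[0]], [b[m]]
    for _ in range(m):
        b = [tuple(a * pc + c * qc for pc, qc in zip(b[i], b[i + 1]))
             for i in range(len(b) - 1)]
        left.append(b[0])
        right.insert(0, b[-1])
    return left, right

pts = [(0.0, -4.0), (10.0, 30.0), (5.0, -20.0), (0.0, 30.0), (10.0, -4.0)]
left, right = subdivide(pts, 0.0, 1.0, 0.5)
print(left[-1] == right[0])   # True: the two polygons share the point F(1/2)
```

Applying subdivide recursively to both halves produces the polygons whose fast convergence the figures illustrate.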

Figure 5.5: Subdivision, 1 iteration

Figure 5.6: Subdivision, 2 iterations

Figure 5.7: Subdivision, 3 iterations

Figure 5.8: Subdivision, 4 iterations

Figure 5.9: Subdivision, 5 iterations

Figure 5.10: Subdivision, 6 iterations

.2. b_] := Block[ {poly = {cpoly}. pola = pol1[[1]]. b] diﬀerent from the interval [r. we now have the three edges (a. . we observe that a hyperplane intersects the new polygon in a number of points which is at most the number of times that it intersects the original polygon. b. b) Assumes that a = (1 . pol2 = Prepend[pol2. we subdivide w. b] of a polynomial curve given by a control polygon B over [r. and (b′′ . m} ].b] over a new aﬃne frame [a. s.. [a. pt}.. npoly = subdecas[pol2. r_. pt]. where 0 ≤ λ ≤ 1. The above function can be used to render curve segments over intervals [a.r.mu) r + mu t. c) of the control polygon. [r.. We can now prove the variation diminishing property. npoly. {i. Given any two adjacent edges (a. .t. and then we reverse this control polygon and subdivide again w. to get B[a. r] using b. m. b) and (b.t. (b′ . i. s_. Indeed. r. pol1. a. by subdividing once w. 1. Observe that the subdivision process only involves convex aﬃne combinations.lambda) r + lambda t and that b = (1 .r. Do[ pt = pola[[i]]. s]. (* (* (* (* Computes the control polygon wrt new affine frame (a.. npoly[[1]] ) ]. [r. s] over which the original control polygon is deﬁned. This immediately follows by convexity.r. a_. SUBDIVISION ALGORITHMS FOR POLYNOMIAL CURVES 167 The convergence is indeed very fast.t. pola.a] . r. Then. c). b′′ ). r. b)) *) *) *) *) newcpoly[{cpoly__}. b] ]. using b. a]. (* Print[" npoly: ". pol2 = {}. When r = a. if we contruct the points b′ = (1−λ)a+λb and b′′ = (1−λ)b+λc. t) Returns control poly (f(a. ( If[a =!= r.b] . pol1 = subdecas[poly. pol2. m = Length[pola]. b) and (b. we get the control polygon B[r. c). s. a. Another nice application of the subdivision method is that we can compute very cheaply the control polygon B[a. npoly] *) npoly = subdecas[poly. s] using the parameter a.. wrt original frame (s.5. and modify the control polygon so that instead of having the two edges (a. b]. s]. assuming a = r. . b′ ).

t2 ) + f (t1 . . . . . . i=1 where the hat over the argument ti indicates that this argument is omitted. This can be done very easily in terms of polar forms. if f is biaﬃne. tm+1 ). . . . . . it may be necessary to raise the (polar) degree of the curve. bm . if F is deﬁned by the polar form f : Am → E.<im ≤m+1 Indeed. tim ) . and it is an 0 m+1 easy exercise to show that the points b1 are given in terms of the original points bi . . . . and speciﬁed by a sequence of m + 1 control points b0 . For example. we can show that a hyperplane intersects any polygon obtained by subdivision in a number of points which is at most the number of times that it intersects the original control polygon. . . . . the following notation is often used. . in order to use such a system on a curve of lower degree. say 3. it is the desired polar form. POLYNOMIAL CURVES AS BEZIER CURVES by induction. Or a system may only accept curves of a speciﬁed degree.168 ´ CHAPTER 5. g as deﬁned above is clearly m + 1-aﬃne and symmetric. .. degree raising. bm . . . t3 ) . tm+1 ) = 1 m+1 f (ti1 . the polar form g : Am+1 → E that will yield the same curve F . b1 = i m+1 m+1 where 1 ≤ i ≤ m. . We consider one more property of B´zier curves. . 3 If F (and thus f ) is speciﬁed by the m + 1 control points b0 . t) = f (t. . with b1 = b0 . it is equal to F on the diagonal. and thus. we have g(t1. For example. tm+1 ) = m+1 m+1 f (t1 . . by the j equations: m+1−i i bi−1 + bi . m+1 m is necessarily g(t1 . . .. . . Since these polygons obtained via subdivision converge to the B´zier curve. . 1 g(t1 . . e a hyperplane intersects the B´zier curve in a number of points which is at most the number e of times that it intersects its control polygon. a curve of degree 2. . it is sometimes necessary to view F as a curve of polar degree m + 1. then F considered of degree m + 1 (and thus g). t3 ) + f (t2 . t2 . and since it is unique. certain algorithms can only be applied to curve segments of the same degree. 
Given a B´zier curve e e F of polar degree m. in the sense that g(t. Instead of the above notation. and b1 0 m+1 = bm . . is speciﬁed by m + 2 control points b1 . ti . t) = F (t). . . Indeed. . . . t3 ) = f (t1 . . 1≤i1 <. b1 . for example. . .

for reasons that will become clear shortly.3. and so on. u4 .5.3 The Progressive Version of the de Casteljau Algorithm (the de Boor Algorithm) We now consider one more generalization of the de Casteljau algorithm that will be useful when we deal with splines. 5. . Such a version will be called the progressive version. u3 . f (u2 . converge to the original curve segment. u3 ) 2 3 f (t1 . u5 . f (u3. u2m of length 2m. it is convenient to consider control points not just of the form f (r m−i si ). However. u5 ) f (u4. u4). satisfying certain inequality conditions. u4 . u4 . When dealing with splines. THE PROGRESSIVE VERSION OF THE DE CASTELJAU ALGORITHM 169 One can also raise the degree again. t2 . u6 . u5. u6 . . u3. Now. by sliding a window of length 3 over the sequence. u3 . u5 . this convergence is much slower than the convergence obtained by subdivision. u3) f (t1 . . . t2 . f (u4. we consider the following four control points: f (u1 . u5 ) f (u3. . u3). . u3. and it is not useful in practice. . u2. u5 . u4 ) 1 f (t1 . Let us begin with the case m = 3. u2. u3 ) f (u2. from left to right. Observe that these points are obtained from the sequence u1 . Given a sequence u1 . . but of the form f (uk+1. uk+m). This explains the name “progressive case”. t3 ) f (t1 . u2 . u4 ) f (t1 . u3 . u4 . u2 . we say that this sequence is progressive iﬀ the inequalities indicated in the following array hold: u1 = u2 = = u3 = = = u4 u5 u6 Then. It can be shown that the control polygons obtained by successive degree raising. u2 . t2 . u6 ). we can compute any polar value f (t1 . t3 ) from the above control points. using the following triangular array obtained using the de Casteljau algorithm: 0 f (u1. u4. u6 ) . t2 . where the ui are real numbers taken from a sequence u1 . u4) f (t1 . u5 ).

170 f (u2 , u3 , u4) f (t1 , u2, u3 )

´ CHAPTER 5. POLYNOMIAL CURVES AS BEZIER CURVES f (u3 , u4 , u5) f (t1 , u3, u4 ) f (t1 , t2 , u3 ) f (t1 , t2 , t3 ) f (t1 , t2 , u4 )

f (t1 , u4, u5 )

f (u1 , u2 , u3) f (u4 , u5 , u6) Figure 5.11: The de Casteljau algorithm, progressive case

At stage 1, we can successfully interpolate because u1 = u4 , u2 = u5 , and u3 = u6 . This corresponds to the inequalities on the main descending diagonal of the array of inequality conditions. At stage 2, we can successfully interpolate because u2 = u4 , and u3 = u5 . This corresponds to the inequalities on the second descending diagonal of the array of inequality conditions. At stage 3 we can successfully interpolate because u3 = u4 . This corresponds to the third lowest descending diagonal. Thus, we used exactly all of the “progressive” inequality conditions.

The following diagram shows an example of the de Casteljau algorithm for computing the polar value f (t1 , t2 , t3 ) on a cubic, given the progressive sequence u1 , u2 , u3 , u4 , u5 , u6 :

In the general case, we have a sequence u1 , . . . , u2m of numbers ui ∈ R.

Deﬁnition 5.3.1. A sequence u1, . . . , u2m of numbers ui ∈ R is progressive iﬀ uj = um+i , for all j, and all i, 1 ≤ i ≤ j ≤ m. These m(m+1) conditions correspond to the lower 2

5.3. THE PROGRESSIVE VERSION OF THE DE CASTELJAU ALGORITHM triangular part of the following array: u1 u2 . . . uj . . . . . . . . . um−1 um = = =

171

= = ... ... ... ... ... = ... = . . . um+j = ... ... = ... ... = ... = = . . . u2m−1 u2m ... ... = ... = . . . u2m−j+1

= =

= =

um+1 um+2

Note that the j-th descending diagonal of the array of progressive inequalities begins with uj (on the vertical axis), and ends with u2m−j+1 (on the horizontal axis). The entry u2m−j+1 will be before or after um+j , depending on the inequality 2j ≤ m + 1. We will abbreviate f (t1 , . . . , tj , ui+j+1, . . . , um+i ),

j m−j

**as f (t1 . . . tj ui+j+1 . . . um+i ), and when j = 0, we abbreviate f (ui+1 , . . . , um+i ),
**

m

as f (ui+1 . . . um+i ). The point f (t1 . . . tj ui+j+1 . . . um+i ) is obtained at step i of phase j, for 1 ≤ j ≤ m, 0 ≤ i ≤ m − j, by the interpolation step f (t1 . . . tj ui+j+1 . . . um+i ) = um+i+1 − tj f (t1 . . . tj−1 ui+j . . . um+i ) um+i+1 − ui+j tj − ui+j f (t1 . . . tj−1 ui+j+1 . . . um+i+1 ). + um+i+1 − ui+j

Phase j of the general computation of the polar value f (t1 , . . . , tm ), in the progressive

172 case, is shown below:

´ CHAPTER 5. POLYNOMIAL CURVES AS BEZIER CURVES

j−1 f (t1 . . . tj−1 uj . . . um ) f (t1 . . . tj−1 uj+1 . . . um+1 ) ... f (t1 . . . tj−1 ui+j . . . um+i ) f (t1 . . . tj−1 ui+j+1 . . . um+i+1 ) ... f (t1 . . . tj−1 um . . . u2m−j ) f (t1 . . . tj−1 um+1 . . . u2m−j+1 )

j f (t1 . . . tj uj+1 . . . um ) ... f (t1 . . . tj ui+j+1 . . . um+i ) ... f (t1 . . . tj um+1 . . . u2m−j )

Note that the reason why the interpolation steps can be performed is that we have the inequalities uj = um+1 , uj+1 = um+2 , . . . , ui+j = um+i+1 , . . . , um = u2m−j+1, which corresponds to the j-th descending diagonal of the array of progressive inequalities, counting from the main descending diagonal. In order to make the above triangular array a bit more readable, let us deﬁne the following points bi,j , used during the computation: bi,j = f (t1 . . . tj ui+j+1 . . . um+i ), for 1 ≤ j ≤ m, 0 ≤ i ≤ m − j, with bi,0 = f (ui+1 , . . . , um+i ), for 0 ≤ i ≤ m. Then, we have the following equations:

bi,j =

um+i+1 − tj um+i+1 − ui+j

bi,j−1 +

tj − ui+j um+i+1 − ui+j

bi+1,j−1 .

The progressive version of the de Casteljau algorithm is also called the de Boor algorithm. It is the major algorithm used in dealing with splines. One may wonder whether it is possible to give a closed form for f (t1 , . . . , tm ), as computed by the progressive case of the de Casteljau algorithm, and come up with a version of lemma 4.3.1. This turns out to be diﬃcult, as the case m = 2 already reveals!

5.3. THE PROGRESSIVE VERSION OF THE DE CASTELJAU ALGORITHM


Consider a progressive sequence u_1, u_2, u_3, u_4, where the following inequalities hold:

u_1 \neq u_3, \quad u_2 \neq u_3, \quad u_2 \neq u_4.

We would like to compute f(t_1, t_2) in terms of f(u_1, u_2), f(u_2, u_3), and f(u_3, u_4). We could apply the algorithm, but we can also proceed directly as follows. Since u_1 \neq u_3 and u_2 \neq u_4, we can express t_1 in two ways using two parameters \lambda_1 and \lambda_2, as

t_1 = (1 - \lambda_1)u_1 + \lambda_1 u_3 = (1 - \lambda_2)u_2 + \lambda_2 u_4,

and since u_2 \neq u_3, we can express t_2 in terms of \lambda_3, as

t_2 = (1 - \lambda_3)u_2 + \lambda_3 u_3.

Now, we compute f(t_1, t_2), by first expanding t_2:

f(t_1, t_2) = f(t_1, (1 - \lambda_3)u_2 + \lambda_3 u_3)
= (1 - \lambda_3) f(t_1, u_2) + \lambda_3 f(t_1, u_3)
= (1 - \lambda_3) f((1 - \lambda_1)u_1 + \lambda_1 u_3, u_2) + \lambda_3 f((1 - \lambda_2)u_2 + \lambda_2 u_4, u_3)
= (1 - \lambda_1)(1 - \lambda_3) f(u_1, u_2) + [\lambda_1(1 - \lambda_3) + \lambda_3(1 - \lambda_2)] f(u_2, u_3) + \lambda_2 \lambda_3 f(u_3, u_4),

and by expressing \lambda_1, \lambda_2, \lambda_3 in terms of t_1 and t_2, we get

f(t_1, t_2) = \frac{u_3 - t_1}{u_3 - u_1}\,\frac{u_3 - t_2}{u_3 - u_2}\, f(u_1, u_2) + \left[\frac{t_1 - u_1}{u_3 - u_1}\,\frac{u_3 - t_2}{u_3 - u_2} + \frac{t_2 - u_2}{u_3 - u_2}\,\frac{u_4 - t_1}{u_4 - u_2}\right] f(u_2, u_3) + \frac{t_1 - u_2}{u_4 - u_2}\,\frac{t_2 - u_2}{u_3 - u_2}\, f(u_3, u_4).

The coefficients of f(u_1, u_2) and f(u_3, u_4) are symmetric in t_1 and t_2, but it is certainly not obvious that the coefficient of f(u_2, u_3) is symmetric in t_1 and t_2. Actually, by doing more calculations, it can be verified that

\frac{t_1 - u_1}{u_3 - u_1}\,\frac{u_3 - t_2}{u_3 - u_2} + \frac{t_2 - u_2}{u_3 - u_2}\,\frac{u_4 - t_1}{u_4 - u_2}

is symmetric. These calculations are already rather involved for m = 2. What are we going to do for the general case m \ge 3?
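The claimed symmetry can at least be confirmed numerically. A small sketch (ours, not from the text):

```python
def middle_coeff(t1, t2, u1, u2, u3, u4):
    """Coefficient of f(u2, u3) in the closed form for f(t1, t2) above."""
    return ((t1 - u1) / (u3 - u1)) * ((u3 - t2) / (u3 - u2)) \
         + ((t2 - u2) / (u3 - u2)) * ((u4 - t1) / (u4 - u2))

# The expression is not visibly symmetric in t1, t2, but swapping them
# leaves the value unchanged:
u1, u2, u3, u4 = 0.0, 1.0, 3.0, 4.0
for t1, t2 in [(1.2, 2.7), (0.5, 3.5), (-1.0, 2.0)]:
    a = middle_coeff(t1, t2, u1, u2, u3, u4)
    c = middle_coeff(t2, t1, u1, u2, u3, u4)
    assert abs(a - c) < 1e-12
```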

We can still prove the following theorem generalizing lemma 4.3.1 to the progressive case. The easy half follows from the progressive version of the de Casteljau algorithm, and the converse will be proved later.



Theorem 5.3.2. Let u_1, \ldots, u_{2m} be a progressive sequence of numbers u_i \in \mathbb{R}. Given any sequence of m + 1 points b_0, \ldots, b_m in some affine space E, there is a unique polynomial curve F : \mathbb{A} \to E of degree m, whose polar form f : \mathbb{A}^m \to E satisfies the conditions

f(u_{k+1}, \ldots, u_{m+k}) = b_k, \quad \text{for every } k, \; 0 \le k \le m.

Proof. If such a curve exists, and f : \mathbb{A}^m \to E is its polar form, the progressive version of the de Casteljau algorithm shows that f(t_1, \ldots, t_m) = b_{0,m}, where b_{0,m} is uniquely determined by the inductive computation

b_{i,j} = \frac{u_{m+i+1} - t_j}{u_{m+i+1} - u_{i+j}}\, b_{i,j-1} + \frac{t_j - u_{i+j}}{u_{m+i+1} - u_{i+j}}\, b_{i+1,j-1},

where b_{i,j} = f(t_1 \cdots t_j\, u_{i+j+1} \cdots u_{m+i}), for 1 \le j \le m, 0 \le i \le m - j, and with b_{i,0} = f(u_{i+1}, \ldots, u_{m+i}) = b_i, for 0 \le i \le m. The above computation is well-defined because the sequence u_1, \ldots, u_{2m} is progressive. The existence of a curve is much more difficult to prove, and we postpone giving such an argument until section 11.1.

There are at least two ways of proving the existence of a curve satisfying the conditions of theorem 5.3.2. One proof is fairly computational, and requires computing a certain determinant, which turns out to be nonzero precisely because the sequence is progressive. The other proof is more elegant and conceptual, but it uses the more sophisticated concept of symmetric tensor products (see section 11.1).

5.4 Derivatives of Polynomial Curves

In this section, it is assumed that E is some affine space \mathbb{A}^n, with n \ge 2. Our intention is to give the formulae for the derivatives of polynomial curves F : \mathbb{A} \to E in terms of control points. This way, we will be able to describe the tangents to polynomial curves, as well as the higher-order derivatives, in terms of control points. This characterization will be used in the next section, dealing with the conditions for joining polynomial curves with C^k-continuity.

A more general treatment of (directional) derivatives of affine polynomial functions F : \mathbb{A}^m \to E will be given in section 10.5, as an application of the homogenization of an affine space presented in chapter 10. In this section, we go easy on our readers, and proofs are omitted. Such proofs are easily supplied (by direct computation), and readers who wish to see them can find them later in chapter 10.

In this section, following Ramshaw, it will be convenient to denote a point in \mathbb{A} as \overline{a}, to distinguish it from the vector a \in \mathbb{R}. The unit vector 1 \in \mathbb{R} is denoted as \delta. When dealing with derivatives, it is also more convenient to denote the vector \overrightarrow{ab} as b - a.

Given a polynomial curve F : \mathbb{A} \to E, for any a \in \mathbb{A}, recall that the derivative DF(a) is the limit

DF(a) = \lim_{t \to 0,\, t \neq 0} \frac{F(a + t\delta) - F(a)}{t},

if it exists.

Recall that since F : \mathbb{A} \to E, where E is an affine space, the derivative DF(a) of F at a is a vector in \vec{E}, and not a point in E. Since coefficients of the form m(m-1) \cdots (m-k+1) occur a lot when taking derivatives, following Knuth, it is useful to introduce the falling power notation. We define the falling power m^{\underline{k}} as

m^{\underline{k}} = m(m-1) \cdots (m-k+1), \quad \text{for } 0 \le k \le m,

with m^{\underline{0}} = 1, and with the convention that m^{\underline{k}} = 0 when k > m. The falling powers m^{\underline{k}} have some interesting combinatorial properties of their own. The following lemma, giving the k-th derivative D^k F(r) of F at r in terms of polar values, can be shown.

Lemma 5.4.1. Given an affine polynomial function F : \mathbb{A} \to E of polar degree m, for any r, s \in \mathbb{A}, with r \neq s, the k-th derivative D^k F(r) can be computed from the polar form f of F as follows, where 1 \le k \le m:

D^k F(r) = \frac{m^{\underline{k}}}{(s - r)^k} \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} f(\underbrace{r, \ldots, r}_{m-i}, \underbrace{s, \ldots, s}_{i}).

A proof is given in section 10.5. It is also possible to obtain this formula by expressing F(t) in terms of the Bernstein polynomials and computing their derivatives. If F is specified by the sequence of m + 1 control points b_i = f(r^{m-i}\, s^{i}), 0 \le i \le m, the above lemma shows that the k-th derivative D^k F(r) of F at r depends only on the k + 1 control points b_0, \ldots, b_k. In terms of the control points b_0, \ldots, b_k, the formula of lemma 5.4.1 reads as follows:

D^k F(r) = \frac{m^{\underline{k}}}{(s - r)^k} \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} b_i.
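The control-point formula is easy to exercise for a cubic. The sketch below is ours (scalar control points, for simplicity); it checks the formula against the exact endpoint derivatives F'(0) = 3(b_1 - b_0) and F''(0) = 6(b_2 - 2b_1 + b_0):

```python
from math import comb

def falling(m, k):
    """Falling power m(m-1)...(m-k+1), with m^underline(0) = 1."""
    out = 1
    for i in range(k):
        out *= m - i
    return out

def derivative_at_r(b, k, r=0.0, s=1.0):
    """k-th derivative at r of the degree-m curve with scalar control
    points b over [r, s], by the control-point formula of lemma 5.4.1."""
    m = len(b) - 1
    return (falling(m, k) / (s - r) ** k) * sum(
        (-1) ** (k - i) * comb(k, i) * b[i] for i in range(k + 1))

b = (1.0, 4.0, 2.0, 7.0)                          # an arbitrary scalar cubic
assert derivative_at_r(b, 1) == 3 * (b[1] - b[0])           # = 9.0
assert derivative_at_r(b, 2) == 6 * (b[2] - 2 * b[1] + b[0])  # = -30.0
```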



In particular, the velocity vector DF(r) of F at b_0 is given by

DF(r) = \frac{m}{s - r}\, \overrightarrow{b_0 b_1} = \frac{m}{s - r}\, (b_1 - b_0).

This shows that when b_0 and b_1 are distinct, the tangent to the Bézier curve at the point b_0 is the line determined by b_0 and b_1. Similarly, the tangent at the point b_m is the line determined by b_{m-1} and b_m (provided that these points are distinct). More generally, the tangent at the current point F(t) defined by the parameter t is determined by the two points

b_{0,m-1} = f(\underbrace{t, \ldots, t}_{m-1}, r) \quad \text{and} \quad b_{1,m-1} = f(\underbrace{t, \ldots, t}_{m-1}, s),

given by the de Casteljau algorithm. It can be shown that

DF(t) = \frac{m}{s - r}\, (b_{1,m-1} - b_{0,m-1}).
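In code, the last de Casteljau stage thus gives both the point and the tangent direction for free. A sketch (ours; scalar control points over [r, s] = [0, 1]):

```python
def point_and_tangent(b, t):
    """De Casteljau at t for scalar control points b over [0, 1]; returns
    (F(t), DF(t)), using DF(t) = m * (b_{1,m-1} - b_{0,m-1})."""
    m = len(b) - 1
    pts = [float(x) for x in b]
    while len(pts) > 2:
        pts = [(1 - t) * p + t * q for p, q in zip(pts, pts[1:])]
    deriv = m * (pts[1] - pts[0])            # tangent from the last stage
    point = (1 - t) * pts[0] + t * pts[1]
    return point, deriv

# F(t) = 3t(1-t)^2 + t^3 has F(1/2) = 1/2 and F'(1/2) = 0:
p, d = point_and_tangent((0.0, 1.0, 0.0, 1.0), 0.5)
assert abs(p - 0.5) < 1e-12 and abs(d) < 1e-12
```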

The acceleration vector D^2 F(r) is given by

D^2 F(r) = \frac{m(m-1)}{(s - r)^2}\, (\overrightarrow{b_0 b_2} - 2\,\overrightarrow{b_0 b_1}) = \frac{m(m-1)}{(s - r)^2}\, (b_2 - 2 b_1 + b_0).

More generally, if b_0 = b_1 = \cdots = b_k and b_k \neq b_{k+1}, it can be shown that the tangent at the point b_0 is determined by the points b_0 and b_{k+1}. In the next section, we use lemma 5.4.1 to give a very nice condition for joining two polynomial curves with certain specified smoothness conditions. This material will be useful when we deal with splines. We also urge readers who have not yet looked at the treatment of derivatives given in section 10.5 to read chapter 10.

5.5 Joining Affine Polynomial Functions

When dealing with splines, we have several curve segments that need to be joined with certain required continuity conditions ensuring smoothness. The typical situation is that we have two intervals [p, q] and [q, r], where p, q, r \in \mathbb{A}, with p < q < r, and two affine curve segments F : [p, q] \to E and G : [q, r] \to E, of polar degree m, that we wish to join at q.

The weakest condition is no condition at all, called C^{-1}-continuity. This means that we don't even care whether F(q) = G(q); that is, there could be a discontinuity at q. In this case, we say that q is a discontinuity knot. The next weakest condition, called C^0-continuity, is that F(q) = G(q). In other words, we impose continuity at q, but no conditions on the derivatives. Generally, we have the following definition.


Definition 5.5.1. Two curve segments F([p, q]) and G([q, r]) of polar degree m are said to join with C^k-continuity at q, where 0 \le k \le m, iff

D^i F(q) = D^i G(q), \quad \text{for all } i, \; 0 \le i \le k,

where by convention, D^0 F(q) = F(q) and D^0 G(q) = G(q).

As we will see, for curve segments F and G of polar degree m, C^m-continuity imposes that F = G, which is too strong, and thus we usually consider C^k-continuity, where 0 \le k \le m - 1 (or even k = -1, as mentioned above). The continuity conditions of definition 5.5.1 are usually referred to as parametric continuity. There are other useful kinds of continuity, for example geometric continuity.

We can characterize C^k-continuity of joins of curve segments very conveniently in terms of polar forms. A more conceptual proof of a slightly more general lemma will be given in section 11.1, using symmetric tensor products (see lemma B.4.5). The proof below uses lemma 10.5.1, which is given in section 10.5. As a consequence, readers who have only read section 5.4 may skip the proof.

Lemma 5.5.2. Given two intervals [p, q] and [q, r], where p, q, r \in \mathbb{A}, with p < q < r, and two affine curve segments F : [p, q] \to E and G : [q, r] \to E, of polar degree m, the curve segments F([p, q]) and G([q, r]) join with continuity C^k at q, where 0 \le k \le m, iff their polar forms f : \mathbb{A}^m \to E and g : \mathbb{A}^m \to E agree on all multisets of points that contain at most k points distinct from q, i.e.,

f(u_1, \ldots, u_k, \underbrace{q, \ldots, q}_{m-k}) = g(u_1, \ldots, u_k, \underbrace{q, \ldots, q}_{m-k}),

for all u_1, \ldots, u_k \in \mathbb{A}.

Proof. First, assume that the polar forms f and g satisfy the condition

f(u_1, \ldots, u_k, \underbrace{q, \ldots, q}_{m-k}) = g(u_1, \ldots, u_k, \underbrace{q, \ldots, q}_{m-k}),

for all u_1, \ldots, u_k \in \mathbb{A}. If u_i = q, 1 \le i \le k, we are requiring that

f(\underbrace{q, \ldots, q}_{m}) = g(\underbrace{q, \ldots, q}_{m}),

that is, F(q) = G(q), which is C^0-continuity. Let \widehat{f} : (\widehat{\mathbb{A}})^m \to \widehat{E} and \widehat{g} : (\widehat{\mathbb{A}})^m \to \widehat{E} be the homogenized versions of f and g. For every j, 1 \le j \le k, by lemma 5.4.1, the j-th derivative D^j F(q) can be computed from the polar form f of F as follows:

D^j F(q) = \frac{m^{\underline{j}}}{(r - q)^j} \sum_{i=0}^{j} (-1)^{j-i} \binom{j}{i} f(\underbrace{q, \ldots, q}_{m-i}, \underbrace{r, \ldots, r}_{i}).

Similarly, we have

D^j G(q) = \frac{m^{\underline{j}}}{(r - q)^j} \sum_{i=0}^{j} (-1)^{j-i} \binom{j}{i} g(\underbrace{q, \ldots, q}_{m-i}, \underbrace{r, \ldots, r}_{i}).

Since i \le j \le k, by the assumption, we have

f(\underbrace{q, \ldots, q}_{m-i}, \underbrace{r, \ldots, r}_{i}) = g(\underbrace{q, \ldots, q}_{m-i}, \underbrace{r, \ldots, r}_{i}),

and thus, D^j F(q) = D^j G(q). Thus, we have C^k-continuity of the join at q.

Conversely, assume that we have C^k-continuity at q. Thus, we have D^i F(q) = D^i G(q), for all i, 0 \le i \le k, where by convention D^0 F(q) = F(q) and D^0 G(q) = G(q). Thus, for i = 0, we get

f(\underbrace{q, \ldots, q}_{m}) = g(\underbrace{q, \ldots, q}_{m}).

By lemma 10.5.1, we have

D^i F(q) = m^{\underline{i}}\, \widehat{f}(\underbrace{q, \ldots, q}_{m-i}, \underbrace{\delta, \ldots, \delta}_{i}),

and similarly,

D^i G(q) = m^{\underline{i}}\, \widehat{g}(\underbrace{q, \ldots, q}_{m-i}, \underbrace{\delta, \ldots, \delta}_{i}),

for 1 \le i \le m. Then, the assumption of C^k-continuity implies that

\widehat{f}(\underbrace{q, \ldots, q}_{m-i}, \underbrace{\delta, \ldots, \delta}_{i}) = \widehat{g}(\underbrace{q, \ldots, q}_{m-i}, \underbrace{\delta, \ldots, \delta}_{i}),

for 1 \le i \le k. However, for any u_i \in \mathbb{A}, we have (in \widehat{\mathbb{A}}) u_i = q + (u_i - q)\delta, and thus, for any j, 1 \le j \le k, we have

f(u_1, \ldots, u_j, \underbrace{q, \ldots, q}_{m-j}) = \widehat{f}(q + (u_1 - q)\delta, \ldots, q + (u_j - q)\delta, \underbrace{q, \ldots, q}_{m-j}),

which can be expanded using multilinearity and symmetry, and yields

f(u_1, \ldots, u_j, \underbrace{q, \ldots, q}_{m-j}) = \sum_{i=0}^{j} \; \sum_{\substack{L \subseteq \{1, \ldots, j\} \\ |L| = i}} \Big(\prod_{l \in L} (u_l - q)\Big)\, \widehat{f}(\underbrace{\delta, \ldots, \delta}_{i}, \underbrace{q, \ldots, q}_{m-i}).

Similarly, we have

g(u_1, \ldots, u_j, \underbrace{q, \ldots, q}_{m-j}) = \sum_{i=0}^{j} \; \sum_{\substack{L \subseteq \{1, \ldots, j\} \\ |L| = i}} \Big(\prod_{l \in L} (u_l - q)\Big)\, \widehat{g}(\underbrace{\delta, \ldots, \delta}_{i}, \underbrace{q, \ldots, q}_{m-i}).

However, we know that

\widehat{f}(\underbrace{\delta, \ldots, \delta}_{i}, \underbrace{q, \ldots, q}_{m-i}) = \widehat{g}(\underbrace{\delta, \ldots, \delta}_{i}, \underbrace{q, \ldots, q}_{m-i}),

for all i, 1 \le i \le k, and thus, we have

f(u_1, \ldots, u_j, \underbrace{q, \ldots, q}_{m-j}) = g(u_1, \ldots, u_j, \underbrace{q, \ldots, q}_{m-j}),

for all j, 1 \le j \le k. Since \widehat{f} extends f and \widehat{g} extends g, together with

f(\underbrace{q, \ldots, q}_{m}) = g(\underbrace{q, \ldots, q}_{m}),

which we already proved, we have established the conditions of the lemma.

Another way to state lemma 5.5.2 is to say that the curve segments F([p, q]) and G([q, r]) join with continuity C^k at q, where 0 \le k \le m, iff their polar forms f : \mathbb{A}^m \to E and g : \mathbb{A}^m \to E agree on all multisets of points that contain at least m - k copies of the argument q. Thus, the number k is the number of arguments that can be varied away from q without disturbing the values of the polar forms f and g. When k = 0, we can't change any of the arguments, and this means that f and g agree on the multiset

\underbrace{q, \ldots, q}_{m},

i.e., the curve segments F and G simply join at q, without any further conditions. On the other hand, for k = m - 1, we can vary m - 1 arguments away from q without changing the value of the polar forms, which means that the curve segments F and G join with a high degree of smoothness (C^{m-1}-continuity). In the extreme case where k = m (C^m-continuity), the polar forms f and g must agree when all arguments vary, and thus f = g, i.e., F and G coincide. We will see that lemma 5.5.2 yields a very pleasant treatment of parametric continuity for splines.

The following diagrams illustrate the geometric conditions that must hold so that two segments of cubic curves F : \mathbb{A} \to E and G : \mathbb{A} \to E defined on the intervals [p, q] and [q, r] join at q with C^k-continuity, for k = 0, 1, 2, 3. Let f and g denote the polar forms of F and G.

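Before the diagrams, the polar-form criterion of lemma 5.5.2 can also be verified numerically. The sketch below is ours, not from the text: it evaluates polar forms by running de Casteljau with a different argument at each stage, and builds a segment G on [1, 2] that agrees with a cubic F on [0, 1] except for its last control point, so that the join at q = 1 is C^2 but not C^3.

```python
def blossom(b, ts, p, q):
    """Polar form value f(ts[0], ..., ts[m-1]) of the degree-m segment
    with scalar control points b over [p, q]: one argument per stage."""
    pts = [float(x) for x in b]
    for t in ts:
        lam = (t - p) / (q - p)
        pts = [(1 - lam) * x + lam * y for x, y in zip(pts, pts[1:])]
    return pts[0]

bF = (0.0, 1.0, 3.0, 2.0)                       # a cubic F on [0, 1]
f = lambda ts: blossom(bF, ts, 0.0, 1.0)
# Control points of the same cubic over [1, 2] are the polar values
# f(1,1,1), f(1,1,2), f(1,2,2), f(2,2,2):
bG = [f([1.0] * (3 - i) + [2.0] * i) for i in range(4)]
bG[3] += 5.0                                    # spoil C^3, keep C^2 at q = 1
g = lambda ts: blossom(bG, ts, 1.0, 2.0)

q, u1, u2 = 1.0, 0.3, 1.7
# C^2 condition: agreement on all triplets with at least one copy of q.
assert abs(f([u1, u2, q]) - g([u1, u2, q])) < 1e-9
# C^3 would force f = g on every triplet, which now fails:
assert abs(f([u1, u2, 0.5]) - g([u1, u2, 0.5])) > 1e-6
```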
Figure 5.12: Cubic curves joining at q with C^0-continuity.

The curve segments F and G join at q with C^0-continuity iff the polar forms f and g agree on the (multiset) triplet q, q, q. The curve segments F and G join at q with C^1-continuity iff the polar forms f and g agree on all (multiset) triplets including two copies of the argument q. The curve segments F and G join at q with C^2-continuity iff the polar forms f and g agree on all (multiset) triplets including the argument q. The curve segments F and G join at q with C^3-continuity iff f = g, i.e., the polar forms f and g agree on all (multiset) triplets.

The above examples show that the points corresponding to the common values

f(p^i, r^j, q^{3-i-j}) = g(p^i, r^j, q^{3-i-j}), \quad \text{where } i + j \le k \le 3,

constitute a de Casteljau diagram with k shells, where k is the degree of continuity required. This is a general fact: when two polynomial curves F and G of degree m join at q with C^k-continuity (where 0 \le k \le m), then

f(p^i, r^j, q^{m-i-j}) = g(p^i, r^j, q^{m-i-j}) \quad \text{for all } i, j \text{ with } i + j \le k \le m,

and these points form a de Casteljau diagram with k shells. These de Casteljau diagrams are represented in bold in the figures.

Figure 5.13: Cubic curves joining at q with C^1-continuity.

Figure 5.14: Cubic curves joining at q with C^2-continuity.

Figure 5.15: Cubic curves joining at q with C^3-continuity.

We are now ready to consider B-spline curves.

5.6 Problems

Problem 1 (60 pts). Write a computer program implementing the subdivision version of the de Casteljau algorithm, over some interval [r, s]. You may use Mathematica, or any other available software in which graphics primitives are available. Test your program extensively.

In the next three problems, it is assumed that r = 0 and s = 1.

Problem 2 (10 pts). Consider (in the plane) the curve F defined by the following 5 control points:

b0 = (6, -1), b1 = (6, 6), b2 = (0, 6), b3 = (0, 0), b4 = (6, 0).

Use the de Casteljau algorithm to find the coordinates of the points F(1/2) and F(3/4). Does the curve self-intersect? Plot the curve segment (in the convex hull of the control polygon) as well as you can.

Problem 3 (10 pts). Consider (in the plane) the curve F defined by the following 5 control points:

b0 = (6, 0), b1 = (0, 6), b2 = (6, 6), b3 = (0, 0), b4 = (6, -1).

Use the de Casteljau algorithm to find the coordinates of the points F(1/4) and F(1/2). Does the curve self-intersect? Plot the curve segment (in the convex hull of the control polygon) as well as you can.

Problem 4 (20 pts). Consider (in the plane) the curve F defined by the following 5 control points:

b0 = (0, -4), b1 = (10, 30), b2 = (5, -4), b3 = (0, 30), b4 = (10, -20).

Use the de Casteljau algorithm to find the coordinates of the point F(1/2). Does the curve self-intersect? Plot the curve segment (in the convex hull of the control polygon) as well as you can.

Problem 5 (10 pts). Let F be a cubic curve given by its control points b0, b1, b2, b3, and consider the arc of cubic F[0, 1]. If any one of the control points bi changes to a new point b'i, we get a different cubic F'. Show that for any t \in [0, 1],

F'(t) - F(t) = B_i^3(t)\, (b'_i - b_i),

where B_i^3(t) is the i-th Bernstein polynomial of degree 3. What is F'(t) - F(t) for i = 2, at t = 1/2 and t = 1/4?

Problem 6 (20 pts). (Baby Lagrange interpolation via control points.) Given any affine space E, given any sequence of four points (x0, x1, x2, x3), and given any four distinct values t0 < t1 < t2 < t3, we would like to find a cubic F : \mathbb{A} \to E such that F(ti) = xi (i.e., an interpolating cubic curve). Prove that there exists a unique cubic satisfying the above conditions, and compute its control points b0, b1, b2, b3 in terms of x0, x1, x2, x3. To simplify the calculations, assume that t0 = 0 and t3 = 1.

Problem 7 (30 pts). (Cubic Hermite interpolation.) Let a0, a1 be any two points in \mathbb{A}^3, and let u0, u1 be any two vectors in \mathbb{R}^3. Show that there is a unique cubic curve F : \mathbb{A} \to \mathbb{A}^3 such that

F(0) = a0, \quad F(1) = a1, \quad F'(0) = u0, \quad F'(1) = u1,

and show that its control points are given by

b0 = a0, \quad b1 = a0 + \tfrac{1}{3} u0, \quad b2 = a1 - \tfrac{1}{3} u1, \quad b3 = a1.

Show that

F(t) = a0\, H_0^3(t) + u0\, H_1^3(t) + u1\, H_2^3(t) + a1\, H_3^3(t),

where

H_0^3(t) = B_0^3(t) + B_1^3(t),
H_1^3(t) = \tfrac{1}{3} B_1^3(t),
H_2^3(t) = -\tfrac{1}{3} B_2^3(t),
H_3^3(t) = B_2^3(t) + B_3^3(t).

The polynomials H_i^3(t) are called the cubic Hermite polynomials. Show that they are linearly independent. Compute explicitly the polynomials H_i^3(t), 0 \le i \le 3.

Problem 8 (40 pts). (Quintic Hermite interpolation.) Let a0, a1 be any two points in \mathbb{A}^3, and let u0, u1, v0, v1 be any four vectors in \mathbb{R}^3. Show that there is a unique quintic curve F : \mathbb{A} \to \mathbb{A}^3 such that

F(0) = a0, \quad F(1) = a1, \quad F'(0) = u0, \quad F'(1) = u1, \quad F''(0) = v0, \quad F''(1) = v1,

and compute its control points. Show that

F(t) = a0\, H_0^5(t) + u0\, H_1^5(t) + v0\, H_2^5(t) + v1\, H_3^5(t) + u1\, H_4^5(t) + a1\, H_5^5(t),

where

H_0^5(t) = B_0^5(t) + B_1^5(t) + B_2^5(t),
H_1^5(t) = \tfrac{1}{5} (B_1^5(t) + 2 B_2^5(t)),
H_2^5(t) = \tfrac{1}{20} B_2^5(t),
H_3^5(t) = \tfrac{1}{20} B_3^5(t),
H_4^5(t) = -\tfrac{1}{5} (2 B_3^5(t) + B_4^5(t)),
H_5^5(t) = B_3^5(t) + B_4^5(t) + B_5^5(t).

The polynomials H_i^5(t) are called the quintic Hermite polynomials. Show that they are linearly independent. Compute explicitly the polynomials H_i^5(t), 0 \le i \le 5.

Problem 9 (30 pts). Plot the Hermite polynomials H_i^3(t), 0 \le i \le 3, over [0, 1].

Problem 10 (20 pts). Plot the Hermite polynomials H_i^5(t), 0 \le i \le 5, over [0, 1].

Problem 11 (20 pts). Use your implementation of the subdivision version of the de Casteljau algorithm to experiment with cubic and quintic Hermite interpolants. Try many different cases.

Problem 12 (20 pts). Hermite interpolants were defined with respect to the interval [0, 1]. What happens if the affine map t \mapsto (1 - t)a + tb is applied to the domain of the Hermite interpolants? How can you modify the Hermite polynomials to obtain the same kind of expression as in problems 7 and 8?

Problem 13 (30 pts). Write a computer program implementing the progressive version of the de Casteljau algorithm, over some interval [r, s]. You may use Mathematica, or any other available software in which graphics primitives are available.

Problem 14 (30 pts). Assume that two Bézier curve segments F and G are defined over [0, 1] in terms of their control points. Give a method for finding all the intersection points between the curve segments F and G. What if F and G have a common component? What if F and G have different degrees?

Problem 15 (30 pts). Use the de Casteljau algorithm to design a curve of degree four whose third control point b2 belongs to the curve (in fact, for t = 1/2). Draw diagrams showing C^3-continuity for curves of degree 4. Draw diagrams showing C^4-continuity for curves of degree 5. Draw diagrams showing C^5-continuity

for curves of degree 6. You may want to write computer programs to draw such diagrams.

Problem 16 (30 pts). Let F be a polynomial curve defined by its control polygon B = (b_0, \ldots, b_m). Let B^{(r)} = (b_0^{(r)}, \ldots, b_{m+r}^{(r)}) be the control polygon for the curve F obtained after r steps of degree elevation. Prove that

b_i^{(r)} = \sum_{j=0}^{m} \frac{\binom{m}{j} \binom{r}{i-j}}{\binom{m+r}{i}}\, b_j.

Remark: For any t \in [0, 1], as r \to \infty, for each r there is some i such that i/(m + r) is closest to t, and it can be shown (using Stirling's formula) that

\lim_{i/(m+r) \to t} \frac{\binom{r}{i-j}}{\binom{m+r}{i}} = t^j (1 - t)^{m-j}.

As a consequence,

\lim_{i/(m+r) \to t} b_i^{(r)} = \sum_{j=0}^{m} b_j\, B_j^m(t) = F(t).

This means that the control polygons B^{(r)} converge towards the curve segment F[0, 1]. However, this convergence is very slow, and is not useful in practice.

Problem 17 (20 pts). Write a computer program implementing a version of the de Casteljau algorithm applied to curves of degree four, modified as explained below. During a subdivision step, allow the middle control point (b2) to be perturbed in each of the two subpolygons, either by a random displacement or by a controlled displacement. After a number of subdivision steps, you should get "fractal-style" curves with C^1-continuity. Experiment with various ways of perturbing the middle control point.
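The slow convergence described in problem 16 is easy to observe experimentally. The sketch below is ours; `elevate` implements the standard one-step degree-elevation rule for a scalar curve.

```python
def elevate(b):
    """Raise the degree of a Bezier curve by one without changing it:
    c_i = (i/(m+1)) * b_{i-1} + (1 - i/(m+1)) * b_i."""
    m = len(b) - 1
    return [b[0]] + [i / (m + 1) * b[i - 1] + (1 - i / (m + 1)) * b[i]
                     for i in range(1, m + 1)] + [b[-1]]

def de_casteljau(b, t):
    pts = list(b)
    while len(pts) > 1:
        pts = [(1 - t) * p + t * q for p, q in zip(pts, pts[1:])]
    return pts[0]

b = [0.0, 2.0, -1.0, 3.0]
poly = b
for _ in range(200):            # 200 elevation steps: degree 3 -> 203
    poly = elevate(poly)
# The control polygon now hugs the curve: its middle entry is close to
# F(1/2), though only to within a few thousandths after 200 steps.
mid = poly[len(poly) // 2]
assert abs(mid - de_casteljau(b, 0.5)) < 0.05
```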

Chapter 6

B-Spline Curves

6.1 Introduction: Knot Sequences, de Boor Control Points

Polynomial curves have many virtues for CAGD, but they are also unsatisfactory in a number of ways:

1. Given a control polygon containing m sides (m + 1 vertices), the degree of the polynomial curve determined by these control points is m. When m is large, this may be too costly: we know that it takes m(m+1)/2 steps to compute a point on the curve.

2. Moving any control point will affect the entire curve. It would be desirable to have better local control, in the sense that moving a control point only affects a small region of the curve.

3. If we are interested in interpolation rather than just approximating a shape, we will have to compute control points from points on the curve. This leads to systems of linear equations, and solving such a system can be impractical when the degree of the curve is large.

For the above reasons, people thought about segmenting a curve specified by a complex control polygon into smaller and more manageable segments. Each segment of the curve will be controlled by some small subpolygon of the global polygon, and the segments will be joined together in some smooth fashion. This is the idea behind splines.

In this chapter, we present a class of spline curves known as B-splines, and explain how to use knots to control the degree of continuity between the curve segments forming a spline curve. We show how knot sequences and sequences of de Boor control points can be used to specify B-splines, and we present the de Boor algorithm for B-splines, as well as the useful knot insertion algorithm. We also discuss interpolation using B-splines, and briefly present the more traditional approach to B-splines in terms of basis functions. In order to

achieve this, it becomes necessary to consider several contiguous parameter intervals, rather than a single interval [r, s] as in the case of polynomial curves. For example, we could have the intervals

[1, 2], [2, 3], [3, 5], [5, 6], [6, 9].

Why did we choose a nonuniform spacing of the intervals? Because uniform spacing would not really simplify anything, and such flexibility is in fact useful, as we will see later. Note that it is slightly more economical to specify the sequence of intervals by just listing the junction points of these intervals, obtaining the sequence 1, 2, 3, 5, 6, 9. Such a sequence is called a knot sequence.

If we want to use cubic segments, we would need four control points for each of these intervals. Let us denote the corresponding cubic curve segments as F[1, 2], F[2, 3], F[3, 5], F[5, 6], F[6, 9]. Since we want to join these cubic curve segments, we will assume that the fourth control point of the curve segment F[1, 2] is equal to the first control point of the curve segment F[2, 3], that the fourth control point of F[2, 3] is equal to the first control point of F[3, 5], that the fourth control point of F[3, 5] is equal to the first control point of F[5, 6], and finally, that the fourth control point of F[5, 6] is equal to the first control point of F[6, 9]. Usually, we will want better continuity at the junction points (2, 3, 5, 6) than just contact, and this can be achieved, as we will explain shortly. We may also want to be more careful about endpoints (in this case, 1 and 9).

For the time being, we can assume that we are dealing with infinite sequences of contiguous intervals, where the sequences extend to infinity in both directions: when looking at a finite sequence such as 1, 2, 3, 5, 6, 9, we are just focusing on a particular subsequence of some bi-infinite sequence. It is also possible to handle cyclic sequences.

We will see that it is useful to collapse intervals, since this is a way to lower the degree of continuity of a join. This can be captured by a knot sequence by letting a knot appear more than once. For example, if we want to collapse the interval [3, 5] to [3, 3], the above knot sequence becomes

..., 1, 2, 3, 3, 6, 9, ....

We can also collapse [2, 3] to [3, 3], obtaining the sequence

..., 1, 3, 3, 3, 6, 9, ....

Thus, in the above sequence, 3 has multiplicity 3. The number of consecutive occurrences of a knot is called its multiplicity. Extending our sequence a bit, say to

..., 1, 3, 3, 3, 6, 9, 10, 12, 15, ...,
When looking at a ﬁnite sequence such as . . we may also want to be more careful about endpoints (in this case. we are just focusing on a particular subsequence of some bi-inﬁnite sequence. 1 and 9). 5. we could have the intervals [1. 1. . in the above sequence. 6] is equal to the ﬁrst control point of the curve segment F [6. . obtaining the sequence . 5. 5]. [3. . This can be achieved. . Thus. . . 3. we can assume that we are dealing with inﬁnite sequences of contiguous intervals. 9. 15. Such a sequence is called a knot sequence. . it becomes necessary to consider several contiguous parameter intervals. 3]. We also allow collapsing several consecutive intervals. For example.

5. and to ﬁgure out the conditions that C n -continuity at ui+1 impose on the polar forms fi and fi+1 . 2. F2 . multiplicity m + 1 corresponds to a discontinuity at the control point associated with that multiple knot). 3]. . F [6. Since we are now using knot sequences to represent contiguous intervals.1. 3. which turn out to be more convenient (and more economical) than the B´zier control points of the curve segments. . 9. 12] to [10. it does not make sense for the knot multiplicity to exceed m + 1. we have curve segments: F1 . . 15. . F6 . and thus. 6]. . . We now have to explain how the de Boor control points arise. 10. . F5 . However. . ui+1 ]. F [5. 3]. without having to ﬁrst compute the B´zier control points for the given e curve segment. We will also see how to extend the de e Casteljau algorithm in order to compute points on any curve segment directly from the de Boor control points. Knots of multiplicity 1 are called simple knots. 3. . . 16 . 15. we could compute the B´zier control points using this algorithm. 1. F [1. 3. Note that we need to take care of multiple knots. . Lemma 5. F4 . 9. 10. 6. and we index each curve segment by the index (position) of the last occurrence of the knot corresponding to the left of its interval domain. There are several possible presentations. given the knot sequence . e which is called the de Boor algorithm. 3. The more mathematical presentation is to consider the polar form fi of each curve segment Fi associated with an interval [ui . . . and knots of multiplicity greater than 1 are called multiple knots. F4 . uk+m . we will denote F [1. . We simply consider the subsequence of strictly increasing knots. 3. INTRODUCTION: KNOT SEQUENCES. 5. 10. F [6.2 will yield a very pleasant answer to the problem of continuity of joins and this answer will also show that there are natural control points called de Boor points. 3. as shown by collapsing [10. 2]. 1. 
Since we are now using knot sequences, we can simplify our notation of curve segments by using the index (position) of the knot corresponding to the beginning of an interval as the index of the curve segment on that interval. Note that we need to take care of multiple knots: we simply consider the subsequence of strictly increasing knots, and we index each curve segment by the index (position) of the last occurrence of the knot corresponding to the left end of its interval domain. For example, given the knot sequence

..., 1, 2, 3, 3, 5, 6, 9, 10, 10, 15, ...,

we will denote the curve segments F[1, 2], F[2, 3], F[3, 5], F[5, 6], F[6, 9], F[9, 10], F[10, 15] simply as F_1, F_2, F_4, F_5, F_6, F_7, F_9.

Now, consider a simple knot u_k, that is, a knot such that u_k < u_{k+1}. Since u_k is simple, the sequence u_{k-m+1}, ..., u_{k+m} of 2m consecutive knots is progressive. For example, if m = 3, given the (portion of a) knot sequence

..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ...,

for k = 5, we have u_{k-m+1} = u_3 = 3, u_k = u_5 = 6, u_{k+1} = u_6 = 8, and u_{k+m} = u_8 = 11, and thus the progressive sequence

3, 5, 6, 8, 9, 11

of 6 = 2 * 3 knots, whose middle interval is [6, 8]. Thus, it is possible to use the progressive version of the de Casteljau algorithm presented in section 5.3, also called the de Boor algorithm, to compute points on the curve segment F_k associated with the middle interval [u_k, u_{k+1}]; in fact, any polar value of f_k for parameter values in [u_k, u_{k+1}] can be computed this way. Observe that the sequence u_{k-m+1}, ..., u_{k+m} yields m + 1 sequences of consecutive knots u_{k-m+i+1}, ..., u_{k+i}, each of length m, where 0 \le i \le m, and these sequences turn out to define m + 1 de Boor control points for the curve segment F_k: if f_k is the polar form of F_k, these de Boor points are the polar values

d_{k+i} = f_k(u_{k-m+i+1}, ..., u_{k+i}), \quad 0 \le i \le m.

Moreover, for all i \in [k, k + m], the value f_i(u_{k+1}, ..., u_{k+m}) of the polar form f_i associated with the interval beginning with knot u_i is constant, and is denoted d_k. These points d_k are the de Boor points.
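Since the de Boor points are just polar values, they can be read off from any representation of the segment by blossoming. The sketch below is ours: for the 6-knot example above, it takes the segment of F(t) = t^3 over the middle interval [6, 8] (whose blossom is t_1 t_2 t_3, so everything can be checked by hand) and recovers the de Boor points from the Bézier points.

```python
def blossom(b, ts, p, q):
    """Polar form of the Bezier segment with scalar control points b over
    [p, q], evaluated stagewise with one argument per stage."""
    pts = [float(x) for x in b]
    for t in ts:
        lam = (t - p) / (q - p)
        pts = [(1 - lam) * x + lam * y for x, y in zip(pts, pts[1:])]
    return pts[0]

u = [3.0, 5.0, 6.0, 8.0, 9.0, 11.0]     # progressive sequence, m = 3
# Bezier points of t^3 over [6, 8] are the polar values t1*t2*t3 at
# (6,6,6), (6,6,8), (6,8,8), (8,8,8):
bez = [216.0, 288.0, 384.0, 512.0]
# de Boor points d_{k+i} = f(u_{k-m+i+1}, ..., u_{k+i}), read off by blossoming:
d = [blossom(bez, u[i:i + 3], 6.0, 8.0) for i in range(4)]
assert d == [90.0, 240.0, 432.0, 792.0]  # = 3*5*6, 5*6*8, 6*8*9, 8*9*11
```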

The m + 1 sequences of consecutive knots u_{k-m+i+1}, ..., u_{k+i}, each of length m (where 0 \le i \le m), overlap precisely on the middle interval [u_k, u_{k+1}] of the sequence u_{k-m+1}, ..., u_{k+m}. This property is the point of departure of another, rather intuitive, explanation of the de Boor algorithm, due to Ken Shoemake. For the knot sequence ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ..., which for simplicity consists only of simple knots (i.e., u_j < u_{j+1} for all j), figure 6.1 shows part of a cubic spline.

Let us now forget that we have curve segments. Instead, we fix the maximum degree m of the curve segments that will arise, and we assume that we are given a bi-infinite knot sequence u_j and a bi-infinite sequence d_i of (distinct) control points, with an additional constraint: we assume that there is a bijective function from the sequence d_i of control points to the knot sequence u_j, with the property that for every control point d_i, if d_i is mapped to the knot u_{k+1}, then d_{i+1} is mapped to u_{k+2}. Since consecutive control points map to consecutive knots, there is obviously an affine bijection mapping the interval [u_{k+1}, u_{k+m+1}] onto the line segment (d_i, d_{i+1}), defined as

u \mapsto \frac{u_{k+m+1} - u}{u_{k+m+1} - u_{k+1}}\, d_i + \frac{u - u_{k+1}}{u_{k+m+1} - u_{k+1}}\, d_{i+1},

where u \in [u_{k+1}, u_{k+m+1}]. Thus, we can view each side (d_i, d_{i+1}) of the control polygon as being divided into m subsegments, one for each of the m consecutive subintervals of the knot sequence spanned by [u_{k+1}, u_{k+m+1}]. For example, assuming m = 3, given the knot sequence ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ... and a sequence of control points ..., d_1, ..., d_8, ..., with d_1 corresponding to the knot 1 and d_8 to the knot 11, the interval [1, 5] maps onto (d_1, d_2), the interval [2, 6] maps onto (d_2, d_3), and the interval [9, 15] maps onto (d_7, d_8).
and we assume that we are given a bi-inﬁnite knot sequence uj . . . 9. . . and uk = u5 = 6. . . 2. . deﬁned as u→ uk+m+1 − u u − uk+1 di + di+1 . uk+m+1 − uk+1 uk+m+1 − uk+1 where u ∈ [uk+1. and given the following (portion of a) sequence of control points . if di is mapped to the knot uk+1 .
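The affine bijection just described can be sketched in a few lines of Python. This is an illustrative helper, not from the text, and the control point positions used below are hypothetical:

```python
# A small illustrative sketch (not from the text) of the affine bijection
#   u |-> ((b - u)/(b - a)) d_i + ((u - a)/(b - a)) d_{i+1}
# that maps the knot interval [u_{k+1}, u_{k+m+1}] = [a, b] onto the side
# (d_i, d_{i+1}) of the control polygon.
def affine_map(u, a, b, di, di1):
    """Map u in [a, b] affinely onto the segment from di to di1 in the plane."""
    t = (u - a) / (b - a)
    return tuple((1 - t) * p + t * q for p, q in zip(di, di1))

# With m = 3 and the knot sequence ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ...,
# the interval [1, 5] maps onto (d_1, d_2); u = 3 is the midpoint of [1, 5].
d1, d2 = (0.0, 0.0), (4.0, 2.0)   # hypothetical positions for d_1, d_2
print(affine_map(3.0, 1.0, 5.0, d1, d2))  # midpoint of the segment
```

Note that the endpoints of the knot interval land exactly on the control points, as the bijection requires.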

Figure 6.1: Part of a cubic spline with knot sequence ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ..., with control points d_1: 123, d_2: 235, d_3: 356, d_4: 568, d_5: 689, d_6: 8 9 11, d_7: 9 11 14, d_8: 11 14 15. Thick segments are images of [6, 8].

Note that the control point d_i corresponds to the sequence u_{k+1}, ..., u_{k+m} of m knots, and that the control point d_{i+1} corresponds to the knot sequence u_{k+2}, ..., u_{k+m+1}. Thus, we can index each control point d_i by the sequence of m consecutive knots beginning with the knot onto which d_i is mapped. The interval [u_{k+1}, u_{k+m+1}], which is mapped affinely onto the line segment (d_i, d_{i+1}), goes from the leftmost knot in the sequence associated with d_i to the rightmost knot in the sequence associated with d_{i+1}. This is an easy way to remember which interval maps onto the line segment (d_i, d_{i+1}). For example, d_1 corresponds to 1, d_2 corresponds to 2, and d_8 corresponds to 11.

Note that any two consecutive line segments (d_i, d_{i+1}) and (d_{i+1}, d_{i+2}) correspond to intervals that overlap on m − 1 consecutive subintervals of the knot sequence. For example, the line segment (d_4, d_5) is the affine image of the interval [5, 9], which consists of the three subintervals [5, 6], [6, 8], and [8, 9]. The line segments (d_3, d_4) and (d_4, d_5) both contain images of the interval [5, 8], which consists of [5, 6] and [6, 8], and the line segments (d_4, d_5) and (d_5, d_6) both contain images of the interval [6, 9], which consists of [6, 8] and [8, 9]. The middle interval [6, 8] corresponds to the three line segments (d_3, d_4), (d_4, d_5), and (d_5, d_6).

In general, m + 1 consecutive control points d_i, ..., d_{i+m} (where d_i is mapped onto u_{k+1}) correspond to a sequence of 2m knots u_{k+1}, ..., u_{k+2m}, and the m consecutive line segments (d_i, d_{i+1}), ..., (d_{i+m−1}, d_{i+m}) on the control polygon are the affine images of intervals that overlap precisely on the middle interval [u_{k+m}, u_{k+m+1}]. Given such a sequence of 2m knots, for any parameter value t ∈ [u_{k+m}, u_{k+m+1}] in the middle interval, a point on the curve segment specified by the m + 1 control points d_i, ..., d_{i+m} is computed by repeated affine interpolation, as follows. Using the mapping

    t ↦ ((u_{k+m+j+1} − t)/(u_{k+m+j+1} − u_{k+j+1})) d_{i+j} + ((t − u_{k+j+1})/(u_{k+m+j+1} − u_{k+j+1})) d_{i+j+1},

which maps the interval [u_{k+j+1}, u_{k+m+j+1}] onto the line segment (d_{i+j}, d_{i+j+1}), where 0 ≤ j ≤ m − 1, we map t onto each line segment (d_{i+j}, d_{i+j+1}), which gives us a point d_{j,1}. Then, we consider the new control polygon determined by the m points d_{0,1}, ..., d_{m−1,1}, and we repeat the procedure.

During the second round, we map affinely each of the m − 1 intervals [u_{k+j+2}, u_{k+m+j+1}] onto the line segment (d_{j,1}, d_{j+1,1}), where 0 ≤ j ≤ m − 2, and for t ∈ [u_{k+m}, u_{k+m+1}], we get a point d_{j,2}. Note that each interval [u_{k+j+2}, u_{k+m+j+1}] now consists of m − 1 consecutive subintervals, and that the leftmost interval [u_{k+2}, u_{k+m+1}] starts at knot u_{k+2}, the immediate successor of the starting knot u_{k+1} of the leftmost interval used at the previous stage. The above round gives us a new control polygon determined by the m − 1 points d_{0,2}, ..., d_{m−2,2}, and we repeat the procedure. At every round, the number of consecutive intervals affinely mapped onto a line segment of the current control polygon decreases by one, and the starting knot of the leftmost interval used during this round is the (right) successor of the starting knot of the leftmost interval used at the previous round, so that at the m-th round we only have one interval, the middle interval [u_{k+m}, u_{k+m+1}], which is the intersection of the m original intervals [u_{k+j+1}, u_{k+m+j+1}], where 0 ≤ j ≤ m − 1. The point d_{0,m} obtained during the m-th round is a point on the curve segment.

Figure 6.2 illustrates the computation of the point corresponding to t = 7 on the spline of the previous figure. These points are also shown as labels of polar values. The interpolation ratio associated with the point d_{0,1}: 567 is (7 − 3)/(8 − 3) = 4/5; the ratio associated with d_{1,1}: 678 is (7 − 5)/(9 − 5) = 2/4 = 1/2; and the ratio associated with d_{2,1}: 789 is (7 − 6)/(11 − 6) = 1/5. At the second round, the interpolation ratio associated with d_{0,2}: 677 is (7 − 5)/(8 − 5) = 2/3, and the ratio associated with d_{1,2}: 778 is (7 − 6)/(9 − 6) = 1/3. Finally, the interpolation ratio associated with d_{0,3}: 777 is (7 − 6)/(8 − 6) = 1/2.
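The interval bookkeeping behind these ratios can be checked with a short sketch. This is an illustrative computation, not from the text; it uses exact rational arithmetic so the six ratios come out exactly as stated above:

```python
from fractions import Fraction

# Sketch of the de Boor interval bookkeeping for m = 3, t = 7, k = 5, with
# the knots 1, 2, 3, 5, 6, 8, 9, 11, 14, 15 indexed so that u_1 = 1.
# At round r, the point d_{j,r} is obtained by mapping the interval
# [u_{k-m+j+r}, u_{k+j+1}] affinely onto a side of the current control polygon.
u = {i: v for i, v in enumerate([1, 2, 3, 5, 6, 8, 9, 11, 14, 15], start=1)}
m, k, t = 3, 5, Fraction(7)

ratios = []
for r in range(1, m + 1):              # rounds 1 .. m
    for j in range(m - r + 1):         # points d_{0,r}, ..., d_{m-r,r}
        a, b = u[k - m + j + r], u[k + j + 1]
        ratios.append((t - a) / (b - a))   # interpolation ratio for d_{j,r}
print(ratios)  # the six ratios 4/5, 1/2, 1/5, 2/3, 1/3, 1/2 from the text
```

The rounds shrink from three intervals to one, and the last interval computed is the middle interval [6, 8], exactly as described.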

Figure 6.2: Part of a cubic spline with knot sequence ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ..., and construction of the point corresponding to t = 7, with intermediate points d_{0,1}: 567, d_{1,1}: 678, d_{2,1}: 789, d_{0,2}: 677, d_{1,2}: 778, and d_{0,3}: 777. Thick segments are images of [6, 8].

We recognize the progressive version of the de Casteljau algorithm presented in section 5.3. The bookkeeping which consists in labeling control points using sequences of m consecutive knots becomes clearer: it is used to keep track of the consecutive intervals (on the knot line) and of how they are mapped onto line segments of the current control polygon.

We can now provide a better intuition for the use of multiple knots. Going back to the previous example of the (portion of a) knot sequence

    ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ...

and the (portion of a) sequence of control points ..., d_1, d_2, d_3, d_4, d_5, d_6, d_7, d_8, ..., we can see that these control points determine five curve segments, corresponding to the intervals [3, 5], [5, 6], [6, 8], [8, 9], and [9, 11]. If we "squeeze" any of these intervals, say [5, 6], to the empty interval [5, 5], this will have the effect that the corresponding curve segment shrinks to a single point, and the result will be that the degree of continuity of the junction between the curve segments associated with [3, 5] and [5, 8] (which used to be [6, 8]) will be lower: we may not even have C^1-continuity, meaning that the tangents may not agree. If we also squeeze the interval [5, 8] to the empty interval, then we are also shrinking the corresponding curve segment to a single point, and now that 5 is a triple knot, the curve segments corresponding to the intervals [3, 5] and [5, 9] (previously [8, 9]) will join with even less continuity: in fact, we may even have a discontinuity at the parameter value 5. The extreme is to squeeze one more interval, say [5, 9], to the empty interval: this time, the curve segments associated with [3, 5] and [5, 11] (previously [9, 11]) may not even join at the point associated with the knot 5. Thus, we see how the knot multiplicity can be used to control the degree of continuity of joins between curve segments.

Figure 6.3 shows the construction of the Bézier control points of the five Bézier segments forming this part of the spline. We can now be more precise, and prove some results showing that splines are uniquely determined by de Boor control points (given a knot sequence).

6.2 Infinite Knot Sequences, Open B-Spline Curves

We begin with knot sequences. As usual, to distinguish between real numbers in R and points in A, we will denote knots as points of the real affine line A, as u ∈ A (as explained in section 5.4).

Definition 6.2.1. A knot sequence is a bi-infinite nondecreasing sequence ⟨u_k⟩_{k∈Z} of points u_k ∈ A (i.e., u_k ≤ u_{k+1} for all k ∈ Z), such that every knot in the sequence has finitely many occurrences.

Figure 6.3: Part of a cubic spline with knot sequence ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ..., and some of its Bézier control points (labeled 333, 555, 666, 888, 999, and 11 11 11).

A knot u_k in a knot sequence ⟨u_k⟩_{k∈Z} has multiplicity n (n ≥ 1) iff it occurs exactly n (consecutive) times in the knot sequence. A knot of multiplicity 1 is called a simple knot, and a knot u_k of multiplicity m + 1 is called a discontinuity (knot). A knot sequence has degree of multiplicity at most m + 1 iff every knot has multiplicity at most m + 1, that is, iff there are at most m + 1 occurrences of identical knots in the sequence. A knot sequence ⟨u_k⟩_{k∈Z} is uniform iff u_{k+1} = u_k + h for all k ∈ Z, for some fixed h ∈ R_+.

We can now define spline (B-spline) curves.

Definition 6.2.2. Given any natural number m ≥ 1, and any knot sequence ⟨u_k⟩_{k∈Z} of degree of multiplicity at most m + 1, a piecewise polynomial curve of degree m based on the knot sequence ⟨u_k⟩_{k∈Z} is a function F: A → E, where E is some affine space (of dimension at least 2), such that the following condition holds:

1. For every two consecutive distinct knots u_i < u_{i+1}, the restriction of F to [u_i, u_{i+1}[ agrees with a polynomial curve F_i of polar degree m, with associated polar form f_i (in the sense of the corresponding definition of chapter 5).

A spline curve F of degree m based on the knot sequence ⟨u_k⟩_{k∈Z} is a piecewise polynomial curve F: A → E such that, in addition, the following condition holds:

2. For every two consecutive distinct knots u_i < u_{i+1}, if u_{i+1} is a knot of multiplicity n, where 1 ≤ n ≤ m + 1, the next distinct knot being u_{i+n+1} (since we must have u_{i+1} = ... = u_{i+n} < u_{i+n+1}), then the curve segments F_i and F_{i+n} join with continuity (at least) C^{m−n} at u_{i+1}. In particular, if u_{i+1} is a discontinuity knot, that is, a knot of multiplicity m + 1, then we have C^{−1}-continuity, and F_i(u_{i+1}) and F_{i+n}(u_{i+1}) may differ.

The set F(A) is called the trace of the spline F.

Remarks:

(1) Note that by definition, F agrees with a polynomial curve F_i on the interval [u_i, u_{i+1}[, and with a polynomial curve F_{i+n} on the interval [u_{i+1}, u_{i+n+1}[, where n is the multiplicity of the knot u_{i+1} (1 ≤ n ≤ m + 1). Thus, for a knot sequence of degree of multiplicity at most m + 1, it is more convenient to index the curve segment on the interval [u_i, u_{i+1}[ by the last occurrence of the knot p = u_i in the knot sequence, rather than its first occurrence, so that the junction knot is indeed u_{i+1}.
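The multiplicity bookkeeping of definition 6.2.1 is easy to check mechanically. The following helper is illustrative only (the function names are ours, not from the text):

```python
from itertools import groupby

# Illustrative helpers (not from the text) for the multiplicity bookkeeping:
# list the knot multiplicities of a finite portion of a knot sequence, and
# check that its degree of multiplicity is at most m + 1.
def multiplicities(knots):
    """Return (knot, multiplicity) pairs for runs of consecutive equal knots."""
    return [(v, len(list(g))) for v, g in groupby(knots)]

def degree_of_multiplicity_at_most(knots, m):
    nondecreasing = all(a <= b for a, b in zip(knots, knots[1:]))
    return nondecreasing and all(n <= m + 1 for _, n in multiplicities(knots))

seq = [1, 2, 3, 5, 5, 5, 6, 8]                  # 5 is a triple knot
print(multiplicities(seq))                      # 5 has multiplicity 3
print(degree_of_multiplicity_at_most(seq, 3))   # fine for cubics: 3 <= m + 1
print(degree_of_multiplicity_at_most(seq, 1))   # too high for degree 1
```

For a cubic spline (m = 3), a triple knot such as 5 above is allowed, and, by condition (2) of definition 6.2.2, it forces the adjacent segments to join with only C^0-continuity.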

However, when u_{i+1} is a knot of multiplicity m + 1, we want the spline function F to be defined at u_{i+1}. This is achieved by requiring that the restriction of F to the interval [u_{i+1}, u_{i+m+2}[ agrees with a polynomial curve F_{i+m+1}, which ensures that F(u_{i+1}) = F_{i+m+1}(u_{i+1}). Since F agrees with F_i on [u_i, u_{i+1}[, the limit F(u_{i+1}−) of F(t) when t approaches u_{i+1} from below is

    F(u_{i+1}−) = lim_{t→u_{i+1}, t<u_{i+1}} F(t) = F_i(u_{i+1}),

and thus, although F is defined at u_{i+1}, it may be discontinuous there, since F_i(u_{i+1}) and F_{i+m+1}(u_{i+1}) may differ. For the curious reader, we mention that the kind of discontinuity arising at a discontinuity knot u_{i+1} is called a discontinuity of the first kind (see Schwartz, [71]). We could have instead used intervals ]u_{i+1}, u_{i+m+2}], open on the left, where we require that the restriction of F to the open interval ]u_{i+1}, u_{i+m+2}] agrees with a polynomial curve F_{i+m+1}, and by giving an arbitrary value to F(u_{i+1}), we would have

    F(u_{i+1}−) = lim_{t→u_{i+1}, t<u_{i+1}} F(t) = F_i(u_{i+1})    and    F(u_{i+1}+) = lim_{t→u_{i+1}, t>u_{i+1}} F(t) = F_{i+m+1}(u_{i+1}).

We could also have used a more symmetric approach, but this would complicate the treatment of control points (we would need special control points corresponding to the "discontinuity values" F(u_{i+1})). The first option seems to be the one used in most books. Practically, discontinuities are rare anyway, and one can safely ignore these subtleties.

(2) If we assume that there are no discontinuities, that is, that every knot has multiplicity ≤ m, then clause (1) can be simplified a little bit: we simply require that the restriction of F to the closed interval [u_i, u_{i+1}] agrees with a polynomial curve F_i of polar degree m, since we have at least C^0-continuity; in this case, F(u_{i+1}) = F_i(u_{i+1}) = F_{i+n}(u_{i+1}).

(3) The number m + 1 is often called the order of the B-spline curve.

(4) Note that no requirements at all are placed on the joins of a piecewise polynomial curve, which amounts to saying that each join has continuity C^{−1} (but may be better).

(5) It is possible for a piecewise polynomial curve, or for a spline curve, to have C^m continuity at all joins: this is the case when F is a polynomial curve! In this case, there is no advantage in viewing F as a spline.

The following figure represents a part of a cubic spline based on the uniform knot sequence

    ..., 0, 1, 2, 3, 4, 5, 6, 7, ...

For simplicity of notation, the polar values are denoted as triplets of consecutive knots u_i u_{i+1} u_{i+2}, when in fact they should be denoted as f_k(u_i, u_{i+1}, u_{i+2}), for some appropriate k.

Figure 6.4: Part of a cubic spline with knot sequence ..., 0, 1, 2, 3, 4, 5, 6, 7, ..., with polar value labels 012, 123, 234, 345, 456, 567.

When considering part of a spline curve obtained by restricting our attention to a finite subsequence of an infinite knot sequence, we often say that the spline has floating ends. Figure 6.5 shows the construction of the control points for the three Bézier curve segments constituting this part of the spline (with floating ends).

By the corresponding lemma of chapter 5, the curve segments F_i and F_{i+n} join with continuity C^{m−n} at q = u_{i+1} (where 1 ≤ n ≤ m + 1) iff their polar forms f_i: A^m → E and f_{i+n}: A^m → E agree on all multisets of points that contain at most m − n points distinct from q = u_{i+1}, or equivalently, iff the polar forms f_i and f_{i+n} agree on all multisets (supermultisets) of m points from A containing the multiset {u_{i+1}, ..., u_{i+n}} = {q, ..., q} (n copies of q).

Figure 6.5: Construction of part of a cubic spline with knot sequence ..., 0, 1, 2, 3, 4, 5, 6, 7, ..., showing the Bézier control points 122, 222, 223, 233, 333, 334, 344, 444, 445, 455, 555, 556 together with the spline control points 012, 123, 234, 345, 456, 567.

As we will prove, the continuity conditions for joining the curve segments forming a spline impose constraints on the polar forms of adjacent curve segments. In fact, two non-adjacent curve segments F_i and F_j are still related to each other, as long as the number j − i of intervening knots is at most m.

Lemma 6.2.3. Given any m ≥ 1, and any knot sequence ⟨u_k⟩_{k∈Z} of degree of multiplicity at most m + 1, for any piecewise polynomial curve F of (polar) degree m based on the knot sequence ⟨u_k⟩_{k∈Z}, the curve F is a spline iff the following condition holds: for all i, j, where i < j ≤ i + m, u_i < u_{i+1}, and u_j < u_{j+1}, the polar forms f_i and f_j agree on all multisets of m elements from A (supermultisets) containing the multiset of intervening knots {u_{i+1}, u_{i+2}, ..., u_j}.

Proof. First, suppose that the curve segments F_i fit together to form a spline. Consider any two curve segments F_i and F_j, where i < j ≤ i + m, u_i < u_{i+1}, and u_j < u_{j+1}, and let n_1, n_2, ..., n_h be the multiplicities of the knots of the intervening sequence of knots {u_{i+1}, ..., u_j}, so that n_1 + n_2 + ... + n_h = j − i. We proceed by induction on h. Since F_i and F_{i+n_1} join with C^{m−n_1}-continuity at u_{i+1} = u_{i+n_1}, by the continuity criterion of chapter 5, f_i and f_{i+n_1} agree on all multisets containing the multiset {u_{i+1}, ..., u_{i+n_1}}. By the induction hypothesis, f_{i+n_1} and f_j agree on all multisets containing the multiset {u_{i+n_1+1}, ..., u_j}, and thus f_i and f_j agree on all multisets containing the multiset {u_{i+1}, u_{i+2}, ..., u_j}.

In the other direction, if the polar forms f_i and f_j agree on all multisets of m elements from A containing the multiset of intervening knots {u_{i+1}, ..., u_j}, then in particular, for any two adjacent curve segments F_i and F_{i+n} (1 ≤ n ≤ m), f_i and f_{i+n} agree on all multisets containing the multiset {u_{i+1}, ..., u_{i+n}} = {q, ..., q} (n copies of q = u_{i+1}), which by the continuity criterion of chapter 5 implies that the join at u_{i+1} = u_{i+n} has continuity at least C^{m−n}. When n = m + 1, the condition is vacuous, but that's fine, since C^{−1}-continuity does not impose any continuity at all!

The following figure shows part of a cubic spline corresponding to the knot sequence ..., r, s, u, v, t, ..., with C^2-continuity at the knots s, u, v. Recall that C^2-continuity at the knots s, u, v means that, if we let f, g, h, k denote the polar forms corresponding to the intervals [r, s], [s, u], [u, v], and [v, t], then f and g agree on all (multiset) triplets including the argument s, g and h agree on all triplets including the argument u, and h and k agree on all triplets including the argument v (and, by transitivity, corresponding constraints relate all four polar forms).

Figure 6.6: Part of a cubic spline with C^2-continuity at the knots s, u, v, with polar value labels such as f = g = h = k(s, u, v), f = g(r, s, s), f = g = h(s, s, u), g = h(u, u, u), g = h = k(u, u, v), h = k(v, v, v), f(r, r, r), and k(t, t, t).

The fact that a knot u of multiplicity 2 generally corresponds to a join with only C^1-continuity can be derived by the following reasoning.

If we assume that the points such as f(r, r, r), f(r, r, s), f(r, s, s), and k(t, t, t), whose labels do not involve u or v, do not change, and if we let u and v converge to a common value, then, in the limit, the curve segment between g(u, u, v) and h(u, v, v) (displayed as a dashed curve) is contracted to the single point g(u, u, u), and the two curve segments adjacent to it end up joining at the new point g(u, u, u), only with C^1-continuity, generally with different acceleration vectors. Note that C^1-continuity at the double knot u corresponds to the fact that the polar forms g and k agree on all (multiset) triplets that include two copies of the argument u; in particular, the point g(u, u, u) = k(u, u, u) lies on the line segment between f = g(s, u, u) and k(u, u, t). The following figure illustrates this situation.

Figure 6.7: Part of a cubic spline with C^1-continuity at the double knot u.

Let us now go back to our original figure showing part of a cubic spline corresponding to the knot sequence ..., r, s, u, v, t, ..., with C^2-continuity at the knots s, u, v, and let us see what happens when the knot s becomes a knot of multiplicity 3.

Figure 6.8: Part of a cubic spline with C^2-continuity at the knots s, u, v (the original configuration, repeated for reference).

The fact that a knot s of multiplicity 3 generally corresponds to a join with only C^0-continuity can be derived by the following reasoning. As in the previous case, if we assume that the points whose labels involve neither u nor v do not change, and if we let s, u, and v converge to a common value, then, in the limit, the two curve segments whose endpoints involve u and v (displayed as dashed curves) are contracted to the single point f(s, s, s) (since the polar forms g and h vanish), and the two remaining curve segments end up joining at f(s, s, s), only with C^0-continuity, generally with different tangents. Note that C^0-continuity at the triple knot s corresponds to the fact that the polar forms f and k agree on the (multiset) triplet s, s, s, that is, k(s, s, s) = f(s, s, s). The following figure illustrates this situation.

If F is a spline curve of degree m, let us see what lemma 6.2.3 tells us about the values of polar forms on multisets of arguments of the form {u_{k+1}, ..., u_{k+m}}.

Figure 6.9: Part of a cubic spline with C^0-continuity at the triple knot s, with polar value labels f = k(s, s, s), f(r, r, r), f(r, r, s), f(r, s, s), k(s, s, t), k(s, t, t), and k(t, t, t).

Consider i < j such that k ≤ i < j ≤ k + m, u_i < u_{i+1}, and u_j < u_{j+1}. Then, the multiset {u_{i+1}, ..., u_j} is a submultiset of the multiset {u_{k+1}, ..., u_{k+m}}, and by lemma 6.2.3, f_i and f_j agree on {u_{k+1}, ..., u_{k+m}}, so that f_i(u_{k+1}, ..., u_{k+m}) = f_j(u_{k+1}, ..., u_{k+m}). The polar values f_i(u_{k+1}, ..., u_{k+m}), where u_i < u_{i+1} and k ≤ i ≤ k + m, are called de Boor points, and they play an important role for splines. Basically, they play for splines the role that Bézier control points play for polynomial curves. This is confirmed formally by the following theorem, which can be viewed as a generalization of the corresponding lemma for Bézier control points in chapter 4. The proof is due to Ramshaw.

Theorem 6.2.4. Given any m ≥ 1, and any knot sequence ⟨u_k⟩_{k∈Z} of degree of multiplicity at most m + 1, for any bi-infinite sequence ⟨d_k⟩_{k∈Z} of points in some affine space E, there exists a unique spline curve F: A → E of degree m based on the knot sequence ⟨u_k⟩_{k∈Z}, such that the following condition holds:

    d_k = f_i(u_{k+1}, ..., u_{k+m}),

for all k, i, where k ≤ i ≤ k + m and u_i < u_{i+1}.

Proof. First, assume that such a spline curve F exists. Let u_i < u_{i+1} be two consecutive distinct knots, and consider the sequence of 2m knots u_{i−m+1}, ..., u_{i+m}, centered at the interval [u_i, u_{i+1}]. Since we have u_{i−m+1} ≤ ... ≤ u_i < u_{i+1} ≤ ... ≤ u_{i+m}, the sequence is progressive. Then, by the theorem of section 5.3 on progressive sequences, there is a unique polynomial curve F_i of polar degree m such that f_i(u_{k+1}, ..., u_{k+m}) = d_k, for all k with i − m ≤ k ≤ i. Since these requirements uniquely determine every curve segment F_i, we have shown that the spline curve F is unique, if it exists.

Next, we need to show the existence of such a spline curve. For this, we have to show that the curve segments F_i fit together in the right way to form a spline with the required continuity conditions. We will use lemma 6.2.3. Let i, j be such that i < j ≤ i + m, u_i < u_{i+1}, and u_j < u_{j+1}. We must show that the polar forms f_i and f_j agree on all multisets of m elements from A (supermultisets) containing the multiset of intervening knots {u_{i+1}, ..., u_j}.

We can consider i as fixed, and think of the inequalities as i − m ≤ k ≤ i. Since the de Boor points determine each curve segment, we know that f_i(u_{k+1}, ..., u_{k+m}) = f_j(u_{k+1}, ..., u_{k+m}) = d_k for all k with j − m ≤ k ≤ i; that is, f_i and f_j agree on the rows of the following parallelogram of sequences of m consecutive knots:

    u_{j−m+1}  u_{j−m+2}  ...  u_i  u_{i+1}  ...  u_j
    u_{j−m+2}  u_{j−m+3}  ...  u_{i+1}  u_{i+2}  ...  u_{j+1}
       ...
    u_{i+1}  u_{i+2}  ...  u_j  u_{j+1}  ...  u_{i+m}

Now, cut the middle j − i columns out of this parallelogram, and collapse the remaining two triangles to form a smaller parallelogram. Let l = m − j + i, and let g_i: A^l → E and g_j: A^l → E denote the symmetric multiaffine functions of l arguments defined as follows: for all v_1, ..., v_l ∈ A,

    g_i(v_1, ..., v_l) = f_i(v_1, ..., v_l, u_{i+1}, ..., u_j),    and    g_j(v_1, ..., v_l) = f_j(v_1, ..., v_l, u_{i+1}, ..., u_j).

Proving that f_i and f_j agree on all supermultisets of {u_{i+1}, ..., u_j} is equivalent to proving that g_i = g_j. Note that the sequence of 2l knots u_{j−m+1}, ..., u_i, u_{j+1}, ..., u_{i+m} is progressive, since each element of the left half is strictly less than each element of the right half (since u_i < u_{i+1} ≤ u_j < u_{j+1}). Since f_i and f_j agree on the previous parallelogram, g_i and g_j agree on the l + 1 sequences associated with this progressive sequence, which, by the theorem of section 5.3 on progressive sequences, implies that g_i = g_j, and thus that f_i = f_j on all the required supermultisets.

Note that the inequality k ≤ i ≤ k + m can be interpreted in two ways. If we think of k as fixed, the theorem tells us which curve segments F_i of the spline F are influenced by the specific de Boor point d_k: the de Boor point d_k influences at most m + 1 curve segments, and this maximum is achieved when all the knots are simple. This does not depend on the knot multiplicity. On the other hand, if we think of i as fixed, the theorem tells us which de Boor points influence the specific curve segment F_i: there are m + 1 de Boor points that influence the curve segment F_i.

Given a spline curve F: A → E, it is important to realize that F itself is generally not the polar form of some (unique) symmetric multiaffine map, although for every knot u_i such that u_i < u_{i+1}, the curve segment F_i that is the restriction of F to [u_i, u_{i+1}[ is defined by a unique symmetric multiaffine map. We can only say that F is a piecewise polar form. Nevertheless, when the index i is clear from the context, it is convenient to denote polar values f_i(u_{k+1}, ..., u_{k+m}) simply as f(u_{k+1}, ..., u_{k+m}), omitting the subscript i, and we will often do so. We may even denote polar values f_i(t_1, ..., t_m) as f(t_1, ..., t_m).
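The index bookkeeping in the remark above is simple enough to state in code. This is an illustrative sketch (the helper names are ours):

```python
# Reading k <= i <= k + m with k fixed gives the (at most) m + 1 segments F_i
# influenced by the de Boor point d_k; reading it with i fixed gives the
# m + 1 de Boor points d_{i-m}, ..., d_i that influence the segment F_i.
def segments_influenced_by(k, m):
    """Indices i of the curve segments F_i influenced by d_k."""
    return list(range(k, k + m + 1))

def de_boor_points_for_segment(i, m):
    """Indices k of the de Boor points d_k that influence F_i."""
    return list(range(i - m, i + 1))

m = 3
print(segments_influenced_by(5, m))      # four segment indices for a cubic
print(de_boor_points_for_segment(5, m))  # four de Boor point indices
```

Both lists have m + 1 entries, matching the two readings of the inequality.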

We have to be a little careful, however, to deal with knots of multiplicity ≥ 2, and this notation can be ambiguous. A clean way to handle this overloading problem is to define the notion of a validity interval.

Given a knot sequence ⟨u_k⟩_{k∈Z} of degree of multiplicity at most m + 1, if we know that for some nonempty set S of indices, the polar forms f_i, where i ∈ S, agree on the multiset of arguments {u_1, ..., u_m}, we denote this common value as f_S(u_1, ..., u_m). Given a nonempty open interval I = ]p, q[, we define f_I(u_1, ..., u_m) as the common value f_S(u_1, ..., u_m), where S = {i | ]u_i, u_{i+1}[ ∩ I ≠ ∅}, provided that this common value makes sense. The interval I is called a validity interval. According to this convention, when i < j ≤ i + m, u_i < u_{i+1}, and u_j < u_{j+1}, the common value of f_i and f_j is denoted f_{]u_i, u_{j+1}[}(u_{i+1}, ..., u_j, t_{j−i+1}, ..., t_m), where the arguments t_{j−i+1}, ..., t_m are arbitrary. In particular, the terms f_{]u_i, u_{i+m+1}[}(u_{i+1}, ..., u_{i+m}) (the de Boor points) are always well defined. Under some reasonable conventions, validity intervals can be inferred automatically, omitting validity intervals in polar forms; Ramshaw introduced two such conventions, tame overloading and wild overloading. Although helpful, this machinery is quite complex, and for details, the interested reader is referred to Ramshaw [65]. We prefer using the notation f_I.

Having considered the case of bi-infinite knot sequences, we will next consider what kinds of adjustments are needed to handle finite knot sequences and infinite cyclic knot sequences. These adjustments are minor.

6.3 Finite Knot Sequences, Finite B-Spline Curves

In the case of a finite knot sequence, we have to deal with the two end knots. A reasonable method is to assume that the end knots have multiplicity m + 1. This way, the first curve segment is unconstrained at its left end, and the last curve segment is unconstrained at its right end. Actually, multiplicity m will give the same results, but multiplicity m + 1 allows us to view a finite spline curve as a fragment of an infinite spline curve delimited by two discontinuity knots. Another reason for letting the end knots have multiplicity m + 1 is that it is needed for the inductive definition of B-spline basis functions (see section 6.7).

Definition 6.3.1. Given any natural numbers m ≥ 1 and N ≥ 0, a finite knot sequence of degree of multiplicity at most m + 1 with N intervening knots is any finite nondecreasing sequence ⟨u_k⟩_{−m ≤ k ≤ N+m+1} of 2(m + 1) + N knots, such that u_{−m} < u_{N+m+1}, the end knots u_{−m} = ... = u_0 and u_{N+1} = ... = u_{N+m+1} have multiplicity m + 1, and every knot u_k has multiplicity at most m + 1. A knot of multiplicity 1 is called a simple knot, and a knot u_k of multiplicity m + 1 is called a discontinuity (knot).

Given a finite knot sequence ⟨u_k⟩_{−m ≤ k ≤ N+m+1}, we now define the number L of subintervals in the knot sequence. If N = 0, the knot sequence ⟨u_k⟩_{−m ≤ k ≤ m+1} consists of 2(m + 1) knots, where u_{−m} and u_{m+1} are distinct and of multiplicity m + 1, and we let L = 1. If N ≥ 1, then we let L − 1 ≥ 1 be the number of distinct knots in the sequence u_1, ..., u_N. If the multiplicities of the L − 1 distinct knots in the sequence u_1, ..., u_N are n_1, ..., n_{L−1} (where 1 ≤ n_i ≤ m + 1, and 1 ≤ i ≤ L − 1), then N = n_1 + ... + n_{L−1}, and the knot sequence ⟨u_k⟩_{−m ≤ k ≤ N+m+1} consists of 2(m + 1) + N = 2(m + 1) + n_1 + ... + n_{L−1} knots. Moreover, if v_1, ..., v_{L−1} is the subsequence of leftmost occurrences of distinct knots in the sequence u_1, ..., u_N, then v_i = u_{n_1 + ... + n_{i−1} + 1}, with the convention that n_1 + ... + n_0 = 0 when i − 1 = 0. Thus, a finite knot sequence of length 2(m + 1) + N, containing L + 1 distinct knots, looks as follows:

    u_{−m} ... u_0   u_1 ... u_{n_1}   u_{n_1+1} ... u_{n_1+n_2}   ...   u_{N−n_{L−1}+1} ... u_N   u_{N+1} ... u_{N+m+1}
      m + 1             n_1                   n_2                              n_{L−1}                    m + 1

where N = n_1 + ... + n_{L−1}. The picture above gives a clear idea of the knot multiplicities.

We can now define finite B-spline curves based on finite knot sequences. Note that some authors use the terminology closed B-splines for such B-splines. We do not favor this terminology, since it is inconsistent with the traditional meaning of a closed curve. Instead, we propose to use the terminology finite B-splines.
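The counting in definition 6.3.1 can be checked with a small sketch. The helper below is illustrative (the names and the clamped-construction convenience function are ours, not from the text):

```python
# A sketch of the counting in definition 6.3.1: a finite knot sequence with
# end knots of multiplicity m + 1 and N intervening knots has 2(m + 1) + N
# knots, and L - 1 distinct intervening knots give L curve segments.
def clamped_sequence(m, intervening, u_start, u_end):
    """Build u_{-m}, ..., u_{N+m+1}; `intervening` lists the N intervening
    knots with their multiplicities written out, in nondecreasing order."""
    return [u_start] * (m + 1) + list(intervening) + [u_end] * (m + 1)

m, intervening = 3, [2, 3, 3]        # N = 3; distinct intervening knots: 2, 3
seq = clamped_sequence(m, intervening, 1, 6)
N = len(intervening)
L = len(set(intervening)) + 1        # the spline consists of L curve segments
print(len(seq) == 2 * (m + 1) + N)   # the length formula checks out
print(L)
```

Here n_1 = 1 (for the knot 2) and n_2 = 2 (for the double knot 3), so N = n_1 + n_2 = 3 and L = 3, consistent with the formula N = n_1 + ... + n_{L−1}.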

Definition 6.3.2. Given any natural numbers m ≥ 1 and N ≥ 0, given any finite knot sequence ⟨u_k⟩_{−m ≤ k ≤ N+m+1} of degree of multiplicity at most m + 1 and with N intervening knots, a piecewise polynomial curve of degree m based on the finite knot sequence is a function F: [u_0, u_{N+1}] → E, where E is some affine space (of dimension at least 2), such that the following condition holds:

1. If N = 0, then F: [u_0, u_{m+1}] → E agrees with a polynomial curve F_0 of polar degree m, with associated polar form f_0. When N ≥ 1 and L ≥ 2, for every two consecutive distinct knots u_i < u_{i+1}: if 0 ≤ i ≤ N − n_{L−1}, then the restriction of F to [u_i, u_{i+1}[ agrees with a polynomial curve F_i of polar degree m, with associated polar form f_i (in the sense of the corresponding definition of chapter 5); and if i = N, then the restriction of F to [u_N, u_{N+1}] agrees with a polynomial curve F_N of polar degree m, with associated polar form f_N. (Note that we also want the last curve segment F_N to be defined at u_{N+1}.)

A spline curve F of degree m based on the finite knot sequence ⟨u_k⟩_{−m ≤ k ≤ N+m+1}, or for short, a finite B-spline, is a piecewise polynomial curve F: [u_0, u_{N+1}] → E such that, in addition, the following condition holds:

2. For every two consecutive distinct knots u_i < u_{i+1}, if u_{i+1} has multiplicity n ≤ m + 1, then the curve segments F_i and F_{i+n} join with continuity (at least) C^{m−n} at u_{i+1}.

The set F([u_0, u_{N+1}]) is called the trace of the finite spline F.

Remark: The remarks about discontinuities made after definition 6.2.2 also apply. Note that a spline curve based on such a finite knot sequence consists of L curve segments, F_0, F_{n_1}, F_{n_1+n_2}, ..., F_{n_1 + ... + n_{L−1}} = F_N. Note also that if we assume that u_0 and u_{N+1} have multiplicity m, then we get the same curve. However, using multiplicity m + 1 allows us to view a finite spline as a fragment of an infinite spline, and it is needed for the inductive definition of B-spline basis functions (see section 6.7).

Let us look at some examples of splines. To simplify notation, in the following figures representing cubic splines, polar values are denoted as triplets of consecutive knots u_i u_{i+1} u_{i+2}, when in fact they should be denoted as f_k(u_i, u_{i+1}, u_{i+2}), for some appropriate k. Figure 6.11 shows the construction of the control points for the Bézier curve segments constituting the previous spline.

6.2.6.10: A cubic spline with knot sequence 1. 6. 3. 1.4. B-SPLINE CURVES 123 456 566 112 666 111 234 345 Figure 6.212 CHAPTER 6.1. 5. 6 . 1.

1. 6.3.4. FINITE KNOT SEQUENCES.6. 1. FINITE B-SPLINE CURVES 213 123 456 556 566 122 222 112 223 455 555 233 333 444 445 666 111 234 334 344 345 Figure 6. 3.1.6.11: Construction of a cubic spline with knot sequence 1. 5.2.6. 6 .

Figure 6.12 shows another cubic spline, with knot sequence 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 7, 7, 7, and figure 6.13 shows the construction of the control points of the Bézier curves that constitute this spline.

The next two figures show a cubic spline with knot sequence 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 7, 8, 9, 10, 11, 11, 11, 12, 13, 13, 13, 13, with double knot 7 and triple knot 11, together with the construction of the control points of the Bézier curves that constitute the spline. Since the knot sequence is non-uniform, the interpolation ratios vary from node to node.

The example in figure 6.16 shows a cubic spline based on the non-uniform knot sequence 0, 0, 0, 0, 1, 2, 3, 5, 6, 7, 8, 8, 9, 10, 10, 10, 10. The non-uniformity of the knot sequence causes the Bézier control points 333 and 555 to be pulled towards the edges of the control polygon, and thus the corresponding part of the spline is also pulled towards the control polygon. Figure 6.17 shows the construction of the control points of the Bézier curves that constitute this spline.

6.4 Cyclic Knot Sequences, Closed (Cyclic) B-Spline Curves

We now consider closed B-spline curves over cyclic knot sequences. First, we define cyclic knot sequences.

Definition 6.4.1. Given natural numbers L, N, T, with L >= 2 and N >= L, a cyclic knot sequence of period L, cycle length N, and period size T, is any bi-infinite nondecreasing sequence ⟨u_k⟩_{k∈Z} of points u_k ∈ A (i.e., u_k <= u_{k+1} for all k ∈ Z), such that there is some subsequence u_{j+1}, ..., u_{j+N}

Figure 6.12: A cubic spline with knot sequence 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 7, 7, 7

Figure 6.13: Construction of a cubic spline with knot sequence 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 7, 7, 7

Figure 6.14: Another cubic spline

Figure 6.15: Construction of a cubic spline with knot sequence 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 7, 8, 9, 10, 11, 11, 11, 12, 13, 13, 13, 13

Figure 6.16: Another cubic spline

Figure 6.17: Construction of another cubic spline


of N consecutive knots containing exactly L distinct knots, with multiplicities n1 , . . . , nL , uj+N < uj+N +1, and uk+N = uk + T , for every k ∈ Z. Note that we must have N = n1 + · · · + nL (and ni ≥ 1). Given any natural number m ≥ 1, a cyclic knot sequence of period L, cycle length N, and period size T , has degree of multiplicity at most m, iﬀ every knot has multiplicity at most m. As before, a knot sequence (ﬁnite, or cyclic) is uniform iﬀ uk+1 = uk + h, for some ﬁxed h ∈ R+ . A cyclic knot sequence of period L, cycle length N, and period size T is completely determined by a sequence of N + 1 consecutive knots, which looks as follows (assuming for simplicity that the index of the starting knot of the cycle that we are looking at is k = 1) . . . , u1 , . . . , un1 , un1 +1 , . . . , un1 +n2 , . . . , un1 +···+nL−1 +1 , . . . , uN , uN +1 , . . . ,

where the successive groups of equal knots have multiplicities n_1, n_2, ..., n_L,

or, showing the knot multiplicities more clearly, as

..., u_1, ..., u_{n_1}, u_{n_1+1}, ..., u_{n_1+n_2}, ..., u_{N-n_L+1}, ..., u_N, u_{N+1}, ...,

with group multiplicities n_1, n_2, ..., n_L, where N = n_1 + ... + n_L, u_N < u_{N+1}, and u_{k+N} = u_k + T, for every k ∈ Z. Observe that we can think of the knots as labeling N points distributed on a circle, where any two knots u_{k+N} and u_k + T label the same point. We will see when we define closed spline curves that it does not make sense to allow knots of multiplicity m + 1 in a cyclic knot sequence. This is why the multiplicity of knots is at most m.

The following is an example of a cyclic knot sequence of period 3, cycle length N = 3, and period size 5:

..., 1, 3, 4, 6, 8, 9, 11, 13, ....

Another example, of a cyclic knot sequence of period 4, cycle length N = 5, and period size 6, is:

..., 1, 2, 2, 5, 6, 7, 8, 8, 11, 12, 13, ....

We now define B-spline curves based on cyclic knot sequences. Some authors use the terminology cyclic B-splines, and we will also call them closed B-splines.
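A cyclic knot sequence is completely determined by one cycle of N knots and the period size T, via u_{k+N} = u_k + T. The following small sketch (ours, not from the text) generates a stretch of the period-3 example above and checks this defining property:

```python
def cyclic_knots(cycle, T, copies):
    """Knots of a cyclic knot sequence, generated from one cycle of
    N consecutive knots via u_{k+N} = u_k + T."""
    N = len(cycle)
    return [cycle[k % N] + (k // N) * T for k in range(copies * N)]

# The text's example: period L = 3 (distinct knots 1, 3, 4),
# cycle length N = 3, period size T = 5.
knots = cyclic_knots([1, 3, 4], 5, 3)
print(knots)  # [1, 3, 4, 6, 8, 9, 11, 13, 14]
```

The generated knots reproduce the sequence ..., 1, 3, 4, 6, 8, 9, 11, 13, ... displayed above, and the sequence is nondecreasing, as required by definition 6.4.1.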


Figure 6.18: A cyclic knot sequence

Definition 6.4.2. Given any natural number m >= 1, given any cyclic knot sequence ⟨u_k⟩_{k∈Z} of period L, cycle length N, period size T, and of degree of multiplicity at most m, a piecewise polynomial curve of degree m based on the cyclic knot sequence ⟨u_k⟩_{k∈Z} is a function F : A → E, where E is some affine space (of dimension at least 2), such that, for any two consecutive distinct knots u_i < u_{i+1}, if u_{i+1} is a knot of multiplicity n, the next distinct knot being u_{i+n+1}, then the following condition holds:

1. The restriction of F to [u_i, u_{i+1}] agrees with a polynomial curve F_i of polar degree m, with associated polar form f_i, and F_{i+N}(t + T) = F_i(t), for all t ∈ [u_i, u_{i+1}].

A spline curve F of degree m based on the cyclic knot sequence ⟨u_k⟩_{k∈Z}, or for short, a closed (or cyclic) B-spline, is a closed piecewise polynomial curve F : A → E, such that, for every two consecutive distinct knots u_i < u_{i+1}, the following condition holds:

2. The curve segments F_i and F_{i+n} join with continuity (at least) C^{m-n} at u_{i+1}, in the sense of definition 5.5.1, where n is the multiplicity of the knot u_{i+1} (1 <= n <= m).

The set F (A) is called the trace of the closed spline F.

Remarks:


(1) Recall that a cyclic knot sequence of period L, cycle length N, and period size T satisfies the property that u_{k+N} = u_k + T, for every k ∈ Z. We must have u_{i+N} = u_i + T and u_{i+1+N} = u_{i+1} + T, and thus, the curve segment F_{i+N} is defined on the interval [u_i + T, u_{i+1} + T], so it makes sense to require the periodicity condition F_{i+N}(t + T) = F_i(t), for all t ∈ [u_i, u_{i+1}].

(2) In the case of a closed spline curve, it does not make sense to allow discontinuities, because if we do, we obtain a spline curve with discontinuities, which is not a closed curve in the usual mathematical sense. Thus, we allow knots of multiplicity at most m (rather than m + 1). When every knot is simple, we have L = N.

Note that a closed spline curve based on a cyclic knot sequence of period L, cycle length N, and period size T, with N = n_1 + ... + n_L, u_N < u_{N+1}, and u_{k+N} = u_k + T for every k ∈ Z, consists of L curve segments (as many as the period of the knot sequence).

Some examples of closed splines are given next. The example in figure 6.19 shows a closed cubic spline based on the uniform cyclic knot sequence

..., 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, ...

of period L = 16, cycle length N = 16, and period size T = 16. As usual, for simplicity of notation, the control points are denoted as triplets of consecutive knots u_i u_{i+1} u_{i+2}. The cyclicity condition is

f_k(u_i, u_{i+1}, u_{i+2}) = f_{k+16}(u_i + 16, u_{i+1} + 16, u_{i+2} + 16),

which we write more simply as u_i u_{i+1} u_{i+2} = (u_i + 16)(u_{i+1} + 16)(u_{i+2} + 16).
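The periodicity condition F_{i+N}(t + T) = F_i(t) can be checked numerically using the de Boor algorithm of section 6.5. The sketch below is our own (not from the text): it evaluates a closed cubic spline over the uniform cyclic knot sequence u_k = k with periodic control points, and exhibits F(t + T) = F(t).

```python
import math

def closed_spline_point(d_cycle, t, m=3):
    """de Boor evaluation of a closed degree-m spline on the uniform
    cyclic knot sequence u_k = k, with periodic de Boor control points
    d_k = d_cycle[k % N], so that N = cycle length = period size T."""
    N = len(d_cycle)
    I = math.floor(t)                      # u_I <= t < u_{I+1}
    d = [d_cycle[(I - m + j) % N] for j in range(m + 1)]
    for r in range(1, m + 1):              # rounds of the algorithm
        for j in range(m, r - 1, -1):
            a = (t - (j + I - m)) / (m - r + 1)   # knots are u_k = k
            d[j] = (1 - a) * d[j - 1] + a * d[j]
    return d[m]

# Periodic control points (complex numbers standing for points of the
# plane), with N = T = 5: the resulting cubic spline is closed.
pts = [0 + 0j, 2 + 0j, 3 + 2j, 1 + 3j, -1 + 2j]
print(closed_spline_point(pts, 2.3))       # equals F(2.3 + 5) below
print(closed_spline_point(pts, 7.3))
```

Since the knots and the control points both repeat with period 5, the two printed points coincide, which is exactly the condition F_{i+N}(t + T) = F_i(t).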

Since the knot sequence is uniform, all the interpolation ratios are equal to 1/3, 2/3, or 1/2.
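For a uniform cubic knot sequence, these three ratios give the familiar conversion from four consecutive de Boor control points to the Bézier control points of one segment: the inner Bézier points sit at ratios 1/3 and 2/3 along the middle control leg, and each junction point is the midpoint (ratio 1/2) of the two neighbouring inner points. A minimal sketch of this construction (the helper name is ours, not from the text):

```python
def bezier_from_de_boor(d0, d1, d2, d3):
    """Bezier points of the uniform cubic segment controlled by four
    consecutive de Boor points d0, d1, d2, d3."""
    b1 = (2 * d1 + d2) / 3          # ratio 1/3 on [d1, d2]
    b2 = (d1 + 2 * d2) / 3          # ratio 2/3 on [d1, d2]
    b0 = (d0 + 4 * d1 + d2) / 6     # midpoint of (d0 + 2 d1)/3 and b1
    b3 = (d1 + 4 * d2 + d3) / 6     # midpoint of b2 and (2 d2 + d3)/3
    return b0, b1, b2, b3

# Collinear, equally spaced de Boor points give collinear,
# equally spaced Bezier points.
print(bezier_from_de_boor(0.0, 3.0, 6.0, 9.0))  # (3.0, 4.0, 5.0, 6.0)
```

Applying the helper to overlapping windows of de Boor points yields matching junction Bézier points, which is the C^2 join of adjacent segments at a simple knot.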


Figure 6.19: A closed cubic spline with cyclic knot sequence

Figure 6.20: Construction of a closed cubic spline with cyclic knot sequence


The example in ﬁgure 6.21 shows a closed cubic spline based on the non-uniform cyclic knot sequence . . . , −3, −2, 1, 3, 4, 5, 6, 9, 11, 12, . . . of period L = 5, cycle length N = 5, and period size T = 8. As usual, for simplicity of notation, the control points are denoted as triplets of consecutive knots ui ui+1 ui+2 . The cyclicity condition is fk (ui , ui+1 , ui+2 ) = fk+5 (ui + 8, ui+1 + 8, ui+2 + 8), which we write more simply as ui ui+1 ui+2 = (ui + 8)(ui+1 + 8)(ui+2 + 8).

Because the knot sequence is non-uniform, the interpolation ratios vary. The interpolation ratios for the nodes labeled 113, 133, 334, 344, 445, 455, 556, 566, 669, −2 11 = 699, 111, 333, 444, 555, 666 are

1/3, 5/6, 1/2, 3/4, 1/3, 2/3, 1/5, 2/5, 1/6, 2/3, 3/5, 2/3, 1/2, 1/2, 1/4.

It is easy to show that lemma 6.2.3 can be simply modified to hold for finite spline curves and for closed spline curves. We can also modify theorem 6.2.4 as follows.

Theorem 6.4.3. (1) Given any m >= 1, and any finite knot sequence ⟨u_k⟩_{-m<=k<=N+m+1} of degree of multiplicity at most m + 1, for any sequence d_{-m}, ..., d_N of N + m + 1 points in some affine space E, there exists a unique spline curve F : [u_0, u_{N+1}] → E, such that d_k = f_i(u_{k+1}, ..., u_{k+m}), for all k, i, where -m <= k <= N, u_i < u_{i+1}, and k <= i <= k + m.

(2) Given any m >= 1, and any cyclic knot sequence ⟨u_k⟩_{k∈Z} of period L, cycle length N, period size T, and of degree of multiplicity at most m, for any bi-infinite periodic sequence ⟨d_k⟩_{k∈Z} of period N of points in some affine space E, i.e., a sequence such that d_{k+N} = d_k for all k ∈ Z, there exists a unique closed spline curve F : A → E, such that d_k = f_i(u_{k+1}, ..., u_{k+m}), for all k, i, where u_i < u_{i+1} and k <= i <= k + m.

Proof. (1) The finite knot sequence is of the form

u_{-m}, ..., u_0, u_1, ..., u_{n_1}, u_{n_1+1}, ..., u_{n_1+n_2}, ..., u_{n_1+...+n_{L-2}+1}, ..., u_N, u_{N+1}, ..., u_{N+m+1},

where the successive groups of equal knots have multiplicities m + 1, n_1, n_2, ..., n_{L-1}, m + 1,

Figure 6.21: Another cubic spline with cyclic knot sequence

Figure 6.22: Construction of another closed cubic spline

and since the first and the last knot have multiplicity m + 1, it consists of 2(m + 1) + N knots, where N = n_1 + ... + n_{L-1}. Note that there are (2(m + 1) + N) - m + 1 = m + N + 3 subsequences of m consecutive knots in the entire knot sequence, but only m + N + 1 subsequences of m consecutive knots in the knot sequence u_{-m+1}, ..., u_{N+m}; thus, there are indeed m + N + 1 control points associated with the original knot sequence. Observe that d_{-m} = f_0(u_{-m+1}, ..., u_0) and d_N = f_N(u_{N+1}, ..., u_{N+m}). The proof of theorem 6.2.4 applied to the distinct knots in the knot sequence shows that the curve segments F_i are uniquely defined and fit well together at the joining knots.

Note that if we assume that u_0 and u_{N+1} have multiplicity m, the above reasoning shows that the first and the last knots u_{-m} and u_{N+m+1} can be safely ignored, since they do not contribute to any control points. From a practical point of view, this means that the sequences u_{-m}, ..., u_{-1} and u_{N+2}, ..., u_{N+m+1} do not correspond to any control points (but this is fine!).

(2) It will be enough to consider a cycle of the knot sequence, say

..., u_1, ..., u_{n_1}, u_{n_1+1}, ..., u_{n_1+n_2}, ..., u_{N-n_L+1}, ..., u_N, u_{N+1}, ...,

where N = n_1 + ... + n_L, u_N < u_{N+1}, and u_{k+N} = u_k + T, for every k ∈ Z, so that in particular u_{i+N} = u_i + T. If a closed spline curve satisfying the conditions exists, then since the cycle length N of the knot sequence agrees with the periodicity N of the sequence of control points, the proof of theorem 6.2.4 applied to the distinct knots in the above cycle shows that the curve segments F_i are uniquely determined, and the periodicity condition on curve segments, namely F_{i+N}(t + T) = F_i(t) for all t ∈ [u_i, u_{i+1}], implies that the sequence of control points d_k = f_i(u_{k+1}, ..., u_{k+m}) is periodic of period N. Conversely, given a periodic sequence ⟨d_k⟩_{k∈Z} of control points with period N, let u_i < u_{i+1} be two consecutive distinct knots in the sequence, and consider the sequence of 2m knots u_{i-m+1}, ..., u_{i+m} centered at the interval [u_i, u_{i+1}]. The rest of the proof is similar to the proof of part (1).

We now reconsider the de Boor algorithm, present the very useful "knot insertion" method, and look at some properties of spline curves.

6.5 The de Boor Algorithm

In section 5.3, we presented the progressive version of the de Casteljau algorithm. Its relationship to spline curves is that, given a knot sequence ⟨u_k⟩ (infinite, finite, or cyclic) and a sequence ⟨d_k⟩ of control points (corresponding to the nature of the knot sequence), in order to compute the point F(t) on the spline curve F determined by ⟨u_k⟩ and ⟨d_k⟩, for any parameter t ∈ A (where t ∈ [u_0, u_{N+1}] in the case of a finite spline), we just have to find the interval [u_I, u_{I+1}] for which u_I <= t < u_{I+1}, and then apply the progressive version of the de Casteljau algorithm, starting from the m + 1 control points indexed by the sequences u_{I-m+k}, ..., u_{I+k-1}, where 1 <= k <= m + 1.

For the general case, we translate all knots by I - m, so let us assume for simplicity that I = m, since the indexing will be a bit more convenient; in this case, [u_m, u_{m+1}] is the middle of the sequence u_1, ..., u_{2m} of length 2m. Recall that F(t) = f(t, ..., t) is computed by iteration as the point b_{0,m} determined by the inductive computation

b_{k,j} = ((u_{m+k+1} - t)/(u_{m+k+1} - u_{k+j})) b_{k,j-1} + ((t - u_{k+j})/(u_{m+k+1} - u_{k+j})) b_{k+1,j-1},

where

b_{k,0} = f(u_{k+1}, ..., u_{k+m}) = d_k, for 0 <= k <= m,

and

b_{k,j} = f(t, ..., t, u_{k+j+1}, ..., u_{m+k}), with j copies of t, for 1 <= j <= m and 0 <= k <= m - j.

The computation proceeds by rounds: during round j, the points b_{0,j}, ..., b_{m-j,j} are computed from those of round j - 1 by affine interpolation. Such a computation can be conveniently represented in triangular form, with the control points b_{0,0}, ..., b_{m,0} in the first column and b_{0,m} = F(t) in the last.

If t = u_m and the knot u_m has multiplicity r (1 <= r <= m), then u_{m-r+1} = ... = u_m = t, and

b_{0,m-r} = f(t, ..., t, u_{m-r+1}, ..., u_m) = f(t, ..., t) = F(t),

so we only need to start with the m - r + 1 control points b_{0,0}, ..., b_{m-r,0}, and we only need to construct the part of the triangle above the ascending diagonal ending at b_{0,m-r}.

In order to present the de Boor algorithm, it is convenient to index the points b_{k,j} differently. First, rather than indexing the points on the j-th column with an index k always starting from 0 and running up to m - j, we index them with an index i starting at j + 1 and always ending at m + 1. Second, we label the starting control points as d_{1,0}, ..., d_{m+1,0}; under the new indexing, the points b_{0,j}, ..., b_{m-j,j} of round j correspond to the points d_{j+1,j}, ..., d_{m+1,j}, that is, b_{k,j} corresponds to d_{k+j+1,j}.

Rewriting the triangular computation with the new indexing, and letting i = k + j + 1, the inductive relation becomes

d_{i,j} = ((u_{m+i-j} - t)/(u_{m+i-j} - u_{i-1})) d_{i-1,j-1} + ((t - u_{i-1})/(u_{m+i-j} - u_{i-1})) d_{i,j-1},

where 1 <= j <= m - r and j + 1 <= i <= m + 1 - r, with d_{i,0} = d_{i-1} for 1 <= i <= m + 1 - r, where r is the multiplicity of u_m when t = u_m, and r = 0 otherwise. The point F(t) on the spline curve is d_{m+1-r,m-r}.

Finally, in order to deal with the general case where t ∈ [u_I, u_{I+1}[, we translate all the knot indices by I - m, which does not change differences of indices, and we get the equation

d_{i,j} = ((u_{m+i-j} - t)/(u_{m+i-j} - u_{i-1})) d_{i-1,j-1} + ((t - u_{i-1})/(u_{m+i-j} - u_{i-1})) d_{i,j-1},

where 1 <= j <= m - r and I - m + j + 1 <= i <= I + 1 - r, with d_{i,0} = d_{i-1} when I - m + 1 <= i <= I + 1 - r, and where r is the multiplicity of u_I when t = u_I, and r = 0 when u_I < t < u_{I+1} (1 <= r <= m). The point F(t) on the spline curve is d_{I+1-r,m-r}. Note that other books often use a superscript for the "round index" j, and write our d_{i,j} as d_i^j.

The de Boor algorithm can be described as follows in "pseudo-code":

begin
  I = max{k | u_k <= t < u_{k+1}};
  if t = u_I then r := multiplicity(u_I) else r := 0 endif;
  for i := I - m + 1 to I + 1 - r do
    d_{i,0} := d_{i-1}
  endfor;
  for j := 1 to m - r do
    for i := I - m + j + 1 to I + 1 - r do
      d_{i,j} := ((u_{m+i-j} - t)/(u_{m+i-j} - u_{i-1})) d_{i-1,j-1} + ((t - u_{i-1})/(u_{m+i-j} - u_{i-1})) d_{i,j-1}
    endfor
  endfor;
  F(t) := d_{I+1-r,m-r}
end

6.6 The de Boor Algorithm and Knot Insertion

The process of knot insertion consists of inserting a knot w into a given knot sequence without altering the spline curve. The knot w may be new, or may coincide with some existing knot of multiplicity r < m, in which case the effect is to increase the multiplicity of w by 1. Knot insertion can be used either to construct new control points, such as the Bézier control points associated with the curve segments forming a spline curve, or even for computing a point on a spline curve. If I is the largest knot index such that u_I <= w < u_{I+1}, inserting the knot w will affect the m - 1 - r control points f(u_{I-m+k+1}, ..., u_{I+k}) associated with the sequences

u_{I-m+k+1}, ..., u_{I+k} containing the subinterval [u_I, u_{I+1}], where 1 <= k <= m - 1 - r, and where r is the multiplicity of u_I if w = u_I (with 1 <= r < m), and r = 0 if u_I < w. After insertion of w, we get the new knot sequence ⟨v_k⟩, where v_k = u_k for all k <= I, v_{I+1} = w, and v_{k+1} = u_k for all k >= I + 1. Thus, we need to compute the m - r new control points

f(v_{I-m+k+1}, ..., v_{I+k}), where 1 <= k <= m - r,

which are just the polar values corresponding to the m - r subsequences of m consecutive knots v_{I-m+k+1}, ..., v_{I+k} spanning m - 1 consecutive subintervals (w = v_{I+1} belongs to one of these subintervals). We can use the de Boor algorithm to compute the new m - r control points; in fact, note that these points constitute the first column obtained during the first round of the de Boor algorithm.

For example, given the knot sequence

..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ...,

insertion of the knot 7 yields the knot sequence

..., 1, 2, 3, 5, 6, 7, 8, 9, 11, 14, 15, ...,

and in the cubic case, the three polar values f(5, 6, 7), f(6, 7, 8), and f(7, 8, 9) need to be computed, while the two polar values f(5, 6, 8) and f(6, 8, 9) no longer exist. Thus, we can describe knot insertion in "pseudo-code" as follows:

begin
  I = max{k | u_k <= w < u_{k+1}};
  if w = u_I then r := multiplicity(u_I) else r := 0 endif;
  for i := I - m + 1 to I + 1 - r do
    d_{i,0} := d_{i-1}
  endfor;

  for i := I - m + 2 to I + 1 - r do
    d_{i,1} := ((u_{m+i-1} - w)/(u_{m+i-1} - u_{i-1})) d_{i-1,0} + ((w - u_{i-1})/(u_{m+i-1} - u_{i-1})) d_{i,0}
  endfor;
  return d_{I-m+2,1}, ..., d_{I+1-r,1}
end

Figure 6.23 illustrates the process of inserting the knot t = 7 in the knot sequence ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ... above. The interpolation ratios associated with the points d_{1,1}, d_{2,1}, and d_{3,1} are

(7 - 3)/(8 - 3) = 4/5,  (7 - 5)/(9 - 5) = 1/2,  (7 - 6)/(11 - 6) = 1/5.

Note that evaluation of a point F(t) on the spline curve amounts to repeated knot insertions: we perform m - r rounds of knot insertion, to raise the original multiplicity r of the knot t to m (again, r = 0 if t is distinct from all existing knots). For instance, evaluation of the point F(7) on the spline curve above consists in inserting the knot t = 7 three times.

It is also possible to formulate a version of knot insertion where a whole sequence of knots, as opposed to a single knot, is inserted. Such an algorithm, often coined the "Oslo algorithm", is described in Cohen, Lyche, and Riesenfeld [20]. We leave the formulation of such an algorithm as an exercise to the reader (for help, one may consult Risler [68] or Barsky [4]).

An amusing application of this algorithm is that it yields simple rules for inserting knots at every interval midpoint of a uniform cyclic knot sequence. In the case of a quadratic B-spline curve specified by a closed polygon of de Boor control points, for every edge (b_i, b_{i+1}), two new control points b'_{2i+1} and b'_{2i+2} are created according to the formulae

b'_{2i+1} = (3/4) b_i + (1/4) b_{i+1}  and  b'_{2i+2} = (1/4) b_i + (3/4) b_{i+1}.

This is Chaikin's subdivision method [18]. For a cubic B-spline curve specified by a closed polygon of de Boor control points, for any three consecutive control points b_i, b_{i+1}, and b_{i+2}, two new control points are created according to the formulae

b'_{2i+1} = (1/2) b_i + (1/2) b_{i+1}  and  b'_{2i+2} = (1/8) b_i + (6/8) b_{i+1} + (1/8) b_{i+2}.

It is rather straightforward to give an equivalent (and in fact, more illuminating) presentation of this algorithm in terms of polar forms.
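Both the de Boor algorithm and single-knot insertion can be transcribed into Python. The sketch below is ours, not from the text: it uses the common 0-indexed layout in which a degree-m spline on knots u_0, ..., u_K has K - m control points (so the Greville abscissas become ξ_i = (u_{i+1} + ... + u_{i+m})/m in this indexing), and it assumes the inserted knot w is not already a knot. As a check, it relies on the linear precision property reviewed in section 6.7: control points read off a line at the Greville abscissas reproduce the line, and knot insertion leaves the curve unchanged.

```python
def de_boor(t, m, knots, ctrl):
    """Point F(t) on the degree-m spline: m rounds of affine
    interpolations, exactly as in the de Boor algorithm."""
    # knot span I with knots[I] <= t < knots[I+1]
    I = next(i for i in range(m, len(knots) - m - 1)
             if knots[i] <= t < knots[i + 1])
    d = [ctrl[I - m + j] for j in range(m + 1)]
    for r in range(1, m + 1):
        for j in range(m, r - 1, -1):
            lo, hi = knots[I - m + j], knots[I + 1 + j - r]
            a = (t - lo) / (hi - lo)
            d[j] = (1 - a) * d[j - 1] + a * d[j]
    return d[m]

def insert_knot(w, m, knots, ctrl):
    """Insert a new knot w once: m - 1 old control points are replaced
    by m new ones; the curve itself is unchanged."""
    I = max(i for i in range(len(knots) - 1) if knots[i] <= w)
    new_ctrl = ctrl[:I - m + 1]
    for i in range(I - m + 1, I + 1):
        a = (w - knots[i]) / (knots[i + m] - knots[i])
        new_ctrl.append((1 - a) * ctrl[i - 1] + a * ctrl[i])
    new_ctrl += ctrl[I:]
    return knots[:I + 1] + [w] + knots[I + 1:], new_ctrl

m = 3
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 4.0, 4.0, 4.0]
# Greville abscissas as control points: by linear precision, F(t) = t.
ctrl = [sum(knots[i + 1: i + m + 1]) / m for i in range(len(knots) - m - 1)]
knots2, ctrl2 = insert_knot(1.5, m, knots, ctrl)
print(de_boor(2.5, m, knots2, ctrl2))   # still the line: ~2.5
```

After insertion, the new control points again sit at the Greville abscissas of the refined knot sequence, which is a convenient way to test an implementation.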

Figure 6.23: Part of a cubic spline with knot sequence ..., 1, 2, 3, 5, 6, 8, 9, 11, 14, 15, ...; insertion of the knot t = 7
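The quadratic corner-cutting rules quoted above (Chaikin's scheme) are easy to apply directly. A minimal sketch (the function name is ours, not from the text), subdividing a closed quadratic control polygon once:

```python
def chaikin(poly):
    """One round of Chaikin subdivision on a closed control polygon:
    each edge (b_i, b_{i+1}) yields the two points
    3/4 b_i + 1/4 b_{i+1}  and  1/4 b_i + 3/4 b_{i+1}."""
    n = len(poly)
    out = []
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        out.append(tuple(0.75 * a + 0.25 * b for a, b in zip(p, q)))
        out.append(tuple(0.25 * a + 0.75 * b for a, b in zip(p, q)))
    return out

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(chaikin(square)[:2])  # [(0.25, 0.0), (0.75, 0.0)]
```

Iterating this rule, the control polygon converges to the closed uniform quadratic B-spline curve it controls, since each round is exactly midpoint knot insertion on the uniform cyclic knot sequence.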

Spline curves inherit a number of the properties of Bézier curves, which we now briefly review.

Affine invariance. This property is inherited from the corresponding property of the Bézier segments, since the de Boor algorithm consists of affine interpolation steps.

Strong convex hull property. Every point on the spline curve lies in the convex hull of no more than m + 1 nearby de Boor control points.

Endpoint interpolation. In the case of a finite knot sequence u_0, ..., u_0, u_1, u_2, ..., u_{N-1}, u_N, u_{N+1}, ..., u_{N+1}, where u_0 and u_{N+1} are of multiplicity m + 1, a spline curve passes through the first and the last de Boor control points.

Local control. If one of the de Boor control points is changed, this affects only m + 1 curve segments.

Linear precision. Given a finite knot sequence in which u_0 and u_{N+1} are of multiplicity m + 1, and given the N + m + 1 knot averages (known as Greville abscissas)

ξ_i = (u_i + u_{i+1} + ... + u_{i+m-1})/m, where 0 <= i <= N + m,

for any straight line of the form l(u) = au + b, if we read off control points on this line at the Greville abscissas, the resulting spline curve reproduces the straight line.

Variation diminishing property. The spline curve is not intersected by any straight line more often than is the control polygon. An easy way to prove this property is to use knot insertion, since it consists of affine interpolation steps (corner cutting): insert every knot until it has multiplicity m; at that stage, the control polygon consists of Bézier subpolygons, for which the variation diminishing property has already been established.

6.7 Polar Forms of B-Splines

Our multiaffine approach to spline curves led us to the de Boor algorithm, and we learned that the current point F(t) on a spline curve F can be expressed as an affine combination of its de Boor control points. In the traditional theory of Bézier curves, a Bézier curve is viewed as the result of blending together its Bézier control points, using the Bernstein basis polynomials as the weights.
The coeﬃcients of the de Boor points can be viewed as real functions of the real parameter t. An easy way to prove this property is to use knot insertion.

B-splines are usually not defined in terms of the de Boor algorithm, but instead in terms of a recurrence relation, or by a formula involving so-called divided differences. Similarly, a spline curve can be viewed as the result of blending together its de Boor control points, using certain piecewise polynomial functions called B-splines as the weights. We will now show briefly how the multiaffine approach to spline curves leads to the recurrence relation defining B-splines. For simplicity, we will only consider an infinite knot sequence ⟨u_k⟩_{k∈Z} where every knot has multiplicity at most m + 1, since the adaptations needed to handle a finite or a cyclic knot sequence are quite trivial. We define B-splines as follows.

Definition 6.7.1. Given an infinite knot sequence ⟨u_k⟩_{k∈Z}, where every knot has multiplicity at most m + 1, the j-th normalized (univariate) B-spline B_{j,m+1} : A → A of order m + 1 is the unique spline curve whose de Boor control points are the reals x_i = δ_{i,j}, where δ_{i,j} is the Kronecker delta symbol, such that δ_{i,j} = 1 iff i = j, and δ_{i,j} = 0 otherwise.

Thus, B-splines are piecewise polynomial functions that can be viewed as spline curves whose range is the affine line A, and not curves in the traditional sense. Some authors use the notation N_j^m (or N_{j,m+1} : A → A).

Remark: The word "curve" is sometimes omitted when referring to B-spline curves, which may cause a slight confusion with the (normalized) B-splines B_{j,m+1}, which are functions. Thus, to avoid ambiguities, we tried to be careful in keeping the word "curve" whenever necessary.

Remark: The normalized B-spline B_{j,m+1} is actually of degree <= m, and it would perhaps make more sense to denote it as B_{j,m}, but the notation B_{j,m+1} is well established.

Given any spline curve (really, any B-spline curve) F : A → E over the knot sequence ⟨u_k⟩_{k∈Z} (where E is an arbitrary affine space), defined by the de Boor control points ⟨d_k⟩_{k∈Z}, where d_k = f_{]u_k,u_{k+m+1}[}(u_{k+1}, ..., u_{k+m}), we know that for every t ∈ [u_k, u_{k+1}[, for every k such that u_k < u_{k+1},

F_k(t) = Σ_j B_{j,m+1,k}(t) d_j = Σ_j B_{j,m+1,k}(t) f_{]u_j,u_{j+m+1}[}(u_{j+1}, ..., u_{j+m}),

where B_{j,m+1,k} is the segment of B_{j,m+1} over [u_k, u_{k+1}[; thus, the B_{j,m+1,k} are indeed the weights used for blending the de Boor control points of a spline curve, which shows the equivalence with the traditional approach. If we polarize both sides of this equation, we get

f_k(t_1, ..., t_m) = Σ_j b_{j,m+1,k}(t_1, ..., t_m) f_{]u_j,u_{j+m+1}[}(u_{j+1}, ..., u_{j+m}),
. and not curves in the traditional sense. we tried to be careful in keeping the word “curve” whenever necessary.7. where dk = f]uk .m+1 : A → A) of order m + 1. . the j-th normalized (univariate) B-spline Bj.k is the segment forming the spline curve Bj. . tm ) = j bj.k (t) dj = j Bj.k are indeed the weights used for blending the de Boor control points of a spline curve. where every knot has multiplicity at most m+1.uj+m+1 [ (uj+1 . but the notation Bj.uj+m+1 [ (uj+1 . and deﬁned by the de Boor control points dk k∈Z . where Bj. we know that Fk (t) = j Bj. Some authors use the notation Njm . we get fk (t1 . . .j = 1 iﬀ i = j. . thus showing the equivalence with the traditional approach.

. . . . . . this lattice will have triangular notches cut out of the bottom of it. . . tm }. . where we let k vary. . . forming a kind of lattice with m + 1 rows. . we have computed the inﬂuence of the chosen j-th de Boor control point on all segments fk of the spline. . if we let xi = δi. . uj+m ). .k (t1 . Our goal is to ﬁnd a recurrence formula for bj. More speciﬁcally. If there are multiple knots. . . . We discover that the triangles used in the computation of the polar values fk (t1 . . . . . . . and the bottom row consists of the values yk = fk (t1 . k].m+1.24 shows a portion of this lattice in the case of a cubic spline. and bj. . Intuitively. .k is null. . the polynomial Bj. where each subsequent row is computed from the row above by taking aﬃne combinations controlled by one of the arguments ti .k (t1 . . .m+1. tm ) is the sum of the products of the edge-weights along all descending m paths from xj to yk (there are such paths). omitting the node labels. . . tm ) overlap in large parts. tm ). Fixing an index j. and the bottom row with the polar values yk . If we label the intermediate nodes as we do this top-down computation. and this is what generates a triangular hole of height n at the bottom of the lattice. .m+1.k . Recall that the computation can be arranged into a triangle.m+1. tm ) (where uk < uk+1). tm ) associated with the normalized B-spline Bj. if tk+1 = . It is easily seen that the label assigned to every node is the sum of the products of the edge-weights along all descending paths from xj to that node.k (t1 . and it is just the de Boor algorithm. POLAR FORMS OF B-SPLINES 239 where bj. Thus. whose heights correspond to knot multiplicity.uj+m+1 [ (uj+1 . Recall that the above sum only contains at most m + 1 nonzero factors. .m+1 . . at the ﬁxed multiset {t1 .7. but this time. as speciﬁed in the de Boor algorithm. . The top row consists of the de Boor control points dj = f]uj . tm } and we compute fk (t1 .k . . . 
The de Boor algorithm provides a method for computing the polar value fk (t1 . It is interesting to draw the same lattice in the case where the object space is the aﬃne line A. .k is the polar form of Bj.k (t1 . .m+1. = tk+n . since Fk is inﬂuenced by the m + 1 de Boor control points dj only when j ∈ [k − m. The nodes of the above lattice are labeled to reﬂect the computation of the polar value fk (t1 . k]. . tm ). . The ﬁrst method proceeds from the top-down. . . m−j . for j outside [k − m. tm ). . then we have a computation triangle for fk and for fk+n . .m+1. . We now show how this second lattice can be used to compute the polar value bj. . we obtain another interpretation for bj.uj+m+1 [ (uj+1 . An interesting phenomenon occurs if we ﬁx the multiset {t1 . .j for the top nodes. Figure 6. but there are no triangles for fk+1 through fk+n−1 . uj+m).m+1. . then the bottom row node yk gives the value bj.m+1. tm ) from the de Boor control points f]uj . except that we label the top row with the de Boor points xi (which are real numbers). . . . tm ). .6. .

[Figure 6.24: Computation scheme of the de Boor algorithm when m = 3.]

[Figure 6.25: The computation lattice underlying the de Boor algorithm when m = 3.]

m+1. . . the sequence of tensors (b⊙ j. . Bj. tm ).k (t1 . . As a consequence. If we consider the tensored version b⊙ j. the above equations show that the sequence of symmetric tensors (b⊙ j.k (t1 .k of bj. and Cox: 1 if t ∈ [uj .k .m+1. the above recurrence equations show the necessity for the knots u−m and uN +m+1 . .k )k−m≤j≤k forms a basis of the vector space m m A .m+1.j . t − uj uj+m+1 − t Bj. and we let yi = δi. . . We then assign labels to the nodes in a bottom-up fashion. This justiﬁes the introduction of end knots u0 and uN +1 of multiplicity m + 1. This time. . . the polynomials (Bj.k () = δj. . .m. it is easy to see that the label of the top row node xj is the polar value bj. tm ): We have bj.k . uj+1 [ 0 otherwise. . One will observe that bj. . the bottom-up approach chooses a spline segment fk and computes how this segment is inﬂuenced by all de Boor control points.m (t) + Bj+1. Remark: In the case of a ﬁnite spline based on a ﬁnite knot sequence uk −m≤k≤N +m+1 .k )k−m≤j≤k is the dual basis of the basis of symmetric tensors (uj+1 · · · uj+m )k−m≤j≤k .m+1 (t) = uj+m − uj uj+m+1 − uj+1 Bj. . . the dual space of ∗ A. The bottom-up approach yields a recurrence relation for computing bj. B-SPLINE CURVES The symmetry of this second approach suggests a bottom-up computation. uj+m+1 [. tm ) = bj. Note that the combinations involved are no longer aﬃne (as in the top-down approach). .m+1. de Boor. . due to Mansﬁeld.m.m+1 . . taking linear combinations as speciﬁed by the edge-weights.k for the bottom nodes. Thus. Intuitively. we choose a ﬁxed k with uk < uk+1 . . A nice property about this recurrence relation is that it is not necessary to know the degre m ahead of time if we want to compute the B-spline Bj. uj+m+1 − uj+1 If we set all ti to t. . we get the standard recurrence relation deﬁning B-splines. . .m+1. .1.m+1.k (t1 . tm − uj bj.242 CHAPTER 6. and drop the subscript k. tm−1 ) + uj+m − uj uj+m+1 − tm bj+1. .m+1.k of the homogenized version bj. 
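The Mansfield/de Boor/Cox recurrence can be transcribed almost verbatim; here is a minimal sketch (function name hypothetical), with the usual convention that a term with a zero denominator is dropped:

```python
def bspline_basis(j, order, t, u):
    """Normalized B-spline B_{j,order}(t), where order = m + 1 for degree m,
    via the Mansfield / de Boor / Cox recurrence; u[i] is the knot u_i."""
    if order == 1:
        return 1.0 if u[j] <= t < u[j + 1] else 0.0
    left = right = 0.0
    if u[j + order - 1] > u[j]:        # drop terms with zero denominator
        left = ((t - u[j]) / (u[j + order - 1] - u[j])
                * bspline_basis(j, order - 1, t, u))
    if u[j + order] > u[j + 1]:
        right = ((u[j + order] - t) / (u[j + order] - u[j + 1])
                 * bspline_basis(j + 1, order - 1, t, u))
    return left + right

# Partition of unity on a uniform knot sequence, and the value of the
# uniform cubic B_{0,4} on [0, 1), where B_{0,4}(t) = t^3/6.
s = sum(bspline_basis(j, 4, 5.3, list(range(12))) for j in range(8))
b = bspline_basis(0, 4, 0.5, [0, 1, 2, 3, 4])
```

As remarked above, the recurrence never needs the degree ahead of time: asking for order m + 1 simply adds one more level of recursion.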
ti+m ) = δi.1 (t) = It is easily shown that Bj.m+1.k (t1 .k )k−m≤j≤k are linearly independent. but under the symmetric path interpretation.k (t1 .m+1.m+1 is null outside [uj . tm−1 ).k (ti+1 .m+1. .m (t).

the reader is referred to de Boor [22]. 0. 4 . 2. or in various other ways. 1. 2. 0. 1.6. 1) f (0. 1) = 2f (0. 0. 2) Figure 6. t2 . 2. 2. 2) 243 f (0. it is immediate that the answer is f (t1 . 2. We would like to view this spline curve F as a spline curve G of degree 4. 0. t2 . For example. 1) f (0.26 shows a cubic spline curve F based on the knot sequence 0. 1. in order for the spline curve to retain the same order of continuity. 2. Then. POLAR FORMS OF B-SPLINES f (0. 1. or Piegl and Tiller [62]. Risler [68]. 2) f (1. 1. First. First. 0) f (2. 2. 0. 0. 1. 1). Hoschek and Lasser [45]. We conclude this section by showing how the multiaﬃne approach to B-spline curves yields a quick solution to the “degree-raising” problem. 0. 1) = f (0. t3 . we have g(t1 .5. we must increase the degree of multiplicity of each knot by 1. Figure 6. 1. t3 . t4 ) . 2. 0. 2) f (1. 0. 4 It is then an easy matter to compute the de Boor control points of the quartic spline curve G. 0. For this. 0.26: A cubic spline F with knot sequence 0. 1. 2. 2 It is also possible to deﬁne B-splines using divided diﬀerences. 0. 1. t2 . Farin [32]. 2. 1) f (1. 2. 0. 1. 1) + 2f (0. t4 ) + f (t2 . we need to ﬁnd the polar form g of this spline curve G of degree 4. For more on B-splines. so that the quartic spline curve G is based on the knot sequence 0. 0. t3 . t4 ) = g(0.7. 0. 2. in terms of the polar form f of the original cubic spline curve F . 2. t4 ) + f (t1 . 2. we consider an example. t3 ) + f (t1 .

1. 1. . 2) and f (1.244 g(0. i=1 where the hat over the argument ti indicates that this argument is omitted. vk+1 ]. 1. it is ﬁrst necessary to form a new knot sequence vk k∈Z .1.27 shows the de Boor control points of the quartic spline curve G and its associated control polygon. 1. In the general case. 1. 1) g(1. 1. 2. 2. . as in section 5. ti . 0. 1) + 2f (0. Figure 6. 1. . 1. 0. 2) = We note that the control polygon of G is “closer” to the curve. . 2) g(0. 0) g(2. 1. tm+1 ) = m+1 i=m+1 fk (t1 . since if the left-hand side is well deﬁned. 2. . 0. This is a general phenomenon. 0. 2) Figure 6. 1. 4 The point g(0. . 1). 1. the control polygon converges to the spline curve. given a spline curve F of degree m based on this knot sequence. where vk < vk+1 . 2. B-SPLINE CURVES g(0. 1) g(1. . 1. 2) + f (1. 2) . Then. where each knot is of multiplicity at most m + 1. It can be shown that as the degree is raised. . with the multiplicity of each knot incremented by 1. given a knot sequence sequence uk k∈Z. 1. . 0. We observe that the above expression is indeed well deﬁned. . . g(0.27: Degree raising of a cubic spline as a quartic G and 2f (0. 2. 1. since the multiplicity of every knot goes down by 1. 2) g(1. 1) CHAPTER 6. 2) is easily seen to be the middle of the line segment between f (0. the polar form gk of the spline curve segment G of degree m + 1 over the interval [uv . 2) g(0. 1. then the right-hand side is also well deﬁned. . which consists of the knots in the sequence uk k∈Z . . in bold lines. is related to the polar form fk of the original spline curve F of degree m by the identity 1 gk (t1 . tm+1 ).
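The degree-raising identity gk(t1, ..., tm+1) = (1/(m+1)) Σi fk(t1, ..., t̂i, ..., tm+1) translates directly into code. A sketch, treating a polar form as a plain function of its m arguments (names hypothetical):

```python
def raise_degree(f, m):
    """Polar form g of the same curve viewed with degree m + 1:
    g(t1, ..., t_{m+1}) = (1/(m+1)) * sum_i f(..., t_i omitted, ...)."""
    def g(*ts):
        assert len(ts) == m + 1
        return sum(f(*(ts[:i] + ts[i + 1:])) for i in range(m + 1)) / (m + 1)
    return g

# f(a, b, c) = abc is the polar form of the cubic F(t) = t^3; the
# degree-raised polar form must agree with F on the diagonal.
g4 = raise_degree(lambda a, b, c: a * b * c, 3)
```

Evaluating g at the appropriate multisets of knots then yields the de Boor control points of the degree-raised spline, as in the quartic example above.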

. vl+m+1 ) = 1 m+1 i=m+1 f]uk . and we consider one of the most common problems which can be stated as follows: Problem 1: Given N +1 data points x0 . which amounts to the assumption that the de Boor control points d0 and dN are known. for all i. uN . and where the hat over the argument v l+i indicates that this argument is omitted. Thus. . . v l+m+2 ] = [uk . . and the last control point dN +1 coincides with xN . since Lagrange interpolants tend to oscillate in an undesirable manner. However. e 6. . dN . for a uniform knot sequence. Actually. and a C 2 cubic spline curve F passing through these points. The control points d0 and d7 = dN were chosen arbitrarily. 0 ≤ i ≤ N. we are looking for N + 1 de Boor control points d0 . . . since the ﬁrst control point d−1 coincides with x0 . we only come up with N − 1 equations expressing x1 . uN . using the de Boor evaluation algorithm. Unlike the problem of approximating a shape by a smooth curve. dN +1 . v l+m+1 ). u1 . and a sequence of N +1 knots u0 . Cubic spline curves happen to do very well for a large class of interpolation problems. u0 . . the de Boor control points of the spline curve G of degree m + 1 are given by the formula g]vl . we can try to ﬁnd the de Boor control points of a C 2 cubic spline curve F based on the ﬁnite knot sequence u0 . . . . .vl+m+2 [ (v l+1 . There are a number of interpolation problems. such that F (ui ) = xi . Thus. the above problem has two degrees of freedom. uk+m+1 ].6. . xN −1 in terms of the N + 1 unknown variables d0 . . . as in the case of B´zier curves. 0 ≤ i ≤ N − 1. uN . . In general. . . . dN .8. with ui < ui+1 for all i. and it is under-determined. Thus. . . . xN . . interpolation problems require ﬁnding curves passing through some given data points and possibly satisfying some extra constraints.28 shows N + 1 = 7 + 1 = 8 data points. . . . . uN −1 . . . ﬁnd a C 2 cubic spline curve F . 
It is well known that Lagrange interpolation is not very satisfactory when N is greater than 5. . . For example. To remove these degrees of freedom. . . . CUBIC SPLINE INTERPOLATION 245 as we move from F to G. u2 . it is not possible to obtain a more explicit formula. .8 Cubic Spline Interpolation We now consider the problem of interpolation by smooth curves.vk+m+1 [ (vl+1 . we turn to spline curves. In order to solve the above problem. . We note that we are looking for a total of N + 3 de Boor control points d−1 . uN −2 . v l+i . i=1 where [v l . Figure 6. . . uN . . we can add various “end conditions”. . u0 . we can .

with 1 ≤ i ≤ N −1.1 = di + di+1 . correspond respectively to u0 u0 u0 . u0 u0 u1 . xi corresponds to the polar label ui ui ui .1 = ui − ui−2 ui+1 − ui di−1 + di . x1 . x2 . for all i. In order to derive this system of linear equations. x7 specify that the tangent vectors at x0 and xN be equal to some desired value. we have di−1. di+1 using the following diagram representing the de Boor algorithm: Thus. di . x4 . .246 d2 d1 x1 x3 d0 d3 x2 CHAPTER 6.1 + di. B-SPLINE CURVES d7 d6 x6 x4 d4 x5 d5 x0 = d−1 x7 = d8 Figure 6. . x3 . xi = ui+1 − ui−1 ui+1 − ui−1 . and d−1 .1 .28: A C 2 cubic interpolation spline curve passing through the points x0 . For every i. 1 ≤ i ≤ N − 1. and uN uN uN . dN and dN +1 . Note that for all i. the de Boor control point di corresponds to the polar label ui−1 ui ui+1 . x6 . we use the de Boor evaluation algorithm. xi can be computed from di−1 . ui+1 − ui−2 ui+1 − ui−2 ui+2 − ui ui − ui−1 di. We now have a system of N − 1 linear equations in the N − 1 variables d1 . uN −1 uN uN . . with 1 ≤ i ≤ N − 1. ui+2 − ui−1 ui+2 − ui−1 ui − ui−1 ui+1 − ui di−1. . x5 . dN −1 . d0 .

= uN − uN −2 uN − uN −2 uN − uN −2 x1 = xN −1 . ui+1 − ui−1 ui+1 − ui−1 ui+1 − ui−1 xi = with (ui+1 − ui )2 . we get αi βi γi di−1 + di + di+1 . it is easily veriﬁed that we get α1 β1 γ1 d0 + d1 + d2 .29: Computation of xi from di−1 . βi = ui+1 − ui−2 ui+2 − ui−1 (ui − ui−1 )2 .8.1 : ui−1 ui ui xi : ui ui ui di.6.1 : ui ui ui+1 di−1 : ui−2 ui−1 ui di+1 : ui ui+1 ui+2 Figure 6. di+1 From the above formulae. di . ui+1 − ui−2 (ui+1 − ui )(ui − ui−2 ) (ui − ui−1)(ui+2 − ui ) + . u2 − u0 u2 − u0 u2 − u0 βN −1 γN −1 αN −1 dN −2 + dN −1 + dN . CUBIC SPLINE INTERPOLATION di : ui−1 ui ui+1 247 di−1. γi = ui+2 − ui−1 αi = At the end points.

and r0 and rN be arbitrary points. using an LU-decomposition.. βi . 1 ≤ i ≤ N − 1. . . . dN −2 rN −2 0 αN −2 βN −2 γN −2 αN −1 βN −1 γN −1 dN −1 rN −1 rN 1 dN . There are methods for solving diagonally dominant systems of linear equations very eﬃciently. u2 − u0 u3 − u0 (u1 − u0 )2 . Such conditions should not come as a surprise. uN − uN −2 ri = (ui+1 − ui−1 ) xi . It is also easy to show that αi + γi + βi = ui+1 − ui−1 for all i. for example. . . u3 − u0 (uN − uN −1 )2 . we obtain the following (N + 1) × (N + 1) system of linear equations in the unknowns d0 . . 1 ≤ i ≤ N − 1. = . 1 ≤ i ≤ N − 1. In the case of a uniform knot sequence. it is an easy exercise to show that the linear system can be written as for all i. u2 − u0 (u2 − u1 )(u1 − u0 ) (u1 − u0 )(u3 − u1 ) + . . . this is the case for a uniform knot sequence. uN − uN −3 uN − uN −2 (uN −1 − uN −2)2 . In particular.248 where α1 = β1 = γ1 = αN −1 = βN −1 = γN −1 = Letting CHAPTER 6. The matrix of the system of linear equations is tridiagonal. uN − uN −3 (uN − uN −1 )(uN −1 − uN −3 ) (uN −1 − uN −2 )(uN − uN −1 ) + . which means that the matrix is diagonally dominant. for all i. B-SPLINE CURVES (u2 − u1 )2 . dN : 1 r0 d0 α1 β1 γ1 d 1 r1 d 2 r2 α2 β2 γ2 0 . since the di and the xi are points. then it can be shown that the matrix is invertible. If αi + γi < βi . . γi ≥ 0. . and it is clear that αi .
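A linear-time elimination for such tridiagonal systems (often called the Thomas algorithm, a compact form of the LU-decomposition mentioned above) might look as follows; it relies on the diagonal dominance established above, so no pivoting is performed (names hypothetical):

```python
def solve_tridiagonal(a, b, c, r):
    """Solve a tridiagonal system in O(N) (Thomas algorithm, a compact
    LU-decomposition).  a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), r: right-hand side.  No pivoting --
    adequate for the diagonally dominant spline systems discussed here."""
    n = len(b)
    cp, rp = [0.0] * n, [0.0] * n
    cp[0], rp[0] = c[0] / b[0], r[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        rp[i] = (r[i] - a[i] * rp[i - 1]) / denom
    x = [0.0] * n                              # back substitution
    x[-1] = rp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = rp[i] - cp[i] * x[i + 1]
    return x

# Small diagonally dominant example with known solution (1, 2, 3).
x = solve_tridiagonal([0, 1, 1], [4, 4, 4], [1, 1, 0], [6, 12, 14])
```

Applied coordinate-wise to the points ri, this solves the interpolation system in time linear in N.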

. that is. . so that we can write a system of N linear equations in the N unknowns d0 . For details. . and such that uk+N = uk + uN − u0 . . uN . dN −1 . . It can also be shown that the general system of linear equations has a unique solution when the knot sequence is strictly increasing. For example. . . . Problem 2: Given N data points x0 . The following system of linear equations is easily obtained: r0 d0 β0 γ0 α0 d 1 r1 α1 β1 γ1 d 2 r2 α2 β2 γ2 0 . N. see Farin [32]. .. uN .6. since the condition x0 = xN implies that d0 = dN . . dN −1 . . for all k ∈ Z. this can be shown by expressing each spline segment in terms of the Hermite polynomials. . where we let xN = x0 . . . . . where . dN −3 rN −3 0 αN −3 βN −3 γN −3 αN −2 βN −2 γN −2 dN −2 rN −2 rN −1 dN −1 γN −1 αN −1 βN −1 ri = (ui+1 − ui−1 ) xi . . when ui < ui+1 for all i. such that F (ui ) = xi .. uN for i = 0. .8. for all i. formulated as follows. 0 ≤ i ≤ N. . CUBIC SPLINE INTERPOLATION 249 3 7 1 2 2 1 4 1 0 . . . = . . . . We can also solve the problem of ﬁnding a closed interpolating spline curve. . 0 ≤ i ≤ N − 1. . . . . . This time. . and a sequence of N +1 knots u0 . 0 1 4 1 1 7 2 1 = dN −2 6xN −2 3 dN −1 6xN −1 2 rN 1 dN d0 d1 d2 . with ui < ui+1 for all i. . . . Writing the C 2 conditions leads to a tridiagonal system which is diagonally dominant when the knot sequence is strictly increasing. xN −1 . . and we observe that we are now looking for N de Boor control points d0 . which means that we consider the inﬁnite cyclic knot sequence uk k∈Z which agrees with u0 . . . ﬁnd a C 2 closed cubic spline curve F . r0 6x1 6x2 . 0 ≤ i ≤ N − 1. . we consider the cyclic knot sequence determined by the N +1 knots u0 . .

ui+1 − ui−2 (ui+1 − ui )(ui − ui−2 ) (ui − ui−1)(ui+2 − ui ) βi = + . and for all i. βi . if we let ∆i = ui+1 −ui . ui+1 − ui−2 ui+2 − ui−1 (ui − ui−1 )2 γi = . The coeﬃcients αi . It is immediately veriﬁed . uN − uN −2 + u1 − u0 (u1 − u0 )(uN − uN −2 ) (uN − uN −1 )(u2 − u0 ) + . u3 − u0 (uN − uN −1 )2 . uN − uN −2 + u1 − u0 The system is no longer tridiagonal. uN − uN −1 + u2 − u0 u3 − u0 (u1 − u0 )2 . γi can be written in a uniform fashion for both the open and the closed interpolating C 2 cubic spline curves. 2 ≤ i ≤ N − 2. uN − uN −3 uN − uN −2 + u1 − u0 (uN −1 − uN −2 )2 . 1 ≤ i ≤ N − 1. uN − uN −1 + u2 − u0 (u2 − u1 )(uN − uN −1 + u1 − u0 ) (u1 − u0 )(u3 − u1 ) + . B-SPLINE CURVES for all i.250 CHAPTER 6. uN − uN −3 (uN − uN −1 )(uN −1 − uN −3 ) (uN −1 − uN −2)(uN − uN −1 + u1 − u0 ) + . we have αi = (ui+1 − ui )2 . uN − uN −2 + u1 − u0 uN − uN −1 + u2 − u0 (uN − uN −1 )2 . since the knot sequence is cyclic. but it can still be solved eﬃciently. we also get α0 = β0 = γ0 = α1 = β1 = γ1 = αN −1 = βN −1 = γN −1 = (u1 − u0 )2 . uN − uN −1 + u2 − u0 (u2 − u1 )2 . ui+2 − ui−1 Since the knot sequence is cyclic. r0 = (uN − uN −1 + u1 − u0 ) x0 .

3 uN − uN −1 mN . (a) The ﬁrst method consists in specifying the tangent vectors m0 and mN at x0 and xN . the method consists in picking the tangent vector to this parabola at x0 . CUBIC SPLINE INTERPOLATION that we have αi = ∆2 i . ∆i−2 + ∆i−1 + ∆i ∆i (∆i−2 + ∆i−1 ) ∆i−1 (∆i + ∆i+1 ) βi = + . we have the not-a-knot condition. which forces the ﬁrst two cubic segments to merge into a single cubic segment and similarly for the last two cubic segments. In this method. ∆−2 = ∆N −2 . (d) Finally. we require that D2 F (u0 ) = D2 F (u1 ) and D2 F (uN −1 ) = D2 F (uN ). In this method. and in the case of a closed spline curve. This amounts to requiring that D3 F is continuous at u1 and at uN −1 . 3 One speciﬁc method is the Bessel end condition.8. x1 . ∆i−2 + ∆i−1 + ∆i ∆i−1 + ∆i + ∆i+1 ∆2 i−1 . ∆−1 = ∆N −1 . xN . we require that − → D2F (u0 ) = D2 F (uN ) = 0 . several end conditions have been proposed to determine r0 = d0 and rN = dN . If we consider the parabola interpolating the ﬁrst three data points x0 .6. ∆−1 = ∆N = 0. and we quickly review these conditions. (c) Another method is the natural end condition. A similar selection is made using the parabola interpolating the last three points xN −2 . usually called the clamped condition method. In the case of an open C 2 cubic spline interpolant. (b) Another method is the quadratic end condition. x2 . Since the tangent vector at x0 is given by DF (u0) = we get r0 = d0 = x0 + and similarly rN = dN = xN − 3 (d0 − x0 ). γi = ∆i−1 + ∆i + ∆i+1 251 where in the case of an open spline curve. . xN −1 . u1 − u0 u1 − u0 m0 .

d−2 = (0. labeled 666. 4. derived from physical heuristics. In practice. d4 = (14. when attempting to solve an interpolation problem. this method may produce bad results when the data points are heavily clustered in some areas. . ui+2 − ui+1 xi+2 − xi+1 where xi+1 − xi is the length of the chord between xi and xi+1 . The simplest method consists in choosing a uniform knot sequence. 1. where we set 1/2 xi+1 − xi ui+1 − ui = . We now brieﬂy survey methods for producing knot sequences. 2. 6. 5. .252 CHAPTER 6. labeled 345. 2). ui+2 − ui+1 xi+2 − xi+1 There are other methods. d0 = (6. labeled 566. 9). the reader is referred to Farin [32]. d2 = (9. 6. the knot sequence u0 . Consider the following cubic spline F with ﬁnite knot sequence 1. 6. 1. Another method is the so-called centripedal method. . it is necessary to ﬁnd knot sequences that produce reasonable results. 6. labeled 111. In this method. 6. d1 = (9. in particular due to Foley. Another popular method is to use a chord length knot sequence. and with the de Boor control points: d−3 = (2. 5). d−1 = (4. B-SPLINE CURVES We leave the precise formulation of these conditions as an exercise. For details. This method usually works quite well. Thus. labeled 234. or Hoschek and Lasser [45]. 3. 1. 0). uN is not given. . 0). d3 = (11. after choosing u0 and uN . and refer the interested reader to Farin [32]. we determine the other knots in such a way that ui+1 − ui xi+1 − xi = . labeled 112.9 Problems Problem 1 (30 pts). 9). labeled 456. 0). labeled 123. . Although simple. 9).

4. d2 = (0. 3). d3 = (3.6. Determine the tangent at F (5/4). labeled 45. 5). d1 = (−1. and plot the e B´zier segments forming the spline as well as possible (you may use programs you have e written for previous assignments). 22. 3.5.5 three times. PROBLEMS 253 (i) Compute the B´zier points corresponding to 222. d4 = (3. then this point is associated with a knot. labeled 00. 0. labeled 01. 5. Implement the de Boor algorithm for ﬁnite and closed splines. 1). labeled 23. 3). (iii) Which knot should be inserted to that (−3/4. Design a subdivision version of the de Boor algorithm Problem 6 (30 pts). labeled 12. 444. and whose knot sequence is uniform and consists of simple knots. Implement the knot insertion algorithm. Problem 2 (10 pts). and with the de Boor control points: d−2 = (−6. 0. (ii) Compute F (5/4) by repeated knot insertion. 4. 333. Repeat the problem for a uniform knot sequence in which all knots have multiplicity 2. d0 = (−3. d5 = (1. and plot the B´zier e e segments forming the spline as well as possible (you may use programs you have written for previous assignments). 2. labeled 34. 3/2) becomes a control point? Problem 4 (30 pts). and 555. 5. −1). Show that if a B-spline curve consisting of quadratic curve segments (m = 2) has an inﬂexion point. (ii) Insert the knot 3.9. Consider the following quadratic spline F with ﬁnite knot sequence 0. Problem 5 (50 pts). labeled 55. labeled 44. 5. 2). Contruct (geometrically) the tangent at t = 3. 2). (i) Compute the B´zier points corresponding to 11. and 33. Find the B´zier control points of a closed spline of degree 4 whose e control polygon consists of the edges of a square.5) on the spline curve. . Give a geometric construction of the point F (3. 0). Problem 3 (30 pts). d−1 = (−5. 1.

t − uj uj+m+1 − t Bj. . prove Marsden’s formula: N (x − t) = m i=−m (x − ti+1 ) · · · (x − ti+m )Bi.. uj+m+1 [.. . .m+1 is null outside [uj .m+1(t) = 1 j=−m for all t. nL−1 m+1 and a sequence of de Boor control points dj −m≤j≤N . Problem 9 (10 pts).4 associated with problem 1. Bj. . uN +1 . . un1 un1 +n2 .. .m+1 (t). . .m (t). (iii) Show that N Bj.m (t) + Bj+1. . uj+m − uj uj+m+1 − uj+1 . . u0 m+1 u1 . (i) Given a ﬁnite knot sequence u−m . (iv) Compute the functions Bj. .m+1 (t). The B-spline basis functions Bj. . . .m+1 . . . . Problem 8 (20 pts).. un1 +1 .m+1 are deﬁned inductively as follows: 1 if t ∈ [uj . uN −nL−1 +1 .m (t). .m+1 (t) = m m Bj. . prove that ′ Bj. show that the ﬁnite B-spline F deﬁned by the above knot sequence and de Boor control points can be expressed as N F (t) = j=−m dj Bj.m+1 (t) = uj+m − uj uj+m+1 − uj+1 Bj. . . It is shown in section 6.m+1 (t) being deﬁned as in problem 7. .254 CHAPTER 6.1 (t) = (ii) Show that Bj. where the B-splines Bj. nder the assumptions of Problem 7.m (t) − Bj+1. B-SPLINE CURVES Problem 7 (40 pts). uj+1 [ 0 otherwise.7 how a B-spline of degree m can be deﬁned in terms of the normalized B-splines Bj. uN uN +m+1 n1 n2 .

show that the spline basis function B0. . . Given any two spline functions F. Given any uniform sequence t0 . . PROBLEMS Problem 10 (20 pts). Given the knot sequence 0. where ti+1 = ti + h for some h > 0 (0 ≤ i ≤ m − 1). F ′ (tm ) = β. 4. .9. F ′ (t0 ) = α. . that minimizes the integral tm t0 [ϕ′′ (t)]2 dt. 4. F ′ (t0 ) = α.4 is given by: 1 3 6t 1 (−3t3 + 12t2 − 12t + 4) B0. F ′ (tm ) = β. tm ] such that F (ti ) = yi . . . . Problem 11 (40 pts). Hint. tm of real numbers. 4. . .4 (t) = 6 3 1 6 (3t − 24t2 + 60t − 44) 1 (−t3 + 12t2 − 48t + 64) 6 255 for for for for 0≤t<1 1≤t<2 2≤t<3 3 ≤ t < 4. Prove that the above B-spline function F is the only C 2 -function among all C 2 -functions ϕ over [t0 . . . tm−1 . ym of real numbers. .6. 4. given any two real numbers α. 2. (i) Prove that αi + γi + βi = ui+1 − ui−1 . β. t0 . prove that there exists a unique cubic B-spline function F based on the knot sequence t0 . tm . show that t m t0 [F ′′ (t) − G′′ (t)]h(t)dt = 0. . . . . 0. t1 . . given any sequence y0 . 0. tm . and any piecewise linear function h. 1. . Problem 12 (20 pts). . G as above. 0. m m such that F (ti ) = yi . 3.

1 ≤ i ≤ n. B-SPLINE CURVES for all i. βi are deﬁned in section 6. . 0 1 4 1 7 1 2 3 2 1 Problem 14 (40 pts). i. (ii) Prove a similar result for quadratic and cubic polynomial curves.. 1 3 7 1 2 2 1 4 1 0 . γi . 1 ≤ i ≤ N − 1. and assume that each point yi is an aﬃne combination of the points xj . provided that the end conditions are clamped. Show that n ai j = 1 j=1 for every i. Problem 13 (20 pts). (ii) Let (x1 . Prove that the matrix below is invertible.e. (b) Quadratic end condition (c) Natural end condition (d) Not-a-knot condition. . xn ) and (y1 . then the sum of the elements of every row of A−1 is also 1. . say n yi = j=1 ai j xj . yn ) be two sequences of n points (say in A3 ). where αi . Problem 16 (30 pts). . . Study the interpolation problem when the end conditions are to prescribe the ﬁrst and the second derivative at x0 .8 (Problem 1 and Problem 2). .. the sum of the elements of every row is 1. . . (i) Show that interpolating cubic B-splines reproduce straight lines. . . Compute the de Boor points d0 and dN arising in the interpolation problem (Problem 1) in the following cases: (a) Bessel end condition. Let A = (ai j ) be the n × n matrix expressing the yi in terms of the xj . Problem 15 (20 pts).256 CHAPTER 6. and that the tangents are read oﬀ the straight line. Prove that if A is invertible.

Problem 17 (40 pts). Implement the interpolation method proposed for solving Problem 1. In solving this problem, find (or look up) a linear-time method for solving a tridiagonal linear system.

Problem 18 (40 pts). Implement the interpolation method proposed for solving Problem 2. In solving this problem, find (or look up) a linear-time method for solving a tridiagonal linear system.

Problem 19 (20 pts). (1) For a quadratic B-spline curve specified by a closed polygon of de Boor control points, prove that knot insertion at midpoints of intervals yields new control points defined such that, for every edge (bi, bi+1),

b′2i+1 = (3/4) bi + (1/4) bi+1 and b′2i+2 = (1/4) bi + (3/4) bi+1.

(2) For a cubic B-spline curve specified by a closed polygon of de Boor control points, prove that knot insertion at midpoints of intervals yields new control points defined such that, for any three consecutive control points bi, bi+1, bi+2,

b′2i+1 = (1/2) bi + (1/2) bi+1 and b′2i+2 = (1/8) bi + (6/8) bi+1 + (1/8) bi+2.
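The midpoint knot-insertion masks of Problem 19 are easy to check numerically. Below is a sketch for closed control polygons (names hypothetical); the degree-2 case is Chaikin's corner-cutting scheme:

```python
def subdivide_closed(ctrl, degree):
    """One round of knot insertion at interval midpoints for a closed
    B-spline: degree 2 uses the masks (3/4, 1/4) and (1/4, 3/4);
    degree 3 uses (1/2, 1/2) and (1/8, 6/8, 1/8)."""
    n, new = len(ctrl), []
    for i in range(n):
        b0, b1, b2 = ctrl[i], ctrl[(i + 1) % n], ctrl[(i + 2) % n]
        if degree == 2:
            new.append(tuple(0.75 * p + 0.25 * q for p, q in zip(b0, b1)))
            new.append(tuple(0.25 * p + 0.75 * q for p, q in zip(b0, b1)))
        else:  # cubic masks
            new.append(tuple(0.5 * p + 0.5 * q for p, q in zip(b0, b1)))
            new.append(tuple((p + 6 * q + s) / 8 for p, q, s in zip(b0, b1, b2)))
    return new

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
quad = subdivide_closed(square, 2)
cub = subdivide_closed(square, 3)
```

Iterating the subdivision makes the control polygon converge rapidly to the spline curve, which is the basis of subdivision-style rendering.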


Part III

Polynomial Surfaces and Spline Surfaces


Chapter 7

Polynomial Surfaces

7.1 Polarizing Polynomial Surfaces

In this chapter, we take up the study of polynomial surfaces. After a quick review of the traditional parametric definition in terms of polynomials, we investigate the possibility of defining polynomial surfaces in terms of polar forms. Because the polynomials involved contain two variables, there are two natural ways to polarize a polynomial surface; intuitively, this depends on whether we decide to tile the parameter plane with rectangles or with triangles. Accordingly, the de Casteljau algorithm splits into two versions, one for rectangular nets and the other one for triangular nets. We show how these versions of the de Casteljau algorithm can be turned into subdivision methods, and we also show how to compute a new control net with respect to a new rectangle (or new triangle) from a given net. In the case of rectangular nets, it is easy to use the algorithms developed for polynomial curves. However, in the case of triangular nets, things are more tricky, and we give an efficient method for performing subdivision. This is one of the many indications that dealing with surfaces is far more complex than dealing with curves.

We begin with the traditional definition of polynomial surfaces. Recall that the affine line is denoted as A, and that the affine plane is denoted as A2. To reduce the amount of superscripts, we will also denote the affine plane as P. Let E be some affine space of finite dimension n ≥ 3, and let (Ω1, (e1, . . . , en)) be an affine frame for E. We assume that some fixed affine frame (O, (i1, i2)) for P is chosen, typically the canonical affine frame where O = (0, 0), i1 = (1, 0), and i2 = (0, 1).
The ﬁrst approach yields bipolynomial surfaces (also called tensor product surfaces). Bipolynomial surfaces are completely determined by rectangular nets of control points.

. . . v ∈ R. e en where F1 (U. V ) = V − 3 F3 (U. for simplicity of notation. The trace of the surface F is the set F (P). V ) are polynomials in R[U. Enneper’s surface is a surface of total degree 3. V ) is ≤ p. Given a polynomial surface F : P → E. V ) = UV + U + V + 1. V ) has total degree ≤ m. we say that F is a polynomial surface of total degree m. . and the maximum degree of V in all the Fi (U. ( i1 . A polynomial surface is a function F : P → E. but not symmetric in all its arguments. V ) = U − As deﬁned above. and a bipolynomial surface of degree 3. V ) = U 2 + V 2 + UV + 2U + V − 1 F2 (U. Given natural numbers p. the following polynomials deﬁne a polynomial surface of total degree 2 in A3 : F1 (U. and m. POLYNOMIAL SURFACES Deﬁnition 7. if p and q are such that F is a bipolynomial surface of degree p. such that. 2 . we say that F is a bipolynomial surface of degree p. v)−1 + · · · + Fn (u. and polarize separately in u and v. v). we have − → − → → → F (O + u i1 + v i2 ) = Ω1 + F1 (u. if each polynomial Fi (U. The above is also a bipolynomial surface of degree 2. F1 (U. we denote − → − → F (O + u i1 + v i2 ) as F (u.1. q. q . If the maximum degree of U in all the Fi (U. − − → → The aﬃne frame (O. a polynomial surface is obtained by bending and twisting the real aﬃne plane A2 using a polynomial map. This way. there are two natural ways to polarize. V ]. 3 . q . . Intuitively. for all u. V ) = U 2 − V 2 . which is symmetric separately in its ﬁrst p arguments and in its last q arguments. The ﬁrst way to polarize. i2 )) for P being ﬁxed. For example. V ) = U − V + 1 F3 (U. Fn (U. we get a (p + q)-multiaﬃne map f : (A)p × (A)q → E. V ) is ≤ q.262 CHAPTER 7. is to treat the variables u and v separately. V ). v)− .1. Another example known as Enneper’s surface is as follows: U3 + UV 2 3 V3 + U 2V F2 (U.

. up . and investigate appropriate generalizations of the de Casteljau algorithm. v) of the form uh v k with respect to the bidegree p. It is easily seen that f (u1 . However. since e F (u. v) = f (u. . since F (u.. V ) = U 2 + V 2 + UV + 2U + V − 1 F2 (U. First. We begin with the ﬁrst method for polarizing. v) = f ((u. we can view F as a polynomial surface F : P → E. v). to avoid the confusion between the aﬃne space Ap and the cartesian product A × · · · × A. . ui j∈J vj . which is symmetric in all of its m arguments. 2 : F1 (U. . V ) = U − V + 1 F3 (U. . m the surface F is really a map F : A × A → E. Consider the following surface viewed as a bipolynomial surface of degree 2. POLARIZING POLYNOMIAL SURFACES Note that we are intentionally denoting A × ···× A×A × ···× A p q 263 as (A) × (A) . vq ) = 1 p h q k I⊆{1. namely as the coordinates of a point (u. and to polarize the polynomials in both variables simultaneously. if m is such that F is a polynomial surface of total degree m. p p q We get what are traditionally called tensor product surfaces. |I|=h J⊆{1... The advantage of this method is that it allows the use of many algorithms applying in the case of curves. v1 . . . . Example 1. u. (u. p q The second way to polarize. we consider several examples to illustrate the two ways of polarizing. V ) = UV + U + V + 1. where h ≤ p and k ≤ q. . since A × A is isomorphic to P = A2 .p}. it is enough to explain how to polarize a monomial F (u. |J|=k i∈I the surface F is indeed a map F : P → E.. this method is the immediate generalization of B´zier curves. in which we polarize separately in u and v. In some sense. is to treat the variables u and v as a whole. v) in P. .. This way.. v). . q . . . .. instead of Ap × Aq . . . . We will present both methods. Indeed. . Using linearity. . . we get an m-multiaﬃne map f : P m → E. v)).7. with parentheses around each factor A.1.. v. Note that in this case. .q}.

the coordinates of the control point bu1 +u2 . v2 ). V2 ) = U1 U2 + V1 V2 + U1 + U2 V + U1 + U2 + V − 1 2 V1 + V2 (U1 + U2 )(V1 + V2 ) + U1 + U2 + −1 4 2 U1 + U2 V1 + V2 f2 (U1 . U2 . viewed as a bipolynomial surface of degree 2. u2 . U2 .264 CHAPTER 7. u u u Letting i = u1 + u2 . v 2 ) can be expressed as a barycentric combination involving 9 control points bi. U2 . V1 . 1 i. v2 0. U2 . b2 1 +u2 . (1 − u2 )0 + u2 1. v 2 }. 1 0 4 2 1. v1 +v2 . u2 . v2 ) = fi ((1 − u1)0 + u1 1. It is quite obvious that the same result is obtained if we ﬁrst polarize with respect to U.j u1 . V ) = V + + V + 1. V1 . v1 . v2 . v1 +v2 . After polarizing with respect to U. {0. we see. U2 . 2 .j . V ) = U1 U2 + V 2 + U f2 (U1 . v1 . and similarly for the multiset of polar arguments {v 1 . similarly to the case of curves. v1 +v2 ). 2 2 and after polarizing with respect to V . 1}.j u1 . V1 . v1 . u2 . V ) separately in U and V . v1 +v2 . v2 0. V ) = U1 + U2 −V +1 2 U1 + U2 U1 + U2 U f3 (U1 . V2 ) of F . 0 0. and v1 . u2 . 1). 0 −1 − 1 1 2 3 5 0. 1 i. 1 2 2 . and then with respect to V . 1}. 1 1. 0 1 0 2 1 3 1 0.v1 +v2 . 1 2 3 5 b2 v 1 . we have U f1 (U1 . or conversely. U2 . u (b1 1 +u2 . we polarize each of the Fi (U. where the multiset of polar arguments {u1 . (1 − v2 )0 + v2 1). 1 2 2 3 1 1. v1 . are Denoting fi (u1 . as a barycentric combination u = u = (1 − u)0 + u1 = (1 − u)0 + u1. (1 − v1 )0 + v1 1. 1 1. v2 ) as bi 1 +u2 . V2 ) = 4 2 2 Now. we can express every point u ∈ A over the aﬃne basis (0. {1. u2 . f3 (U1 . u2 } can take as values any of the 3 multisets {0. using multiaﬃneness and symmetry in u1 . we get b1 v 1 . j ≤ 2. and expanding fi (u1 . corresponding to the 27 polar values fi (u1 . u2 1 0. we have f1 (U1 . U2 . 0 0. that fi (u1 . and j = v1 + v2 . b3 1 +u2 . u2 0. V2 ) = − +1 2 2 (U1 + U2 )(V1 + V2 ) U1 + U2 V1 + V2 + + + 1. 0 ≤ i. V1 . POLYNOMIAL SURFACES In order to ﬁnd the polar form f (U1 . 0}.
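Since each control point b_{i,j} is the polar value at a multiset of 0's and 1's, the whole 3 × 3 net of Example 1 can be recomputed mechanically from the bidegree polar forms above (a sketch; names hypothetical):

```python
from itertools import product

# Bidegree (2, 2) polar forms of Example 1, polarized separately in u and v.
def f1(u1, u2, v1, v2):
    return (u1 * u2 + v1 * v2 + (u1 + u2) * (v1 + v2) / 4
            + (u1 + u2) + (v1 + v2) / 2 - 1)

def f2(u1, u2, v1, v2):
    return (u1 + u2) / 2 - (v1 + v2) / 2 + 1

def f3(u1, u2, v1, v2):
    return (u1 + u2) * (v1 + v2) / 4 + (u1 + u2) / 2 + (v1 + v2) / 2 + 1

def control_net():
    """b_{i,j} = f(0^{2-i} 1^i ; 0^{2-j} 1^j) for 0 <= i, j <= 2."""
    net = {}
    for i, j in product(range(3), repeat=2):
        args = (0,) * (2 - i) + (1,) * i + (0,) * (2 - j) + (1,) * j
        net[i, j] = (f1(*args), f2(*args), f3(*args))
    return net

net = control_net()
```

The corner points b_{0,0} and b_{2,2} land on the surface (they are F(0, 0) and F(1, 1)), while the interior control points generally do not.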

Thus, the nine control points b_{i,j} have coordinates:

b_{i,j}     j = 0            j = 1               j = 2
  i = 0     (−1, 1, 1)       (−1/2, 1/2, 3/2)    (1, 0, 2)
  i = 1     (0, 3/2, 3/2)    (3/4, 1, 9/4)       (5/2, 1/2, 3)
  i = 2     (2, 2, 2)        (3, 3/2, 3)         (5, 1, 4)

Note that the surface contains the 4 control points b_{0,0}, b_{0,2}, b_{2,0}, and b_{2,2}, corresponding to the polar values f(0, 0), f(0, 1), f(1, 0), and f(1, 1), but the other 5 control points are not on the surface.

One should note that a bipolynomial surface can be viewed as a polynomial curve of polynomial curves. Indeed, if we fix the parameter u, we get a polynomial curve in v, and the surface is obtained by letting this curve vary as a function of u. A similar interpretation applies if we exchange the roles of u and v. There are also some pleasant properties regarding tangent planes.

Let us now review how to polarize a polynomial in two variables as a polynomial of total degree m. According to lemma 4.1, using linearity, it is enough to deal with a single monomial. Given the monomial U^h V^k, with h + k = d ≤ m, we get the following polar form of degree m:

f((u1, v1), ..., (um, vm)) = (h! k! (m − (h + k))! / m!) Σ_{I ∪ J ⊆ {1,...,m}, I ∩ J = ∅, |I| = h, |J| = k} (Π_{i∈I} ui)(Π_{j∈J} vj).

Example 2. Let us now polarize the surface of Example 1 as a surface of total degree 2, in preparation for the continuation of this example. Starting from

F1(U, V) = U^2 + V^2 + UV + 2U + V − 1
F2(U, V) = U − V + 1
F3(U, V) = UV + U + V + 1,

we get

f1((U1, V1), (U2, V2)) = U1 U2 + V1 V2 + (U1 V2 + U2 V1)/2 + U1 + U2 + (V1 + V2)/2 − 1,
f2((U1, V1), (U2, V2)) = (U1 + U2)/2 − (V1 + V2)/2 + 1,
f3((U1, V1), (U2, V2)) = (U1 V2 + U2 V1)/2 + (U1 + U2)/2 + (V1 + V2)/2 + 1.

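The total-degree polarization of a monomial U^h V^k can be sketched the same way (an illustration of the formula, with our own function name polar_total_degree): we sum over disjoint index sets I and J and apply the prefactor h! k! (m − (h + k))!/m!. For the monomial UV with m = 2 this reproduces the hand-computed polar form (U1 V2 + U2 V1)/2:

```python
from itertools import combinations
from math import factorial, prod

def polar_total_degree(h, k, m):
    """Polar form of U^h V^k as a symmetric m-affine map on points (u_i, v_i)."""
    assert h + k <= m
    pref = factorial(h) * factorial(k) * factorial(m - h - k) / factorial(m)
    def f(points):                      # points = [(u1, v1), ..., (um, vm)]
        assert len(points) == m
        total = 0.0
        for I in combinations(range(m), h):
            rest = [i for i in range(m) if i not in I]
            for J in combinations(rest, k):
                total += prod(points[i][0] for i in I) * prod(points[j][1] for j in J)
        return pref * total
    return f

f_uv = polar_total_degree(1, 1, 2)      # monomial UV, total degree 2
assert abs(f_uv([(2.0, 3.0), (2.0, 3.0)]) - 2.0 * 3.0) < 1e-12   # diagonal gives UV
assert abs(f_uv([(1.0, 0.0), (0.0, 1.0)]) - 0.5) < 1e-12          # (U1 V2 + U2 V1)/2
```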
This time, the symmetric multiaffine map f : P^2 → E is defined in terms of the coordinates (u, v) of points in the plane P, with respect to the affine frame (O, (i1, i2)), and not the barycentric affine frame (r, s, t) = (O + i1, O + i2, O). So, if we want to find control points, it appears that we have a problem: the polar form f needs to be expressed in terms of barycentric coordinates. It is possible to convert polar forms expressed in terms of coordinates w.r.t. the affine frame (O, (i1, i2)) to barycentric coordinates, and we leave such a conversion as an exercise; but there is a simple way around this apparent problem, if we just want the coordinates of the control points.

The way around the problem is to observe that every point in P of coordinates (u, v) with respect to the affine frame (O, (i1, i2)) is also represented by its barycentric coordinates (u, v, 1 − u − v), with respect to the barycentric affine frame (r, s, t) = (O + i1, O + i2, O). Indeed, every point bi ∈ P can be expressed as λi r + µi s + νi t, where λi + µi + νi = 1, where i = 1, 2, and

f(b1, b2) = f(λ1 r + µ1 s + ν1 t, λ2 r + µ2 s + ν2 t)

can be expanded by multiaffineness and symmetry, and we see that f(b1, b2) can be expressed as a barycentric combination of the six control points f(r, r), f(r, s), f(r, t), f(s, s), f(s, t), f(t, t). Thus, there is no problem in computing the coordinates of control points using the original (nonbarycentric) polar forms: we just have to evaluate the original polar forms f1, f2, f3 on the (u, v) coordinates of (r, s, t), namely (1, 0), (0, 1), and (0, 0). We now go back to example 2.

Example 2. (continued) Evaluating the polar forms

f1((U1, V1), (U2, V2)) = U1 U2 + V1 V2 + (U1 V2 + U2 V1)/2 + U1 + U2 + (V1 + V2)/2 − 1,
f2((U1, V1), (U2, V2)) = (U1 + U2)/2 − (V1 + V2)/2 + 1,
f3((U1, V1), (U2, V2)) = (U1 V2 + U2 V1)/2 + (U1 + U2)/2 + (V1 + V2)/2 + 1,

for argument pairs (U1, V1), (U2, V2) ranging over (1, 0), (0, 1), and (0, 0), we get the following coordinates:

f(r, r)   (2, 2, 2)
f(r, s)   (1, 1, 5/2)
f(r, t)   (0, 3/2, 3/2)
f(s, s)   (1, 0, 2)
f(s, t)   (−1/2, 1/2, 3/2)
f(t, t)   (−1, 1, 1)

Example 3. Let us also find the polar forms of Enneper's surface, considered as a total degree surface (of degree 3):

F1(U, V) = U − U^3/3 + U V^2
F2(U, V) = V − V^3/3 + U^2 V
F3(U, V) = U^2 − V^2.

We get

f1((U1, V1), (U2, V2), (U3, V3)) = (U1 + U2 + U3)/3 − U1 U2 U3/3 + (U1 V2 V3 + U2 V1 V3 + U3 V1 V2)/3,
f2((U1, V1), (U2, V2), (U3, V3)) = (V1 + V2 + V3)/3 − V1 V2 V3/3 + (U1 U2 V3 + U1 U3 V2 + U2 U3 V1)/3,
f3((U1, V1), (U2, V2), (U3, V3)) = (U1 U2 + U1 U3 + U2 U3)/3 − (V1 V2 + V1 V3 + V2 V3)/3,

and evaluating these polar forms for argument pairs (U1, V1), (U2, V2), and (U3, V3) ranging over (1, 0), (0, 1), and (0, 0), we find the following 10 control points:

f(r, r, r)   (2/3, 0, 1)
f(r, r, s)   (2/3, 2/3, 1/3)
f(r, r, t)   (2/3, 0, 1/3)
f(r, s, s)   (2/3, 2/3, −1/3)
f(r, s, t)   (1/3, 1/3, 0)
f(r, t, t)   (1/3, 0, 0)
f(s, s, s)   (0, 2/3, −1)
f(s, s, t)   (0, 2/3, −1/3)
f(s, t, t)   (0, 1/3, 0)
f(t, t, t)   (0, 0, 0)

Let us consider two more examples.

Example 4. Let F be the surface considered as a total degree surface, and defined such that

F1(U, V) = U,   F2(U, V) = V,   F3(U, V) = U^2 − V^2.

The polar forms are:

f1((U1, V1), (U2, V2)) = (U1 + U2)/2,
f2((U1, V1), (U2, V2)) = (V1 + V2)/2,
f3((U1, V1), (U2, V2)) = U1 U2 − V1 V2.

With respect to the barycentric affine frame (r, s, t) = (O + i1, O + i2, O), the control net consists of the following six points, obtained by evaluating the polar forms f1, f2, f3 on the (u, v) coordinates of (r, s, t), namely (1, 0), (0, 1), and (0, 0):

f(r, r)   (1, 0, 1)
f(r, s)   (1/2, 1/2, 0)
f(r, t)   (1/2, 0, 0)
f(s, s)   (0, 1, −1)
f(s, t)   (0, 1/2, 0)
f(t, t)   (0, 0, 0)

The resulting surface is a hyperbolic paraboloid, of implicit equation z = x^2 − y^2. Its intersection with planes parallel to the plane xOy is a hyperbola. Its intersection with planes parallel to the plane yOz is a parabola, and similarly with its intersection with planes parallel to the plane xOz. Its general shape is that of a saddle. If we rotate the x, y axes to the X, Y axes such that

X = (x − y)/2,   Y = (x + y)/2,

we get the parametric representation

F1'(U, V) = (U − V)/2,   F2'(U, V) = (U + V)/2,   F3'(U, V) = U^2 − V^2.

After the change of parameters

u = (U − V)/2,   v = (U + V)/2,

since U^2 − V^2 = (U + V)(U − V) = 4uv, we see that the same hyperbolic paraboloid is also defined by

F1''(u, v) = u,   F2''(u, v) = v,   F3''(u, v) = 4uv.

Thus, when u is constant, the curve traced on the surface is a straight line, and similarly when v is constant. We say that the hyperbolic paraboloid is a ruled surface.

Example 5. Let F be the surface considered as a total degree surface, and defined such that

F1(U, V) = U,   F2(U, V) = V,   F3(U, V) = 2U^2 + V^2.

The polar forms are:

f1((U1, V1), (U2, V2)) = (U1 + U2)/2,
f2((U1, V1), (U2, V2)) = (V1 + V2)/2,
f3((U1, V1), (U2, V2)) = 2 U1 U2 + V1 V2.

With respect to the barycentric affine frame (r, s, t) = (O + i1, O + i2, O), the control net consists of the following six points, obtained by evaluating the polar forms f1, f2, f3 on the (u, v) coordinates of (r, s, t), namely (1, 0), (0, 1), and (0, 0):

f(r, r)   (1, 0, 2)
f(r, s)   (1/2, 1/2, 0)
f(r, t)   (1/2, 0, 0)
f(s, s)   (0, 1, 1)
f(s, t)   (0, 1/2, 0)
f(t, t)   (0, 0, 0)

The resulting surface is an elliptic paraboloid, of implicit equation z = 2x^2 + y^2. Its intersection with planes parallel to the plane xOy is an ellipse. Its intersection with planes parallel to the plane yOz is a parabola, and similarly with its intersection with planes parallel to the plane xOz. Its general shape is that of a "boulder hat".

Generally, we leave as an exercise to show that, except for some degenerate cases, the algebraic surfaces defined by implicit equations of degree 2 that are also defined as polynomial surfaces of degree 2 are either a hyperbolic paraboloid, defined by an implicit equation of the form

z = x^2/a^2 − y^2/b^2,

or an elliptic paraboloid, defined by an implicit equation of the form

z = x^2/a^2 + y^2/b^2,

or a parabolic cylinder, defined by an implicit equation of the form

y^2 = 4ax.

It should be noted that parametric polynomial surfaces of degree 2 may correspond to implicit algebraic surfaces of degree > 2, as shown by the following example:

x = u,   y = u^2 + v,   z = v^2.

An implicit equation for this surface is z = (y − x^2)^2, which is of degree 4, and it is easy to see that this is the smallest degree. We will now consider bipolynomial surfaces, and total degree surfaces, in more detail.

7.2 Bipolynomial Surfaces in Polar Form

Given a bipolynomial surface F : P → E of degree ⟨p, q⟩, where E is of dimension n, applying lemma 4.1 to each polynomial Fi(U, V) defining F, first with respect to U, and then with respect to V, we get polar forms fi : (A)^p × (A)^q → A, which together define a (p + q)-multiaffine map f : (A)^p × (A)^q → E,

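The degree-4 example above can be confirmed mechanically: for this parametrization, y − x^2 = v, so z = (y − x^2)^2 holds identically. A minimal check:

```python
# Verify that the degree-2 parametric surface x = u, y = u^2 + v, z = v^2
# satisfies the degree-4 implicit equation z = (y - x^2)^2.
for u in (-2.0, 0.0, 1.5):
    for v in (-1.0, 0.5, 3.0):
        x, y, z = u, u**2 + v, v**2
        assert abs(z - (y - x**2)**2) < 1e-12
```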
such that f(U1, ..., Up, V1, ..., Vq) is symmetric in its first p arguments, and symmetric in its last q arguments, and with F(u, v) = f(u, ..., u, v, ..., v) (p copies of u, q copies of v), for all u, v ∈ A. By analogy with polynomial curves, it is natural to propose the following definition.

Definition 7.2.1. Given any affine space E of dimension ≥ 3, a bipolynomial surface of degree ⟨p, q⟩ in polar form, is a map F : A × A → E, such that there is some multiaffine map f : (A)^p × (A)^q → E, which is symmetric in its first p arguments, and symmetric in its last q arguments, and with F(u, v) = f(u, ..., u, v, ..., v) (p copies of u, q copies of v), for all u, v ∈ A. We also say that f is ⟨p, q⟩-symmetric. The trace of the surface F is the set F(A, A).

The advantage of definition 7.2.1 is that it does not depend on the dimension of E.

Remark: It is immediately verified that the same polar form is obtained if we first polarize with respect to V and then with respect to U.

Remark: This note is intended for those who are fond of tensors, and can be safely omitted by other readers. Let F : A × A → E be a bipolynomial surface of degree ⟨p, q⟩, and let f : (A)^p × (A)^q → E be its polar form. Let f̂ : (Â)^p × (Â)^q → Ê be its homogenized ⟨p, q⟩-symmetric multilinear map. It is easily verified that the set of all symmetric q-linear maps h : (Â)^q → Ê forms a vector space SML((Â)^q, Ê), and thus, the ⟨p, q⟩-symmetric multilinear map f̂ is in bijection with the symmetric p-linear map g : (Â)^p → SML((Â)^q, Ê).

From the universal property of the symmetric tensor power ⊙^p Â, the symmetric p-linear map g is in bijection with a linear map

g⊙ : ⊙^p Â → SML((Â)^q, Ê).

We can now view the linear map g⊙ as a symmetric q-linear map h : (Â)^q → SML(⊙^p Â, Ê), which in turn is in bijection with a linear map

h⊙ : ⊙^q Â → SML(⊙^p Â, Ê).

But then, the linear map h⊙ is in bijection with a bilinear map

f⊙,⊙ : (⊙^p Â) × (⊙^q Â) → Ê.

Note that f⊙,⊙ is bilinear, but not necessarily symmetric. Now, using the universal property of the tensor product (⊙^p Â) ⊗ (⊙^q Â), the bilinear map f⊙,⊙ is in bijection with a linear map

f⊗ : (⊙^p Â) ⊗ (⊙^q Â) → Ê,

and it is immediately verified that

f(u1, ..., up, v1, ..., vq) = f⊗((u1 ⊙ ··· ⊙ up) ⊗ (v1 ⊙ ··· ⊙ vq)).

This explains the terminology "tensor product surface". However, the tensor product involved, (⊙^p Â) ⊗ (⊙^q Â), is "mixed", in the sense that it uses both the symmetric tensor product ⊙ and the ordinary tensor product ⊗, and this also explains why this terminology is slightly unfortunate.

Now, let (r1, s1) and (r2, s2) be two affine frames for the affine line A. Every point u ∈ A can be written as

u = ((s1 − u)/(s1 − r1)) r1 + ((u − r1)/(s1 − r1)) s1,

and similarly any point v ∈ A can be written as

v = ((s2 − v)/(s2 − r2)) r2 + ((v − r2)/(s2 − r2)) s2.

We can expand f(u1, ..., up, v1, ..., vq) using multiaffineness. For example, the first two steps yield:

f(u1, u2, ..., up, v1, v2, ..., vq) =
  ((s1 − u1)/(s1 − r1)) ((s2 − v1)/(s2 − r2)) f(r1, u2, ..., up, r2, v2, ..., vq)
+ ((s1 − u1)/(s1 − r1)) ((v1 − r2)/(s2 − r2)) f(r1, u2, ..., up, s2, v2, ..., vq)
+ ((u1 − r1)/(s1 − r1)) ((s2 − v1)/(s2 − r2)) f(s1, u2, ..., up, r2, v2, ..., vq)
+ ((u1 − r1)/(s1 − r1)) ((v1 − r2)/(s2 − r2)) f(s1, u2, ..., up, s2, v2, ..., vq).

By induction, the following can easily be shown:

f(u1, ..., up, v1, ..., vq) = Σ [ Π_{i∈I} (s1 − ui)/(s1 − r1) · Π_{j∈J} (uj − r1)/(s1 − r1) · Π_{k∈K} (s2 − vk)/(s2 − r2) · Π_{l∈L} (vl − r2)/(s2 − r2) ] b_{|J|,|L|},

where the sum ranges over all I, J with I ∩ J = ∅ and I ∪ J = {1, ..., p}, and all K, L with K ∩ L = ∅ and K ∪ L = {1, ..., q}, and where

b_{|J|,|L|} = f(r1, ..., r1, s1, ..., s1, r2, ..., r2, s2, ..., s2),

with |I| copies of r1, |J| copies of s1, |K| copies of r2, and |L| copies of s2.

The polar values b_{|J|,|L|} can obviously be treated as control points. Indeed, conversely, given any family (b_{i,j})_{0≤i≤p, 0≤j≤q} of (p + 1)(q + 1) points in E, it is easily seen that the map defined by the above sum

defines a multiaffine map which is ⟨p, q⟩-symmetric, and such that

f(r1, ..., r1, s1, ..., s1, r2, ..., r2, s2, ..., s2) = b_{i,j},

with p − i copies of r1, i copies of s1, q − j copies of r2, and j copies of s2. Thus, there is a unique bipolynomial surface F : A × A → E of degree ⟨p, q⟩, with polar form f, such that F(u, v) = f(u, ..., u, v, ..., v), for all u, v ∈ A. We summarize the above in the following lemma.

Lemma 7.2.2. Let (r1, s1) and (r2, s2) be any two affine frames for the affine line A, and let E be an affine space (of finite dimension n ≥ 3). For any natural numbers p, q, for any family (b_{i,j})_{0≤i≤p, 0≤j≤q} of (p + 1)(q + 1) points in E, there is a unique bipolynomial surface F : A × A → E of degree ⟨p, q⟩, with polar form the (p + q)-multiaffine ⟨p, q⟩-symmetric map f : (A)^p × (A)^q → E, such that

f(r1, ..., r1, s1, ..., s1, r2, ..., r2, s2, ..., s2) = b_{i,j}

(with p − i copies of r1, i copies of s1, q − j copies of r2, and j copies of s2), for all i, 0 ≤ i ≤ p, and all j, 0 ≤ j ≤ q. Furthermore, f is given by the expression

f(u1, ..., up, v1, ..., vq) = Σ [ Π_{i∈I} (s1 − ui)/(s1 − r1) · Π_{j∈J} (uj − r1)/(s1 − r1) · Π_{k∈K} (s2 − vk)/(s2 − r2) · Π_{l∈L} (vl − r2)/(s2 − r2) ] b_{|J|,|L|},

where the sum ranges over all I, J with I ∩ J = ∅ and I ∪ J = {1, ..., p}, and all K, L with K ∩ L = ∅ and K ∪ L = {1, ..., q}. A point F(u, v) on the surface F can be expressed in terms of the Bernstein polynomials B_i^p[r1, s1](u) and B_j^q[r2, s2](v), as

F(u, v) = Σ_{0≤i≤p} Σ_{0≤j≤q} B_i^p[r1, s1](u) B_j^q[r2, s2](v) b_{i,j}.

Thus, we see that the Bernstein polynomials show up again, and indeed, in traditional presentations of bipolynomial surfaces, they are used in the definition itself.

We can also show that when E is of finite dimension, the class of bipolynomial surfaces in polar form as in definition 7.2.1 is identical to the class of polynomial surfaces as in definition 7.1.1. Indeed, we have already shown using polarization that a polynomial surface according to definition 7.1.1 is a bipolynomial surface in polar form. For the converse, we use the isomorphism between A × A and P.

A family N = (b_{i,j})_{0≤i≤p, 0≤j≤q} of (p + 1)(q + 1) points in E, is often called a (rectangular) control net, or Bézier net. Note that we can view the set of pairs

□p,q = {(i, j) ∈ N^2 | 0 ≤ i ≤ p, 0 ≤ j ≤ q}

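The Bernstein-basis expression of F(u, v) can be sketched as follows (a Python illustration, not the book's code; eval_patch and bernstein are our names, and the default affine frames are (0, 1) for both parameters):

```python
from math import comb

def bernstein(n, i, r, s, u):
    """Bernstein polynomial B_i^n over the affine frame (r, s)."""
    t = (u - r) / (s - r)
    return comb(n, i) * (1 - t) ** (n - i) * t ** i

def eval_patch(net, u, v, r1=0.0, s1=1.0, r2=0.0, s2=1.0):
    """Point F(u, v) of the bipolynomial surface whose control net is
    net[i][j], a (p+1) x (q+1) grid of points in R^n."""
    p, q = len(net) - 1, len(net[0]) - 1
    return tuple(
        sum(bernstein(p, i, r1, s1, u) * bernstein(q, j, r2, s2, v) * net[i][j][c]
            for i in range(p + 1) for j in range(q + 1))
        for c in range(len(net[0][0])))

# Bilinear sanity check: the patch interpolates its four corner control points.
net = [[(0.0, 0.0, 0.0), (0.0, 1.0, 1.0)],
       [(1.0, 0.0, 2.0), (1.0, 1.0, 3.0)]]
assert eval_patch(net, 0.0, 0.0) == (0.0, 0.0, 0.0)
assert eval_patch(net, 1.0, 1.0) == (1.0, 1.0, 3.0)
```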
as a rectangular grid of (p + 1)(q + 1) points in A × A. The control net N = (b_{i,j})_{(i,j)∈□p,q} can be viewed as an image of the rectangular grid □p,q in the affine space E. Note that there is a natural way of connecting the points in a control net N: every point b_{i,j}, where 0 ≤ i ≤ p − 1 and 0 ≤ j ≤ q − 1, is connected to the three points b_{i+1,j}, b_{i,j+1}, and b_{i+1,j+1}. Generally, pq quadrangles are obtained in this manner, and together, they form a polyhedron which gives a rough approximation of the surface patch.

7.3 The de Casteljau Algorithm for Rectangular Surface Patches

Given a rectangular control net N = (b_{i,j})_{(i,j)∈□p,q}, by lemma 7.2.2, such a control net N determines a unique bipolynomial surface F of degree ⟨p, q⟩. The portion of the surface F corresponding to the points F(u, v) for which the parameters u, v satisfy the inequalities r1 ≤ u ≤ s1 and r2 ≤ v ≤ s2, is called a rectangular (surface) patch, or rectangular Bézier patch, and F([r1, s1], [r2, s2]) is the trace of the rectangular patch. The surface F (or rectangular patch) determined by a control net N contains the four control points b_{0,0}, b_{0,q}, b_{p,0}, and b_{p,q}, the corners of the surface patch.

When we fix the parameter u, the map Fu : A → E, defined such that Fu(v) = F(u, v) for all v ∈ A, is a curve on the bipolynomial surface F. Similarly, when we fix the parameter v, the map Fv : A → E, defined such that Fv(u) = F(u, v) for all u ∈ A, is a curve on the bipolynomial surface F. Such curves are called isoparametric curves. They are Bézier curves: when we fix u, we obtain a Bézier curve of degree q, and when we fix v, we obtain a Bézier curve of degree p. In particular, the images of the line segments [r1, s1] and [r2, s2] are Bézier curve segments, called the boundary curves of the rectangular surface patch.

Remark: We can also consider curves on a bipolynomial surface defined by the constraint u + v = λ, for some λ ∈ R. They are Bézier curves, but of degree p + q.

The de Casteljau algorithm can be generalized very easily to bipolynomial surfaces. For every i, with 0 ≤ i ≤ p, we first compute the points b^j_{i*,k}, where

b^0_{i*,k} = b_{i,k},
b^j_{i*,k} = ((s2 − v)/(s2 − r2)) b^{j−1}_{i*,k} + ((v − r2)/(s2 − r2)) b^{j−1}_{i*,k+1},

with 1 ≤ j ≤ q and 0 ≤ k ≤ q − j, and we let b_{i*} = b^q_{i*,0}. Thus, b_{i*} is obtained by applying the de Casteljau algorithm to the Bézier control points b_{i,0}, ..., b_{i,q}. We then compute b^p_{0*}, by applying the de Casteljau algorithm to the control points b_{0*}, b_{1*}, ..., b_{p*},

and we let F(u, v) = b^p_{0*}. It is easily shown by induction that

b^j_{i*,k} = f(r1, ..., r1, s1, ..., s1, v, ..., v, r2, ..., r2, s2, ..., s2),

with p − i copies of r1, i copies of s1, j copies of v, q − j − k copies of r2, and k copies of s2, and since b_{i*} = b^q_{i*,0}, we have

b_{i*} = f(r1, ..., r1, s1, ..., s1, v, ..., v),

with p − i copies of r1, i copies of s1, and q copies of v. Next, we compute the points b^j_{i*}, with 0 ≤ j ≤ p, where b^0_{i*} = b_{i*}, and

b^j_{i*} = ((s1 − u)/(s1 − r1)) b^{j−1}_{i*} + ((u − r1)/(s1 − r1)) b^{j−1}_{i+1*},

with 1 ≤ j ≤ p and 0 ≤ i ≤ p − j. It is easily shown by induction that

b^j_{i*} = f(r1, ..., r1, s1, ..., s1, u, ..., u, v, ..., v),

with p − i − j copies of r1, i copies of s1, j copies of u, and q copies of v, and thus,

F(u, v) = b^p_{0*} = f(u, ..., u, v, ..., v).

Alternatively, we can first compute the points b_{*0}, ..., b_{*q}, where b_{*j} is obtained by applying the de Casteljau algorithm to the Bézier control points b_{0,j}, b_{1,j}, ..., b_{p,j}, and then compute b^q_{*0}, by applying the de Casteljau algorithm to the control points b_{*0}, b_{*1}, ..., b_{*q}. The same result F(u, v) = b^p_{0*} = b^q_{*0} is obtained. Although logically equivalent, depending on p and q, one of these two programs may be more efficient than the other.

We give in pseudo-code the version of the algorithm in which the control points b_{0*}, ..., b_{p*} are computed first. The reader will easily write the other version, where the control points b_{*0}, ..., b_{*q} are computed first. We assume that the input is a control net N = (b_{i,j})_{(i,j)∈□p,q}.

begin
  for i := 0 to p do
    for j := 0 to q do
      b^0_{i*,j} := b_{i,j}
    endfor
  endfor;
  for i := 0 to p do
    for j := 1 to q do
      for k := 0 to q − j do
        b^j_{i*,k} := ((s2 − v)/(s2 − r2)) b^{j−1}_{i*,k} + ((v − r2)/(s2 − r2)) b^{j−1}_{i*,k+1}
      endfor
    endfor;
    b_{i*} := b^q_{i*,0}
  endfor;
  for i := 0 to p do
    b^0_{i*} := b_{i*}
  endfor;
  for j := 1 to p do
    for i := 0 to p − j do
      b^j_{i*} := ((s1 − u)/(s1 − r1)) b^{j−1}_{i*} + ((u − r1)/(s1 − r1)) b^{j−1}_{i+1*}
    endfor
  endfor;
  F(u, v) := b^p_{0*}
end

The following diagram illustrates the de Casteljau algorithm in the case of a bipolynomial surface of degree ⟨3, 3⟩. It is assumed that (r, s) and (x, y) have been chosen as affine bases of A, and for simplicity of notation, the polar value f(u1, u2, u3, v1, v2, v3) is denoted as u1u2u3, v1v2v3; for instance, f(r, r, s, x, x, x) is denoted as rrs, xxx. The diagram shows the computation of the point F(u, v). The computation shows that the points f(u, u, u, x, x, x), f(u, u, u, x, x, y), f(u, u, u, x, y, y), and f(u, u, u, y, y, y) are computed first, and then, using these points (shown as square dark dots) as control points, the point F(u, v) = f(u, u, u, v, v, v) (shown as a round dark dot) is computed. Since the figure is quite crowded, not all points have been labeled.

From the above algorithm, it is clear that the rectangular surface patch defined by a control net N is contained inside the convex hull of the control net. It is also clear that bipolynomial surfaces are closed under affine maps, i.e., that the image by an affine map h of a bipolynomial surface defined by a control net N is the bipolynomial surface defined by the image of the control net N under the affine map h. There are other interesting properties of tangent planes, that will be proved at the end of this chapter.
xxy rrr. POLYNOMIAL SURFACES rrs. xxx sss. yyy rss. yyy rrr. 3 . yyy uuu. xxx rss. xxy sss. xxx uuu. xxx Figure 7. xxy rrs.1: The de Casteljau algorithm for a bipolynomial surface of degree 3. xxy rrr.278 CHAPTER 7. vvv rss. yyy rrs. xxx sss.
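The pseudo-code above reduces to two nested runs of the ordinary one-dimensional de Casteljau algorithm: collapse each row of the net at v, then collapse the resulting column of points at u. A sketch under the affine frames (0, 1) for both parameters (the function names are ours, not the book's):

```python
def de_casteljau_1d(points, t):
    """Standard de Casteljau on a list of control points in R^n,
    parameter t over the affine frame (0, 1)."""
    pts = [list(p) for p in points]
    for r in range(1, len(pts)):
        for i in range(len(pts) - r):
            pts[i] = [(1 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return tuple(pts[0])

def de_casteljau_rect(net, u, v):
    """F(u, v) for a rectangular patch: collapse each row at v (giving the
    points b_{i*}), then collapse that column at u; the order may be swapped."""
    column = [de_casteljau_1d(row, v) for row in net]
    return de_casteljau_1d(column, u)

# Bidegree (1, 1) net whose polar form is u1*v1, i.e. F(u, v) = uv.
net = [[(0.0,), (0.0,)], [(0.0,), (1.0,)]]
assert abs(de_casteljau_rect(net, 0.5, 0.5)[0] - 0.25) < 1e-12
```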

and bp. Given any barycentric aﬃne frame (r. which determine one tangent line. . . . . and b0. . Deﬁnition 7. q . 0 . . b1. t) ∈ P. where E is of dimension n.1 is that it does not depend on the dimension of E. TOTAL DEGREE SURFACES IN POLAR FORM 279 this chapter. By analogy with polynomial curves. which determine another tangent line (unless these points are all collinear). Vm )) is multiaﬃne and symmetric in the pairs (Ui . for any ai = λi r + µi s + νi t. such that F (u. . (u. in general. v) is determined by the points bp−1 . . and the 1∗ 0∗ q−1 q−1 points b∗0 . is a map F : P → E. A similar property holds for the other corner control points b0. .4. with respect to both U and V .5. such that there is some symmetric multiaﬃne map f : P m → E. . q . . The trace of the surface F is the set F (P). Given any aﬃne space E of dimension ≥ 3. we get polar forms fi : P m → A. the four tangent planes at the corner points are known. 0 . am ) = f (λ1 r + µ1 s + ν1 t. if the control points b0. The advantage of deﬁnition 7. where λi + µi + νi = 1. . and 1 ≤ i ≤ m. Vi ). 0 . V1 ).1 to each polynomial Fi (U. v) = f ((u. we can expand f (a1 . the tangent plane to the surface at F (u. a surface of total degree m in polar form. bp. . . . (Um .1. . λm r + µm s + νm t) . then the plane that they deﬁne is the tangent plane to the surface at the corner point b0. m for all a ∈ P. We now go back to total degree surface. V ) deﬁning F . b∗1 . m Note that each f ((U1 .7. 1 .4. . bp−1 . 7. are not collinear. . s. For example.4. v)). applying lemma 4.4 Total Degree Surfaces in Polar Form Given a surface F : P → E of total degree m. a). it is also natural to propose the following deﬁnition. 0 . . Also. . deﬁne an m-multiaﬃne and symmetric map f : P m → E. v). and with F (a) = f (a. Thus. which together.

deﬁned such that m Bi. .m} I. λm r + µm s + νm t) + ν1 f (t. . am ) = I∪J∪K={1. . . By induction.280 CHAPTER 7. . . λm r + µm s + νm t) = λ1 f (r.. t.. are also called Bernstein polynomials. We summarize all this in the following lemma. Then. . . . . . given any family (bi. . t. i j k m+2 2 = where i + j + k = m. . . k )(i. . Clearly. .k)∈∆m of f (a1 . r . as ∆rst. s.K disjoint i∈I (m+1)(m+2) 2 points in E. From now on. the map deﬁned such that µj j∈J k∈K λi νk b|I|. . . . . . s. i j k where i + j + k = m. . . s. . .j. t). . |K| . λm r + µm s + νm t). when a1 = a2 = . we get F (a) = f (a. t). i!j!k! i+j+k=m i j k where a = λr + µs + νt. the points f (r.. . . . . . . . and call it a reference triangle. .. s. . r. = am = a. t).J. . . . . . .m} I. . . . . can be viewed as control points. . . |J|. . .J. . s. . we have f (λ1 r + µ1 s + ν1 t.. t) in the aﬃne plane P. . POLYNOMIAL SURFACES by multiaﬃneness and symmetry. t). . |I| |J| |K| (m+1)(m+2) 2 There are 3m terms in this sum. but one can verify that there are only terms corresponding to the points f (r. a) = m m! i j k λ µ ν f (r.k (U.. . t. t. s. λm r + µm s + νm t) + µ1 f (s. r. Let ∆m = {(i. . V. j. . j. s.. we will usually denote a barycentric aﬃne frame (r. . . r . λ2 r + µ2 s + ν2 t. . . k) ∈ N3 | i + j + k = m}.K pairwise disjoint i∈I λi j∈J µj k∈K νk f (r. .. . Also. . s. . . The polynomials in three variables U. . . . is symmetric and multiaﬃne. . . s. with λ + µ + ν = 1. As a ﬁrst step. . . . . . it is easily shown that f (a1 . . λ2 r + µ2 s + ν2 t. V. .j. i!j!k! where i + j + k = m. T . λ2 r + µ2 s + ν2 t. T ) = m! U iV j T k. am ) = I∪J∪K={1. . .

K pairwise disjoint i∈I λi j∈J µj k∈K νk f (r. . r. as F (a) = f (a.. . . . .4. t. .7. i j k for all (i. j. . . TOTAL DEGREE SURFACES IN POLAR FORM 281 Lemma 7.2. with λi + µi + νi = 1. For the converse. j. . j. ..j. a) = m (i. s. am ) = I∪J∪K={1.1. . . V. . . ν) f (r. as in deﬁnition 7.k)∈∆m of (m+1)(m+2) points in E.4. We can show that when E is of ﬁnite dimension.j.1. we have the following grid of 21 points: 500 401 302 203 104 005 014 113 023 212 122 032 311 221 131 041 410 320 230 140 050 We intentionally let i be the row index. .4. T ) = i!j!k! U i V j T k .1. j. A point F (a) on the surface m! m F can be expressed in terms of the Bernstein polynomials Bi.k)∈∆m of (m+1)(m+2) points in E.J. . and 1 ≤ i ≤ m. simply apply lemma 4. is called a (triangular) control 2 net. k )(i. . . starting from the left lower corner.2. . k . deﬁned by a symmetric m-aﬃne polar form f : P m → E.1 is identical to the class of polynomial surfaces. . . |I| |J| |K| where ai = λi r + µi s + νi t. when m = 5. k )(i. t) = bi. . t.. j. Given a reference triangle ∆rst in the aﬃne plane P. k) ∈ ∆m . . k )(i. µ.. s. . . such that f (r. t. The control net N = (bi. . . can be thought of as a triangular grid of points in P. . s.k (λ. Note that the points in e ∆m = {(i. k) ∈ N3 | i + j + k = m}. . . . . For example. r.k)∈∆m . that a polynomial surface according to deﬁnition 7. f is given by the expression f (a1 .j. t). . given any family (bi. with λ + µ + ν = 1.4. k)∈∆m m Bi. j. . .j. s. t). j.1. s. Furthermore. r. s. . we have already shown using polarization. . also starting from the left lower corner. . the class of polynomial surfaces of total degree m in polar form as in deﬁnition 7. i j k where a = λr + µs + νt. or B´zier net.j.m} I. .1 is a polynomial surface of total degree m in polar form as in deﬁnition 7. . A family N = (bi.k (U. . .3. . . there is a unique surface F : P → E of total degree 2 m. and j be the column index. Indeed.

with λ + µ + ν = 1. 0. . i+1. 0 . . i. a. Boissonnat and Yvinec [13]. we can associate with each triple (i. . 0. The portion of the surface F corresponding to the points F (a) for which the barycentric coordinates (λ. k + µbl−1 k + νbl−1 k+1 . j. An easy induction shows that 0. for every (i. t. b0. Contrary to rectangular patches.j. recall that in terms of the polar form f : P m → E of the polynomial surface F : P → E deﬁned by N . s. . . . in order to compute F (a) = f (a. j. j. are computed such that bl j. j+1. . k = f (r. .5 The de Casteljau Algorithm for Triangular Surface Patches Given a reference triangle ∆rst. k )(i. . t. 0 ≤ ν. where λ+µ+ν = 1. bl j. . s.k)∈∆m . . . r. k )(i. Then. m . and F (∆rst) is called the trace of the triangular surface patch. . Preparata and Shamos [64]. contains the three control points bm. . to form triangular faces that form a polyhedron. To be deﬁnite.j. is called a triangular (surface) patch. . j.2. 0 . the points (bl j. .282 CHAPTER 7. where at stage l. i. . . m. k = λbl−1 j.j. r. layers are computed in m stages. The other i. s. or Risler [68]). t). or triangular B´zier patch. . and b0. given a triangular control net N = (bi. 1 ≤ l ≤ m.k)∈∆m . By lemma 7. we can assume that we pick a Delaunay triangulation (see O’Rourke [58]. we have bi. . ν) of a (with respect to the reference triangle ∆rst) satisfy the inequalities 0 ≤ λ. i. which are also denoted as (b0 j. . t). . the single point bm 0.k)∈∆m . During the last stage. there isn’t a unique natural way to connect the nodes in N . . j. a). 7. l i j k .k)∈∆m−l i. 0 is computed. yield various ways of forming triangular faces using the control net. . The surface e F (or triangular patch) determined by a control net N . the computation builds a sort of tetrahedron consisting of m + 1 layers.4. m m m where ∆rst is the reference triangle. 0 ≤ µ. the points i j k r + s + t. the corners of the surface patch. The de Casteljau algorithm generalizes quite easily. . 
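The Bernstein-basis expression of lemma 7.4.2 can be sketched directly (a Python illustration with our own names; the control net is a dictionary indexed by the triples of ∆m):

```python
from math import factorial

def eval_triangular(net, lam, mu, nu):
    """F(a) for a triangular patch of total degree m, where a = lam*r + mu*s + nu*t.
    net maps each triple (i, j, k) with i + j + k = m to a control point in R^n."""
    assert abs(lam + mu + nu - 1.0) < 1e-12
    m = sum(next(iter(net)))                    # total degree from any index triple
    dim = len(next(iter(net.values())))
    pt = [0.0] * dim
    for (i, j, k), b in net.items():
        w = (factorial(m) // (factorial(i) * factorial(j) * factorial(k))
             ) * lam**i * mu**j * nu**k          # Bernstein weight B^m_{i,j,k}
        for c in range(dim):
            pt[c] += w * b[c]
    return tuple(pt)

# Degree 1: the patch is just the affine interpolant of its three corners.
net = {(1, 0, 0): (1.0, 0.0), (0, 1, 0): (0.0, 1.0), (0, 0, 1): (0.0, 0.0)}
assert eval_triangular(net, 1.0, 0.0, 0.0) == (1.0, 0.0)
```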
i j k Given a = λr +µs+νt in P. POLYNOMIAL SURFACES can be viewed as an image of the triangular grid ∆m in the aﬃne space E. . j. k = f (a. various triangulations of the triangle ∆rst using the points just deﬁned. r. k )(i. k )(i. i. k) ∈ ∆m . . . k) ∈ ∆m .j. The base layer consists of the original control points in N . s. given a triangular control net N = (bi. . such a control net N determines a unique polynomial surface F of total degree m. . µ. . In order to understand the diﬃculty. .

0. νl . r. r. . . THE DE CASTELJAU ALGORITHM FOR TRIANGULAR SURFACE PATCHES283 where (i. s. am ) = bm 0. 0). 0. i. Let a = λr + µs + νt. we simply replace λ. e2 = (0. k )(i. . . . . . for all i ∈ ∆m . the base layer of the tetrahedron consists of the original control points in N . stage l. It is assumed that the triangle ∆rst has been chosen as an aﬃne . k = f (a1 . . . e3 = (0. am ) as follows. . 0. we can compute the polar value f (a1 .k)∈∆m . . am in P. j+1.5. and 0 = (0. is contained in the convex hull of the control net N . . k )(i. 0). i+1. . F (a) = bm 0. i. i. . . . bl := λbl−11 + µbl−12 + νbl−13 i i+e i+e i+e endfor endfor endfor. ν by λl . a triple (i. where 1 ≤ l ≤ m. Then. where λ + µ + ν = 1. s. k) ∈ ∆m−l . 0. . where al = λl r+µl s+νl t. and we let e1 = (1.j. . . It is clear from the above algorithm. t. i j k where (i. j. 1. with λl + µl + νl = 1.k)∈∆m−l are computed such that i. am ). . t). bl j. F (a) := bm 0 end In order to compute the polar value f (a1 . and thus. we can i i describe the de Casteljau algorithm as follows. . it may be helpful to introduce some abbreviations. k) ∈ ∆m−l . An easy induction shows that bl j. j. 0. . For example. such that b0 = bi . 0 . k = λl bl−1 j. f (a1 .7. 0). begin for l := 1 to m do for i := 0 to m − l do for j := 0 to m − i − l do k := m − i − j − l. the points (bl j. that the triangular patch deﬁned by the control net N . In order to present the algorithm. j. and thus. for m points a1 .j. Again. . . . . . k). 1). i := (i. . µl . . µ. . At i. We are assuming that we have initialized the family (b0 )i∈∆m . . given m points a1 . which are also denoted as (b0 j. . k) ∈ ∆m is denoted as i. . k + µl bl−1 k + νl bl−1 k+1 . where al = λl r + µl s + νl t. 0 . . i. Similarly. The following diagram illustrates the de Casteljau algorithm in the case of a polynomial surface of total degree 3. . al . am in P. j. with λl +µl +νl = 1. j. .

0. There are other interesting properties of tangent planes. A similar property holds for the other corner control points b0. For example.284 ttt CHAPTER 7. u3 ) is denoted as u1 u2 u3 . 0. and bm−11 (unless these 1. the polar value f (u1 . bm−1. 0 and b0. Thus. m. then the plane that they deﬁne is the tangent plane to the surface at the corner point bm. 0. r.2: The de Casteljau algorithm for polynomial surfaces of total degree 3 base of P. the tangent plane to the surface at F (a) is determined by the points bm−10 . 0. by considering points a ∈ P of barycentric . and for simplicity of notation. 0. 0 . and bm−1. f (r. bm−10 . for example. each one obtained via two-dimensional interpolation steps. 1. 0 . and it consists of three shells. Given a polynomial surface F : P → E. u2. if the control points bm. points are all collinear). m . The diagram shows the computation of the point F (u). in general. 1 . s) is denoted as rrs. 0. are not collinear. POLYNOMIAL SURFACES ttu rtt tuu rrt rtu tsu tss tts uuu rrr rru ruu rst suu ssu sss rsu rrs rss Figure 7. 0 . that will be proved at the end of this Chapter. Also. 0. 0. 1. the three tangent planes at the corner points are known.

coordinates (0, µ, 1 − µ), we get a curve on the surface F passing through F(s) and F(t); by considering points of barycentric coordinates (λ, 0, 1 − λ), we get a curve on the surface F passing through F(r) and F(t); and by considering points of barycentric coordinates (λ, 1 − λ, 0), we get a curve on the surface F passing through F(r) and F(s). These curves are Bézier curves of degree m, and they are called the boundary curves of the triangular patch. More generally, if we consider the points F(λr + µs + νt) on the surface F, determined by the constraint that the parameter point a = λr + µs + νt ∈ P lies on a line, they form a polynomial curve called an isoparametric curve; this way, we also get a Bézier curve of degree m. Other isoparametric curves are obtained by holding λ constant, µ constant, or ν constant.

It is interesting to note that the same polynomial surface F, when represented as a bipolynomial surface of degree ⟨p, q⟩, requires a control net of (p + 1)(q + 1) control points, and when represented as a surface of total degree m, requires a control net of (m + 1)(m + 2)/2 control points.

We conclude this chapter by considering directional derivatives of polynomial surfaces.

7.6 Directional Derivatives of Polynomial Surfaces

In section 10.5, it is shown that if F : E → E' is an affine polynomial function of polar degree m, where E and E' are normed affine spaces, and if u is any nonnull vector in E, the directional derivative Du F(a) is given by

Du F(a) = m f(a, ..., a, u),

where a occurs m − 1 times, and where f is the homogenized polar form of F. Using some very similar calculations, it is easily shown that if F : A × A → E is a bipolynomial surface of degree ⟨p, q⟩, with homogenized ⟨p, q⟩-symmetric multilinear polar form f, then for any two points a, b ∈ A and any two nonnull vectors u, v ∈ R, the directional derivative Du Dv F(a, b) is given by

Du Dv F(a, b) = p q f(a, ..., a, u; b, ..., b, v),

where a occurs p − 1 times and b occurs q − 1 times. This directional derivative is also denoted as ∂²F/∂u∂v (a, b).
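The boundary-curve statement can be made concrete with a small check (our own Python sketch, not code from the text): along the edge ν = 0 the triangular patch restricts to the degree-m Bézier curve whose control points are the b_{i,j,0}.

```python
# Sketch: the boundary curve of a triangular patch along the edge nu = 0
# is the Bezier curve of degree m with control points b_{i,j,0}.  We check
# this on a quadratic patch by comparing a 1D de Casteljau on the boundary
# points with the triangular de Casteljau at barycentric (lam, 1 - lam, 0).

def tri_eval(net, m, bary):
    lam, mu, nu = bary
    b = dict(net)
    for l in range(1, m + 1):
        b = {(i, j, m - l - i - j): tuple(
                 lam * p + mu * q + nu * r
                 for p, q, r in zip(b[(i + 1, j, m - l - i - j)],
                                    b[(i, j + 1, m - l - i - j)],
                                    b[(i, j, m - l - i - j + 1)]))
             for i in range(m - l + 1) for j in range(m - l - i + 1)}
    return b[(0, 0, 0)]

def bezier_eval(pts, lam):
    # pts[i] = b_{m-i, i, 0}; the coefficient of pts[i] is C(m,i) lam^(m-i) (1-lam)^i
    pts = list(pts)
    while len(pts) > 1:
        pts = [tuple(lam * x + (1 - lam) * y for x, y in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

net2 = {(2, 0, 0): (0.0, 0.0), (1, 1, 0): (1.0, 0.0), (1, 0, 1): (0.0, 1.0),
        (0, 2, 0): (2.0, 0.0), (0, 1, 1): (1.0, 1.0), (0, 0, 2): (0.0, 2.0)}
edge = [net2[(2, 0, 0)], net2[(1, 1, 0)], net2[(0, 2, 0)]]
on_surface = tri_eval(net2, 2, (0.3, 0.7, 0.0))
on_curve = bezier_eval(edge, 0.3)
```

The two evaluations agree, confirming that the edge ν = 0 carries an ordinary Bézier curve of degree m.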

When u = 1 and v = 1, the directional derivative Du Dv F(a, b) is the partial derivative ∂²F/∂u∂v (a, b). The vector ∂²F/∂u∂v (a, b) is often called the twist at (a, b).

Given a bipolynomial surface F : A × A → E of degree ⟨p, q⟩, with polar form f, where E is any normed affine space, the higher-order directional derivatives can be computed in a similar fashion. The following lemma, analogous to the corresponding lemma shown in section 10.5, can be shown easily.

Lemma 7.6.1. Given a bipolynomial surface F : A × A → E of degree ⟨p, q⟩, with polar form f, for any a, b ∈ A, for any nonzero vectors u, v ∈ R, and for any i, j with 0 ≤ i ≤ p and 0 ≤ j ≤ q, the directional derivative D^i_u D^j_v F(a, b) can be computed from the homogenized polar form f of F as follows:

D^i_u D^j_v F(a, b) = p(p−1)···(p−i+1) q(q−1)···(q−j+1) f(a, ..., a, u, ..., u; b, ..., b, v, ..., v),

where a occurs p − i times, u occurs i times, b occurs q − j times, and v occurs j times.

When u = 1 and v = 1, this directional derivative is denoted as ∂^{i+j}F/∂u^i ∂v^j (a, b), and when i = 0 or j = 0, as ∂^i F/∂u^i (a, b) or ∂^j F/∂v^j (a, b).

If (r_1, s_1) and (r_2, s_2) are any two affine frames for the affine line A, every vector u ∈ R can be expressed as u = λ_1 s_1 − λ_1 r_1, and similarly every vector v ∈ R can be expressed as v = λ_2 s_2 − λ_2 r_2. Thus, since

∂F/∂u (a, b) = p f(a, ..., a, u; b, ..., b),

where a occurs p − 1 times and b occurs q times, and

∂F/∂v (a, b) = q f(a, ..., a; b, ..., b, v),

where a occurs p times and b occurs q − 1 times,

we get

∂F/∂u (a, b) = p λ_1 (f(a, ..., a, s_1; b, ..., b) − f(a, ..., a, r_1; b, ..., b)),

where a occurs p − 1 times and b occurs q times, and

∂F/∂v (a, b) = q λ_2 (f(a, ..., a; b, ..., b, s_2) − f(a, ..., a; b, ..., b, r_2)),

where a occurs p times and b occurs q − 1 times. These vectors, provided that they are not linearly dependent, span the tangent plane at (a, b) to the bipolynomial surface F, and by multiaffineness they are determined by the four points

f(a, ..., a, r_1; b, ..., b, r_2), f(a, ..., a, s_1; b, ..., b, r_2), f(a, ..., a, r_1; b, ..., b, s_2), f(a, ..., a, s_1; b, ..., b, s_2),

where a occurs p − 1 times and b occurs q − 1 times. Thus, we have proved the claim that the tangent plane at (a, b) to the bipolynomial surface F is determined by these four points (provided that they are not collinear); they are just the points b^{p−1}_{0∗} and b^{p−1}_{1∗}, and b^{q−1}_{∗0} and b^{q−1}_{∗1}, computed by the two versions of the de Casteljau algorithm. In particular, letting a = r_1 and b = r_2, we find that the control points b_{0,0}, b_{1,0}, and b_{0,1}, provided that they are not collinear, define the tangent plane to the surface at the corner point b_{0,0}.

Assuming u = 1 and v = 1, since

∂²F/∂u∂v (a, b) = p q f(a, ..., a, u; b, ..., b, v),

where a occurs p − 1 times and b occurs q − 1 times, we get

∂²F/∂u∂v (a, b) = p q ((f(a, ..., a, s_1; b, ..., b, s_2) − f(a, ..., a, s_1; b, ..., b, r_2)) − (f(a, ..., a, r_1; b, ..., b, s_2) − f(a, ..., a, r_1; b, ..., b, r_2))).

Letting

d_{1,1} = f(a, ..., a, r_1; b, ..., b, r_2),
d_{2,1} = f(a, ..., a, s_1; b, ..., b, r_2),
d_{1,2} = f(a, ..., a, r_1; b, ..., b, s_2),
d_{2,2} = f(a, ..., a, s_1; b, ..., b, s_2),

where a occurs p − 1 times and b occurs q − 1 times,

we can express ∂²F/∂u∂v (a, b) as

∂²F/∂u∂v (a, b) = p q ((d_{2,2} − d_{2,1}) − (d_{1,2} − d_{1,1})),

which, in terms of vectors, can be written either as above or as

∂²F/∂u∂v (a, b) = p q ((d_{2,2} − d_{1,2}) − (d_{2,1} − d_{1,1})).

The twist vector ∂²F/∂u∂v (a, b) can be given an interesting interpretation in terms of the points d_{1,1}, d_{2,1}, d_{1,2}, and d_{2,2}. Let c be the point determined such that d_{1,1}, d_{2,1}, d_{1,2}, and c form a parallelogram, that is, let

c = d_{2,1} + d_{1,2} − d_{1,1}.

Then, it is immediately verified that (d_{2,2} − d_{2,1}) − (d_{1,2} − d_{1,1}) = d_{2,2} − c, and thus

∂²F/∂u∂v (a, b) = p q (d_{2,2} − c).

Thus, up to the factor p q, the twist vector is a measure for how much the quadrangle d_{1,1}, d_{2,1}, d_{1,2}, d_{2,2} deviates from the parallelogram d_{1,1}, d_{2,1}, d_{1,2}, c. Roughly, it is a measure of the deviation of d_{2,2} from the plane determined by d_{1,1}, d_{2,1}, and d_{1,2} (when these points are not collinear). In particular, when a = r_1 and b = r_2, we have d_{1,1} = b_{0,0}, and this plane is just the tangent plane at the corner point b_{0,0}.

Remark: If we assume that u = 1 and v = 1, then by definition, the directional derivative Du Dv F(a, b) is the partial derivative ∂²F/∂u∂v (a, b), and the above expressions compute it exactly.
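A tiny numeric illustration of this parallelogram interpretation (the points and the bidegree below are made up for the example):

```python
# If d11, d21, d12, d22 are the four polar values above, the twist is
# p*q*(d22 - c), where c = d21 + d12 - d11 completes the parallelogram.
d11 = (0.0, 0.0, 0.0)
d21 = (1.0, 0.0, 0.0)
d12 = (0.0, 1.0, 0.0)
d22 = (1.0, 1.0, 0.5)          # pulled out of the plane of the other three
p, q = 2, 2                    # hypothetical bidegree

c = tuple(y + z - x for x, y, z in zip(d11, d21, d12))
twist = tuple(p * q * (w - v) for w, v in zip(d22, c))
```

The twist is nonzero exactly in the z-coordinate, where d22 deviates from the plane of d11, d21, and d12.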

When a = (1 − λ_1)r_1 + λ_1 s_1 and b = (1 − λ_2)r_2 + λ_2 s_2, the points defined above are the polar values

d_{1,1} = f(a, ..., a, r_1; b, ..., b, r_2),
d_{2,1} = f(a, ..., a, s_1; b, ..., b, r_2),
d_{1,2} = f(a, ..., a, r_1; b, ..., b, s_2),
d_{2,2} = f(a, ..., a, s_1; b, ..., b, s_2),

where a occurs p − 1 times and b occurs q − 1 times, and after expansion using multiaffineness, we get

∂²F/∂u∂v (a, b) = p q Σ_{i=0}^{p−1} Σ_{j=0}^{q−1} (b_{i+1,j+1} − b_{i,j+1} − b_{i+1,j} + b_{i,j}) B^{p−1}_i(λ_1) B^{q−1}_j(λ_2),

where the B^{p−1}_i and B^{q−1}_j are the Bernstein polynomials, and the b_{i,j} are the Bézier control points of F.

If we now consider a polynomial Bézier surface F : P → E of total degree m, defined by a symmetric m-affine polar form f : P^m → E, the results of section 10.5 apply immediately. For any k nonzero vectors u_1, ..., u_k ∈ R², where 1 ≤ k ≤ m, and for any a ∈ P, the k-th directional derivative D_{u_1} ... D_{u_k} F(a) can be computed as follows:

D_{u_1} ... D_{u_k} F(a) = m(m−1)···(m−k+1) f(a, ..., a, u_1, ..., u_k),

where a occurs m − k times. When u_1 = ... = u_k = u, we also denote D_{u_1} ... D_{u_k} F(a) as ∂^k F/∂u^k (a).

Given any reference triangle ∆rst, since every vector u ∈ R² can be written as u = λr + µs + νt, where λ + µ + ν = 0, for k = 1 we get

∂F/∂u (a) = m (λ f(a, ..., a, r) + µ f(a, ..., a, s) + ν f(a, ..., a, t)),

where a occurs m − 1 times. When λ, µ, ν vary, subject to λ + µ + ν = 0, provided that the points f(a, ..., a, r), f(a, ..., a, s), and f(a, ..., a, t)
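The control-point formula for the twist can be checked on a patch whose mixed partial is constant. A minimal sketch (ours, with scalar control values and made-up data):

```python
from math import comb

def twist(net, p, q, l1, l2):
    """p*q times the sum of the second mixed differences
    b[i+1][j+1] - b[i][j+1] - b[i+1][j] + b[i][j], weighted by the
    Bernstein polynomials B_i^{p-1}(l1) B_j^{q-1}(l2).  Here net is a
    (p+1) x (q+1) grid holding one coordinate of each control point."""
    B = lambda n, i, t: comb(n, i) * t ** i * (1 - t) ** (n - i)
    return p * q * sum(
        (net[i + 1][j + 1] - net[i][j + 1] - net[i + 1][j] + net[i][j])
        * B(p - 1, i, l1) * B(q - 1, j, l2)
        for i in range(p) for j in range(q))
```

For the bilinear patch z = uv (bidegree ⟨1, 1⟩, only b_{1,1} nonzero) the twist is 1 at every parameter point, and the same surface represented at bidegree ⟨2, 2⟩ gives the same value.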

are not collinear, the vectors ∂F/∂u (a) span the tangent plane to F at a. We recognize that the above points are just the points b^{m−1}_{1,0,0}, b^{m−1}_{0,1,0}, and b^{m−1}_{0,0,1}, computed by the de Casteljau algorithm. Thus, we have proved the claim that the tangent plane to the surface at F(a) is determined by the points b^{m−1}_{1,0,0}, b^{m−1}_{0,1,0}, and b^{m−1}_{0,0,1} (unless these points are all collinear). In particular, if the control points b_{m,0,0}, b_{m−1,1,0}, and b_{m−1,0,1} are not collinear, then the plane that they define is the tangent plane to the surface at the corner point b_{m,0,0}.

Remark: From

D_{u_1} ... D_{u_k} F(a) = m(m−1)···(m−k+1) f(a, ..., a, u_1, ..., u_k),

where a occurs m − k times, we deduce that

∂^{i+j}F/∂u^i ∂v^j (a) = m(m−1)···(m−i−j+1) f(a, ..., a, u, ..., u, v, ..., v),

where a occurs m − i − j times, u occurs i times, and v occurs j times.

7.7 Problems

Problem 1 (10 pts). Compute the rectangular control net for the surface defined by the equation z = xy, with respect to the affine frames (0, 1) and (0, 1). Plot the surface patch over [0, 1] × [0, 1].

Problem 2 (20 pts). Recall that the polar form of the monomial u^h v^k with respect to the bidegree ⟨p, q⟩ (where h ≤ p and k ≤ q) is

f^{p,q}_{h,k}(u_1, ..., u_p; v_1, ..., v_q) = (1 / (C(p,h) C(q,k))) Σ_{I ⊆ {1,...,p}, |I| = h} Σ_{J ⊆ {1,...,q}, |J| = k} Π_{i ∈ I} u_i Π_{j ∈ J} v_j,

where C(p,h) and C(q,k) denote binomial coefficients. Letting

σ^{p,q}_{h,k} = C(p,h) C(q,k) f^{p,q}_{h,k},

prove that we have the following recurrence equations:

σ^{p,q}_{h,k} = σ^{p−1,q−1}_{h,k} + u_p σ^{p−1,q−1}_{h−1,k} + v_q σ^{p−1,q−1}_{h,k−1} + u_p v_q σ^{p−1,q−1}_{h−1,k−1}, if 1 ≤ h ≤ p and 1 ≤ k ≤ q;
σ^{p,q}_{h,k} = σ^{p−1,q}_{h,k} + u_p σ^{p−1,q}_{h−1,k}, if 1 ≤ h ≤ p and k = 0 ≤ q;
σ^{p,q}_{h,k} = σ^{p,q−1}_{h,k} + v_q σ^{p,q−1}_{h,k−1}, if h = 0 ≤ p and 1 ≤ k ≤ q;
σ^{p,q}_{h,k} = 1, if h = k = 0;
σ^{p,q}_{h,k} = 0, otherwise.
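The recurrence in Problem 2 is the product of two copies of the standard elementary-symmetric-function recurrence, one in the u_i and one in the v_j. A Python sketch of a possible solution (our reading of the recurrence, not the book's code), checked against the defining sum by brute force:

```python
from itertools import combinations
from math import prod

def sigma(p, q, h, k, u, v):
    # sigma^{p,q}_{h,k} = (h-th elementary symmetric function of u_1..u_p)
    #                   * (k-th elementary symmetric function of v_1..v_q)
    if h == 0 and k == 0:
        return 1
    if h < 0 or k < 0 or h > p or k > q:
        return 0
    if h >= 1 and k >= 1:
        return (sigma(p - 1, q - 1, h, k, u, v)
                + u[p - 1] * sigma(p - 1, q - 1, h - 1, k, u, v)
                + v[q - 1] * sigma(p - 1, q - 1, h, k - 1, u, v)
                + u[p - 1] * v[q - 1] * sigma(p - 1, q - 1, h - 1, k - 1, u, v))
    if h >= 1:                       # k == 0
        return sigma(p - 1, q, h, 0, u, v) + u[p - 1] * sigma(p - 1, q, h - 1, 0, u, v)
    return sigma(p, q - 1, 0, k, u, v) + v[q - 1] * sigma(p, q - 1, 0, k - 1, u, v)

def sigma_brute(p, q, h, k, u, v):
    # direct sum over all index subsets I, J from the definition
    return sum(prod(u[i] for i in I) * prod(v[j] for j in J)
               for I in combinations(range(p), h)
               for J in combinations(range(q), k))
```

Dividing by C(p,h) C(q,k) recovers the polar value f^{p,q}_{h,k} itself.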

Alternatively, prove that f^{p,q}_{h,k} can be computed directly using the recurrence formula

f^{p,q}_{h,k} = ((p−h)(q−k)/(pq)) f^{p−1,q−1}_{h,k} + (h(q−k)/(pq)) u_p f^{p−1,q−1}_{h−1,k} + ((p−h)k/(pq)) v_q f^{p−1,q−1}_{h,k−1} + (hk/(pq)) u_p v_q f^{p−1,q−1}_{h−1,k−1},

where 1 ≤ h ≤ p and 1 ≤ k ≤ q, together with

f^{p,q}_{h,k} = ((p−h)/p) f^{p−1,q}_{h,k} + (h/p) u_p f^{p−1,q}_{h−1,k},

where 1 ≤ h ≤ p and k = 0 ≤ q, and

f^{p,q}_{h,k} = ((q−k)/q) f^{p,q−1}_{h,k} + (k/q) v_q f^{p,q−1}_{h,k−1},

where h = 0 ≤ p and 1 ≤ k ≤ q.

Problem 3 (30 pts). Using the result of problem 2, write a computer program for computing the control points of a rectangular surface patch defined parametrically. Show that for any (u_1, ..., u_p, v_1, ..., v_q), computing all the polar values f^{i,j}_{h,k}(u_1, ..., u_i, v_1, ..., v_j), where 1 ≤ h ≤ i, 1 ≤ k ≤ j, 1 ≤ i ≤ p, and 1 ≤ j ≤ q, can be done in time O(p²q²).

Problem 4 (20 pts). Prove that the polar form of the monomial u^h v^k with respect to the total degree m (where h + k ≤ m) can be expressed as

f^m_{h,k}((u_1, v_1), ..., (u_m, v_m)) = (1 / (C(m,h) C(m−h,k))) Σ_{I ∪ J ⊆ {1,...,m}, |I| = h, |J| = k, I ∩ J = ∅} Π_{i ∈ I} u_i Π_{j ∈ J} v_j.

Letting σ^m_{h,k} = C(m,h) C(m−h,k) f^m_{h,k}, prove that we have the following recurrence equations:

σ^m_{h,k} = σ^{m−1}_{h,k} + u_m σ^{m−1}_{h−1,k} + v_m σ^{m−1}_{h,k−1},

where h, k ≥ 0 and 1 ≤ h + k ≤ m, with σ^m_{h,k} = 1 if h = k = 0 and m ≥ 0, and 0 otherwise.

Alternatively, prove that f^m_{h,k} can be computed directly using the recurrence formula

f^m_{h,k} = ((m−h−k)/m) f^{m−1}_{h,k} + (h/m) u_m f^{m−1}_{h−1,k} + (k/m) v_m f^{m−1}_{h,k−1},

where h, k ≥ 0 and 1 ≤ h + k ≤ m.
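The direct recurrence of Problem 4 is also easy to sketch and to test against the defining sum over disjoint index sets; the diagonal property of polar forms, f(a, ..., a) = u^h v^k, gives an extra sanity check. (This is our own sketch of a possible solution, not the book's code.)

```python
from itertools import combinations
from math import comb, prod

def f_total(m, h, k, pts):
    """Recurrence: f^m_{h,k} = ((m-h-k) f^{m-1}_{h,k} + h*u_m f^{m-1}_{h-1,k}
    + k*v_m f^{m-1}_{h,k-1}) / m, with f^0_{0,0} = 1.  pts is a list of m
    pairs (u_i, v_i)."""
    if h < 0 or k < 0 or h + k > m:
        return 0.0
    if m == 0:
        return 1.0
    u, v = pts[m - 1]
    return ((m - h - k) * f_total(m - 1, h, k, pts)
            + h * u * f_total(m - 1, h - 1, k, pts)
            + k * v * f_total(m - 1, h, k - 1, pts)) / m

def f_total_brute(m, h, k, pts):
    # direct sum over disjoint subsets I and J from the definition
    total = 0.0
    for I in combinations(range(m), h):
        rest = [x for x in range(m) if x not in I]
        for J in combinations(rest, k):
            total += prod(pts[i][0] for i in I) * prod(pts[j][1] for j in J)
    return total / (comb(m, h) * comb(m - h, k))
```

Evaluating at m equal arguments recovers the monomial value, as the diagonal property requires.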

Problem 5 (30 pts). Using the result of problem 4, write a computer program for computing the control points of a triangular surface patch defined parametrically. Show that for any ((u_1, v_1), ..., (u_m, v_m)), computing all the polar values f^i_{h,k}((u_1, v_1), ..., (u_i, v_i)), where h, k ≥ 0, 1 ≤ h + k ≤ i, and 1 ≤ i ≤ m, can be done in time O(m³).

Problem 6 (20 pts). Compute a rectangular net for the surface defined by the equation z = x^3 − 3xy^2 with respect to the affine frames (−1, 1) and (−1, 1). Compute a triangular net for the same surface with respect to the affine frame ((1, 0), (0, 1), (0, 0)). This surface is called a monkey saddle. Note that z is the real part of the complex number (u + iv)^3.

Problem 7 (20 pts). Compute a rectangular net for the surface defined by the equation z = 1/6(x^3 + y^3) with respect to the affine frames (−1, 1) and (−1, 1). Compute a triangular net for the same surface with respect to the affine frame ((1, 0), (0, 1), (0, 0)).

Problem 8 (20 pts). Compute a rectangular net for the surface defined by the equation z = x^4 − 6x^2y^2 + y^4 with respect to the affine frames (−1, 1) and (−1, 1). Compute a triangular net for the same surface with respect to the affine frame ((1, 0), (0, 1), (0, 0)). This surface is a more complex kind of monkey saddle. Note that z is the real part of the complex number (u + iv)^4.

Problem 9 (20 pts). Compute a rectangular net for the surface defined by

x = u(u^2 + v^2), y = v(u^2 + v^2), z = u^2 v − v^3/3,

with respect to the affine frames (−1, 1) and (−1, 1). Compute a triangular net for the same surface with respect to the affine frame ((1, 0), (0, 1), (0, 0)). Explain what happens for (u, v) = (0, 0).

Problem 10 (20 pts). Prove that when E is of finite dimension, the class of bipolynomial surfaces in polar form as in definition 7.2.2 is identical to the class of polynomial surfaces as in definition 7.1.1.

Problem 11 (10 pts). Prove that when E is of finite dimension, the class of polynomial surfaces of total degree m in polar form as in definition 7.4.2 is identical to the class of polynomial surfaces as in definition 7.1.2.

Problem 12 (10 pts). Give the details of the proof of lemma 7.6.1.

Problem 13 (20 pts). Let F be a bipolynomial surface of bidegree ⟨n, n⟩, given by some control net N = (b_{i,j})_{(i,j) ∈ □n,n} with respect to the affine frames (r_1, s_1) and (r_2, s_2). Define a sequence of nets (b^r_{i,j})_{(i,j) ∈ □n−r,n−r} inductively as follows: For r = 0, b^0_{i,j} = b_{i,j}, for all i, j with 0 ≤ i, j ≤ n, and for r = 1, ..., n,

b^r_{i,j} = (1 − u)(1 − v) b^{r−1}_{i,j} + (1 − u)v b^{r−1}_{i,j+1} + u(1 − v) b^{r−1}_{i+1,j} + uv b^{r−1}_{i+1,j+1},

where 0 ≤ i, j ≤ n − r.

(i) Prove that F(u, v) = b^n_{0,0}, giving the details of the induction steps. This method is known as the direct de Casteljau algorithm.

(ii) Let F be a bipolynomial surface of bidegree ⟨p, q⟩, given by some control net N = (b_{i,j})_{(i,j) ∈ □p,q} with respect to the affine frames (r_1, s_1) and (r_2, s_2). What happens to the direct de Casteljau algorithm when p ≠ q? Modify the direct de Casteljau algorithm to compute F(u, v) when p ≠ q.

Problem 14 (20 pts). Given a bipolynomial surface F of bidegree ⟨p, q⟩, given by some control net N = (b_{i,j})_{(i,j) ∈ □p,q} with respect to the affine frames (r_1, s_1) and (r_2, s_2), there are three versions of the de Casteljau algorithm to compute a point F(u, v) on the surface F. The first version is the direct de Casteljau algorithm described in problem 13. In the second version, we first compute the points b_{i∗}, with 0 ≤ i ≤ p, where b_{i∗} is obtained by applying the de Casteljau algorithm to the Bézier control points b_{i,0}, ..., b_{i,q}, and then compute b^p, by applying the de Casteljau algorithm to the control points b_{0∗}, ..., b_{p∗}. In the third version, we first compute the points b_{∗j}, with 0 ≤ j ≤ q,

where b_{∗j} is obtained by applying the de Casteljau algorithm to the Bézier control points b_{0,j}, ..., b_{p,j}, and then compute b^q, by applying the de Casteljau algorithm to the control points b_{∗0}, ..., b_{∗q}. Compare the complexity of the three algorithms depending on relative values of p and q.

Problem 15 (40 pts). Write a computer program implementing the de Casteljau algorithm(s) for rectangular surface patches.

Problem 16 (30 pts). Write a computer program implementing the de Casteljau algorithm for triangular surface patches.
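The direct de Casteljau algorithm of Problem 13 is very short to implement. A Python sketch (ours, for scalar control values and bidegree ⟨n, n⟩; the data in the usage check is made up):

```python
def direct_decasteljau(net, n, u, v):
    """One bilinear interpolation per step: after n steps, b[0][0] = F(u, v).
    net is an (n+1) x (n+1) grid of scalars."""
    b = [row[:] for row in net]
    for r in range(1, n + 1):
        b = [[(1 - u) * (1 - v) * b[i][j] + (1 - u) * v * b[i][j + 1]
              + u * (1 - v) * b[i + 1][j] + u * v * b[i + 1][j + 1]
              for j in range(n - r + 1)] for i in range(n - r + 1)]
    return b[0][0]
```

For n = 1 this reduces to a single bilinear interpolation, and for a net with b_{i,j} = i it reproduces the linear function 2u at bidegree ⟨2, 2⟩, a quick consistency check.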

Chapter 8

Subdivision Algorithms for Polynomial Surfaces

8.1 Subdivision Algorithms for Triangular Patches

In this section, we explain in detail how the de Casteljau algorithm can be used to subdivide a triangular patch into subpatches, in order to obtain a triangulation of a surface patch using recursive subdivision. A similar method is described in Farin [30].

Given a reference triangle ∆rst, given a triangular control net N = (b_{i,j,k})_{(i,j,k) ∈ ∆m}, recall that in terms of the polar form f : P^m → E of the polynomial surface F : P → E defined by N, for every (i, j, k) ∈ ∆m, we have

b_{i,j,k} = f(r, ..., r, s, ..., s, t, ..., t),

where r occurs i times, s occurs j times, and t occurs k times. Given a = λr + µs + νt in P, where λ + µ + ν = 1, in order to compute F(a) = f(a, ..., a), the computation builds a sort of tetrahedron consisting of m + 1 layers. The base layer consists of the original control points in N, which are also denoted as (b^0_{i,j,k})_{(i,j,k) ∈ ∆m}. The other layers are computed in m stages, where at stage l, 1 ≤ l ≤ m, the points (b^l_{i,j,k})_{(i,j,k) ∈ ∆m−l} are computed such that

b^l_{i,j,k} = λ b^{l−1}_{i+1,j,k} + µ b^{l−1}_{i,j+1,k} + ν b^{l−1}_{i,j,k+1}.

During the last stage, the single point b^m_{0,0,0} = F(a) is computed. An easy induction shows that

b^l_{i,j,k} = f(a, ..., a, r, ..., r, s, ..., s, t, ..., t),

where a occurs l times, r occurs i times, s occurs j times, and t occurs k times. Assuming that a is not on one of the edges of ∆rst, the crux of the subdivision method is that the three other faces of the tetrahedron of polar values b^l_{i,j,k}, besides the face corresponding to the original control net, yield three control nets

N ast = (b^l_{0,j,k})_{(l,j,k) ∈ ∆m}, corresponding to the base triangle ∆ast, N rat = (b^l_{i,0,k})_{(i,l,k) ∈ ∆m}, corresponding to the base triangle ∆rat, and N rsa = (b^l_{i,j,0})_{(i,j,l) ∈ ∆m}, corresponding to the base triangle ∆rsa. Thus, instead of simply computing F(a) = b^m_{0,0,0}, the de Casteljau algorithm can be easily adapted to output the three nets N ast, N rat, and N rsa. We call this version of the de Casteljau algorithm the subdivision algorithm.

From an implementation point of view, we found it convenient to assume that a triangular net N = (b_{i,j,k})_{(i,j,k) ∈ ∆m} is represented as the list consisting of the concatenation of the m + 1 rows

b_{i,0,m−i}, b_{i,1,m−i−1}, ..., b_{i,m−i,0},

where 0 ≤ i ≤ m. As a triangle, the net N is listed (from top down) with the row f(t, ..., t), f(s, t, ..., t), ..., f(s, ..., s) first, then the row f(r, t, ..., t), f(r, s, t, ..., t), ..., f(r, s, ..., s), and so on, down to the last row, which consists of the single polar value f(r, ..., r).

The main advantage of this representation is that we can view the net N as a two-dimensional array net, such that net[i, j] = b_{i,j,k} (where i + j + k = m). In fact, only a triangular portion of this array is filled. This way of representing control nets fits well with the convention that the reference triangle ∆rst is represented as in the following figure:

Figure 8.1: Reference triangle

In order to compute N rat from N art, we wrote a very simple function transnetj, and in order to compute N rsa from N ars, we wrote a very simple function transnetk. We also have a function convtomat, which converts a control net given as a list of rows into a two-dimensional array. The corresponding Mathematica functions are given below:

(* Converts a triangular net into a triangular matrix *)

convtomat[{cnet__}, m_] :=
  Block[
    {cc = {cnet}, row, mat, n, i, j, pt},
    mat = {};
    Do[
      n = m + 2 - i; row = {};
      Do[
        pt = cc[[j]]; row = Append[row, pt],
        {j, 1, n}
      ];
      cc = Drop[cc, n];
      mat = Append[mat, row],
      {i, 1, m + 1}
    ];
    mat
  ];

(* To transpose a triangular net wrt the left edge *)
(* Converts (r, s, t) to (s, r, t)                 *)

transnetj[{cnet__}, m_] :=
  Block[
    {cc = {cnet}, pt, row, n, i, j, aux1, aux2, res},
    aux1 = {};
    Do[
      pt = cc[[j]]; aux1 = Append[aux1, {pt}],
      {j, 1, m + 1}
    ];
    cc = Drop[cc, m + 1];
    Do[
      n = m + 1 - i;
      Do[
        pt = cc[[j]];
        aux2 = Append[aux1[[j]], pt];
        aux1 = ReplacePart[aux1, aux2, j],
        {j, 1, n}
      ];
      cc = Drop[cc, n],
      {i, 1, m}
    ];
    res = {};
    Do[
      row = aux1[[j]]; res = Join[res, row],
      {j, 1, m + 1}
    ];
    res
  ];

(* To rotate a triangular net          *)
(* Converts (r, s, t) to (s, t, r)     *)

transnetk[{cnet__}, m_] :=
  Block[
    {cc = {cnet}, pt, row, n, i, j, aux1, aux2, res},
    aux1 = {};
    Do[
      pt = cc[[j]]; aux1 = Append[aux1, {pt}],
      {j, 1, m + 1}
    ];
    cc = Drop[cc, m + 1];
    Do[
      n = m + 1 - i;
      Do[
        pt = cc[[j]];
        aux2 = Prepend[aux1[[j]], pt];
        aux1 = ReplacePart[aux1, aux2, j],
        {j, 1, n}
      ];
      cc = Drop[cc, n],
      {i, 1, m}
    ];
    res = {};
    Do[
      row = aux1[[j]]; res = Join[res, row],
      {j, 1, m + 1}
    ];
    res
  ];

We found it convenient to write three distinct functions subdecas3ra, subdecas3sa, and subdecas3ta, computing the control nets with respect to the reference triangles ∆ast, ∆art, and ∆ars. The variables net, mm, and a correspond to the input control net, represented as a list of points, the degree of the net, and the barycentric coordinates of a point in the parameter plane. The output is a control net.

(* de Casteljau for triangular patches, with subdivision;              *)
(* this version returns the control net for the ref triangle (a, s, t) *)

subdecas3ra[{net__}, mm_, {a__}] :=
  Block[
    {cc = {net}, dd, net0, barcor, row, row1, ccts, m, i, j, l, pt},
    m = mm; barcor = {a};
    net0 = convtomat[cc, m];
    ccts = {}; row1 = {};
    Do[
      row1 = Append[row1, net0[[1, j + 1]]],
      {j, 0, m}
    ];
    ccts = row1;
    Do[
      dd = {};
      Do[
        row = {};
        Do[
          pt = barcor[[1]] * net0[[i + 2, j + 1]] +
               barcor[[2]] * net0[[i + 1, j + 2]] +
               barcor[[3]] * net0[[i + 1, j + 1]];
          row = Append[row, pt],
          {j, 0, m - i - l}
        ];
        dd = Join[dd, row],
        {i, 0, m - l}
      ];
      If[m - l =!= 0,
        net0 = convtomat[dd, m - l];
        row1 = {};
        Do[
          row1 = Append[row1, net0[[1, j + 1]]],
          {j, 0, m - l}
        ];
        ccts = Join[ccts, row1],
        ccts = Join[ccts, dd]
      ],
      {l, 1, m}
    ];
    ccts
  ];

(* de Casteljau for triangular patches, with subdivision;              *)
(* this version returns the control net for the ref triangle (a, r, t) *)

subdecas3sa[{net__}, mm_, {a__}] :=
  Block[
    {cc = {net}, dd, net0, barcor, row, row0, cctr, m, i, j, l, pt},
    m = mm; barcor = {a};
    net0 = convtomat[cc, m];
    cctr = {}; row0 = {};
    Do[
      row0 = Append[row0, net0[[i + 1, 1]]],
      {i, 0, m}
    ];
    cctr = row0;
    Do[
      dd = {};
      Do[
        row = {};
        Do[
          pt = barcor[[1]] * net0[[i + 2, j + 1]] +
               barcor[[2]] * net0[[i + 1, j + 2]] +
               barcor[[3]] * net0[[i + 1, j + 1]];
          row = Append[row, pt],
          {j, 0, m - i - l}
        ];
        dd = Join[dd, row],
        {i, 0, m - l}
      ];
      If[m - l =!= 0,
        net0 = convtomat[dd, m - l];
        row0 = {};
        Do[
          row0 = Append[row0, net0[[i + 1, 1]]],
          {i, 0, m - l}
        ];
        cctr = Join[cctr, row0],
        cctr = Join[cctr, dd]
      ],
      {l, 1, m}
    ];
    cctr
  ];

(* de Casteljau for triangular patches, with subdivision;              *)
(* this version returns the control net for the ref triangle (a, r, s) *)

subdecas3ta[{net__}, mm_, {a__}] :=
  Block[
    {cc = {net}, dd, net0, barcor, row, row2, ccsr, m, i, j, l, pt},
    m = mm; barcor = {a};
    net0 = convtomat[cc, m];
    ccsr = {}; row2 = {};
    Do[
      row2 = Append[row2, net0[[i + 1, m - i + 1]]],
      {i, 0, m}
    ];
    ccsr = row2;
    Do[
      dd = {};
      Do[
        row = {};
        Do[
          pt = barcor[[1]] * net0[[i + 2, j + 1]] +
               barcor[[2]] * net0[[i + 1, j + 2]] +
               barcor[[3]] * net0[[i + 1, j + 1]];
          row = Append[row, pt],
          {j, 0, m - i - l}
        ];
        dd = Join[dd, row],
        {i, 0, m - l}
      ];
      If[m - l =!= 0,
        net0 = convtomat[dd, m - l];
        row2 = {};
        Do[
          row2 = Append[row2, net0[[i + 1, m - l - i + 1]]],
          {i, 0, m - l}
        ];
        ccsr = Join[ccsr, row2],
        ccsr = Join[ccsr, dd]
      ],
      {l, 1, m}
    ];
    ccsr
  ];

The following function computes a polar value, given a control net net, the degree mm of the net, and a list a of mm points given by their barycentric coordinates:

(* polar value by de Casteljau for triangular patches *)

poldecas3[{net__}, mm_, {a__}] :=
  Block[
    {cc = {net}, dd, net0, barcor, row, m, i, j, l, pt, res},
    m = mm; barcor = {a};
    net0 = convtomat[cc, m];
    Do[
      dd = {};
      Do[
        row = {};
        Do[
          pt = barcor[[l, 1]] * net0[[i + 2, j + 1]] +
               barcor[[l, 2]] * net0[[i + 1, j + 2]] +
               barcor[[l, 3]] * net0[[i + 1, j + 1]];
          row = Append[row, pt],
          {j, 0, m - i - l}
        ];
        dd = Join[dd, row],
        {i, 0, m - l}
      ];
      If[m - l =!= 0,
        net0 = convtomat[dd, m - l],
        res = dd
      ],
      {l, 1, m}
    ];
    res
  ];

If we want to render a triangular surface patch F defined over the reference triangle ∆rst, it seems natural to subdivide ∆rst into the three subtriangles ∆ars, ∆ast, and ∆art, where a = (1/3, 1/3, 1/3) is the center of gravity of the triangle ∆rst, getting new control nets N ars, N ast, and N art using the functions described earlier, and to repeat this process recursively. However, this process does not yield a good triangulation of the surface patch, because no progress is made on the edges rs, st, and tr, and thus such a triangulation does not converge to the surface patch. Therefore, in order to compute triangulations that converge to the surface patch, we need to subdivide the triangle ∆rst in such a way that the edges of the reference triangle are subdivided. There are many ways of performing such subdivisions, and we will propose a rather efficient method, which yields a very regular triangulation. In fact, we give a method for subdividing a reference triangle using only four calls to the de Casteljau algorithm in its subdivision version. A naive method would require twelve calls.
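The row layout used by convtomat is easy to mirror in other languages. A Python analogue of the conversion (our own sketch):

```python
def conv_to_mat(flat, m):
    """Split the flat list of (m+1)(m+2)/2 net entries into rows of
    lengths m+1, m, ..., 1, mirroring the triangular matrix layout."""
    mat, pos = [], 0
    for i in range(m + 1):
        n = m + 1 - i
        mat.append(flat[pos:pos + n])
        pos += n
    return mat
```

After the conversion, entry mat[i][j] plays the role of net[i, j] = b_{i,j,k}.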

Figure 8.2: Subdividing a reference triangle ∆rst

Subdivision methods based on a version of the de Casteljau algorithm which splits a control net into three control subnets were investigated by Goldman [38], Boehm and Farin [9], Böhm [12], and Seidel [75] (see also Boehm, Farin, and Kahman [10], and Filip [34]). However, some of these authors were not particularly concerned with minimizing the number of calls to the de Casteljau algorithm, and they use a version of the de Casteljau algorithm computing a 5-dimensional simplex of polar values, which is more expensive than the standard 3-dimensional version. It was brought to our attention by Gerald Farin (and it is mentioned at the end of Seidel's paper [75]) that Helmut Prautzsch showed in his dissertation (in German) [63] that regular subdivision into four subtriangles can be achieved in four calls to the de Casteljau algorithm. This algorithm is also sketched in Böhm [12] (see pages 348-349). We rediscovered this algorithm, and present a variant of it below.

The subdivision strategy that we will follow is to divide the reference triangle ∆rst into four subtriangles ∆abt, ∆bac, ∆crb, and ∆sca, where a = (0, 1/2, 1/2), b = (1/2, 0, 1/2), and c = (1/2, 1/2, 0) are the middle points of the sides st, rt, and rs respectively, as shown in the diagram below. The first step is to compute the control net for the reference triangle ∆bat. This can be done using two steps.
In the first step, we split the triangle ∆rst into the two triangles ∆art and ∆ars, where a = (0, 1/2, 1/2) is the middle of st. Using the function sdecas3, the nets N art, N ast, and N ars are obtained, and we throw away N ast (which is degenerate anyway). In the second step, we split ∆art into the two triangles ∆bat and ∆bar. For this, we need the barycentric coordinates of b with respect to the triangle ∆art, which turn out to be (0, 1/2, 1/2). Using the function sdecas3 (with a = (0, 1/2, 1/2)), the nets N bat, N brt, and N bar are obtained, and we throw away N brt.

the net N cas is obtained. For this. N bar and N ars from N rst cas a c bat bar t b r Figure 8. transposek to N cba . Using the function subdecas3sa. we need the barycentric coordinates of c with respect to the reference triangle ∆bar which turns out to be (−1. and N cba are obtained. we need the barycentric coordinates of c with respect to the triangle ∆ars.4: Computing the net N cas from N ars We will now compute the net N cas from the net N ars. SUBDIVISION ALGORITHMS FOR POLYNOMIAL SURFACES s a ars bat bar t b r s Figure 8. We can now compute the nets N cbr and N cba from the net N bar. and we throw away N car. 1/2). For this.3: Computing the nets N bat. Using the function sdecas3. N car. 1/2. which turns out to be (0. 1. the snet N cbr. Finally.304 CHAPTER 8. we apply transposej to the net N bat to get the net N abt. 1).

using four calls to the de Casteljau algorithm. transposej followed by transposek to the net N cbr to get the net N crb. N crb. Thus. N crb is green. and we found that they formed a particularly nice pattern under this ordering of the vertices of the triangles. and N sca is yellow. N bac is red. and transposek twice to N cas to get the net N sca.1. The subdivision algorithm just presented is implemented in Mathematica as follows. and ∆sca to get the net N bac. ∆crb.5: Computing the nets N cbr and N cba from N bar sca a bac abt crb c t b r Figure 8. In fact. N crb.8. and N sca. N bac. . SUBDIVISION ALGORITHMS FOR TRIANGULAR PATCHES s 305 cas a cba bat cbr c t b r s Figure 8. Remark: For debugging purposes. we obtained the nets N abt. ∆bac.6: Subdividing ∆rst into ∆abt. we assigned diﬀerent colors to the patches corresponding to N abt. N bac. N abt is blue. and N sca.

stop = 1. oldm_. (m = netsize[cc]. m. stop.306 CHAPTER 8. debug_] := Block[ {cc = {net}. i = 1. by dividing the base triangle into four subtriangles using the middles of the original sides uses a tricky scheme involving 4 de Casteljau steps basic version that does not try to resolve singularities The triangles are abt. *) barcor1 = {-1. res = res + i] ]. art. barcor2 = {0. m]. (* (* (* (* (* Subdivides into four subpatches. ars. 1/2}. stop = 0. sca. barcor2}. res ) ]. res}. res = i . debug]. ars. bac. (ll = Length[cc]. i. res = 1. (* Print[" calling mainsubdecas4 with netsize = ". newnet ) ]. While[i <= s && stop === 1. If[(res === ll). newnet = nsubdecas4[art. by dividing the base triangle into four subtriangles using the middles of the original sides uses a tricky scheme involving 4 de Casteljau steps basic version that does not try to resolve singularities *) *) *) *) . barcor2]. s. (* (* (* (* Subdivides into four subpatches. SUBDIVISION ALGORITHMS FOR POLYNOMIAL SURFACES (* Computes the polar degree m associated with a net cnet of sixe (m + 1)*(m + 2)/2 *) netsize[cnet__] := Block[ {cc = cnet. newnet = sdecas3[cc. m. 1/2. i = i + 1. art = newnet[[1]]. 1. and crb *) *) *) *) *) mainsubdecas4[{net__}. 1}. newnet. ars = newnet[[3]]. m. ll.1. s = N[Sqrt[2 * ll]].

m]. m]. in order to obtain a triangulation of the surface patch speciﬁed by the control net net. m]. barcor1]. newnet = Join[newnet. Print["*** bar: ". newnet = sdecas3[art.1. cas. bar = newnet[[3]]. bac. Print["*** ars: ". If[debug === -20. cba. ars = {net2}.bar]]. starting with an input control net net. barcor2}. newnet = Join[{abt}. debug_] := Block[ {newnet. sca = transnetk[sca. {sca}]. m]. abt = transnetj[bat. bac. bar. barcor2]. crb = transnetk[crb. crb = transnetj[cbr. If[debug === -20. cbr. cbr = newnet[[1]]. m]. crb. sca. bac = transnetk[cba. {bac}]. If[debug === -20. bat. art = {net1}. 1/2. 1}.ars]]. The function rsubdiv4 shown below performs n recursive steps of subdivision. sca = transnetk[cas. we can repeatedly subdivide the nets in a list of nets. (m = netsize[art]. *) barcor1 = {-1. (* performs a subdivision step on each net in a list *) . and crb *) 307 nsubdecas4[{net1__}. {net2__}. m. Print["*** abt: ". cba = newnet[[3]]. cas = subdecas3sa[ars. oldm_. barcor1. SUBDIVISION ALGORITHMS FOR TRIANGULAR PATCHES (* The triangles are abt. newnet = sdecas3[bar.8. m].abt]]. m. m]. sca. barcor2]. Using mainsubdecas4. barcor2 = {0. starting from a list consisting of a single control net net. 1. The function itersub4 takes a list of nets and subdivides each net in this list into four subnets. newnet = Join[newnet. newnet ) ]. bat = newnet[[1]]. Print["*** art: ". (* Print[" calling mainsubdecas4 with netsize = ".art]]. 1/2}. m. If[debug === -20. m. {crb}].

l. i}. {i.. polygons are considered nontransparent. that is. This can be done in a number of ways. where each net is a list of points.e. Do[ newnet = itersub4[newnet. i.308 CHAPTER 8. i}. mainsubdecas4[cnet[[i]]. 0. lnet = {}. n} ]. {6. m_. 1. SUBDIVISION ALGORITHMS FOR POLYNOMIAL SURFACES *) (* using subdecas4. and the rendering algorithm automatically removes hidden parts. it is necessary to triangulate each net. The best thing to do is to use the Polygon construct of Mathematica. or color the polygons as desired.e. {4. This is very crucial to understand complicated surfaces. 2}. {2. ( newnet = {newnet}. recursively splits into 4 patches *) rsubdiv4[{net__}. to join the control points in a net by line segments. m_. debug_] := Block[ {cnet = {net}. 0. i. 0}. debug_] := Block[ {newnet = {net}. The function rsubdiv4 creates a list of nets. lnet ) ]. and is left as an exercise. (* performs n subdivision steps using itersub4. debug]. 2}. n_. . The subdivision method is illustrated by the following example of a cubic patch speciﬁed by the control net net = {{0. splitting into four patches itersub4[{net__}. debug]] . m. l} ]. 1. 0}. Do[ lnet = Join[lnet. It is also very easy to use the shading options of Mathematica. m. 0. (l = Length[cnet]. In order to render the surface patch. newnet ) ]. Indeed. {i.. 0.

2. 2}. {2. {6. 6. 2. 0}. 4.1. {3. 0. 3. 5}. 2}. {4. 2}. 0. {3. {2. 2. 2}. Another example of a cubic patch speciﬁed by the control net sink is shown below. sink = {{0.7: Subdivision. 2. 0. SUBDIVISION ALGORITHMS FOR TRIANGULAR PATCHES 309 6 y 4 2 50 4 z 3 2 1 0 0 2 x 4 6 Figure 8. . 4.8. 2}. After only three subdivision steps. 1 iteration {1. 2}. {5. 0. {4. 0}. 0}}. We show the output of the subdivision algorithm for n = 1. the triangulation approximates the surface patch very well.
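Since the code above is Mathematica, a small Python transcription (hypothetical, not part of the book's code) makes the two counting facts easy to check: a triangular net of polar degree m has (m + 1)(m + 2)/2 control points, and each subdivision step turns every net in the list into four subnets.

```python
def netsize(ll):
    """Recover the polar degree m of a triangular control net with
    ll points, where ll = (m + 1)(m + 2)/2 (mirrors netsize above)."""
    m = 0
    while (m + 1) * (m + 2) // 2 < ll:
        m += 1
    if (m + 1) * (m + 2) // 2 != ll:
        raise ValueError("not a valid triangular-net size")
    return m

def nets_after(n):
    """Number of subnets after n four-way subdivision steps."""
    return 4 ** n

print(netsize(10))    # a cubic net (m = 3) has 10 control points -> 3
print(nets_after(3))  # -> 64
```

This is why three subdivision steps already give a fine triangulation: the single input net has become 64 subnets.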

Figure 8.8: Subdivision, 2 iterations

Figure 8.9: Subdivision, 3 iterations

This surface patch looks like a sink!

Figure 8.10: Subdivision, 3 iterations

Another pleasant application of the subdivision method is that it yields an eﬃcient method for computing a control net N abc over a new reference triangle ∆abc, from a control net N over an original reference triangle ∆rst. Such an algorithm is useful if we wish to render a surface patch over a bigger or diﬀerent reference triangle than the originally given one. Before discussing such an algorithm, we need to review how a change of reference triangle is performed.

Let ∆rst and ∆abc be two reference triangles, and let (λ1, µ1, ν1), (λ2, µ2, ν2), and (λ3, µ3, ν3) be the barycentric coordinates of a, b, c, with respect to ∆rst, so that

  a = λ1 r + µ1 s + ν1 t,
  b = λ2 r + µ2 s + ν2 t,
  c = λ3 r + µ3 s + ν3 t.

Given any arbitrary point d, if d has coordinates (λ, µ, ν) with respect to ∆rst, and coordinates (λ′, µ′, ν′) with respect to ∆abc, since

  d = λr + µs + νt = λ′a + µ′b + ν′c,

we easily get

  λ     λ1 λ2 λ3   λ′
  µ  =  µ1 µ2 µ3   µ′
  ν     ν1 ν2 ν3   ν′

and thus, by inverting a matrix,

  λ′     λ1 λ2 λ3 −1  λ
  µ′  =  µ1 µ2 µ3     µ
  ν′     ν1 ν2 ν3     ν

Thus, the coordinates (λ′, µ′, ν′) of d with respect to ∆abc can be computed from the coordinates (λ, µ, ν) of d with respect to ∆rst. This is easily done using determinants, by Cramer's formulae (see Lang [47] or Strang [81]).

Now, given a reference triangle ∆rst and a control net N over ∆rst, we can compute the new control net N abc over the new reference triangle ∆abc, using three subdivision steps as explained below. In the ﬁrst step, we compute the control net N ast over the reference triangle ∆ast, using subdecas3ra. In the second step, we compute the control net N bat using subdecas3sa, and then the control net N abt over the reference triangle ∆abt, using transnetj. In the third step, we compute the control net N cab using subdecas3ta, and then the control net N abc over the reference triangle ∆abc, using transnetk.
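The 3 × 3 solve in the change of reference triangle can be checked numerically. The Python sketch below mirrors the determinant-based approach (the names det3 and solve3 echo the Mathematica functions, but this is an independent illustration, not the book's code): given the barycentric coordinates of a, b, c w.r.t. ∆rst as columns, Cramer's formulae return the coordinates of d w.r.t. ∆abc.

```python
def det3(a, b, c):
    # 3 x 3 determinant of the matrix with columns a, b, c,
    # expanded along the third row
    return (a[2] * (b[0]*c[1] - b[1]*c[0])
            - b[2] * (a[0]*c[1] - a[1]*c[0])
            + c[2] * (a[0]*b[1] - a[1]*b[0]))

def solve3(a, b, c, d):
    # Solve the linear system [a b c] x = d by Cramer's formulae
    dd = det3(a, b, c)
    return (det3(d, b, c) / dd, det3(a, d, c) / dd, det3(a, b, d) / dd)

# Barycentric coordinates of a, b, c with respect to (r, s, t)
# (hypothetical sample values):
a, b, c = (1.0, 0.0, 0.0), (0.5, 0.5, 0.0), (1/3, 1/3, 1/3)
# d given by (lam, mu, nu) w.r.t. (r, s, t); solving against the
# columns a, b, c gives its coordinates w.r.t. (a, b, c)
d = (0.5, 0.3, 0.2)
lam, mu, nu = solve3(a, b, c, d)
```

Because each column sums to 1 and the coordinates of d sum to 1, the new coordinates also sum to 1, as barycentric coordinates must.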

One should also observe that the above method is only correct if a does not belong to st, and b does not belong to at. In general, some adaptations are needed. We used the strategy explained below.

Case 1: a ∉ st.

Case 1a: b ∉ at. Compute N ast using subdecas3ra; next, compute N bat using subdecas3sa, and then N abt using transnetj; ﬁnally, compute N cab using subdecas3ta, and then N abc using transnetk.

Case 1b: b ∈ at. First compute N art using transnetj, and then N ars using subdecas3ta; then compute N bas using subdecas3ra, and then N abs using transnetj; ﬁnally, compute N abc using subdecas3ta.

Case 2a: s = a (and thus, a ∈ st). In this case, ∆rst = ∆rat, and we go back to case 1.

Case 2b: a ∈ st and s ≠ a. First, compute N ast using subdecas3ra, then compute N tas from N ast using transnetk twice, and then go back to case 1.

Note that in the second step, we need the coordinates of b with respect to the reference triangle ∆ast, and in the third step, we need the coordinates of c with respect to the reference triangle ∆abt. This can be easily done by inverting a matrix of order 3, as explained earlier. The implementation in Mathematica requires some auxiliary functions. The function collin checks whether three points in the plane are collinear.

Figure 8.11: Case 1a: a ∉ st, b ∉ at

Figure 8.12: Case 1b: a ∉ st, b ∈ at

Figure 8.13: Case 2b: a ∈ st, s ≠ a

(* checks for collinearity of three points in the plane *)

collin[a__, b__, c__] := Block[
 {a1, a2, b1, b2, c1, c2, d1, d2, res},
 (a1 = a[[1]]; a2 = a[[2]];
  b1 = b[[1]]; b2 = b[[2]];
  c1 = c[[1]]; c2 = c[[2]];
  d1 = (c1 - a1)*(b2 - a2);
  d2 = (c2 - a2)*(b1 - a1);
  If[d1 === d2, res = 1, res = 0];
  res )
];

(* Computes a 3 X 3 determinant *)

det3[a_, b_, c_] := Block[
 {a1, a2, a3, b1, b2, b3, c1, c2, c3, d1, d2, d3, res},
 a1 = a[[1]]; a2 = a[[2]]; a3 = a[[3]];
 b1 = b[[1]]; b2 = b[[2]]; b3 = b[[3]];
 c1 = c[[1]]; c2 = c[[2]]; c3 = c[[3]];
 d1 = b1*c2 - b2*c1;
 d2 = a1*c2 - a2*c1;
 d3 = a1*b2 - a2*b1;
 res = a3*d1 - b3*d2 + c3*d3;
 res
];

The function solve3 solves a linear system of 3 equations in 3 variables. It uses the function det3 computing a 3 × 3 determinant.

(* Solves a 3 X 3 linear system *)
(* Assuming a, b, c, d are column vectors *)

solve3[a_, b_, c_, d_] := Block[
 {x1, x2, x3, dd, res},
 (dd = det3[a, b, c];
  If[dd === 0, Print["*** Null Determinant ***"]; dd = 10^(-10)];
  x1 = det3[d, b, c]/dd;
  x2 = det3[a, d, c]/dd;
  x3 = det3[a, b, d]/dd;
  res = {x1, x2, x3};
  res )
];

Finally, the function newcnet3 computes a new net N abc over the triangle ∆abc, from a net N over the triangle ∆rts, given the barycentric coordinates of the vertices of the new reference triangle (a, b, c). In the simplest case, where a, s, t are not collinear and b is not on (a, t), the algorithm ﬁrst computes the control net w.r.t. (a, s, t) using subdecas3ra, then the control net w.r.t. (a, b, t) using subdecas3sa and transnetj, and then the control net w.r.t. (a, b, c) using subdecas3ta and transnetk. The other cases are also treated. The function returns the net (a, b, c). The function newcnet3 uses the auxiliary function fnewaux.

(* computes new control net, given barycentric coords *)
(* for new reference triangle (a, b, c) *)

newcnet3[{net__}, m_, {reftrig__}] := Block[
 {cc = {net}, newtrig = {reftrig}, a, b, c, neta, newnet, rr, ss, tt},
 (newnet = {};
  rr = {1, 0, 0}; ss = {0, 1, 0}; tt = {0, 0, 1};
  a = newtrig[[1]]; b = newtrig[[2]]; c = newtrig[[3]];
  If[collin[a, ss, tt] === 0,
   (* In this case, a is not on (s, t) *)
   (* Print[" a NOT on (s, t) "]; *)
   neta = subdecas3ra[cc, m, a];
   (* Now, the net is (a, s, t). Want (a, b, c) *)
   newnet = fnewaux[neta, m, a, b, c, ss, tt],
   If[a === ss,
    (* In this case, a is on (s, t) and a = ss *)
    (* Print[" a IS on (s, t) and a = ss"]; *)
    neta = transnetj[cc, m];
    (* Now, the net is (a, r, t). Want (a, b, c) *)
    newnet = fnewaux[neta, m, a, b, c, rr, tt],
    (* In this case, a is on (s, t) and a <> ss *)
    (* Print[" a on (s, t) and a <> ss"]; *)
    neta = subdecas3ta[cc, m, a];
    neta = transnetk[neta, m];
    (* Now, the net is (a, r, s). Want (a, b, c) *)
    newnet = fnewaux[neta, m, a, b, c, rr, ss]
   ]
  ];
  newnet )
];

(* This routine is used by newcnet3 *)
(* Initially, neta = (a, s, t) *)

fnewaux[{net__}, m_, a_, b_, c_, ss_, tt_] := Block[
 {neta = {net}, netb, newnet, nb, nc},
 (If[collin[b, a, tt] === 0,
   (* In this case, b is not on (a, t) *)
   (* Print[" b NOT on (a, t) "]; *)
   nb = solve3[a, ss, tt, b];
   netb = subdecas3sa[neta, m, nb];
   netb = transnetj[netb, m],
   (* In this case, b is on (a, t): exchange the roles of s and t *)
   (* Print[" b IS on (a, t) "]; *)
   nb = solve3[a, tt, ss, b];
   neta = transnetk[neta, m];
   netb = subdecas3sa[neta, m, nb];
   netb = transnetj[netb, m]
  ];
  (* Now, the net is (a, b, t) (or (a, b, s)). Want (a, b, c) *)
  nc = solve3[a, b, tt, c];
  newnet = subdecas3ta[netb, m, nc];
  newnet = transnetk[newnet, m];
  newnet )
];

As an example of the use of the above functions, we can display a portion of a well known surface known as the "monkey saddle", deﬁned by the equations

  x = u,
  y = v,
  z = u³ − 3uv².

Note that z is the real part of the complex number (u + iv)³. It is easily shown that the monkey saddle is speciﬁed by the following triangular control net monknet over the standard reference triangle ∆rst, where r = (1, 0, 0), s = (0, 1, 0), and t = (0, 0, 1).

monknet = {{0, 0, 0}, {1/3, 0, 0}, {2/3, 0, 0}, {1, 0, 1},
           {0, 1/3, 0}, {1/3, 1/3, 0}, {2/3, 1/3, 0},
           {0, 2/3, 0}, {1/3, 2/3, -1}, {0, 1, 0}};

Using newcnet3 twice to get some new nets net1 and net2 over the two reference triangles ref trig1 and ref trig2, and then subdividing both nets 3 times, we get the following picture.

Another nice application of the subdivision algorithms is an eﬃcient method for computing the control points of a curve on a triangular surface patch, where the curve is the image of a line in the parameter plane, speciﬁed by two points a and b. What we need is to compute the control points

  di = f(a, . . . , a, b, . . . , b),

with m − i copies of a and i copies of b, for 0 ≤ i ≤ m.
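The claim that the monkey saddle control net reproduces z = u³ − 3uv² can be checked with a generic triangular de Casteljau evaluator. This Python sketch is an independent check, not the book's code; the control points below were derived here from the polar form of z = u³ − 3uv² and are assumed to match monknet.

```python
def tri_indices(n):
    """All (i, j, k) with i + j + k = n, i, j, k >= 0."""
    return [(i, j, n - i - j) for i in range(n + 1) for j in range(n - i + 1)]

def decas_tri(net, m, u, v, w):
    """Evaluate a triangular Bezier patch of degree m at barycentric
    (u, v, w) with u + v + w = 1; net maps (i, j, k) to a 3D point."""
    layer = dict(net)
    for n in range(m - 1, -1, -1):
        nxt = {}
        for (i, j, k) in tri_indices(n):
            a = layer[(i + 1, j, k)]
            b = layer[(i, j + 1, k)]
            c = layer[(i, j, k + 1)]
            nxt[(i, j, k)] = tuple(u * x + v * y + w * z
                                   for x, y, z in zip(a, b, c))
        layer = nxt
    return layer[(0, 0, 0)]

# Degree-3 control net of the monkey saddle (derived from the polar form):
monknet = {
    (3, 0, 0): (1, 0, 1),
    (2, 1, 0): (2/3, 1/3, 0), (2, 0, 1): (2/3, 0, 0),
    (1, 2, 0): (1/3, 2/3, -1), (1, 1, 1): (1/3, 1/3, 0),
    (1, 0, 2): (1/3, 0, 0),
    (0, 3, 0): (0, 1, 0), (0, 2, 1): (0, 2/3, 0),
    (0, 1, 2): (0, 1/3, 0), (0, 0, 3): (0, 0, 0),
}

u, v = 0.5, 0.25
x, y, z = decas_tri(monknet, 3, u, v, 1 - u - v)
# x == u, y == v, and z == u**3 - 3*u*v**2
```

Only the two control points with nonzero z are needed to produce the saddle, which is why pulling a single interior point down (as in the sink example) changes the shape so dramatically.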

Figure 8.14: A monkey saddle, triangular subdivision

We could compute these polar values directly, but there is a much faster method. Indeed, assuming that the surface is deﬁned by some net N over the reference triangle ∆rts, we have the following cases.

Case 1: r ∉ ab. In this case, we simply have to compute N rba using newcnet3, and the control points (d0, . . . , dm) are simply the bottom row of the net N rba, assuming the usual representation of a triangular net as the list of rows b_{i, 0, m−i}, . . . , b_{i, m−i, 0}, where m is the degree of the surface.

Case 2a: r ∈ ab and a ∈ rt. We compute N sba using newcnet3.

Case 2b: r ∈ ab and a ∉ rt. In this case, since r ∈ ab, we must have t ∉ ab, and we compute N tba using newcnet3.

The corresponding Mathematica program is given below.

Figure 8.15: Case 1: r ∉ ab

Figure 8.16: Case 2a: r ∈ ab and a ∈ rt

Figure 8.17: Case 2b: r ∈ ab and a ∉ rt

(* computes a control net for a curve on the surface, *)
(* given two points a = (a1, a2, a3) and b = (b1, b2, b3) *)
(* in the parameter plane, wrt the reference triangle (r, s, t) *)
(* We use newcnet3 to get the control points of the *)
(* curve segment defined over [0, 1] *)

curvcpoly[{net__}, m_, a_, b_] := Block[
 {cc = {net}, aa, bb, r, s, t, trigr, trigs, trigt, net1, res, i},
 (r = {1, 0, 0}; s = {0, 1, 0}; t = {0, 0, 1};
  aa = a; bb = b; res = {};
  trigr = {r, bb, aa}; trigs = {s, bb, aa}; trigt = {t, bb, aa};
  If[collin[aa, bb, r] === 0,
   net1 = newcnet3[cc, m, trigr],
   If[collin[aa, r, t] === 1,
    net1 = newcnet3[cc, m, trigs],
    net1 = newcnet3[cc, m, trigt]]
  ];
  Do[ res = Append[res, net1[[i]]], {i, 1, m + 1} ];
  res )
];

Using the function curvcpoly, it is easy to render a triangular patch by rendering a number of u-curves and v-curves.

8.2 Subdivision Algorithms for Rectangular Patches

We now consider algorithms for approximating rectangular patches using recursive subdivision. Given two aﬃne frames (r1, s1) and (r2, s2) for the aﬃne line A, recall that in terms of the polar form f : (A)^p × (A)^q → E of the bipolynomial surface F : A × A → E of degree ⟨p, q⟩ deﬁned by a rectangular control net N = (b_{i, j}), we have

  b_{i, j} = f(r1, . . . , r1, s1, . . . , s1, r2, . . . , r2, s2, . . . , s2),

with p − i copies of r1, i copies of s1, q − j copies of r2, and j copies of s2, for every (i, j) with 0 ≤ i ≤ p and 0 ≤ j ≤ q.

Unlike subdividing triangular patches, subdividing rectangular patches is quite simple. The ﬁrst way is

to compute the two nets N[r1, u; ∗] and N[u, s1; ∗], where

  N[r1, u; ∗]_{i, j} = f(r1, . . . , r1, u, . . . , u, r2, . . . , r2, s2, . . . , s2),

with p − i copies of r1 and i copies of u, and

  N[u, s1; ∗]_{i, j} = f(u, . . . , u, s1, . . . , s1, r2, . . . , r2, s2, . . . , s2),

with p − i copies of u and i copies of s1, where 0 ≤ i ≤ p and 0 ≤ j ≤ q (the second argument list always consisting of q − j copies of r2 and j copies of s2). This can be achieved in q + 1 calls to the version of the de Casteljau algorithm performing subdivision (in the case of curves). This algorithm has been implemented in Mathematica as the function urecdecas.

The second way is to compute the two nets N[∗; r2, v] and N[∗; v, s2], where

  N[∗; r2, v]_{i, j} = f(r1, . . . , r1, s1, . . . , s1, r2, . . . , r2, v, . . . , v),

with q − j copies of r2 and j copies of v, and

  N[∗; v, s2]_{i, j} = f(r1, . . . , r1, s1, . . . , s1, v, . . . , v, s2, . . . , s2),

with q − j copies of v and j copies of s2, where 0 ≤ i ≤ p and 0 ≤ j ≤ q. This can be achieved in p + 1 calls to the version of the de Casteljau algorithm performing subdivision (in the case of curves). This algorithm has been implemented in Mathematica as the function vrecdecas.

Then, given an input net N over [r1, s1] × [r2, s2], for any u, v ∈ A, we can subdivide the net N into four subnets N[r1, u; r2, v], N[r1, u; v, s2], N[u, s1; r2, v], and N[u, s1; v, s2], by ﬁrst subdividing N into N[∗; r2, v] and N[∗; v, s2] using vrecdecas, and then by splitting each of these two nets using urecdecas. The four nets have the common corner F(u, v).

In order to implement these algorithms, we represent a rectangular control net N = (b_{i, j}) as the list of p + 1 rows b_{i, 0}, . . . , b_{i, q}, for 0 ≤ i ≤ p. This has the advantage that we can view N as a rectangular array net, with net[i, j] = b_{i, j}. The function makerecnet converts an input net into such a two dimensional array.
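The row-by-row use of curve subdivision behind urecdecas and vrecdecas can be sketched in Python (subdecas and vsplit are hypothetical names for this illustration; subdecas is the standard de Casteljau subdivision of a control polygon over [0, 1]):

```python
def subdecas(poly, t):
    """De Casteljau subdivision of a Bezier control polygon over [0, 1]:
    returns the control polygons of the two halves split at t."""
    left, right = [poly[0]], [poly[-1]]
    pts = list(poly)
    while len(pts) > 1:
        pts = [tuple((1 - t) * x + t * y for x, y in zip(p, q))
               for p, q in zip(pts, pts[1:])]
        left.append(pts[0])
        right.insert(0, pts[-1])
    return left, right

def vsplit(net, t):
    """Subdivide a rectangular net (a list of p + 1 rows) along the
    second parameter: one curve subdivision per row, i.e. p + 1 calls."""
    halves = [subdecas(row, t) for row in net]
    return [l for l, _ in halves], [r for _, r in halves]

left, right = subdecas([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
# left  == [(0.0, 0.0), (0.5, 1.0), (1.0, 1.0)]
# right == [(1.0, 1.0), (1.5, 1.0), (2.0, 0.0)]
```

Splitting along the first parameter works the same way, one call per column, which is where the q + 1 count for urecdecas comes from.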

(* To make a rectangular net of degree (p, q) *)
(* from a list of length (p + 1)x(q + 1) *)

makerecnet[{net__}, p_, q_] := Block[
 {oldnet = {net}, i, j, row, pt, newnet},
 newnet = {};
 Do[ row = {};
  Do[ pt = oldnet[[i*(q + 1) + j + 1]];
   row = Append[row, pt], {j, 0, q} ];
  newnet = Append[newnet, row],
  {i, 0, p} ];
 newnet
];

Figure 8.18: Subdividing a rectangular patch

The subdivision algorithm is implemented in Mathematica by the function recdecas, which uses the functions vrecdecas and urecdecas. In turn, these functions use the function subdecas, which performs the subdivision of a control polygon. It turns out that an auxiliary function rectrans, converting a matrix given as a list of columns into a linear list of rows, is needed.

(* De Casteljau algorithm with subdivision into four rectangular nets *)
(* for a rectangular net of degree (p, q) *)

recdecas[{oldnet__}, p_, q_, r1_, s1_, r2_, s2_, u_, v_, debug_] := Block[
 {net = {oldnet}, temp, newneta, newnetb, newnet1, newnet2, newnet},
 If[debug === 2, Print[" p = ", p, ", q = ", q, " in recdecas"]];
 newneta = {}; newnetb = {}; newnet1 = {}; newnet2 = {};
 temp = vrecdecas[net, p, q, r2, s2, v, debug];
 newneta = temp[[1]]; newnetb = temp[[2]];
 newnet1 = urecdecas[newneta, p, q, r1, s1, u, debug];
 newnet2 = urecdecas[newnetb, p, q, r1, s1, u, debug];
 newnet = Join[newnet1, newnet2];
 If[debug === 2, Print[" Subdivided nets: ", newnet]];
 newnet
];

(* De Casteljau algorithm with subdivision into two rectangular nets *)
(* for a rectangular net of degree (p, q). *)
(* Subdivision along the u-curves *)

urecdecas[{oldnet__}, p_, q_, r1_, s1_, u_, debug_] := Block[
 {net = {oldnet}, i, j, temp, bistar, row1, row2, pt,
  newnet1, newnet2, newnet},
 bistar = {}; row1 = {}; row2 = {}; newnet1 = {}; newnet2 = {};
 Do[ bistar = {}; row1 = {}; row2 = {};
  Do[ pt = net[[(q + 1)*i + j + 1]];
   bistar = Append[bistar, pt], {i, 0, p} ];
  If[debug === 2, Print[" bistar: ", bistar]];
  temp = subdecas[bistar, r1, s1, u];
  row1 = temp[[1]]; row2 = temp[[2]];
  newnet1 = Join[newnet1, {row1}];
  newnet2 = Join[newnet2, {row2}],
  {j, 0, q} ];
 newnet1 = rectrans[newnet1];
 newnet2 = rectrans[newnet2];
 newnet = Join[{newnet1}, {newnet2}];
 If[debug === 2, Print[" Subdivided nets: ", newnet]];
 newnet
];

(* Converts a matrix given as a list of columns into a linear list of rows *)

rectrans[{oldnet__}] :=

Block[
 {net = {oldnet}, i, j, pp, qq, row, pt, newnet},
 row = {}; newnet = {};
 qq = Length[net];
 pp = Length[net[[1]]];
 Do[
  Do[
   row = net[[j]];
   pt = row[[i]];
   newnet = Append[newnet, pt],
   {j, 1, qq} ],
  {i, 1, pp} ];
 newnet
];

(* De Casteljau algorithm with subdivision into two rectangular nets *)
(* for a rectangular net of degree (p, q). *)
(* Subdivision along the v-curves *)


vrecdecas[{oldnet__}, p_, q_, r2_, s2_, v_, debug_] := Block[
 {net = {oldnet}, i, j, temp, bstarj, row1, row2, pt,
  newneta, newnetb, newnet},
 bstarj = {}; row1 = {}; row2 = {}; newneta = {}; newnetb = {};
 Do[ bstarj = {}; row1 = {}; row2 = {};
  Do[ pt = net[[(q + 1)*i + j + 1]];
   bstarj = Append[bstarj, pt], {j, 0, q} ];
  If[debug === 2, Print[" bstarj: ", bstarj]];
  temp = subdecas[bstarj, r2, s2, v];
  row1 = temp[[1]]; row2 = temp[[2]];
  newneta = Join[newneta, row1];
  newnetb = Join[newnetb, row2],
  {i, 0, p} ];
 newnet = Join[{newneta}, {newnetb}];
 If[debug === 2, Print[" Subdivided nets: ", newnet]];
 newnet
];

Given a rectangular net oldnet of degree (p, q), the function recpoldecas computes the polar value for argument lists u and v. This function calls the function bdecas that computes a point on a curve from a control polygon.

(* Computation of a polar value for a rectangular net of

degree (p, q) *)

recpoldecas[{oldnet__}, p_, q_, r1_, s1_, r2_, s2_, u_, v_, debug_] := Block[
 {net = {oldnet}, i, j, bistar, bstarj, pt, res},
 bistar = {};
 Do[ bstarj = {};
  Do[ pt = net[[(q + 1)*i + j + 1]];
   bstarj = Append[bstarj, pt], {j, 0, q} ];
  If[debug === 2, Print[" bstarj: ", bstarj]];
  pt = bdecas[bstarj, v, r2, s2];
  bistar = Append[bistar, pt],
  {i, 0, p} ];
 If[debug === 2, Print[" bistar: ", bistar]];
 res = bdecas[bistar, u, r1, s1];
 If[debug === 2, Print[" polar value: ", res]];
 res
];

As in the case of triangular patches, using the function recdecas, starting from a list consisting of a single control net net, we can repeatedly subdivide the nets in a list of nets, in order to obtain an approximation of the surface patch speciﬁed by the control net net. The function recsubdiv4 shown below performs n recursive steps of subdivision, starting with an input control net net. The function recitersub4 takes a list of nets and subdivides each net in this list into four subnets.

(* performs a subdivision step on each rectangular net in a list *)
(* using recdecas, i.e., splitting into four subpatches *)

recitersub4[{net__}, p_, q_, r1_, s1_, r2_, s2_, debug_] := Block[
 {cnet = {net}, lnet = {}, l, i, u, v},
 (l = Length[cnet]; u = 1/2; v = 1/2;
  Do[ lnet = Join[lnet,
    recdecas[cnet[[i]], p, q, r1, s1, r2, s2, u, v, debug]],
   {i, 1, l}

  ];
  lnet )
];

(* performs n subdivision steps using recitersub4, i.e., splits *)
(* a rectangular patch into 4 subpatches *)


recsubdiv4[{net__}, p_, q_, n_, debug_] := Block[
 {newnet = {net}, i, r1, s1, r2, s2},
 (r1 = 0; s1 = 1; r2 = 0; s2 = 1;
  newnet = {newnet};
  Do[ newnet = recitersub4[newnet, p, q, r1, s1, r2, s2, debug],
   {i, 1, n} ];
  newnet )
];

The function recsubdiv4 returns a list of rectangular nets. In order to render the surface patch, it is necessary to link the nodes in each net. This is easily done, and is left as an exercise.

The functions urecdecas and vrecdecas can also be used to compute the control net N[a, b; c, d] over new aﬃne bases [a, b] and [c, d], from a control net N over some aﬃne bases [r1, s1] and [r2, s2]. If d ≠ r2 and b ≠ r1, we ﬁrst compute N[r1, s1; r2, d] using vrecdecas, then N[r1, b; r2, d] using urecdecas, then N[r1, b; c, d] using vrecdecas, and ﬁnally N[a, b; c, d] using urecdecas. It is easy to take care of the cases where d = r2 or b = r1, and such a program is implemented as follows.

(* Computes a new rectangular net of degree (p, q) *)
(* wrt to frames (a, b) and (c, d) on the affine line *)
(* In the hat space *)

recnewnet[{oldnet__}, p_, q_, a_, b_, c_, d_, debug_] := Block[
 {net = {oldnet}, temp, newnet1, newnet2, newnet3, newnet4,
  rnet, r1, s1, r2, s2, ll, i},
 r1 = 0; s1 = 1; r2 = 0; s2 = 1;
 newnet1 = {}; newnet2 = {}; newnet3 = {}; newnet4 = {};
 If[d =!= r2,
  temp = vrecdecas[net, p, q, r2, s2, d, debug];
  newnet1 = temp[[1]],
  newnet1 = net
 ];



If[debug === 2, Print[" newnet1: ", newnet1]]; If[b =!= r1, temp = urecdecas[newnet1, p, q, r1, s1, b, debug]; newnet2 = temp[[1]], newnet2 = newnet1 ]; If[debug === 2, Print[" newnet2: ", newnet2]]; If[d =!= r2, temp = vrecdecas[newnet2, p, q, r2, d, c, debug]; newnet3 = temp[[2]], temp = vrecdecas[newnet2, p, q, r2, s2, c, debug]; newnet3 = temp[[1]] ]; If[debug === 2, Print[" newnet3: ", newnet3]]; If[b =!= r1, temp = urecdecas[newnet3, p, q, r1, b, a, debug]; newnet4 = temp[[2]], temp = urecdecas[newnet3, p, q, r1, s1, a, debug]; (* needs to reverse the net in this case *) rnet = temp[[1]]; newnet4 = {}; ll = Length[rnet]; Do[ newnet4 = Prepend[newnet4, rnet[[i]]], {i, 1, ll} ] ]; If[debug === 2, Print[" newnet4: ", newnet4]]; newnet4 ]; Let us go back to the example of the monkey saddle, to illustrate the use of the functions recsubdiv4 and recnewnet. It is easily shown that the monkey saddle is speciﬁed by the following rectangular control net of degree (3, 2) sqmonknet1, over [0, 1] × [0, 1]: sqmonknet1 = {{0, 0, 0}, {0, 1/2, 0}, {0, 1, 0}, {1/3, 0, 0}, {1/3, 1/2, 0}, {1/3, 1, -1}, {2/3, 0, 0}, {2/3, 1/2, 0}, {2/3, 1, -2}, {1, 0, 1}, {1, 1/2, 1}, {1, 1, -2}} Using recnewnet, we can compute a rectangular net sqmonknet over [−1, 1] × [−1, 1]: sqmonknet = {{-1, -1, 2}, {-1, 0, -4}, {-1, 1, 2}, {-1/3, -1, 2}, {-1/3, 0, 0}, {-1/3, 1, 2}, {1/3, -1, -2}, {1/3, 0, 0}, {1/3, 1, -2}, {1, -1, -2}, {1, 0, 4}, {1, 1, -2}} Finally, we show the output of the subdivision algorithm recsubdiv4, for n = 1, 2, 3. The advantage of rectangular nets is that we get the patch over [−1, 1] × [−1, 1] directly, as opposed to the union of two triangular patches.
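The interval-level idea behind recnewnet, namely obtaining the control polygon over a new segment by subdividing twice and keeping the appropriate halves, can be sketched on a single curve (hypothetical Python names; newpoly assumes 0 < a < b < 1):

```python
def subdecas_affine(poly, r, s, t):
    """De Casteljau subdivision at t over the affine frame [r, s]."""
    lam = (t - r) / (s - r)
    left, right = [poly[0]], [poly[-1]]
    pts = list(poly)
    while len(pts) > 1:
        pts = [tuple((1 - lam) * x + lam * y for x, y in zip(p, q))
               for p, q in zip(pts, pts[1:])]
        left.append(pts[0])
        right.insert(0, pts[-1])
    return left, right

def newpoly(poly, a, b):
    """Control polygon over [a, b], from a polygon over [0, 1]:
    split at b and keep the left part (now over [0, b]), then split
    that at a over the frame [0, b] and keep the right part."""
    left, _ = subdecas_affine(poly, 0.0, 1.0, b)
    _, right = subdecas_affine(left, 0.0, b, a)
    return right

# Quadratic with polar form g(t1, t2) = t1*t2, i.e. F(t) = t**2;
# over [0, 1] its control values are g(0,0), g(0,1), g(1,1):
poly = [(0.0,), (0.0,), (1.0,)]
out = newpoly(poly, 0.25, 0.75)
# out should be the polar values g(a,a), g(a,b), g(b,b)
```

The rectangular-net case applies this twice, once per parameter direction, which is exactly the four-call structure of recnewnet above.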



Figure 8.19: A monkey saddle, rectangular subdivision, 1 iteration



Figure 8.20: A monkey saddle, rectangular subdivision, 2 iterations


The ﬁnal picture (corresponding to 3 iterations) is basically as good as the triangulation shown earlier, and is obtained faster. Actually, it is possible to convert a triangular net of degree m into a rectangular net of degree (m, m), and conversely to convert a rectangular net of degree (p, q) into a triangular net of degree p + q, but we will postpone this until we deal with rational surfaces.

8.3 Problems

Problem 1 (10 pts). Let F : A² → A³ be a bilinear map. Consider the rectangular net N = (b_{i, j}) of bidegree ⟨p, q⟩ deﬁned such that

  b_{i, j} = F(i/p, j/q),

for all i, j, where 0 ≤ i ≤ p and 0 ≤ j ≤ q. Prove that the rectangular surface deﬁned by N is equal to F (we say that rectangular patches have bilinear precision).

Problem 2 (20 pts). Give a method for recursively subdividing a triangular patch into four subpatches, using only three calls to the de Casteljau algorithm. Show the result of performing three levels of subdivision on the original reference triangle (r, s, t).

Problem 3 (20 pts). Investigate the method for recursively subdividing a triangular patch into six subpatches, using four calls to the de Casteljau algorithm as follows: ﬁrst apply the subdivision version of the de Casteljau algorithm to the center of gravity of the reference triangle, and then to the middle of every side of the triangle. Show the result of performing three levels of subdivision on the original reference triangle (r, s, t).

Problem 4 (40 pts). Implement your own version of the de Casteljau algorithm splitting a triangular patch into four triangles, as in section 8.1. Use your algorithm to draw the surface patch over [−1, 1] × [−1, 1] deﬁned by the following control net

domenet3 = {{0, 0, 0}, {3/4, 3/2, 3/10}, {-1/4, -1/2, 3/10},
            {1/2, 1, 0}, {3/2, 0, 3/10}, {1/2, 1/2, 1/2},
            {5/4, -1/2, 3/10}, {-1/2, 0, 3/10}, {1/4, 3/2, 3/10},
            {1, 0, 0}};

Problem 5 (40 pts). Implement your own version of the de Casteljau algorithm splitting a rectangular patch into four rectangles, as in section 8.2.

Problem 6 (40 pts). Given a surface speciﬁed by a triangular net, implement a program drawing the u-curves and the v-curves of the patch over [a, b] × [c, d], using the function curvcpoly.



Figure 8.21: A monkey saddle, rectangular subdivision, 3 iterations


Problem 7 (20 pts). Prove that any method for obtaining a regular subdivision into four subpatches using the standard de Casteljau algorithm requires at least 4 calls.

Problem 8 (20 pts). Let F be a surface of total degree m deﬁned by a triangular control net N = (b_{i, j, k})_{(i,j,k)∈∆m}, w.r.t. the aﬃne frame ∆rst. For any n points p_h = u_h r + v_h s + w_h t (where u_h + v_h + w_h = 1, 1 ≤ h ≤ n), deﬁne the following (n + 2)-simplex of points b^{l1, ..., ln}_{i, j, k}, where i + j + k + l1 + · · · + ln = m, inductively as follows:

  b^{0, ..., 0}_{i, j, k} = b_{i, j, k},

  b^{l1, ..., lh + 1, ..., ln}_{i, j, k} = u_h b^{l1, ..., lh, ..., ln}_{i+1, j, k} + v_h b^{l1, ..., lh, ..., ln}_{i, j+1, k} + w_h b^{l1, ..., lh, ..., ln}_{i, j, k+1},

where 1 ≤ h ≤ n.

(i) If f is the polar form of F, prove that

  b^{l1, ..., ln}_{i, j, k} = f(r, . . . , r, s, . . . , s, t, . . . , t, p1, . . . , p1, . . . , pn, . . . , pn),

with i copies of r, j copies of s, k copies of t, and l1 copies of p1, . . . , ln copies of pn.

(ii) For n = 0, show that F(p) = b^m_{0, 0, 0}, as in the standard de Casteljau algorithm.

For n = 1, prove that (b^{l1}_{i, j, 0})_{(i,j,l1)∈∆m} is a control net of F w.r.t. ∆rsp1, that (b^{l1}_{0, j, k})_{(j,k,l1)∈∆m} is a control net of F w.r.t. ∆stp1, and that (b^{l1}_{i, 0, k})_{(i,k,l1)∈∆m} is a control net of F w.r.t. ∆trp1.

For n = 2, prove that (b^{l1, l2}_{i, 0, 0})_{(i,l1,l2)∈∆m} is a control net of F w.r.t. ∆rp1p2, that (b^{l1, l2}_{0, j, 0})_{(j,l1,l2)∈∆m} is a control net of F w.r.t. ∆sp1p2, and that (b^{l1, l2}_{0, 0, k})_{(k,l1,l2)∈∆m} is a control net of F w.r.t. ∆tp1p2.

For n = 3, prove that (b^{l1, l2, l3}_{0, 0, 0})_{(l1,l2,l3)∈∆m} is a control net of F w.r.t. ∆p1p2p3.

Problem 9 (20 pts). Given any two integers p, q ≥ 1, we deﬁne the rectangular grid as the grid consisting of all points of coordinates

  g_{i, j} = (i/p, j/q),

where 0 ≤ i ≤ p and 0 ≤ j ≤ q. Prove that the rectangular patch G deﬁned by the property that

  G(x, y) = Σ_{i=0}^{p} Σ_{j=0}^{q} B^p_i(x) B^q_j(y) g_{i, j}

satisﬁes G(x, y) = (x, y) for all (x, y) ∈ A², where the B^p_i and B^q_j are Bernstein polynomials. Given any rectangular net N = (b_{i, j})_{0 ≤ i ≤ p, 0 ≤ j ≤ q} in A², we deﬁne the map from A² to A² as

  (x, y) → Σ_{i=0}^{p} Σ_{j=0}^{q} B^p_i(x) B^q_j(y) b_{i, j}.


Show that this map behaves like a global deformation of the original rectangular grid. Show how to use such maps to globally deform a Bézier curve speciﬁed by its control points (c0, . . . , cm) (where each ci is inside the original grid).
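The identity G(x, y) = (x, y) in Problem 9 is the linear precision property of Bernstein polynomials, Σᵢ B^p_i(x)(i/p) = x; a quick numerical check (illustrative Python, not part of the text):

```python
from math import comb

def bernstein(n, i, x):
    """Bernstein polynomial B_i^n(x) = C(n, i) x^i (1 - x)^(n - i)."""
    return comb(n, i) * x**i * (1 - x)**(n - i)

def grid_patch(p, q, x, y):
    """G(x, y) = sum_i sum_j B_i^p(x) B_j^q(y) (i/p, j/q).
    The double sum separates, since the B_j^q(y) sum to 1."""
    gx = sum(bernstein(p, i, x) * (i / p) for i in range(p + 1))
    gy = sum(bernstein(q, j, y) * (j / q) for j in range(q + 1))
    return gx, gy
```

Replacing the grid points g_{i, j} by arbitrary points b_{i, j} is precisely what turns this identity map into a global deformation of the plane.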

Chapter 9

Polynomial Spline Surfaces and Subdivision Surfaces

9.1 Joining Polynomial Surfaces

We now attempt to generalize the idea of splines to polynomial surfaces. As in the case of curves, we will ﬁrst consider parametric continuity. However, this is far more subtle than it is for curves. In the case of a curve, the parameter space is the aﬃne line A, and the only reasonable choice is to divide the aﬃne line into intervals, and to view the curve as the result of joining curve segments deﬁned over these intervals. This time, in the case of a surface, the parameter space is the aﬃne plane P, and even if we just want to subdivide the plane into convex regions, where the edges are line segments, there is a tremendous variety of ways of doing so. Thus, we will restrict our attention to subdivisions of the plane into convex polygons; in fact, we will basically only consider subdivisions made of rectangles or of (equilateral) triangles. We also need to decide what kind of continuity we want to enforce.

First, we will ﬁnd necessary and suﬃcient conditions on polar forms for two surface patches to meet with C n continuity. Next, we will take a closer look at spline surfaces of degree m based on a triangular subdivision of the plane. We will discover that C n continuity is only possible if 2m ≥ 3n + 2. We will ﬁnd necessary conditions on spline surfaces of degree m = 3n + 1 to meet with C 2n continuity, but unfortunately, we will not be able to propose a nice scheme involving control points, in the line of de Boor control points. To the best of our knowledge, ﬁnding such a scheme is still an open problem. We will then consider spline surfaces of degree m based on a rectangular subdivision of the plane. This time, we will ﬁnd that C n continuity is only possible if m ≥ 2n + 2. This is not quite as good as in the triangular case, but on the positive side, for bipolynomial splines of bidegree ⟨n, n⟩ meeting with C n−1 continuity, we will see that there is a nice scheme involving de Boor control points.

We conclude this chapter with a section on subdivision surfaces (Section 9.4). Subdivision surfaces provide an attractive alternative to spline surfaces in modeling applications where the topology of surfaces is rather complex, and where the initial control polyhedron consists of various kinds of faces, not just triangles or rectangles.

The idea is to start with a rough polyhedron speciﬁed by a mesh (a collection of points and edges connected so that it deﬁnes the boundary of a polyhedron), and to apply recursively a subdivision scheme, which, in the limit, yields a smooth surface (or solid). Subdivision schemes typically produce surfaces with at least C 1-continuity, except for a ﬁnite number of so-called extraordinary points, where the surface is only tangent plane continuous. We present three subdivision schemes due to Doo and Sabin [27, 28, 29], Catmull and Clark [17], and Charles Loop [50]. We discuss Loop's convergence proof in some detail, and for this, we give a crash course on discrete Fourier transforms and (circular) discrete convolutions. A number of spectacular applications of subdivision surfaces can be found in the 1998 SIGGRAPH Conference Proceedings, notably Geri, a computer model of a character from the short movie Geri's game.

In this section, we restrict our attention to total degree polynomial surfaces. This is not a real restriction, since it is always possible to convert a rectangular net to a triangular net. It is also easier to deal with bipolynomial surfaces than with total degree surfaces, and we concentrate on the more diﬃcult case.

Let A and B be two adjacent convex polygons in the plane, and let (r, s) be the line segment along which they are adjacent (where r, s ∈ P are distinct vertices of A and B). Given two polynomial surfaces F and G of degree m, recall from section 11.1 that we say that F and G agree to kth order at a point a ∈ P iﬀ

  D_{u1} · · · D_{ui} F(a) = D_{u1} · · · D_{ui} G(a),

for all vectors u1, . . . , ui ∈ R², where 0 ≤ i ≤ k. Recall also that F and G agree to kth order at a iﬀ their polar forms f and g agree on all multisets of points that contain at least m − k copies of a, that is, iﬀ

  f(u1, . . . , uk, a, . . . , a) = g(u1, . . . , uk, a, . . . , a),

with m − k copies of a, for all u1, . . . , uk ∈ P.

Deﬁnition 9.1.1. Given two polynomial surfaces FA and FB of degree m, FA and FB join with C k continuity along (r, s) iﬀ FA and FB agree to kth order for all a ∈ (r, s).

Using this fact, we can prove the following crucial lemma.

Lemma 9.1.2. Given two polynomial surfaces FA and FB of degree m, FA and FB join with C k continuity along (r, s) iﬀ their polar forms fA : P^m → E and fB : P^m → E agree on all multisets of points that contain at least m − k points on the line (r, s), that is, iﬀ

  fA(u1, . . . , uk, a_{k+1}, . . . , a_m) = fB(u1, . . . , uk, a_{k+1}, . . . , a_m),

for all u1, . . . , uk ∈ P, and all a_{k+1}, . . . , a_m ∈ (r, s).

Figure 9.1: Two adjacent reference triangles

Proof. If F_A and F_B join with C^k continuity along (r, s), then they agree to kth order at every a ∈ (r, s), and thus

f_A(u_1, . . . , u_k, a, . . . , a) = f_B(u_1, . . . , u_k, a, . . . , a)

(with m − k copies of a), for all u_1, . . . , u_k ∈ P and every a ∈ (r, s). Conversely, if we consider the maps

a ↦ f_A(u_1, . . . , u_k, a, . . . , a) and a ↦ f_B(u_1, . . . , u_k, a, . . . , a)

(with m − k occurrences of a) as affine polynomial functions F_A(u_1, . . . , u_k) and F_B(u_1, . . . , u_k) of a, then, if these functions agree on all points in (r, s), the corresponding polar forms f_A(u_1, . . . , u_k, a_{k+1}, . . . , a_m) and f_B(u_1, . . . , u_k, a_{k+1}, . . . , a_m) agree for all points a_{k+1}, . . . , a_m ∈ (r, s). Because this holds for all u_1, . . . , u_k ∈ P, we have shown that

f_A(u_1, . . . , u_k, a_{k+1}, . . . , a_m) = f_B(u_1, . . . , u_k, a_{k+1}, . . . , a_m),

as desired.

Now let A = ∆prs and B = ∆qrs be two reference triangles in the plane, sharing the edge (r, s). Lemma 9.1.2 tells us that F_A and F_B join with C^n continuity along (r, s) iff

f_A(p^i q^j r^k s^l) = f_B(p^i q^j r^k s^l)

for all i, j, k, l such that i + j + k + l = m and k + l ≥ m − n (0 ≤ n ≤ m). As a consequence, we obtain the necessary and sufficient conditions on the control nets of F_A and F_B for a C^n join along (r, s).
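Since the conditions f_A(p^i q^j r^k s^l) = f_B(p^i q^j r^k s^l) are indexed by exponent tuples, they are easy to enumerate mechanically. The following sketch (ours, not from the text; the function name is an assumption) lists the tuples (i, j, k, l) for a C^n join and recovers the counts for cubic patches (m = 3).

```python
from itertools import product

def cn_join_constraints(m, n):
    """Exponent tuples (i, j, k, l) with i + j + k + l = m of the polar
    values f(p^i q^j r^k s^l) that must agree for a C^n join along (r, s):
    the multiset must contain at least m - n points on the line (r, s),
    i.e. k + l >= m - n."""
    return [(i, j, k, l)
            for i, j, k, l in product(range(m + 1), repeat=4)
            if i + j + k + l == m and k + l >= m - n]

# For cubic patches (m = 3): 4 shared boundary control points for C^0,
# 10 agreeing polar values for C^1, 16 for C^2, and all 20 for C^3.
counts = [len(cn_join_constraints(3, n)) for n in range(4)]
```

Running this gives the counts 4, 10, 16, 20, matching the case analysis for cubic patches carried out next.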

For n = 0, the conditions reduce to f_A(r^k s^{m−k}) = f_B(r^k s^{m−k}), with 0 ≤ k ≤ m, which means that the control points of the boundary curves along (r, s) must agree. This is natural: in fact, the two surfaces join along this curve!

Let us now see what the continuity conditions mean for m = 3 and n = 1. For C^0 continuity, the polar values

f_A(r, r, r) = f_B(r, r, r), f_A(r, r, s) = f_B(r, r, s), f_A(r, s, s) = f_B(r, s, s), f_A(s, s, s) = f_B(s, s, s)

must agree, and for C^1 continuity, we have 6 additional equations among polar values:

f_A(p, r, r) = f_B(p, r, r), f_A(p, r, s) = f_B(p, r, s), f_A(p, s, s) = f_B(p, s, s),
f_A(q, r, r) = f_B(q, r, r), f_A(q, r, s) = f_B(q, r, s), f_A(q, s, s) = f_B(q, s, s).

Thus, the following 10 polar values must agree. The above conditions are depicted in the diagram of figure 9.2. We can view this diagram as three pairs of overlapping de Casteljau diagrams, each with one shell. Denoting these common polar values as f_{A,B}(·, ·, ·), note that they naturally form the vertices of three diamonds, images of the diamond (p, q, r, s). In particular, the vertices of each of these diamonds must be coplanar, but this is not enough to ensure C^1 continuity.

Let us now consider C^2 continuity, i.e., n = 2. In addition to the 10 constraints necessary for C^1 continuity, we have 6 additional equations among polar values:

f_A(p, p, r) = f_B(p, p, r), f_A(p, p, s) = f_B(p, p, s), f_A(p, q, r) = f_B(p, q, r),
f_A(p, q, s) = f_B(p, q, s), f_A(q, q, r) = f_B(q, q, r), f_A(q, q, s) = f_B(q, q, s).

Figure 9.2: Control nets of cubic surfaces joining with C^1 continuity

The above conditions are depicted in the following diagram:

Figure 9.3: Control nets of cubic surfaces joining with C^2 continuity

We can view this diagram as two pairs of overlapping de Casteljau diagrams, each with two shells. Again, denoting the common polar values as f_{A,B}(·, ·, ·), note that these polar values naturally form the vertices of four diamonds, images of the diamond (p, q, r, s). In particular, the vertices of each of these diamonds must be coplanar, but this is not enough to ensure C^2 continuity. Note also that the polar values f_A(p, q, r) and f_A(p, q, s) are not control points of the original nets.

Finally, let us consider C^3 continuity, i.e., n = 3. In this case, all the control points agree, which means that f_A = f_B. In general, C^n continuity is ensured by the overlapping of m − n + 1 pairs of de Casteljau diagrams, each with n shells.

We now investigate the realizability of the continuity conditions, in the two cases where the parameter plane is subdivided into rectangles or into triangles.

9.2 Spline Surfaces with Triangular Patches

In this section, we study what happens with the continuity conditions between surface patches if the parameter plane is divided into equilateral triangles. We assume that the parameter plane has its natural euclidean structure. For simplicity, we will consider surface patches of degree m joining with the same degree of continuity n for all common edges. In the case of spline curves, recall that it was possible to achieve C^{m−1} continuity with curve segments of degree m. Also, spline curves have local flexibility: changing some control points in a small area does not affect the entire spline curve. In the case of surfaces, the situation is not as pleasant. We will prove that if 2m ≤ 3n + 1, then it is generally impossible to construct a locally flexible spline surface.

Figure 9.4: Constraints on triangular patches

Lemma 9.2.1. Surface splines consisting of triangular patches of degree m ≥ 1 joining with C^n continuity cannot be locally flexible if 2m ≤ 3n + 1.

Proof. The proof is more complicated than it might appear. More precisely, we will show that, given any four adjacent patches A, B, C, D as shown in figure 9.4, if 2m ≤ 3n + 1 and if f_C and f_D are known, then f_A and f_B are completely determined. The difficulty is that even though the adjacent pairs among A, B, C, D join with C^n continuity along their common edges (s, t), (s, p), and (s, q), there is no reference triangle containing all of these three edges!

Since A and B join with C^n continuity along (s, t), recall that these conditions are

f_A(s^i t^j p^k q^l) = f_B(s^i t^j p^k q^l)

for all k + l ≤ n (where i + j + k + l = m). Similarly, since D and A meet with C^n continuity along (s, q), we have f_A(s^i t^j p^k q^l) = f_D(s^i t^j p^k q^l) for all j + k ≤ n, and since B and C meet with C^n continuity along (s, p), we have f_B(s^i t^j p^k q^l) = f_C(s^i t^j p^k q^l) for all j + l ≤ n. Thus, since f_C and f_D are known, f_A(s^i t^j p^k q^l) is determined for all j + k ≤ n, and f_B(s^i t^j p^k q^l) is determined for all j + l ≤ n. These conditions do not seem to be sufficient to show that f_A and f_B are completely determined, but we haven't yet taken advantage of the symmetries of the situation.

If we can show that the polar values for F_A and F_B are completely determined when n = 2h and m = 3h, or when n = 2h + 1 and m = 3h + 2, we will be done. Indeed, if n = 2h, the condition 2m ≤ 3n + 1 becomes 2m ≤ 6h + 1, which implies m ≤ 3h, and if n = 2h + 1, the condition 2m ≤ 3n + 1 becomes 2m ≤ 6h + 4, which implies m ≤ 3h + 2; these are the worst cases.

We will first reformulate the C^n-continuity conditions between A and B. Note that (p, q) and (s, t) have the same middle point, i.e., p + q = s + t. Replacing p by s + t − q on the left-hand side and q by s + t − p on the right-hand side, using the identity p + q = s + t, we get

Σ_{i1+i2+i3=k} (−1)^{i3} (k!/(i1! i2! i3!)) f_A(s^{i+i1} t^{j+i2} q^{l+i3}) = Σ_{j1+j2+j3=l} (−1)^{j3} (l!/(j1! j2! j3!)) f_B(s^{i+j1} t^{j+j2} p^{k+j3}),

where k + l ≤ n and i + j + k + l = m. This is an equation relating some affine combination of polar values from a triangular net of (k+1)(k+2)/2 polar values associated with A and some affine combination of polar values from a triangular net of (l+1)(l+2)/2 polar values associated with B. The idea is to show that the two nets of polar values f_A(s^i t^j q^l) and f_B(s^i t^j p^k) are completely determined.

Figure 9.5: Determining polar values in A and B

A similar rewriting of the C^n-continuity equations between A and D and between C and B shows that the polar values f_A(s^{m−j−l} t^j q^l) are known for 0 ≤ l ≤ m − j and 0 ≤ j ≤ n, and that the polar values f_B(s^{m−j−k} t^j p^k) are known for 0 ≤ k ≤ m − j and 0 ≤ j ≤ n. On figure 9.5, these two families of polar values fill two trapezoids meeting along (s, t), and the polar values associated with A and B that are not already determined are contained in the diamond (t, u, v, w).

First, we consider the case where n = 2h + 1 and m = 3h + 2. In this case, the diamond (t, u, v, w) contains (h + 1)^2 polar values, and there are (m − n)^2 = (h + 1)^2 undetermined polar values. If n = 2h and m = 3h, the undetermined polar values are also contained in the diamond (t, u, v, w), which then contains h^2 polar values, and there are (m − n)^2 = h^2 of them. In either case, note that f_A(s^i t^j) = f_B(s^i t^j) along (s, t) (where i + j = m). The proof proceeds by induction on h.

Consider n = 2h + 1 and m = 3h + 2. We explain how the h + 1 polar values on the edge (u, v) can be determined, from right to left and from bottom up (referring to figure 9.5); the computation of the h polar values on the edge (v, w) is analogous. When k = h + 1, l = h, i = 0, and j = h + 1, observe that in the equation

Σ_{i1+i2+i3=h+1} (−1)^{i3} ((h + 1)!/(i1! i2! i3!)) f_A(s^{i1} t^{h+1+i2} q^{h+i3}) = Σ_{j1+j2+j3=h} (−1)^{j3} (h!/(j1! j2! j3!)) f_B(s^{j1} t^{h+1+j2} p^{h+1+j3}),

the only undetermined polar value is the one associated with u, namely f_A(t^{2h+2} q^h).

Since all the other polar values involved in this equation are inside the trapezoids along (s, t), the polar value f_A(t^{2h+2} q^h) can be computed. Next, if 1 ≤ l1 ≤ h, the polar value f_A(s^{h−l1} t^{2h+2} q^{l1}) along (u, v) can be computed from the equation obtained by letting k = l1 + 1, l = l1, i = h − l1, and j = 2h + 1 − l1,

Σ_{i1+i2+i3=l1+1} (−1)^{i3} ((l1 + 1)!/(i1! i2! i3!)) f_A(s^{h−l1+i1} t^{2h+1−l1+i2} q^{l1+i3}) = Σ_{j1+j2+j3=l1} (−1)^{j3} (l1!/(j1! j2! j3!)) f_B(s^{h−l1+j1} t^{2h+1−l1+j2} p^{l1+1+j3}),

since all the other polar values involved are already determined. After a similar computation to determine the polar values f_B(s^{h−l1} t^{2h+2} p^{l1}), we get a strictly smaller diamond of h^2 polar values contained in (t, u, v, w), and we use the induction hypothesis to compute these polar values.

We now consider the case n = 2h and m = 3h. This time, the polar value associated with u is f_A(t^{2h+1} q^{h−1}) and the polar value associated with w is f_B(t^{2h+1} p^{h−1}), and the argument for the previous case can be used. Note that this argument uses C^{2h} continuity between A and D and between B and C, but only C^{2h−1}-continuity between A and B; in fact, C^n continuity was only needed to compute the polar values associated with u and w, and once they are determined, all the other polar values in the diamond (t, u, v, w) can be computed using only C^{n−1}-continuity constraints.

In both cases, f_A and f_B are completely determined, which proves the lemma. When 2m = 3n + 2, which implies that n = 2h and m = 3h + 1, both polar values f_A(t^{2h+1} q^h) and f_B(t^{2h+1} p^h) are undetermined, but there is an equation relating them, and thus there is at most one degree of freedom for both patches F_A and F_B. In summary, there is at most one free control point for every two internal adjacent patches.

The problem remains to actually find a method for constructing spline surfaces when 2m = 3n + 2. Such a method, using convolutions, is described by Ramshaw [65], but it is not practical, and generally, it is difficult to find any reasonable scheme to construct triangular spline surfaces. Knowing that we must have 2m ≥ 3n + 2 to have local flexibility, we now attempt to understand better what the constraints on triangular patches are when n = 2N and m = 3N + 1. The key is to look at "derived surfaces". Given a polynomial surface F : P → E of degree m, for any vector u ∈ R^2, the map D_u F : P → E defined by the directional derivative of F in the fixed direction u

is a polynomial surface of degree m − 1, called a derived surface of F. The following lemmas show that if F and G join with C^n continuity along a line L, and if u is parallel to L, then D_u F and D_u G also join with C^n continuity along L.

Lemma 9.2.2. Given two triangular surfaces F : P → E and G : P → E, if F and G meet with C^0 continuity along a line L, and if u ∈ R^2 is parallel to L, then D_u F and D_u G also meet with C^0 continuity along L.

Proof. If a ∈ L, we can compute D_u F(a) by evaluating F at a and at points near a on L, and since F and G agree on L, we will get the same value for D_u F(a) and D_u G(a).

Lemma 9.2.3. Given two triangular surfaces F : P → E and G : P → E, if F and G meet with C^n continuity along a line L, and if u ∈ R^2 is parallel to L, then D_u F and D_u G also meet with C^n continuity along L.

Proof. Let u_1, . . . , u_n be any vectors in R^2. If F and G meet with C^n continuity along L, then the derived surfaces D_{u_1} . . . D_{u_n} F and D_{u_1} . . . D_{u_n} G meet with C^0 continuity along L. Taking the derivative in the direction u, by lemma 9.2.2, the derived surfaces D_u D_{u_1} . . . D_{u_n} F and D_u D_{u_1} . . . D_{u_n} G meet with C^0 continuity. Since the various directional derivatives commute, D_{u_1} . . . D_{u_n} D_u F and D_{u_1} . . . D_{u_n} D_u G meet with C^0 continuity, which means that D_u F and D_u G meet with C^n continuity along L.

We can now derive necessary conditions on surfaces F and G of degree 3n + 1 to join with C^{2n} continuity. Consider three vectors α, β, γ, parallel to the three directions of the edges of triangles in the triangular grid, and such that α + β + γ = 0.

Figure 9.6: A stripe in the parameter plane for triangular patches

Lemma 9.2.4. Given a spline surface F : P → E of degree 3n + 1 having C^{2n} continuity, for every triangle A, the derived surface D_β^{n+1} D_γ^{n+1} F_A is the same in any stripe in the direction α, the derived surface D_α^{n+1} D_γ^{n+1} F_A is the same in any stripe in the direction β, and the derived surface D_α^{n+1} D_β^{n+1} F_A is the same in any stripe in the direction γ.

Proof. Consider two adjacent triangles A and B in a stripe in the direction α, and assume that their common boundary is in the direction β; the case of a common boundary in the direction γ is analogous, with the roles of β and γ reversed. Since F_A and F_B meet with C^{2n} continuity and have degree 3n + 1, the derived surfaces D_γ^{n+1} F_A and D_γ^{n+1} F_B have degree 2n and meet with continuity C^{n−1} (each derivative transversal to the common boundary lowers the order of continuity by one). The derived surfaces D_β^{n+1} D_γ^{n+1} F_A and D_β^{n+1} D_γ^{n+1} F_B then have degree n − 1, and by applying lemma 9.2.3 n + 1 times, they join with C^{n−1}-continuity along their common boundary in the direction β. But two surfaces of degree n − 1 agreeing to order n − 1 along a line must be identical. A similar argument applies to the stripes in the other two directions, which proves the lemma.

Thus, in order to find spline surfaces of degree 3n + 1 with C^{2n} continuity, it is natural to attempt to satisfy the conditions

D_β^{n+1} D_γ^{n+1} F_A = D_α^{n+1} D_γ^{n+1} F_A = D_α^{n+1} D_β^{n+1} F_A = 0

for all triangles A. Each derived surface has degree n − 1, and thus setting it to zero corresponds to (n+1)n/2 conditions, so we have a total of 3(n+1)n/2 conditions. A surface of degree 3n + 1 is determined by (3n+3)(3n+2)/2 control points, and subtracting the 3(n+1)n/2 conditions, we see that each patch F_A is then specified by 3(n + 1)^2 control points.

We can show that these conditions are indeed independent, using tensors. Recall that if f_A : P^m → E is the polar form of F_A : P → E, then

D_{u_1} . . . D_{u_k} F_A(a) = (m!/(m − k)!) f_A(a, . . . , a, u_1, . . . , u_k)

(with m − k copies of a), where a ∈ P and f_A : (P̂)^m → E denotes the homogenized version of f_A. If f_A⊙ : (P̂)^⊙m → E is the linear map from the tensor power (P̂)^⊙m to E associated with the symmetric multilinear map f_A, then, since (P̂)^⊙(n−1) is spanned by the simple (n − 1)-tensors of the form a^{n−1}, with a ∈ P, saying that D_α^{n+1} D_β^{n+1} F_A = 0 is equivalent to saying that

f_A⊙(α^{n+1} ⊙ β^{n+1} ⊙ η) = 0 for all η ∈ (P̂)^⊙(n−1).

The three conditions therefore correspond to three subspaces of (P̂)^⊙(3n+1), namely

{α^{n+1} ⊙ β^{n+1} ⊙ η | η ∈ (P̂)^⊙(n−1)}, {α^{n+1} ⊙ γ^{n+1} ⊙ η | η ∈ (P̂)^⊙(n−1)}, {β^{n+1} ⊙ γ^{n+1} ⊙ η | η ∈ (P̂)^⊙(n−1)},

and reasoning on dimensions, it is easy to see that these subspaces are pairwise disjoint. For example, if the first two subspaces had a nontrivial intersection, they would contain a tensor of the form α^{n+1} ⊙ β^{n+1} ⊙ γ^{n+1} ⊙ δ, which is impossible, since such a tensor has order at least 3n + 3 > 3n + 1. Thus, the conditions are indeed independent.

In summary, we were led to consider surface splines of degree 3n + 1 with C^{2n} continuity, satisfying the independent conditions

D_β^{n+1} D_γ^{n+1} F_A = D_α^{n+1} D_γ^{n+1} F_A = D_α^{n+1} D_β^{n+1} F_A = 0.

Each patch is then defined by 3(n + 1)^2 control points.
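As a quick sanity check (ours, not from the text), the count 3(n + 1)^2 follows from the arithmetic (3n+3)(3n+2)/2 − 3(n+1)n/2 = 3(n + 1)^2, which can be verified mechanically:

```python
def triangular_patch_freedom(n):
    """Free control points of a degree 3n+1 triangular patch after imposing
    the three vanishing derived-surface conditions; each derived surface has
    degree n - 1, hence (n+1)n/2 coefficients to kill."""
    m = 3 * n + 1
    control_points = (m + 2) * (m + 1) // 2   # triangular net of a degree-m patch
    conditions = 3 * (n + 1) * n // 2
    return control_points - conditions

# Matches the closed form 3(n+1)^2 stated above, for a range of n.
checks = [triangular_patch_freedom(n) == 3 * (n + 1) ** 2 for n in range(1, 20)]
```

All the checks come out true, confirming the count.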
Such spline surfaces do exist, and their existence can be shown using convolutions, as discussed very lucidly by Ramshaw [65]. Unfortunately, to the best of our knowledge, no nice scheme involving de Boor control points is known for such triangular spline surfaces; this is one of the outstanding open problems for spline surfaces. Some interesting related work on joining triangular patches with geometric continuity (G^1-continuity) can be found in Loop [51].

We can also consider surface splines of degree 3n + 3 with C^{2n+1} continuity. It can be shown that, for every triangle A, the derived surface D_β^{n+2} D_α^n F_A is the same in any stripe in the direction γ, the derived surface D_γ^{n+2} D_α^n F_A is the same in any stripe in the direction β, and the derived surface D_γ^{n+2} D_β^n F_A is the same in any stripe in the direction α. Thus, it is natural to consider surface splines of degree 3n + 3 with C^{2n+1} continuity satisfying the independent conditions

D_β^{n+2} D_α^n F_A = D_γ^{n+2} D_α^n F_A = D_γ^{n+2} D_β^n F_A = 0,

and it is easy to show that each patch is then defined by 3(n + 1)^2 − 2 control points. Next, we will see that we have better luck with rectangular spline surfaces.

9.3 Spline Surfaces with Rectangular Patches

We now study what happens with the continuity conditions between surface patches if the parameter plane is divided into rectangles. For simplicity, we will consider surface patches of degree m joining with the same degree of continuity n for all common edges.

Figure 9.7: Constraints on rectangular patches

Lemma 9.3.1. Surface splines consisting of rectangular patches of degree m ≥ 1 joining with C^n continuity cannot be locally flexible if m ≤ 2n + 1.

Proof. As opposed to the triangular case, the proof is fairly simple. We will prove that, given any four adjacent patches A, B, C, D as shown in figure 9.7, if m ≤ 2n + 1 and if f_B and f_D are known, then f_A is completely determined; thus, it is generally impossible to construct a locally flexible spline surface. Take ∆xyz as reference triangle. Since A and B meet with C^n continuity along (x, y), by lemma 9.1.1, we have

f_A(x^i y^{m−i−j} z^j) = f_B(x^i y^{m−i−j} z^j) for all j ≤ n,

and since A and D join with C^n continuity along (y, z), we have

f_A(x^i y^{m−i−j} z^j) = f_D(x^i y^{m−i−j} z^j) for all i ≤ n.

Since i + j ≤ m ≤ 2n + 1, either i ≤ n or j ≤ n, which shows that f_A(x^i y^{m−i−j} z^j) is completely determined.

Thus, in order to have rectangular spline surfaces with C^n continuity and local flexibility, we must have m ≥ 2n + 2. Furthermore, when m = 2n + 2, the only control point of A which is not determined is f_A(x^{n+1} z^{n+1}), so there is at most one free control point for every two internal adjacent patches. We shall consider the case of rectangular spline surfaces of degree 2n meeting with C^{n−1} continuity.

One can prove using convolutions (see Ramshaw [65]) that such spline surfaces exist, but the construction is not practical. Instead, as in the case of triangular spline surfaces, we will look for necessary conditions in terms of derived surfaces; this time, we will be successful in finding a nice class of spline surfaces specifiable in terms of de Boor control points. We can now derive necessary conditions on surfaces F and G of degree 2n to join with C^{n−1} continuity. The following lemma is the key result.

Lemma 9.3.2. Given two triangular surfaces F : P → E and G : P → E of degree 2n, if F and G meet with C^{n−1} continuity along a line L, and if u is parallel to L, then

D_u^{n+1} F = D_u^{n+1} G.

Proof. Applying lemma 9.2.3 n + 1 times, we deduce that D_u^{n+1} F and D_u^{n+1} G meet with C^{n−1} continuity along L. But these surfaces have degree at most n − 1, so they must be identical.

An immediate consequence of lemma 9.3.2 is the following.

Lemma 9.3.3. Given a spline surface F : P → E of degree 2n having C^{n−1} continuity, for any horizontal vector α and any vertical vector β, and for every rectangle A, the derived surface D_β^{n+1} F_A is the same in any stripe in the direction α, and the derived surface D_α^{n+1} F_A is the same in any stripe in the direction β.
Proof. Two adjacent rectangles in a stripe in the direction α meet with C^{n−1} continuity along a common vertical edge parallel to β, and thus, by lemma 9.3.2, they have the same derived surface D_β^{n+1}; the argument for a stripe in the direction β is analogous.

In view of lemma 9.3.3, it makes sense to look for rectangular spline surfaces of degree 2n with continuity C^{n−1} satisfying the constraints

D_α^{n+1} F_A = D_β^{n+1} F_A = 0

for all rectangles A. A surface of degree 2n is specified by (2n+2)(2n+1)/2 control points. Since D_α^{n+1} F_A has degree n − 1, setting it to zero corresponds to (n+1)n/2 constraints, so we have a total of (n + 1)n constraints, and subtracting them, we find that each rectangular patch is determined by (n + 1)^2 control points. Furthermore, note that a surface of degree 2n such that D_α^{n+1} F_A = D_β^{n+1} F_A = 0 is equivalent to a bipolynomial surface of bidegree ⟨n, n⟩. Thus, bipolynomial spline surfaces of bidegree ⟨n, n⟩ are an answer to our quest. Contrary to the case of triangular spline surfaces, we can easily adapt what we have done for spline curves to bipolynomial spline surfaces, since each rectangle is the product of two intervals; in fact, we can do this for bipolynomial spline surfaces of any bidegree ⟨p, q⟩.
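As a quick check (ours, not from the text), the arithmetic (2n+2)(2n+1)/2 − (n+1)n = (n + 1)^2 can be verified mechanically; note that (n + 1)^2 is exactly the size of the tensor-product de Boor net of a bipolynomial surface of bidegree ⟨n, n⟩.

```python
def rectangular_patch_freedom(n):
    """Free control points of a degree-2n patch after imposing the two
    vanishing derived-surface conditions D_alpha^{n+1} F = D_beta^{n+1} F = 0,
    each worth (n+1)n/2 constraints."""
    m = 2 * n
    control_points = (m + 2) * (m + 1) // 2
    constraints = (n + 1) * n            # two families of (n+1)n/2 each
    return control_points - constraints

# Matches the (n+1) x (n+1) de Boor net of a bidegree <n, n> patch.
checks = [rectangular_patch_freedom(n) == (n + 1) ** 2 for n in range(1, 20)]
```

All the checks come out true, confirming the count.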

Given a knot sequence (s_i) along the u-direction and a knot sequence (t_j) along the v-direction, we have de Boor control points of the form

x_{i,j} = f(s_{i+1}, . . . , s_{i+p}; t_{j+1}, . . . , t_{j+q}).

The patches of the spline surface have domain rectangles of the form R_{k,l} = [s_k, s_{k+1}] × [t_l, t_{l+1}], where s_k < s_{k+1} and t_l < t_{l+1}. The patch defined on the rectangle R_{k,l} has the (p + 1)(q + 1) de Boor control points x_{i,j}, where k − p ≤ i ≤ k and l − q ≤ j ≤ l. Two patches adjacent in the u-direction meet with C^{p−r} continuity, where r is the multiplicity of the knot s_i that divides them, and two patches adjacent in the v-direction meet with C^{q−r} continuity, where r is the multiplicity of the knot t_j that divides them. The progressive version of the de Casteljau algorithm can be generalized quite easily. Since the study of bipolynomial spline surfaces of bidegree ⟨p, q⟩ basically reduces to the study of spline curves, we will not elaborate any further; the reader should have no trouble filling in the details, and we leave this topic as an interesting project.

In summary, in the case of rectangular spline surfaces, we were able to generalize the treatment of spline curves in terms of knot sequences and de Boor control points to bipolynomial spline surfaces. The challenge of finding such a scheme for triangular spline surfaces remains open.

9.4 Subdivision Surfaces

An alternative to spline surfaces is provided by subdivision surfaces. The idea is to start with a rough polyhedron specified by a mesh (a collection of points and edges connected so that it defines the boundary of a polyhedron), and to apply recursively a subdivision scheme which, in the limit, yields a smooth surface (or solid), except for a finite number of so-called extraordinary points. One of the major advantages of such a scheme is that it applies to surfaces of arbitrary topology, and that it is not restricted to a rectangular mesh (i.e., a mesh based on a rectangular grid). Furthermore, a "good" subdivision scheme produces large portions of spline surfaces.

The idea of defining a curve or a surface via a limit process involving subdivision goes back to Chaikin, who (in 1974) defined a simple subdivision scheme applying to curves defined by a closed control polygon [18]. Soon after that, Riesenfeld [67] realized that Chaikin's scheme was simply the de Boor subdivision method for quadratic uniform B-splines, i.e., the process of recursively inserting a knot at the midpoint of every interval in a cyclic knot sequence.

In 1978, two subdivision schemes for surfaces were proposed, by Doo and Sabin [27, 29, 28], and by Catmull and Clark [17]. The main difference between the two schemes is the following. After one round of subdivision, the Doo-Sabin scheme produces a mesh whose vertices all have the same degree 4, and most faces are rectangular, except for faces arising from original vertices of degree not equal to four and from nonrectangular faces.
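Chaikin's corner-cutting scheme mentioned above is easy to implement; the following sketch (ours, not from the text, using the standard 1/4–3/4 corner-cutting ratios that realize midpoint knot insertion for quadratic uniform B-splines) performs one round of subdivision on a closed control polygon of 2D points.

```python
def chaikin_step(points):
    """One round of Chaikin's corner cutting on a closed control polygon.
    Each edge contributes the two points at parameters 1/4 and 3/4 along it,
    so an n-gon is refined into a 2n-gon with its corners cut off."""
    refined = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return refined

# Iterating chaikin_step converges to the closed quadratic uniform B-spline
# defined by the original control polygon.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
octagon = chaikin_step(square)
```

One round turns the square into an octagon; further rounds rapidly approach the limit curve.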
. In 1978.. we will not elaborate any further. i. two subdivision schemes for surfaces were proposed by Doo and Sabin [27. we were able to generalize the treatment of spline curves in terms of knot sequences and de Boor control points to bipolynomial spline surfaces.l = [sk . the process of recursively inserting a knot at the midpoint of every interval in a cyclic knot sequence. where sk < sk+1 and tl < tl+1 . Two patches adjacent in the u-direction meet with C p−r continuity. . 9.352CHAPTER 9.j = f (si+1 . sk+1 ] × [tl . q basically reduces to the study of spline curves. where k − p ≤ i ≤ k and l − q ≤ i ≤ l. a mesh based on a rectangular grid).e. The main diﬀerence between the two schemes is the following. which. in the case of rectangular spline surfaces. POLYNOMIAL SPLINE SURFACES AND SUBDIVISION SURFACES bidegree p. and leave this topic as an interesting project. who (in 1974) deﬁned a simple subdivision scheme applying to curves deﬁned by a closed control polygon [18]. tj+q ).

On the other hand, after one round of subdivision, the Catmull-Clark scheme produces rectangular faces, except for faces arising from original nonrectangular faces and from vertices of degree not equal to four, and the number of nonrectangular faces remains constant. Although both schemes can be viewed as cutting off corners, the Catmull-Clark scheme is closer to a process of face shrinking, not unlike a sculptor at work. In the Doo-Sabin scheme, large regions of the mesh define biquadratic B-splines, and the limit surface is C^1-continuous except at extraordinary points; in the Catmull-Clark scheme, large regions of the mesh define bicubic B-splines, and the limit surface is C^2-continuous except at extraordinary points.

Several years later, Charles Loop in his Master's thesis (1987) introduced a subdivision scheme based on a mesh consisting strictly of triangular faces [50]. In Loop's scheme, every triangular face is refined into four subtriangles. Most vertices have degree six, except for original vertices whose degree is not equal to six; these are also referred to as extraordinary points. Large regions of the mesh define triangular splines based on hexagons consisting of 24 small triangles of degree four (each edge of such a hexagon consists of two edges of a small triangle). The limit surface is C^2-continuous except at extraordinary points.

Although such subdivision schemes had been around for some time, it is not until roughly 1994 that subdivision surfaces became widely used in computer graphics and geometric modeling applications. Then, in 1998, subdivision hit the big screen with Pixar's "Geri's game". Since 1994, refinements of previous subdivision schemes and new subdivision schemes have been proposed. Due to the lack of space, we will restrict ourselves to a brief description of the Doo-Sabin method, the Catmull-Clark method, and the Loop method, referring the reader to the SIGGRAPH'98 Proceedings and Course Notes on Subdivision for Modeling and Animation.

The Doo-Sabin scheme is described very clearly in Nasri [56], who also proposed a method for improving the design of boundary curves. During every round of the subdivision process, new vertices and new faces are created as follows. Every vertex v of the current mesh yields a new vertex v_F, called the image of v in F, for every face F having v as a vertex. Then, image vertices are connected to form three kinds of new faces: F-faces, E-faces, and V-faces.
An F-face is a smaller version of a face F, obtained by connecting the image vertices of the boundary vertices of F in F. Note that if F is an n-sided face, so is the new F-face. This process is illustrated in figure 9.8. A new E-face is created as follows: for every edge E common to two faces F1 and F2, the four image vertices v_{F1}, v_{F2} of one end vertex v of E, and w_{F1}, w_{F2} of the other end vertex w of E, are connected to form a rectangular face, as illustrated in figure 9.9.

Figure 9.8: Vertices of a new F-face

Figure 9.9: Vertices of a new E-face

A new V-face is obtained by connecting the image vertices v_F of a given vertex v in all the faces adjacent to v, provided that v has degree n ≥ 3; the new V-face is then also n-sided. This process is illustrated in figure 9.10.

Figure 9.10: Vertices of a new V-face

Various rules are used to determine the image vertex v_F of a vertex v in some face F. A simple scheme used by Doo is to compute the centroid c of the face F (the centroid of F is the barycenter of the weighted points (v, 1/n), where the v's are the n vertices of F), and to take the image v_F of v in F as the midpoint of c and v. Another rule is

v_i = Σ_{j=1}^{n} α_{ij} w_j,

where the w_j are the vertices of the face F, v_i is the image of w_i in F, 1 ≤ i, j ≤ n, n ≥ 3 is the number of boundary edges of F, and

α_{ij} = (n + 5)/(4n) if i = j, and α_{ij} = (3 + 2 cos(2π(i − j)/n))/(4n) if i ≠ j.

Note that the above weights add up to 1, since

Σ_{k=1}^{n} cos(2πk/n) = 0,

because this sum is the real part of the sum of the n-th roots of unity.

The scheme just described applies to surfaces without boundaries; special rules are needed to handle boundary edges and vertices of degree n ≤ 2. One way to handle boundaries is to treat them as quadratic B-splines. Another method was proposed by Nasri [56]: for every boundary vertex v, create the new vertex

v′ = (1/4) v_F + (3/4) v,

where v_F is the image of v in the face F containing v.
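The weight rule above can be checked numerically. This small sketch (ours, not from the text; the function name is an assumption) builds the matrix (α_{ij}) and confirms that every row sums to 1, exactly because the cosines sum to −1 over j ≠ i.

```python
import math

def doo_sabin_weights(n):
    """Weight matrix (alpha_ij) for computing the n image vertices of an
    n-sided face from its vertices: alpha_ii = (n+5)/(4n), and
    alpha_ij = (3 + 2*cos(2*pi*(i-j)/n))/(4n) for i != j."""
    return [[(n + 5) / (4 * n) if i == j
             else (3 + 2 * math.cos(2 * math.pi * (i - j) / n)) / (4 * n)
             for j in range(n)]
            for i in range(n)]

# Every row sums to 1 (up to rounding), since sum_{k=1}^{n} cos(2*pi*k/n) = 0.
row_sums = [sum(row) for row in doo_sabin_weights(5)]
```

For example, for n = 4 the diagonal weight is (4 + 5)/16 = 9/16, and each row of the 4 × 4 matrix sums to 1.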

Observe that after one round of subdivision, all vertices have degree four, and the number of nonrectangular faces remains constant. It is also easy to check that these faces shrink and tend to a limit which is their common centroid. In fact, it is not obvious that such subdivision schemes converge, and what kind of smoothness is obtained at extraordinary points. These matters were investigated by Doo and Sabin [28] and by Peters and Reif [60]. Roughly, the idea is to analyze the iteration of subdivision around extraordinary points; this can be achieved by eigenvalue analysis, or better, using discrete Fourier transforms. The Doo-Sabin method has been generalized to accommodate features such as creases, darts, or cusps, by Sederberg, Zheng, Sewell, and Sabin [74]; the subdivision rules are modified to allow for nonuniform knot spacing, and the resulting surfaces are called "NURSS" by the authors. Such features are desirable in human modeling, for example, to model clothes or human skin.

We now turn to a description of the Catmull-Clark scheme. Unlike the previous one, this method consists in subdividing every face into smaller rectangular faces obtained by connecting new face points, edge points, and vertex points. There are several different versions; the version presented in Catmull and Clark [17] is as follows.

Given a face F with vertices v1, ..., vn, the new face point vF is computed as the centroid of the vi, i.e.

    vF = (1/n) Σ_{i=1}^{n} vi.

Given an edge E with endpoints v and w, if F1 and F2 are the two faces sharing E as a common edge, the new edge point vE is the average of the four points v, w, vF1, vF2, where vF1 and vF2 are the centroids of F1 and F2:

    vE = (v + w + vF1 + vF2)/4.

The computation of new vertex points is slightly more involved. Given a vertex v (an old one) of degree n, if F denotes the average of the new face points of all (old) faces adjacent to v, and E denotes the average of the midpoints of all (old) n edges incident with v, the new vertex point v′ associated with v is

    v′ = (1/n) F + (2/n) E + ((n − 3)/n) v.

New faces are then determined by connecting the new points as follows: each new face point vF is connected by an edge to the new edge points vE associated with the boundary edges E of the face F, and each new vertex point v′ is connected by an edge to the new edge points vE associated with all the edges E incident with v. Figure 9.11 shows this process; new face points are denoted as solid square points, new edge points as hollow round points, and new vertex points as hollow square points. Note that only rectangular faces are created.
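The three Catmull-Clark rules can be transcribed almost literally. The sketch below is our own (plain Python; the mesh representation and helper names are not the book's): faces are lists of 3-D vertex tuples, and incident edges are pairs of endpoint tuples.

```python
def centroid(pts):
    """Centroid of a list of 3-D points."""
    n = len(pts)
    return tuple(sum(p[k] for p in pts) / n for k in range(3))

def face_point(face):
    # New face point vF: centroid of the face's vertices.
    return centroid(face)

def edge_point(v, w, f1, f2):
    # New edge point vE: average of the endpoints and the two face centroids.
    return centroid([v, w, face_point(f1), face_point(f2)])

def vertex_point(v, adjacent_faces, incident_edges):
    # New vertex point: v' = (1/n) F + (2/n) E + ((n - 3)/n) v,
    # where F averages the adjacent face points and E the edge midpoints.
    n = len(incident_edges)
    F = centroid([face_point(f) for f in adjacent_faces])
    E = centroid([centroid([a, b]) for (a, b) in incident_edges])
    return tuple(F[k] / n + 2 * E[k] / n + (n - 3) * v[k] / n for k in range(3))
```

For a corner of the unit cube (degree n = 3), the (n − 3)/n term vanishes and the new vertex point is (2/9, 2/9, 2/9), which one can confirm by hand.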

[Figure 9.11: New face points, edge points, and vertex points]

Observe that after one round of subdivision, all faces are rectangular, and the number of extraordinary points (vertices of degree different from four) remains constant. An older version of the rule for vertex points is

    v′ = (1/4) F + (1/2) E + (1/4) v,

but it was observed that the resulting surfaces could be too "pointy" (for example, starting from a tetrahedron). Another version studied by Doo and Sabin is

    v′ = (1/n) F + (1/n) E + ((n − 2)/n) v.

Doo and Sabin analyzed the tangent-plane continuity of this scheme using discrete Fourier transforms [28]. The tangent-plane continuity of various versions of Catmull-Clark schemes is also investigated in Ball and Storry [3] (using discrete Fourier transforms), and C1-continuity is investigated by Peters and Reif [60]. A more general study of the convergence of subdivision methods can be found in Zorin [89] (see also Zorin [88]).

We have only presented the Catmull-Clark scheme for surfaces without boundaries, but it is also possible to accommodate boundary vertices and edges. Boundaries can be easily handled by treating the boundary curves as cubic B-splines, and using rules for knot insertion at midpoints of intervals in a closed knot sequence. Then, for any three consecutive control points p^l_i, p^l_{i+1}, p^l_{i+2} of a boundary curve, two new control points p^{l+1}_{2i+1} and p^{l+1}_{2i+2} are created.

The new boundary control points are created according to the formulae

    p^{l+1}_{2i+1} = (1/2) p^l_i + (1/2) p^l_{i+1}

and

    p^{l+1}_{2i+2} = (1/8) p^l_i + (6/8) p^l_{i+1} + (1/8) p^l_{i+2}.

DeRose, Kass, and Truong [24] have generalized the Catmull-Clark subdivision rules to accommodate sharp edges and creases. Their work is inspired by previous work of Hoppe et al [44], in which the Loop scheme was extended to allow (infinitely) sharp creases, except that the method of DeRose, Kass, and Truong [24] applies to Catmull-Clark surfaces, and also allows semi-sharp creases in addition to (infinitely) sharp creases. This new scheme was used in modeling the character Geri in the short film Geri's game. A common criticism of subdivision surfaces is that they do not provide immediate access to the points on the limit surface, as opposed to parametric models, which obviously do. However, it was shown by Stam [78] that it is in fact possible to evaluate points on Catmull-Clark subdivision surfaces. Before presenting Loop's scheme, let us mention that a particularly simple subdivision method termed "midedge subdivision" was discovered by Peters and Reif [61]; such schemes were also studied by Zorin [89].

Unlike the previous methods, Loop's method only applies to meshes whose faces are all triangles. It consists in splitting each (triangular) face into four triangular faces, using rules to determine new edge points and new vertex points. For every edge (rs), since exactly two triangles ∆prs and ∆qrs share the edge (rs), we compute the new edge point ηrs as the following convex combination:

    ηrs = (1/8) p + (3/8) r + (3/8) s + (1/8) q,

as illustrated in figure 9.12. This corresponds to computing the affine combination of three points assigned respectively the weights 3/8, 3/8, and 2/8: the centroids of the two triangles ∆prs and ∆qrs, and the midpoint of the edge (rs).

[Figure 9.12: Loop's scheme for computing edge points]
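Loop's edge-point rule, and its reading as an affine combination of the two triangle centroids and the edge midpoint, can be verified with a few lines of Python (our own sketch; the function names are hypothetical):

```python
def affine(weights_points):
    """Affine combination of 3-D points: list of (weight, point), weights summing to 1."""
    return tuple(sum(w * p[k] for w, p in weights_points) for k in range(3))

def loop_edge_point(p, q, r, s):
    # eta_rs = (1/8) p + (3/8) r + (3/8) s + (1/8) q
    return affine([(1/8, p), (3/8, r), (3/8, s), (1/8, q)])
```

Expanding (3/8)·centroid(∆prs) + (3/8)·centroid(∆qrs) + (2/8)·midpoint(rs) term by term gives exactly the weights 1/8, 1/8, 3/8, 3/8 on p, q, r, s, which the test below confirms numerically.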

[Figure 9.13: Loop's scheme for subdividing faces]

Loop's method is illustrated in figure 9.13, where hollow round points denote new edge points and hollow square points denote new vertex points. For any vertex v of degree n, if p0, ..., pn−1 are the other endpoints of all (old) edges incident with v, the new vertex point v′ associated with v is

    v′ = (1 − αn) ( (1/n) Σ_{i=0}^{n−1} pi ) + αn v,

where αn is a coefficient dependent on n. Loop determined that the value αn = 5/8 produces good results [50].

Observe that after one round of subdivision, all vertices have degree six, except for vertices coming from original vertices of degree different from six; such vertices are, however, surrounded by ordinary vertices of degree six. Vertices of degree different from six are called extraordinary points. Thus, large regions of the mesh define triangular splines based on hexagons consisting of small triangles, each of degree four (each edge of such a hexagon consists of two edges of a small triangle). The limit surface is C2-continuous except at extraordinary points; ordinary points have a well-defined limit that can be computed by subdividing the quartic triangular patches. At extraordinary points there is convergence, but in some cases tangent plane continuity is lost.

Loop's method was first formulated for surfaces without boundaries; boundaries can be easily handled by treating the boundary curves as cubic B-splines, as in the Catmull-Clark scheme. In his Master's thesis [50], Loop rigorously investigates the convergence and smoothness properties of his scheme. He proves convergence of extraordinary points to a limit, and he also figures out in which interval αn should lie, in order to insure convergence and better smoothness at extraordinary points. Since the principles of Loop's analysis are seminal and yet quite simple, we will present its main lines.
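Loop's vertex rule is a single affine combination; a minimal Python sketch (ours, not the book's) reads:

```python
def loop_vertex_point(v, neighbors, alpha):
    """New vertex point: v' = (1 - alpha) * centroid(p_i) + alpha * v,
    where neighbors are the other endpoints of the edges incident with v."""
    n = len(neighbors)
    avg = tuple(sum(p[k] for p in neighbors) / n for k in range(3))
    return tuple((1 - alpha) * avg[k] + alpha * v[k] for k in range(3))
```

For an ordinary vertex (n = 6) lifted above a regular unit hexagon, the neighbors' centroid is the hexagon's center, so with αn = 5/8 the new vertex point simply moves 3/8 of the way toward that center.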

As we already remarked, extraordinary points are surrounded by ordinary points, which makes the analysis of convergence possible. Since points are created during every iteration of the subdivision process, it is convenient to label points with the index of the subdivision round during which they are created. The rule for creating a new vertex point v^l associated with a vertex v^{l−1} can then be written as

    v^l = (1 − αn) q^{l−1} + αn v^{l−1},

where

    q^{l−1} = (1/n) Σ_{i=0}^{n−1} p_i^{l−1}

is the centroid of the points p_0^{l−1}, ..., p_{n−1}^{l−1}, the other endpoints of all edges incident with v^{l−1}. Keep in mind that the lower indices of the p_i^l are taken modulo n. Loop proves that, as l tends to ∞:

(1) Every extraordinary vertex v^l tends to the same limit as q^l.

(2) The ordinary vertices p_0^l, ..., p_{n−1}^l surrounding v^l also tend to the same limit as q^l.

Since q^l is the centroid of ordinary points, this proves the convergence for extraordinary points.

Proving that lim_{l→∞} v^l = lim_{l→∞} q^l is fairly easy. Using the fact that

    p_i^l = (1/8) p_{i−1}^{l−1} + (3/8) p_i^{l−1} + (3/8) v^{l−1} + (1/8) p_{i+1}^{l−1}

and some calculations, it is easy to show that

    q^l = (1/n) Σ_{i=0}^{n−1} p_i^l = (3/8) v^{l−1} + (5/8) q^{l−1}.

Then we have

    v^l − q^l = (1 − αn) q^{l−1} + αn v^{l−1} − (3/8) v^{l−1} − (5/8) q^{l−1} = (αn − 3/8)(v^{l−1} − q^{l−1}).

By a trivial induction,

    v^l − q^l = (αn − 3/8)^l (v^0 − q^0).

Thus, if −1 < αn − 3/8 < 1, that is,

    −5/8 < αn < 11/8,

we get convergence of v^l to q^l. The value αn = 5/8 is certainly acceptable.
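The contraction v^l − q^l = (αn − 3/8)^l (v^0 − q^0) can be watched numerically by iterating the two rules on a ring of scalar coordinates (each coordinate of the actual 3-D points evolves independently, since the rules are affine). The sketch below is ours, under that one-coordinate simplification:

```python
def subdivide_ring(v, p, alpha):
    """One round of Loop subdivision restricted to an extraordinary vertex v
    (one scalar coordinate) and the ring p[0..n-1] of its edge neighbors,
    indices taken modulo n."""
    n = len(p)
    q = sum(p) / n                       # centroid of the ring
    p_new = [p[(i - 1) % n] / 8 + 3 * p[i] / 8 + 3 * v / 8 + p[(i + 1) % n] / 8
             for i in range(n)]          # edge-point rule
    v_new = (1 - alpha) * q + alpha * v  # vertex-point rule
    return v_new, p_new
```

With αn = 5/8, the gap v^l − q^l shrinks by the exact factor αn − 3/8 = 1/4 every round, regardless of n or of the initial positions.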

Proving (2) is a little more involved; to do so, Loop makes a clever use of discrete Fourier transforms. Let us quickly review some basic facts about discrete Fourier series, referring the reader to Strang [80, 82] for a more comprehensive treatment.

Discrete Fourier series deal with finite sequences c ∈ C^n of complex numbers. It is convenient to view a finite sequence c ∈ C^n as a periodic sequence over Z, by letting c_k = c_h iff k − h = 0 mod n. It is also more convenient to index n-tuples starting from 0 instead of 1, thus writing c = (c_0, ..., c_{n−1}). Every sequence c = (c_0, ..., c_{n−1}) ∈ C^n of "Fourier coefficients" determines a periodic function f_c : R → C (of period 2π) known as a discrete Fourier series, or phase polynomial, defined such that

    f_c(θ) = c_0 + c_1 e^{iθ} + ··· + c_{n−1} e^{i(n−1)θ} = Σ_{k=0}^{n−1} c_k e^{ikθ}.

Now, given any sequence f = (f_0, ..., f_{n−1}) of data points, it is desirable to find the "Fourier coefficients" c = (c_0, ..., c_{n−1}) of the discrete Fourier series f_c such that

    f_c(2πk/n) = f_k,   0 ≤ k ≤ n − 1.

The problem amounts to solving the linear system

    F_n c = f,

where F_n is the symmetric n × n matrix (with complex coefficients)

    F_n = ( e^{i2πkl/n} )_{0 ≤ k, l ≤ n−1},

assuming that we index the entries in F_n over [0, n − 1] × [0, n − 1], the standard k-th row now being indexed by k − 1 and the standard l-th column now being indexed by l − 1. The matrix F_n is called a Fourier matrix. Letting

    F̄_n = ( e^{−i2πkl/n} )_{0 ≤ k, l ≤ n−1}

be the conjugate of F_n, it is easily checked that

    F_n F̄_n = F̄_n F_n = n I_n.

Thus, the Fourier matrix is invertible, and its inverse F_n^{−1} = (1/n) F̄_n is computed very cheaply. The purpose of the discrete Fourier transform is to find the Fourier coefficients c = (c_0, ..., c_{n−1}) from the data points f = (f_0, ..., f_{n−1}); the discrete Fourier transform is a linear map from C^n to C^n. The other major player in Fourier analysis is the convolution, a map ⋆ : C^n × C^n → C^n, taking two sequences c, d ∈ C^n and forming the new sequence c ⋆ d. In the discrete case, since sequences are viewed as periodic, it is natural to define the discrete convolution as a circular type of convolution rule.
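The identity F_n F̄_n = n I_n and the interpolation property f_c(2πk/n) = f_k are easy to confirm numerically. The sketch below is ours, using only the standard cmath module (no claim is made that the book proceeds computationally):

```python
import cmath

def fourier_matrix(n):
    # F_n[k][l] = e^{i 2 pi k l / n}
    return [[cmath.exp(2j * cmath.pi * k * l / n) for l in range(n)] for k in range(n)]

def dft(f):
    # Forward transform: hat f = conj(F_n) f, i.e. hat f(k) = sum_j f_j e^{-i 2 pi j k / n}
    n = len(f)
    return [sum(f[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def idft(c):
    # Inverse transform: f = (1/n) F_n hat f
    n = len(c)
    return [sum(c[k] * cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)) / n
            for j in range(n)]
```

Composing the two maps recovers the data points, which is exactly the statement that the Fourier coefficients (1/n) f̂ solve the system F_n c = f.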

The Fourier transform and the convolution rule (discrete or not!) must be defined in such a way that they form a harmonious pair, which means that the transform of a convolution should be the product of the transforms:

    (c ⋆ d)^ = ĉ d̂,

where the multiplication on the right-hand side is the componentwise product of ĉ and d̂ (vectors of length n). In view of the formula F_n F̄_n = F̄_n F_n = n I_n, and inspired by the continuous case, it is natural to define the discrete Fourier transform f̂ of a sequence f = (f_0, ..., f_{n−1}) ∈ C^n as

    f̂ = F̄_n f,

or equivalently, as

    f̂(k) = Σ_{j=0}^{n−1} f_j e^{−i2πjk/n}

for every k, 0 ≤ k ≤ n − 1. The Fourier coefficients c = (c_0, ..., c_{n−1}) are then given by the formulae

    c_k = (1/n) f̂(k) = (1/n) Σ_{j=0}^{n−1} f_j e^{−i2πjk/n}.

We also define the inverse discrete Fourier transform (taking c back to f) as

    c̃ = F_n c.

Note the analogy with the continuous case, where the Fourier transform f̂ of the function f is given by

    f̂(x) = ∫_{−∞}^{∞} f(t) e^{−ixt} dt,

and the Fourier coefficients of the Fourier series

    f(x) = Σ_{k=−∞}^{∞} c_k e^{ikx}

are given by the formulae

    c_k = (1/(2π)) ∫_{−π}^{π} f(x) e^{−ikx} dx.

Remark: Other authors (including Strang in his older book [80]) define the discrete Fourier transform as f̂ = (1/n) F̄_n f. The drawback of this choice is that the convolution rule has an extra factor of n. Loop defines the discrete Fourier transform as F_n f, which causes problems with the convolution rule; we will come back to this point shortly. Here, following Strang [82], we take f̂ = F̄_n f.

The simplest definition of discrete convolution is, in our opinion, the definition in terms of circulant matrices. We define the circular shift matrix S_n (of order n) as the matrix

    S_n = ( 0 0 0 ··· 0 1 )
          ( 1 0 0 ··· 0 0 )
          ( 0 1 0 ··· 0 0 )
          (       ···      )
          ( 0 0 0 ··· 1 0 )

consisting of cyclic permutations of its first column. For any sequence f = (f_0, ..., f_{n−1}) ∈ C^n, we define the circulant matrix H(f) as

    H(f) = Σ_{j=0}^{n−1} f_j S_n^j,

where S_n^0 = I_n, as usual. For example, the circulant matrix associated with the sequence f = (a, b, c, d) is

    ( a d c b )
    ( b a d c )
    ( c b a d )
    ( d c b a )

We can now define the convolution f ⋆ g of two sequences f = (f_0, ..., f_{n−1}) and g = (g_0, ..., g_{n−1}) as

    f ⋆ g = H(f) g,

viewing f and g as column vectors. Then, the miracle (which is not too hard to prove!) is that we have

    H(f) F_n = F_n f̂,

where f̂ is regarded on the right-hand side as the diagonal matrix diag((f̂)_0, ..., (f̂)_{n−1}). This means that the columns of the Fourier matrix F_n are the eigenvectors of the circulant matrix H(f), and that the eigenvalue associated with the l-th eigenvector is (f̂)_l, the l-th component of the Fourier transform f̂ of f (counting from 0). If we recall that F_n F̄_n = F̄_n F_n = n I_n, multiplying the equation H(f) F_n = F_n f̂ both on the left and on the right by F̄_n, we get

    F̄_n H(f)(n I_n) = (n I_n) f̂ F̄_n,

that is,

    F̄_n H(f) = f̂ F̄_n.

Again, see Strang [80, 82] for details. The fascinating book on circulants, Fourier matrices, and more, by Davis [21], is highly recommended.
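The circulant definition of convolution, and the rule "transform of a convolution equals componentwise product of transforms", can both be checked directly. The sketch below is ours (stdlib only; `dft` is the forward transform F̄_n f defined earlier):

```python
import cmath

def circulant(f):
    """Circulant matrix H(f): entry (i, j) is f[(i - j) mod n]."""
    n = len(f)
    return [[f[(i - j) % n] for j in range(n)] for i in range(n)]

def conv(f, g):
    """Circular convolution f * g = H(f) g."""
    n = len(f)
    H = circulant(f)
    return [sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]

def dft(f):
    # Forward transform: hat f(k) = sum_j f_j e^{-i 2 pi j k / n}
    n = len(f)
    return [sum(f[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]
```

The first row of `circulant([a, b, c, d])` is (a, d, c, b), matching the example matrix above, and the convolution rule holds exactly (up to floating-point error).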

If we apply both sides to any sequence g ∈ C^n, we get

    F̄_n H(f) g = f̂ F̄_n g.

Since f ⋆ g = H(f) g, (f ⋆ g)^ = F̄_n (f ⋆ g), and ĝ = F̄_n g, this can be rewritten as the (circular) convolution rule

    (f ⋆ g)^ = f̂ ĝ,

where the multiplication on the right-hand side is the componentwise product of the vectors f̂ and ĝ.

However, we warn our readers that Loop defines the discrete Fourier transform as

    F(f) = F_n f

(which is our inverse Fourier transform), and not as F̄_n f, which is our Fourier transform f̂ (following Strang [82]). Loop defines convolution using the formula

    (f ⋆ g)_k = Σ_{j=0}^{n−1} f_j g_{k−j},

for every k, 0 ≤ k ≤ n − 1, which requires interpreting indexes modulo n, but is equivalent to the circulant definition. Loop states the convolution rule as

    F(f ⋆ g) = F(f) F(g),

which is incorrect, since F uses the Fourier matrix F_n when it should be using its conjugate F̄_n. Nevertheless, if the sequence f = (f_0, ..., f_{n−1}) is even, which means that f_{−j} = f_j for all j ∈ Z (viewed as a periodic sequence), or equivalently, that f_{n−j} = f_j for all j, 0 ≤ j ≤ n − 1, it is easily seen that the Fourier transform f̂ can be expressed as

    f̂(k) = Σ_{j=0}^{n−1} f_j cos(2πjk/n),   0 ≤ k ≤ n − 1.

Similarly, the inverse Fourier transform (taking c back to f) is expressed as

    c̃(k) = Σ_{j=0}^{n−1} c_j cos(2πjk/n),   0 ≤ k ≤ n − 1.

Observe that it is the same as the (forward) discrete Fourier transform. This is what saves Loop's proof (see below)! After this digression, we get back to Loop's Master's thesis [50].

Nevertheless, even though Loop appears to be using an incorrect definition of the Fourier transform, what saves his argument is that, for even sequences, his F(f) and our f̂ are identical, as observed earlier. With these remarks in mind, we go back to Loop's proof that the ordinary vertices p_0^l, ..., p_{n−1}^l surrounding v^l also tend to the same limit as q^l.

The trick is to rewrite the equations

    q^l = (1/n) Σ_{i=0}^{n−1} p_i^l

and

    p_i^l = (1/8) p_{i−1}^{l−1} + (3/8) p_i^{l−1} + (3/8) v^{l−1} + (1/8) p_{i+1}^{l−1}

in terms of discrete convolutions. To do so, define the sequences

    M = (3/8, 1/8, 0, ..., 0, 1/8)   and   A = (1/n, ..., 1/n),

both of length n. Note that these sequences are even! We also define the sequence P^l as

    P^l = (p_0^l, ..., p_{n−1}^l),

and treat q^l and v^l as constant sequences Q^l and V^l of length n. Then, the equation for the p_i^l is rewritten as

    P^l = M ⋆ P^{l−1} + (3/8) V^{l−1},

and the equation for q^l is rewritten as

    Q^l = A ⋆ P^l.

From these equations, we get

    P^l = ( M − (5/8) A ) ⋆ P^{l−1} + Q^l.

Taking advantage of certain special properties of M and A, we get, by induction,

    P^l = ( M − (5/8) A )^{l⋆} ⋆ P^0 + Q^l,

where c^{l⋆} stands for the l-fold convolution c ⋆ ··· ⋆ c. At this stage, letting

    R = M − (5/8) A,

all we have to prove is that R^{l⋆} tends to the null sequence as l goes to infinity. Applying the Fourier transform in its cosine form and the convolution rule, we just have to compute the discrete Fourier transform R̂ of R. Since both M and A are even sequences, this is easy to do, and we get

    (R̂)_j = 0                          if j = 0,
    (R̂)_j = 3/8 + (1/4) cos(2πj/n)     if j ≠ 0,

for 0 ≤ j ≤ n − 1. Since the absolute value of the cosine is bounded by 1, we have

    1/8 ≤ (R̂)_j ≤ 5/8

for all j ≠ 0, and thus

    lim_{l→∞} (R̂)^l = 0_n,

which proves that

    lim_{l→∞} R^{l⋆} = lim_{l→∞} ( M − (5/8) A )^{l⋆} = 0_n,

and consequently that

    lim_{l→∞} p_i^l = lim_{l→∞} q^l.

Therefore, the faces surrounding extraordinary points converge to the same limit as the centroid of these faces. Loop also gives explicit formulae for the limit of extraordinary points: he proves that q^l (and thus v^l) has the limit

    (1 − βn) q^0 + βn v^0,   where   βn = 3/(11 − 8αn).
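The transform (R̂)_j can be recomputed from the explicit sequences M and A, confirming both the formula and the bound 1/8 ≤ (R̂)_j ≤ 5/8 for j ≠ 0. The sketch below is ours (function names are hypothetical):

```python
import math

def cos_transform(f):
    """Fourier transform in cosine form (valid for even sequences)."""
    n = len(f)
    return [sum(f[j] * math.cos(2 * math.pi * j * k / n) for j in range(n))
            for k in range(n)]

def loop_R(n):
    """R = M - (5/8) A, with M = (3/8, 1/8, 0, ..., 0, 1/8), A = (1/n, ..., 1/n)."""
    M = [0.0] * n
    M[0], M[1], M[-1] = 3/8, 1/8, 1/8
    return [M[j] - (5/8) / n for j in range(n)]

def R_hat_formula(n, j):
    # (R hat)_j = 0 if j = 0, else 3/8 + (1/4) cos(2 pi j / n)
    return 0.0 if j == 0 else 3/8 + math.cos(2 * math.pi * j / n) / 4
```

Since every nonzero component of R̂ lies in [1/8, 5/8], its l-th componentwise power goes to zero geometrically, which is exactly the convergence claim above.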

The bounds to insure convergence are the same as the bounds to insure convergence of v^l to q^l, namely

    −5/8 < αn < 11/8.

For instance, αn = 5/8 yields βn = 1/2.

Loop also investigates the tangent plane continuity at these limit points. Using discrete Fourier transforms, it is possible to find a formula for the tangent vector function at each extraordinary point, and there is a range of values from which αn can be chosen to insure tangent plane continuity. He proves that tangent plane continuity is insured if αn is chosen so that

    −(1/4) cos(2π/n) < αn < 3/4 + (1/4) cos(2π/n).

For example, for a vertex of degree three (n = 3), the value α3 = 5/8 is outside the correct range, and tangent plane continuity is lost, as Loop first observed experimentally. He proposes the following "optimal" value for αn:

    αn = 3/8 + ( 3/8 + (1/4) cos(2π/n) )².

Note that α6 = 5/8 is indeed this value for regular vertices (of degree n = 6). Loop also discusses curvature continuity at extraordinary points, but his study is more tentative; there are problems with curvature continuity at extraordinary points (the curvature can be zero). In summary, if αn is chosen in the correct range, Loop proves that his subdivision scheme is C2-continuous, except at a finite number of extraordinary points.

We conclude this section on subdivision surfaces with a few comments. Loop's scheme was extended to accommodate sharp edges and creases on boundaries; see Hoppe et al [44], whose method makes use of Loop's scheme. Stam [78] also implemented a method for computing points on Catmull-Clark surfaces; the implementation of the method is discussed, and it is nontrivial. One of the techniques involved is called eigenbasis functions. The related issue of adaptive parameterization of surfaces is investigated in Lee et al [49].

Although subdivision surfaces have many attractive features (for example, they handle arbitrary topology of the mesh, uniformity of representation, numerical stability, and code simplicity), they have their problems too. Extraordinary points of large degree may exhibit poor smoothness. Another phenomenon, related to the eigenvalue distribution of the local subdivision matrix, is the fact that the mesh may be uneven, certain triangles being significantly larger than others near extraordinary points. The phenomenon of "eigenvalue clustering" can also cause ripples on the surface. More general approaches to study the properties (convergence, smoothness) of subdivision surfaces have been investigated in Reif [66] and by Zorin [89]. There are many other papers on the subject of subdivision surfaces, especially after 1996, and we apologize for not being more thorough, but we hope that we have at least given pointers to the most important research directions; we advise our readers to consult the SIGGRAPH Proceedings and Course Notes.

9.5 Problems

Problem 1 (30 pts). Show that the number of conditions required for two triangular patches of degree m to meet with C1-continuity is 3m + 1. Show that the number of independent conditions is generally 2m + 1.

Problem 2 (20 pts). Show that the number of conditions required for two triangular patches of degree m to meet with C2-continuity is 6m − 2. Show that the number of independent conditions is generally 3m.

Problem 3 (30 pts). Formulate a de Boor algorithm for rectangular B-spline surfaces. Formulate a knot insertion algorithm for rectangular B-spline surfaces, and use it to convert a B-spline surface into rectangular Bézier patches.

Problem 4 (30 pts). Let u_0, ..., u_M and v_0, ..., v_N be two knot sequences consisting of simple knots, and let (x_{i,j})_{0≤i≤M, 0≤j≤N} be a net of data points. We would like to find a rectangular bicubic C1-continuous B-spline surface F interpolating the points x_{i,j}, i.e., such that F(u_i, v_j) = x_{i,j}. Each patch requires 16 control points.
(i) Using the method of section 6.8, show that the control points on the boundary curves of each rectangular patch can be computed, accounting for 12 control points per patch.
(ii) Show that the other 4 interior control points can be found by computing the corner twists of each patch (twist vectors are defined in section 7.6).
(iii) Various methods exist to determine twist vectors. One method (Bessel twist) consists in estimating the twist at (u_i, v_j) to be that of the bilinear interpolant of the four bilinear patches determined by the nine points x_{i+r, j+s}, where r = −1, 0, 1 and s = −1, 0, 1. Compute the Bessel twists.

Problem 5 (40 pts). Implement the interpolation method proposed in problem 4. Experiment with various methods for determining corner twists.

Problem 6 (20 pts). (1) If we consider surface splines of degree 3n + 3 with C^{2n+1} continuity, prove that, for every triangle A, the derived surface D_α^{n+2} D_β^n F_A is the same in any stripe in the direction γ, the derived surface D_γ^{n+2} D_α^n F_A is the same in any stripe in the direction β, and the derived surface D_γ^{n+2} D_β^n F_A is the same in any stripe in the direction α.
(2) Prove that the conditions

    D_α^{n+2} D_β^n F_A = D_γ^{n+2} D_α^n F_A = D_γ^{n+2} D_β^n F_A = 0

are independent, and that in this case, each patch is defined by 3(n + 1)² − 2 control points.

Problem 7 (30 pts). Let F_n be the symmetric n × n matrix (with complex coefficients)

    F_n = ( e^{i2πkl/n} )_{0 ≤ k, l ≤ n−1},

assuming that we index the entries in F_n over [0, n − 1] × [0, n − 1], the standard k-th row now being indexed by k − 1 and the standard l-th column now being indexed by l − 1. The matrix F_n is called a Fourier matrix.
(1) Letting F̄_n = ( e^{−i2πkl/n} )_{0 ≤ k, l ≤ n−1} be the conjugate of F_n, prove that

    F_n F̄_n = F̄_n F_n = n I_n.

(2) Prove that H(f) F_n = F_n f̂ (where f̂ is regarded as the diagonal matrix diag((f̂)_0, ..., (f̂)_{n−1})).
Hint: Prove that S_n F_n = F_n diag(v¹), where diag(v¹) is the diagonal matrix with the following entries on the diagonal:

    v¹ = ( 1, e^{−i2π/n}, ..., e^{−ik2π/n}, ..., e^{−i(n−1)2π/n} ).

(3) If the sequence f = (f_0, ..., f_{n−1}) is even, which means that f_{−j} = f_j for all j ∈ Z (viewed as a periodic sequence), or equivalently, that f_{n−j} = f_j for all j, 0 ≤ j ≤ n − 1, prove that the Fourier transform f̂ is expressed as

    f̂(k) = Σ_{j=0}^{n−1} f_j cos(2πjk/n),   0 ≤ k ≤ n − 1,

and that the inverse Fourier transform (taking c back to f) is expressed as

    c̃(k) = Σ_{j=0}^{n−1} c_j cos(2πjk/n),   0 ≤ k ≤ n − 1.

Problem 8 (10 pts). Prove that the Fourier transform of Loop's sequence

    R = M − (5/8) A

is given by

    (R̂)_j = 0                          if j = 0,
    (R̂)_j = 3/8 + (1/4) cos(2πj/n)     if j ≠ 0.

Problem 9 (50 pts). Implement the Doo-Sabin subdivision method for closed meshes. Generalize your program to handle boundaries.

Problem 10 (50 pts). Implement the Catmull-Clark subdivision method for closed meshes. Generalize your program to handle boundaries.

Problem 11 (60 pts). Implement the Loop subdivision method for closed meshes. Generalize your program to handle boundaries. Experiment with various values of αn.

Chapter 10

Embedding an Affine Space in a Vector Space

10.1 The "Hat Construction", or Homogenizing

For all practical purposes, curves and surfaces live in affine spaces. A disadvantage of the affine world is that points and vectors live in disjoint universes. It is often more convenient, at least mathematically, to deal with linear objects (vector spaces, linear combinations, linear maps), rather than affine objects (affine spaces, affine combinations, affine maps). Actually, it would also be advantageous if we could manipulate points and vectors as if they lived in a common universe, using perhaps an extra bit of information to distinguish between them if necessary. Such an "homogenization" (or "hat construction") can be achieved, and it is very useful to define and manipulate rational curves and surfaces; however, such a treatment will be given elsewhere.

This chapter proceeds as follows. First, the construction of a vector space Ê in which both E and E⃗ are embedded as (affine) hyperplanes is described. It turns out that Ê is characterized by a universality property: affine maps to vector spaces extend uniquely to linear maps. As a consequence, affine maps between affine spaces E and F extend to linear maps between Ê and F̂. Similarly, multiaffine maps extend to multilinear maps. Next, it is shown how affine frames in E become bases in Ê. Then, the linearization of multiaffine maps is used to obtain formulae for the directional derivatives of polynomial maps. In turn, these formulae lead to a very convenient way of formulating the continuity conditions for joining polynomial curves or surfaces. It also leads to a very elegant method for obtaining the various formulae giving the derivatives of a polynomial curve, or the directional derivatives of polynomial surfaces.

Let us first explain how to distinguish between points and vectors practically, using what amounts to a "hacking trick". Then, we will show that such a procedure can be put on firm mathematical grounds.

Assume that we consider the real affine space E of dimension 3, and that we have some affine frame (a_0, (v_1, v_2, v_3)). With respect to this affine frame, every point a ∈ E is represented by its coordinates (x_1, x_2, x_3), where

    a = a_0 + x_1 v_1 + x_2 v_2 + x_3 v_3,

and a vector u ∈ E⃗ is also represented by its coordinates (u_1, u_2, u_3) over the basis (v_1, v_2, v_3). One way to distinguish between points and vectors is to add a fourth coordinate, and to agree that points are represented by (row) vectors (x_1, x_2, x_3, 1) whose fourth coordinate is 1, and that vectors are represented by (row) vectors (v_1, v_2, v_3, 0) whose fourth coordinate is 0. This "programming trick" actually works very well. Of course, we are opening the door for strange elements such as (x_1, x_2, x_3, 5), where the fourth coordinate is neither 1 nor 0. The question is, can we make sense of such elements, and of such a construction? The answer is yes. We will present a construction in which an affine space (E, E⃗) is embedded in a vector space Ê, in which E⃗ is embedded as a hyperplane passing through the origin, and E itself is embedded as an affine hyperplane, defined as ω^{−1}(1), for some linear form ω : Ê → R. In the case of an affine space E of dimension 2, it is quite useful to visualize the space Ê: we can think of Ê as the vector space R³ of dimension 3, in which E⃗ corresponds to the (x, y)-plane, and E corresponds to the plane of equation z = 1, parallel to the (x, y)-plane, and passing through the point on the z-axis of coordinates (0, 0, 1).

The vector space Ê is unique up to isomorphism, and its actual construction is not so important; it satisfies a universal property with respect to the extension of affine maps to linear maps. As usual, it is assumed that all vector spaces are defined over the field R of real numbers, and that all families of scalars (points, and vectors) are finite; the extension to arbitrary fields and to families of finite support is immediate. The construction of the vector space Ê is presented in some detail in Berger [5], who explains the construction in terms of vector fields. Ramshaw explains the construction using the symmetric tensor power of an affine space [65]. We prefer a more geometric and simpler description in terms of simple geometric transformations, translations and dilatations.

Remark: Readers with a good knowledge of geometry will recognize the first step in embedding an affine space into a projective space.

We begin by defining two very simple kinds of geometric (affine) transformations. Given an affine space (E, E⃗), every u ∈ E⃗ induces a mapping t_u : E → E, called a translation, and defined such that t_u(a) = a + u, for every a ∈ E. Clearly, the set of translations is a vector space isomorphic to E⃗; thus, we will use the same notation u for both the vector u and the translation t_u. Given any point a and any scalar λ ∈ R, we also define the mapping H_{a,λ} : E → E.

10.1. THE "HAT CONSTRUCTION", OR HOMOGENIZING 373

Given an affine space $(E, \overrightarrow{E})$, for every $a \in E$ and every $\lambda \in \mathbb{R}$, the dilatation (or central dilatation, or homothety) of center $a$ and ratio $\lambda$ is the map $H_{a,\lambda}$ defined such that
$$H_{a,\lambda}(x) = a + \lambda\,\overrightarrow{ax},$$
for every $x \in E$.

Remarks:

(1) Note that $H_{a,\lambda}(a) = a$, and that when $\lambda \neq 0$ and $x \neq a$, the point $H_{a,\lambda}(x)$ is on the line defined by $a$ and $x$, and is obtained by "scaling" $\overrightarrow{ax}$ by $\lambda$. The effect is a uniform dilatation (or contraction, if $\lambda < 1$). When $\lambda = 0$, $H_{a,0}(x) = a$ for all $x \in E$, and $H_{a,0}$ is the constant affine map sending every point to $a$. If we assume $\lambda \neq 1$, the map $H_{a,\lambda}$ is never the identity, and since $a$ is a fixed point, $H_{a,\lambda}$ is never a translation.

(2) Berger defines a map $h \colon E \to \overrightarrow{E}$ as a vector field. Thus, a dilatation $H_{a,\lambda}$ can be viewed as the vector field $x \mapsto \lambda\,\overrightarrow{xa}$, and a translation $\vec{u}$ can be viewed as the constant vector field $x \mapsto \vec{u}$. We could define $\widehat{E}$ as the (disjoint) union of these two kinds of vector fields, but we prefer our view in terms of geometric transformations.

We now consider the set $\widehat{E}$ of geometric transformations from $E$ to $E$, consisting of the union of the (disjoint) sets of translations and of dilatations of ratio $\lambda \neq 1$. We would like to give this set the structure of a vector space, in such a way that both $E$ and $\overrightarrow{E}$ can be naturally embedded into $\widehat{E}$. In fact, it will turn out that barycenters show up quite naturally too!

In order to "add" two dilatations $H_{a_1,\lambda_1}$ and $H_{a_2,\lambda_2}$, it turns out to be more convenient to consider dilatations of the form $H_{a,1-\lambda}$, where $\lambda \neq 0$. To see the effect of such a dilatation on a point $x \in E$, note that
$$H_{a,1-\lambda}(x) = a + (1-\lambda)\,\overrightarrow{ax} = a + \overrightarrow{ax} - \lambda\,\overrightarrow{ax} = x + \lambda\,\overrightarrow{xa}.$$
For simplicity of notation, let us denote $H_{a,1-\lambda}$ as $\langle a, \lambda \rangle$, so that $\langle a, \lambda \rangle(x) = x + \lambda\,\overrightarrow{xa}$.

Given two dilatations $\langle a_1, \lambda_1 \rangle$ and $\langle a_2, \lambda_2 \rangle$, since $\langle a_1, \lambda_1 \rangle(x) = x + \lambda_1\,\overrightarrow{xa_1}$ and $\langle a_2, \lambda_2 \rangle(x) = x + \lambda_2\,\overrightarrow{xa_2}$, we see that we have to distinguish between two cases:

(1) $\lambda_1 + \lambda_2 = 0$. In this case, since
$$\lambda_1\,\overrightarrow{xa_1} + \lambda_2\,\overrightarrow{xa_2} = \lambda_1\,\overrightarrow{xa_1} - \lambda_1\,\overrightarrow{xa_2} = \lambda_1\,\overrightarrow{a_2a_1},$$
we let
$$\langle a_1, \lambda_1 \rangle + \langle a_2, \lambda_2 \rangle = \lambda_1\,\overrightarrow{a_2a_1},$$
where $\lambda_1\,\overrightarrow{a_2a_1}$ denotes the translation associated with the vector $\lambda_1\,\overrightarrow{a_2a_1}$.
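To make the transformation view concrete, here is a small numeric sketch (not from the text; the helper names are ours, and we work with coordinates in $\mathbb{R}^2$) of translations and of the dilatation $\langle a, \lambda \rangle = H_{a,1-\lambda}$, i.e., $x \mapsto x + \lambda\,\overrightarrow{xa}$.

```python
# Sketch: translations and dilatations on points of R^2 (illustrative helpers).

def translate(u, x):
    """Apply the translation t_u to the point x."""
    return tuple(xi + ui for xi, ui in zip(x, u))

def dilate(a, lam, x):
    """Apply <a, lam> = H_{a, 1-lam}: x |-> x + lam * xa, where xa = a - x."""
    return tuple(xi + lam * (ai - xi) for xi, ai in zip(x, a))

a = (2.0, 0.0)
x = (0.0, 0.0)
# <a, 1> is H_{a, 0}, the constant map sending every point to a:
assert dilate(a, 1.0, x) == a
# lam = 1/2 gives the midpoint of x and a:
assert dilate(a, 0.5, x) == (1.0, 0.0)
```

Note that $\langle a, 1 \rangle$ is exactly the constant map onto $a$, matching the remark that $H_{a,0}$ sends every point to $a$.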

374 CHAPTER 10. EMBEDDING AN AFFINE SPACE IN A VECTOR SPACE

(2) $\lambda_1 + \lambda_2 \neq 0$. In this case, the points $a_1$ and $a_2$ assigned the weights $\frac{\lambda_1}{\lambda_1+\lambda_2}$ and $\frac{\lambda_2}{\lambda_1+\lambda_2}$ have a barycenter
$$b = \frac{\lambda_1}{\lambda_1+\lambda_2}\,a_1 + \frac{\lambda_2}{\lambda_1+\lambda_2}\,a_2,$$
such that
$$\overrightarrow{xb} = \frac{\lambda_1}{\lambda_1+\lambda_2}\,\overrightarrow{xa_1} + \frac{\lambda_2}{\lambda_1+\lambda_2}\,\overrightarrow{xa_2}.$$
Since
$$\lambda_1\,\overrightarrow{xa_1} + \lambda_2\,\overrightarrow{xa_2} = (\lambda_1+\lambda_2)\,\overrightarrow{xb},$$
we let
$$\langle a_1, \lambda_1 \rangle + \langle a_2, \lambda_2 \rangle = \Big\langle \frac{\lambda_1}{\lambda_1+\lambda_2}\,a_1 + \frac{\lambda_2}{\lambda_1+\lambda_2}\,a_2,\ \lambda_1+\lambda_2 \Big\rangle,$$
the dilatation associated with the point $b$ and the scalar $\lambda_1+\lambda_2$.

Given a translation defined by $\vec{u}$ and a dilatation $\langle a, \lambda \rangle$, since $\lambda \neq 0$, we have
$$\lambda\,\overrightarrow{xa} + \vec{u} = \lambda(\overrightarrow{xa} + \lambda^{-1}\vec{u}),$$
and so, letting $b = a + \lambda^{-1}\vec{u}$, since $\overrightarrow{ab} = \lambda^{-1}\vec{u}$, we have
$$\lambda\,\overrightarrow{xa} + \vec{u} = \lambda(\overrightarrow{xa} + \overrightarrow{ab}) = \lambda\,\overrightarrow{xb},$$
the dilatation of center $a + \lambda^{-1}\vec{u}$ and ratio $\lambda$. Thus, we let
$$\langle a, \lambda \rangle + \vec{u} = \langle a + \lambda^{-1}\vec{u},\ \lambda \rangle.$$
The sum of two translations $\vec{u}$ and $\vec{v}$ is of course defined as the translation $\vec{u} + \vec{v}$. It is also natural to define multiplication by a scalar as follows:
$$\mu \cdot \langle a, \lambda \rangle = \langle a, \lambda\mu \rangle, \qquad \lambda \cdot \vec{u} = \lambda\vec{u},$$
where $\lambda\vec{u}$ is the product by a scalar in $\overrightarrow{E}$.

We can now use the definition of the above operations to state the following lemma, showing that the "hat construction" described above has allowed us to achieve our goal of embedding both $E$ and $\overrightarrow{E}$ in the vector space $\widehat{E}$.
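The case analysis above can be checked numerically. The following sketch (assumed helpers, coordinates in $\mathbb{R}^2$) verifies that adding the "vector fields" $x \mapsto \lambda_i\,\overrightarrow{xa_i}$ of two dilatations with $\lambda_1 + \lambda_2 \neq 0$ gives the field of the dilatation $\langle b, \lambda_1+\lambda_2 \rangle$, where $b$ is the barycenter.

```python
# Numeric check (illustrative) of case (2): the sum of the vector fields of
# <a1, lam1> and <a2, lam2>, with lam1 + lam2 != 0, is the vector field of
# <b, lam1 + lam2>, where b is the barycenter of a1 and a2.

def field(a, lam, x):
    """The vector lam * xa associated with <a, lam> at the point x, in R^2."""
    return tuple(lam * (ai - xi) for ai, xi in zip(a, x))

def add2(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

a1, lam1 = (0.0, 0.0), 2.0
a2, lam2 = (3.0, 3.0), 1.0
s = lam1 + lam2
b = tuple((lam1 * p + lam2 * q) / s for p, q in zip(a1, a2))  # barycenter

x = (5.0, -1.0)
assert add2(field(a1, lam1, x), field(a2, lam2, x)) == field(b, s, x)
```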

Lemma 10.1.1. The set $\widehat{E}$ consisting of the disjoint union of the translations and the dilatations $H_{a,1-\lambda} = \langle a, \lambda \rangle$, with $\lambda \in \mathbb{R}$, $\lambda \neq 0$, is a vector space under the following operations of addition and multiplication by a scalar:
if $\lambda_1 + \lambda_2 = 0$, then
$$\langle a_1, \lambda_1 \rangle + \langle a_2, \lambda_2 \rangle = \lambda_1\,\overrightarrow{a_2a_1};$$
if $\lambda_1 + \lambda_2 \neq 0$, then
$$\langle a_1, \lambda_1 \rangle + \langle a_2, \lambda_2 \rangle = \Big\langle \frac{\lambda_1}{\lambda_1+\lambda_2}\,a_1 + \frac{\lambda_2}{\lambda_1+\lambda_2}\,a_2,\ \lambda_1+\lambda_2 \Big\rangle;$$
$$\langle a, \lambda \rangle + \vec{u} = \vec{u} + \langle a, \lambda \rangle = \langle a + \lambda^{-1}\vec{u},\ \lambda \rangle, \qquad \vec{u} + \vec{v} = \vec{u} + \vec{v};$$
and
if $\mu \neq 0$, then $\mu \cdot \langle a, \lambda \rangle = \langle a, \lambda\mu \rangle$, while $0 \cdot \langle a, \lambda \rangle = \vec{0}$, and $\lambda \cdot \vec{u} = \lambda\vec{u}$.
Furthermore, the map $\omega \colon \widehat{E} \to \mathbb{R}$ defined such that
$$\omega(\langle a, \lambda \rangle) = \lambda, \qquad \omega(\vec{u}) = 0,$$
is a linear form, $\omega^{-1}(0)$ is a hyperplane isomorphic to $\overrightarrow{E}$ under the injective linear map $i \colon \overrightarrow{E} \to \widehat{E}$ such that $i(\vec{u}) = t_{\vec{u}}$ (the translation associated with $\vec{u}$), and $\omega^{-1}(1)$ is an affine hyperplane isomorphic to $E$ with direction $i(\overrightarrow{E})$, under the injective affine map $j \colon E \to \widehat{E}$, where $j(a) = \langle a, 1 \rangle$ for every $a \in E$. Finally, for every $a \in E$, we have
$$\widehat{E} = i(\overrightarrow{E}) \oplus \mathbb{R}\,j(a).$$

Proof. The verification that $\widehat{E}$ is a vector space is straightforward. The linear map mapping a vector $\vec{u}$ to the translation defined by $\vec{u}$ is clearly an injection $i \colon \overrightarrow{E} \to \widehat{E}$ embedding $\overrightarrow{E}$ as a hyperplane in $\widehat{E}$. It is also clear that $\omega$ is a linear form, and thus $\omega^{-1}(0)$ is a hyperplane, and $\omega^{-1}(1)$ is indeed an affine hyperplane isomorphic to $E$ with direction $i(\overrightarrow{E})$. Note that
$$j(a + \vec{u}) = \langle a + \vec{u},\ 1 \rangle = \langle a, 1 \rangle + \vec{u},$$
from the definition of $+$, and thus $j$ is an affine injection with associated linear map $i$. Finally, for every $a \in E$, in view of the formula
$$i(\vec{u}) + \lambda \cdot j(a) = \vec{u} + \langle a, \lambda \rangle = \langle a + \lambda^{-1}\vec{u},\ \lambda \rangle$$
when $\lambda \neq 0$, every element of $\widehat{E}$ can be obtained from $i(\overrightarrow{E})$ and $j(a)$, as spelled out next, and since $i(\overrightarrow{E}) \cap \mathbb{R}\,j(a) = \{\vec{0}\}$, we have $\widehat{E} = i(\overrightarrow{E}) \oplus \mathbb{R}\,j(a)$.
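The case-based addition of lemma 10.1.1 becomes ordinary componentwise addition once we encode elements of $\widehat{E}$ in coordinates. The following sketch (an assumed encoding, not from the text) represents $\langle a, \lambda \rangle$ by $(\lambda a, \lambda)$ and a vector $\vec{u}$ by $(\vec{u}, 0)$, with $\omega$ reading off the last coordinate.

```python
# Sketch (assumed encoding): <a, lam> |-> (lam * a, lam), vector u |-> (u, 0).
# Then the two cases of lemma 10.1.1 are both plain componentwise addition.

def weighted_point(a, lam):
    return tuple(lam * ai for ai in a) + (lam,)

def vector(u):
    return tuple(u) + (0.0,)

def omega(z):
    return z[-1]

def add(z1, z2):
    return tuple(c1 + c2 for c1, c2 in zip(z1, z2))

# lam1 + lam2 = 0: the sum is the translation by lam1 * (a1 - a2).
z = add(weighted_point((4.0, 0.0), 2.0), weighted_point((1.0, 0.0), -2.0))
assert z == vector((6.0, 0.0)) and omega(z) == 0.0

# lam1 + lam2 != 0: the sum is the barycenter carrying weight lam1 + lam2.
z = add(weighted_point((0.0, 0.0), 1.0), weighted_point((3.0, 0.0), 1.0))
assert z == weighted_point((1.5, 0.0), 2.0) and omega(z) == 2.0
```

This encoding is exactly the "extra coordinate" trick that the chapter justifies rigorously.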

Indeed, we get any arbitrary element $\langle b, \mu \rangle$, with $\mu \neq 0$, by picking $\lambda = \mu$ and $\vec{u} = \mu\,\overrightarrow{ab}$, and we get any arbitrary vector $\vec{v} \in \overrightarrow{E}$ by picking $\lambda = 0$ and $\vec{u} = \vec{v}$. Thus, $\widehat{E} = i(\overrightarrow{E}) + \mathbb{R}\,j(a)$, and since $i(\overrightarrow{E}) \cap \mathbb{R}\,j(a) = \{\vec{0}\}$, we have $\widehat{E} = i(\overrightarrow{E}) \oplus \mathbb{R}\,j(a)$, for every $a \in E$.

The following diagram illustrates the embedding of the affine space $E$ into the vector space $\widehat{E}$, when $E$ is an affine plane.

Figure 10.1: Embedding an affine space $(E, \overrightarrow{E})$ into a vector space $\widehat{E}$ (showing a weighted point $\langle a, \lambda \rangle$, the affine hyperplane $j(E) = \omega^{-1}(1)$ containing $j(a) = \langle a, 1 \rangle$, the origin $\Omega$, a vector $\vec{u}$, and the hyperplane $i(\overrightarrow{E}) = \omega^{-1}(0)$).

Note that $\widehat{E}$ is isomorphic to $\overrightarrow{E} \cup (E \times \mathbb{R}^*)$ (where $\mathbb{R}^* = \mathbb{R} - \{0\}$). Other authors use the notation $E_*$ for $\widehat{E}$. Ramshaw calls the linear form $\omega \colon \widehat{E} \to \mathbb{R}$ a weight (or flavor), and he says that an element $z \in \widehat{E}$ such that $\omega(z) = \lambda$ is $\lambda$-heavy (or has flavor $\lambda$) ([65]). The elements of $j(E)$ are 1-heavy and are called points, and the elements of $i(\overrightarrow{E})$ are 0-heavy and are called vectors. In general, the $\lambda$-heavy elements all belong to the hyperplane $\omega^{-1}(\lambda)$ parallel to $i(\overrightarrow{E})$. Thus, intuitively, we can think of $\widehat{E}$ as a stack of parallel hyperplanes, one for each $\lambda$.

This stack is a little bit like an infinite stack of very thin pancakes! There are two privileged pancakes: one corresponding to $E$, for $\lambda = 1$, and one corresponding to $\overrightarrow{E}$, for $\lambda = 0$.

From now on, we will identify $j(E)$ and $E$, and $i(\overrightarrow{E})$ and $\overrightarrow{E}$. We will also write $\lambda a$ instead of $\langle a, \lambda \rangle$, which we will call a weighted point, and write $1a$ just as $a$. When we want to be more precise, we may also write $\langle a, 1 \rangle$ as $\overline{a}$. In particular, when we consider the homogenized version $\widehat{\mathbb{A}}$ of the affine space $\mathbb{A}$ associated with the field $\mathbb{R}$ considered as an affine space, we write $\overline{\lambda}$ for $\langle \lambda, 1 \rangle$, when viewing $\lambda$ as a point in both $\mathbb{A}$ and $\widehat{\mathbb{A}}$, and simply $\lambda$, when viewing $\lambda$ as a vector in $\mathbb{R}$ and in $\widehat{\mathbb{A}}$. The elements of $\widehat{\mathbb{A}}$ are called Bézier sites by Ramshaw. As an example, in $\mathbb{A}$ the expression $2 + 3$ denotes the real number 5, but in $\widehat{\mathbb{A}}$ the expression $\overline{2} + \overline{3}$ makes sense: it is the weighted point $\langle 2.5,\ 2 \rangle$, the middle point of the segment $[2, 3]$ with weight 2, which can be denoted as $2\,\overline{2.5}$. However, $\overline{2} + \overline{3}$ does not make sense in $\mathbb{A}$, since it is not a barycentric combination.

We can now justify rigorously the programming trick of the introduction of an extra coordinate to distinguish between points and vectors. First, we make a few observations. From lemma 10.1.1, for every $a \in E$, every element of $\widehat{E}$ can be written uniquely as $\vec{u} + \lambda a$. We go one step further, in view of the fact that
$$a + \vec{u} = \langle a + \vec{u},\ 1 \rangle = \langle a, 1 \rangle + \vec{u},$$
and since we are identifying $a + \vec{u}$ with $\langle a + \vec{u}, 1 \rangle$ (under the injection $j$), in the simplified notation the above reads $a + \vec{u} = a + \vec{u}$, so the notation is consistent. However, since
$$\lambda \overline{a} + \vec{u} = \langle a + \lambda^{-1}\vec{u},\ \lambda \rangle,$$
we will refrain from writing $\lambda \overline{a} + \vec{u}$ as $\lambda a + \vec{u}$, because we find it too confusing. We also denote $\lambda a + (-\mu) b$ as $\lambda a - \mu b$. This will be convenient when dealing with derivatives in section 10.5.

Given any family $(a_i)_{i \in I}$ of points in $E$ and any family $(\lambda_i)_{i \in I}$ of scalars in $\mathbb{R}$,
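The $\overline{2} + \overline{3}$ example can be computed directly in the coordinate encoding of $\widehat{\mathbb{A}}$ (an assumed encoding, as in the earlier sketches: a point $a$ is $(\lambda a, \lambda)$ with $\lambda = 1$).

```python
# Sketch: in A-hat, with points encoded as (lam * a, lam), the sum of the
# points 2 and 3 is the weighted point <2.5, 2>, not a point of A.

def point(a):                     # the point a of A, i.e. <a, 1>
    return (float(a), 1.0)

def add(z1, z2):
    return (z1[0] + z2[0], z1[1] + z2[1])

s = add(point(2), point(3))
assert s == (5.0, 2.0)            # coordinates (lam * a, lam)
assert s[0] / s[1] == 2.5         # the underlying point is 2.5, with weight 2
```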
it is easily shown by induction on the size of $I$ that the following holds:

(1) If $\sum_{i \in I} \lambda_i = 0$, then
$$\sum_{i \in I} \langle a_i, \lambda_i \rangle = \sum_{i \in I} \lambda_i\,\overrightarrow{ba_i},$$
the translation associated with this vector, which is independent of the choice of $b \in E$;

(2) If $\sum_{i \in I} \lambda_i \neq 0$, then
$$\sum_{i \in I} \langle a_i, \lambda_i \rangle = \Big\langle \sum_{i \in I} \frac{\lambda_i}{\sum_{i \in I} \lambda_i}\,a_i,\ \sum_{i \in I} \lambda_i \Big\rangle.$$

Thus, we can make sense of $\sum_{i \in I} \langle a_i, \lambda_i \rangle$ regardless of the value of $\sum_{i \in I} \lambda_i$, and we see how barycenters reenter the scene quite naturally. When $\sum_{i \in I} \lambda_i = 1$, the element $\sum_{i \in I} \langle a_i, \lambda_i \rangle$ belongs to the hyperplane $\omega^{-1}(1)$, and thus it is a point. When $\sum_{i \in I} \lambda_i = 0$, the linear combination of points $\sum_{i \in I} \lambda_i a_i$ is a vector. When $I = \{1, \ldots, n\}$, we allow ourselves to write $\lambda_1 a_1 + \cdots + \lambda_n a_n$, where some of the occurrences of $+$ can be replaced by $-$, for the combination in which the scalars of the terms affected by $-$ are negated. In fact, we have the following slightly more general property, which is left as an exercise.

Lemma 10.1.2. Given any affine space $(E, \overrightarrow{E})$, for any family $(a_i)_{i \in I}$ of points in $E$, any family $(\lambda_i)_{i \in I}$ of scalars in $\mathbb{R}$, and any family $(\vec{v}_j)_{j \in J}$ of vectors in $\overrightarrow{E}$, with $I \cap J = \emptyset$, the following properties hold:

(1) If $\sum_{i \in I} \lambda_i = 0$, then
$$\sum_{i \in I} \langle a_i, \lambda_i \rangle + \sum_{j \in J} \vec{v}_j = \sum_{i \in I} \lambda_i\,\overrightarrow{ba_i} + \sum_{j \in J} \vec{v}_j,$$
a vector independent of $b$;

(2) If $\sum_{i \in I} \lambda_i \neq 0$, then
$$\sum_{i \in I} \langle a_i, \lambda_i \rangle + \sum_{j \in J} \vec{v}_j = \Big\langle \sum_{i \in I} \frac{\lambda_i}{\sum_{i \in I} \lambda_i}\,a_i + \sum_{j \in J} \frac{\vec{v}_j}{\sum_{i \in I} \lambda_i},\ \sum_{i \in I} \lambda_i \Big\rangle.$$

Proof. By induction on the size of $I$ and the size of $J$, using the basic properties of barycenters from chapter 2.

The above formulae show that we have some kind of extended barycentric calculus. Operations on weighted points and vectors were introduced by H. Grassmann, in his book published in 1844!
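The extended barycentric calculus can be sketched as a small evaluator (illustrative helpers, coordinates in $\mathbb{R}^2$): a combination $\sum_i \lambda_i a_i + \sum_j \vec{v}_j$ yields a weighted point when $\sum_i \lambda_i \neq 0$ and a vector when $\sum_i \lambda_i = 0$, as in lemma 10.1.2.

```python
# Sketch of the extended barycentric calculus of lemma 10.1.2 in R^2.

def combine(weighted_points, vectors=()):
    """weighted_points: list of (a, lam) with a a coordinate tuple.
    Returns ('point', barycenter, total_weight) or ('vector', v)."""
    dim = len(weighted_points[0][0])
    coords = [0.0] * dim
    total = 0.0
    for a, lam in weighted_points:
        total += lam
        for k in range(dim):
            coords[k] += lam * a[k]
    for v in vectors:
        for k in range(dim):
            coords[k] += v[k]
    if total != 0.0:
        return ('point', tuple(c / total for c in coords), total)
    return ('vector', tuple(coords))

# Weights summing to 1: an ordinary barycenter (a point).
assert combine([((0.0, 0.0), 0.5), ((2.0, 4.0), 0.5)]) == ('point', (1.0, 2.0), 1.0)
# Weights summing to 0: a vector (here a1 - a2).
assert combine([((5.0, 1.0), 1.0), ((2.0, 0.0), -1.0)]) == ('vector', (3.0, 1.0))
```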

10.2 Affine Frames of $E$ and Bases of $\widehat{E}$

There is also a nice relationship between affine frames in $(E, \overrightarrow{E})$ and bases of $\widehat{E}$, stated in the following lemma.

Lemma 10.2.1. Given any affine space $(E, \overrightarrow{E})$, for any affine frame $(a_0, (\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}))$ for $E$, the family $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}, a_0)$ is a basis for $\widehat{E}$, and for any affine frame $(a_0, \ldots, a_m)$ for $E$, the family $(a_0, \ldots, a_m)$ is a basis for $\widehat{E}$. Furthermore, given any element $\langle x, \lambda \rangle \in \widehat{E}$, if
$$x = a_0 + x_1\,\overrightarrow{a_0a_1} + \cdots + x_m\,\overrightarrow{a_0a_m}$$
over the affine frame $(a_0, (\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}))$ in $E$, then the coordinates of $\langle x, \lambda \rangle$ over the basis $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}, a_0)$ in $\widehat{E}$ are
$$(\lambda x_1, \ldots, \lambda x_m, \lambda).$$
For any vector $\vec{v} \in \overrightarrow{E}$, if
$$\vec{v} = v_1\,\overrightarrow{a_0a_1} + \cdots + v_m\,\overrightarrow{a_0a_m}$$
over the basis $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m})$, then over the basis $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}, a_0)$ the coordinates of $\vec{v}$ are
$$(v_1, \ldots, v_m, 0),$$
and since
$$\vec{v} = -(v_1 + \cdots + v_m)\,a_0 + v_1\,a_1 + \cdots + v_m\,a_m$$
with respect to the affine basis $(a_0, \ldots, a_m)$, its coordinates w.r.t. the basis $(a_0, \ldots, a_m)$ in $\widehat{E}$ are $(-(v_1 + \cdots + v_m), v_1, \ldots, v_m)$. If the barycentric coordinates of $a$ w.r.t. the affine basis $(a_0, \ldots, a_m)$ in $E$ are $(\lambda_0, \ldots, \lambda_m)$, with $\lambda_0 + \cdots + \lambda_m = 1$, then the coordinates of $\langle a, \lambda \rangle$ w.r.t. the basis $(a_0, \ldots, a_m)$ in $\widehat{E}$ are
$$(\lambda\lambda_0, \ldots, \lambda\lambda_m).$$

Proof. We sketch parts of the proof, leaving the details as an exercise. If we assume that we have a nontrivial linear combination
$$\lambda_1\,\overrightarrow{a_0a_1} + \cdots + \lambda_m\,\overrightarrow{a_0a_m} + \mu\,a_0 = \vec{0},$$
then, if $\mu \neq 0$, we have
$$\lambda_1\,\overrightarrow{a_0a_1} + \cdots + \lambda_m\,\overrightarrow{a_0a_m} + \mu\,a_0 = \langle a_0 + \mu^{-1}\lambda_1\,\overrightarrow{a_0a_1} + \cdots + \mu^{-1}\lambda_m\,\overrightarrow{a_0a_m},\ \mu \rangle,$$
which is never the null vector; thus $\mu = 0$.

But then, since $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m})$ is a basis of $\overrightarrow{E}$, we must also have $\lambda_i = 0$ for all $i$, $1 \leq i \leq m$, showing that $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}, a_0)$ is a basis of $\widehat{E}$.

Given any element $\langle x, \lambda \rangle \in \widehat{E}$, if
$$x = a_0 + x_1\,\overrightarrow{a_0a_1} + \cdots + x_m\,\overrightarrow{a_0a_m}$$
over the affine frame $(a_0, (\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}))$ in $E$, then, in view of the definition of $+$, we have
$$\langle x, \lambda \rangle = \langle a_0, \lambda \rangle + \lambda x_1\,\overrightarrow{a_0a_1} + \cdots + \lambda x_m\,\overrightarrow{a_0a_m},$$
which shows that the coordinates of $\langle x, \lambda \rangle$ over the basis $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}, a_0)$ in $\widehat{E}$ are $(\lambda x_1, \ldots, \lambda x_m, \lambda)$. In particular, if $(x_1, \ldots, x_m)$ are the coordinates of $x$ w.r.t. the affine frame $(a_0, (\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m}))$ in $E$, then $(x_1, \ldots, x_m, 1)$ are the coordinates of $x$ in $\widehat{E}$, i.e., the last coordinate is 1.

Figure 10.2: The basis $(\overrightarrow{a_0a_1}, \overrightarrow{a_0a_2}, a_0)$ corresponding to the affine frame $(a_0, a_1, a_2)$ in $E$.
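The coordinate rules of lemma 10.2.1 are easy to state in code (a sketch with assumed helper names, for a frame with origin $a_0$):

```python
# Sketch of lemma 10.2.1: coordinates over the basis (a0a1, ..., a0am, a0)
# of E-hat, given frame coordinates in E.

def hat_coords_of_weighted_point(x_coords, lam):
    """<x, lam> has coordinates (lam*x1, ..., lam*xm, lam)."""
    return tuple(lam * xi for xi in x_coords) + (lam,)

def hat_coords_of_vector(v_coords):
    """A vector keeps its coordinates and gets last coordinate 0."""
    return tuple(v_coords) + (0.0,)

assert hat_coords_of_weighted_point((2.0, 5.0), 1.0) == (2.0, 5.0, 1.0)   # a point
assert hat_coords_of_weighted_point((2.0, 5.0), 3.0) == (6.0, 15.0, 3.0)  # weight 3
assert hat_coords_of_vector((1.0, -1.0)) == (1.0, -1.0, 0.0)
```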

If a vector $\vec{u}$ has coordinates $(u_1, \ldots, u_m)$ with respect to the basis $(\overrightarrow{a_0a_1}, \ldots, \overrightarrow{a_0a_m})$ in $\overrightarrow{E}$, then $\vec{u}$ has coordinates $(u_1, \ldots, u_m, 0)$ in $\widehat{E}$, i.e., the last coordinate is 0.

Figure 10.3: The basis $(a_0, a_1, a_2)$ in $E$ viewed as a basis in $\widehat{E}$.

10.3 Extending Affine Maps to Linear Maps

Roughly, the vector space $\widehat{E}$ has the property that for any vector space $\overrightarrow{F}$ and any affine map $f \colon E \to \overrightarrow{F}$, there is a unique linear map $\widehat{f} \colon \widehat{E} \to \overrightarrow{F}$ extending $f \colon E \to \overrightarrow{F}$. As a consequence, given two affine spaces $E$ and $F$, every affine map $f \colon E \to F$ extends uniquely to a linear map $\widehat{f} \colon \widehat{E} \to \widehat{F}$. Other authors use the notation $f_*$ for $\widehat{f}$. First, we define rigorously the notion of homogenization of an affine space.

Definition 10.3.1. Given any affine space $(E, \overrightarrow{E})$, a homogenization (or linearization) of $(E, \overrightarrow{E})$ is a triple $\langle \widehat{E}, j, \omega \rangle$, where $\widehat{E}$ is a vector space, $j \colon E \to \widehat{E}$ is an injective affine map with associated injective linear map $i \colon \overrightarrow{E} \to \widehat{E}$, and $\omega \colon \widehat{E} \to \mathbb{R}$ is a linear form such that $\omega^{-1}(0) = i(\overrightarrow{E})$ and $\omega^{-1}(1) = j(E)$, and such that for every vector space $\overrightarrow{F}$ and every affine map $f \colon E \to \overrightarrow{F}$, there is a unique linear map $\widehat{f} \colon \widehat{E} \to \overrightarrow{F}$ extending $f$, i.e., with $f = \widehat{f} \circ j$.

Thus, $j(E) = \omega^{-1}(1)$ is an affine hyperplane with direction $i(\overrightarrow{E}) = \omega^{-1}(0)$. Note that we could have defined a homogenization of an affine space $(E, \overrightarrow{E})$ as a triple $\langle \widehat{E}, H, j \rangle$, where $\widehat{E}$ is a vector space, $H$ is an affine hyperplane in $\widehat{E}$, and $j \colon E \to \widehat{E}$ is an injective affine map such that $j(E) = H$, and such that the universal property stated above holds. However, definition 10.3.1 is more convenient for our purposes, since it makes the notion of weight more evident. As usual, objects defined by a universal property are unique up to isomorphism; this property is left as an exercise.

The obvious candidate for $\widehat{E}$ is the vector space $\widehat{E}$ that we just constructed. The next lemma will indeed show that $\widehat{E}$ has the required extension property.

Lemma 10.3.2. Given any affine space $(E, \overrightarrow{E})$ and any vector space $\overrightarrow{F}$, for any affine map $f \colon E \to \overrightarrow{F}$, there is a unique linear map $\widehat{f} \colon \widehat{E} \to \overrightarrow{F}$ extending $f$, such that
$$\widehat{f}(\vec{u} + \lambda a) = \lambda f(a) + \overrightarrow{f}(\vec{u})$$
for all $a \in E$, all $\vec{u} \in \overrightarrow{E}$, and all $\lambda \in \mathbb{R}$, where $\overrightarrow{f}$ is the linear map associated with $f$.

Proof. Assuming that $\widehat{f}$ exists, recall that from lemma 10.1.1, for every $a \in E$, every element of $\widehat{E}$ can be written uniquely as $\vec{u} + \lambda a$. By linearity of $\widehat{f}$ and since $\widehat{f}$ extends $f$, we have
$$\widehat{f}(\vec{u} + \lambda a) = \widehat{f}(\vec{u}) + \lambda \widehat{f}(a) = \widehat{f}(\vec{u}) + \lambda f(a).$$
If $\lambda = 1$, since $a + \vec{u}$ and $\vec{u} + a$ are identified, and since $\widehat{f}$ extends $f$, we must have
$$f(a) + \widehat{f}(\vec{u}) = \widehat{f}(a) + \widehat{f}(\vec{u}) = \widehat{f}(a + \vec{u}) = f(a + \vec{u}) = f(a) + \overrightarrow{f}(\vec{u}),$$
and thus $\widehat{f}(\vec{u}) = \overrightarrow{f}(\vec{u})$ for all $\vec{u} \in \overrightarrow{E}$. It follows that
$$\widehat{f}(\vec{u} + \lambda a) = \lambda f(a) + \overrightarrow{f}(\vec{u}),$$
which proves the uniqueness of $\widehat{f}$. On the other hand, the map $\widehat{f}$ defined as above is clearly a linear map extending $f$. When $\lambda \neq 0$, we also have
$$\widehat{f}(\vec{u} + \lambda a) = \lambda f(a + \lambda^{-1}\vec{u}).$$

When $\lambda = 0$, the formula $\widehat{f}(\vec{u} + \lambda a) = \lambda f(a + \lambda^{-1}\vec{u})$ no longer applies, but we still have $\widehat{f}(\vec{u}) = \overrightarrow{f}(\vec{u})$. Lemma 10.3.2 shows that $\langle \widehat{E}, j, \omega \rangle$ is a homogenization of $(E, \overrightarrow{E})$. As a corollary, we obtain the following lemma.

Lemma 10.3.3. Given two affine spaces $E$ and $F$ and an affine map $f \colon E \to F$, there is a unique linear map $\widehat{f} \colon \widehat{E} \to \widehat{F}$ extending $f$, as in the diagram
$$\begin{array}{ccc} E & \xrightarrow{\ j\ } & \widehat{E} \\ f \downarrow & & \downarrow \widehat{f} \\ F & \xrightarrow{\ j\ } & \widehat{F} \end{array}$$
such that
$$\widehat{f}(\vec{u} + \lambda a) = \overrightarrow{f}(\vec{u}) + \lambda f(a)$$
for all $a \in E$, all $\vec{u} \in \overrightarrow{E}$, and all $\lambda \in \mathbb{R}$, where $\overrightarrow{f}$ is the linear map associated with $f$.

Proof. Consider the vector space $\widehat{F}$ and the affine map $j \circ f \colon E \to \widehat{F}$. By lemma 10.3.2, there is a unique linear map $\widehat{f} \colon \widehat{E} \to \widehat{F}$ extending $j \circ f$, and thus extending $f$.

Note that $\widehat{f} \colon \widehat{E} \to \widehat{F}$ has the property that $\widehat{f}(\overrightarrow{E}) \subseteq \overrightarrow{F}$. More generally, since
$$\widehat{f}(\vec{u} + \lambda a) = \overrightarrow{f}(\vec{u}) + \lambda f(a),$$
the linear map $\widehat{f}$ is weight-preserving. Also observe that we recover $f$ from $\widehat{f}$ by letting $\lambda = 1$, that is, $f(a + \vec{u}) = \widehat{f}(\vec{u} + a)$, and that when $\lambda \neq 0$,
$$\widehat{f}(\vec{u} + \lambda a) = \lambda f(a + \lambda^{-1}\vec{u}).$$

From a practical point of view, lemma 10.3.3 shows us how to homogenize an affine map to turn it into a linear map between the two homogenized spaces. Assume that $E$ and $F$ are of finite dimension, that $(a_0, (\vec{u}_1, \ldots, \vec{u}_n))$ is an affine frame of $E$, with origin $a_0$, and that $(b_0, (\vec{v}_1, \ldots, \vec{v}_m))$ is an affine frame of $F$, with origin $b_0$. Then, with respect to the two bases $(\vec{u}_1, \ldots, \vec{u}_n, a_0)$ in $\widehat{E}$ and $(\vec{v}_1, \ldots, \vec{v}_m, b_0)$ in $\widehat{F}$, a linear map $h \colon \widehat{E} \to \widehat{F}$ is given by an

. 1). since − → → → f (− + λa) = f (− ) + λf (a). . . 1)⊤ and Y = (y1 . EMBEDDING AN AFFINE SPACE IN A VECTOR SPACE (m + 1) × (n + 1) matrix A. . 0. 1). . points are represented by vectors whose last cooru u dinate is 1. for X = (x1 . . . . . . and vectors are represented by vectors whose last coordinate is 0. consider the following aﬃne map f : A2 → A2 deﬁned as follows: y1 = ax1 + bx2 + µ1 . For example. . 1) → → of f (a0 ) with respect to the basis (−1 . . . . . the last column contains the coordinates (µ1 . 1)⊤ . The matrix of f is a b µ1 c d µ2 0 0 1 and we have y1 a b µ1 x1 y2 = c d µ2 x2 1 0 0 1 1 y1 a b µ1 x1 y2 = c d µ2 x2 y3 0 0 1 x3 In E. −n . . 1) and (y1 . . . . a0 ) in E. µm . − ) and (− . . . u u given any x ∈ E and y ∈ F . . . . with coordinates (x1 . . .384 CHAPTER 10. . . and since u u v v 1 n 1 m → → f (a0 + − ) = f (− + a0 ). . with m occurrences of 0. the last row of the matrix A = M(f ) with respect to the given bases is (0. . . u u → → since over the basis (−1 . . . . ym . . − ). . − . the submatrix of A obtained by deleting v vm − → the last row and the last column is the matrix of the linear map f with respect to the bases → → → → (− . ym. . we have . . we have y = f (x) iﬀ Y = AX. xn . xn . 0. y2 = cx1 + dx2 + µ2 . b0 ). . . If this linear map h is equal to the homogenized version f of an aﬃne map f .

. . am ) + → → λj fS (−1 .. for λi = 0. As a corollary. Em j×···×j f / F j (E)m f / F .. λm ∈ R.. . am ) + v vm S⊆{1.. . −k ). . am ∈ E. .. ... .2.. Given any two aﬃne spaces E and F and an m-aﬃne map f : E m → F .3). − → Lemma 10. y3 = x3 . 10. . . if → → f (a1 + −1 . . . . vi vi − → → → for all a1 .4 From Multiaﬃne Maps to Multilinear Maps We now show how to homogenize multiaﬃne maps.4. − + λm am ) = λ1 · · · λm f (a1 + λ−1 −1 . v vm 1 ≤ i ≤ m. . .. . − ∈ E . am + − ) = f (a1 . .. for any m-aﬃne map − → − → f : E m → F . Section B. there is a unique m-linear map f : (E)m → F extending f . we obtain the following useful lemma. . − + λm am ) v = λ1 · · · λm f (a1 . − ∈ E .4. −k ).. . .. . .ik }. there is a unique m-linear map f : (E)m → F extending f as in the diagram below. . . where the fS are uniquely determined v vm multilinear maps (by lemma 4. . Given any aﬃne space E and any vector space F . . . . . .1.. .. .m}. .1. k≥1 j∈{1.m}. am ∈ E. k≥1 → → fS (−1 . . y2 = cx1 + dx2 + µ2 x3 .. . . and all λ1 ... we have → → → → vm f (−1 + λ1 a1 . . all −1 . . . Furthermore. and all −1 . . . The proof is very technical and can be found in Chapter B. . k=|S| S={i1 . k=|S| S={i1 . FROM MULTIAFFINE MAPS TO MULTILINEAR MAPS 385 which means that the homogeneous map f is is obtained from f by “adding the variable of homogeneity x3 ”: y1 = ax1 + bx2 + µ1 x3 .m} j ∈S / → → → − for all a1 .. . v 1 v m vm Proof. .. such that.. .4. . .10. am + λ−1 − ). then → → vm f (−1 + λ1 a1 . . vi vi S⊆{1.ik }. .2. Lemma 10. .

$$\widehat{f}(\vec{v}_1 + \lambda_1 a_1, \ldots, \vec{v}_m + \lambda_m a_m) = \lambda_1 \cdots \lambda_m\, f(a_1 + \lambda_1^{-1}\vec{v}_1, \ldots, a_m + \lambda_m^{-1}\vec{v}_m),$$
for $\lambda_i \neq 0$, $1 \leq i \leq m$. From a practical point of view, this formula shows us that $f$ is recovered from $\widehat{f}$ by setting $\lambda_i = 1$, for $1 \leq i \leq m$. Moreover, the homogenized version $\widehat{f}$ of an $m$-affine map $f$ is weight-multiplicative, in the sense that
$$\omega(\widehat{f}(z_1, \ldots, z_m)) = \omega(z_1) \cdots \omega(z_m),$$
for all $z_1, \ldots, z_m \in \widehat{E}$; this is immediate from lemma 10.4.2.

Using the characterization of multiaffine maps $f \colon E^m \to \overrightarrow{F}$ given by lemma 4.1.3, we can get an explicit formula for the homogenized version $\widehat{f}$ of $f$. For example, if we consider the affine space $\mathbb{A}$ with its canonical affine frame (the origin is 0, and the basis consists of the single vector 1), and $f \colon \mathbb{A} \times \mathbb{A} \to \mathbb{A}$ is the biaffine map defined such that
$$f(x_1, x_2) = a x_1 x_2 + b x_1 + c x_2 + d,$$
then the bilinear map $\widehat{f} \colon \widehat{\mathbb{A}} \times \widehat{\mathbb{A}} \to \widehat{\mathbb{A}}$ is given by
$$\widehat{f}((x_1, \lambda_1), (x_2, \lambda_2)) = (a x_1 x_2 + b x_1 \lambda_2 + c x_2 \lambda_1 + d \lambda_1 \lambda_2,\ \lambda_1 \lambda_2).$$
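The biaffine example can be checked numerically (a sketch with sample coefficients, not from the text): setting $\lambda_1 = \lambda_2 = 1$ recovers $f$, and $\widehat{f}$ is bilinear in each homogeneous argument.

```python
# Numeric sketch of the homogenized bilinear map for the biaffine
# f(x1, x2) = a*x1*x2 + b*x1 + c*x2 + d (sample coefficients):
# f_hat((x1, l1), (x2, l2)) = (a*x1*x2 + b*x1*l2 + c*x2*l1 + d*l1*l2, l1*l2).

a, b, c, d = 2.0, 3.0, 5.0, 7.0

def f(x1, x2):
    return a * x1 * x2 + b * x1 + c * x2 + d

def f_hat(z1, z2):
    (x1, l1), (x2, l2) = z1, z2
    return (a * x1 * x2 + b * x1 * l2 + c * x2 * l1 + d * l1 * l2, l1 * l2)

# Setting l1 = l2 = 1 recovers f:
assert f_hat((4.0, 1.0), (9.0, 1.0)) == (f(4.0, 9.0), 1.0)

# Spot check of linearity in the first argument (scaling by t):
t = 3.0
v, w = f_hat((4.0, 1.0), (9.0, 1.0))
assert f_hat((t * 4.0, t * 1.0), (9.0, 1.0)) == (t * v, t * w)
```

Note how the weight coordinate of the result is $\lambda_1 \lambda_2$, matching weight-multiplicativity.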

.. . If P (X1.. . . polynomial maps can also be homogenized. 1≤i≤n k → k v1 1 · · · vnn λm−p − k1 . j en we have → → f (a + −1 ... x2 ) is indeed recovered from f by setting λ1 = λ2 = 1. Xn ) = Q(X1 ... and we want to ﬁnd a homogeneous polynomial Q(X1 . . i1 · · · vn. 1). . we get: → h(− + λa) = λm b + v 1≤p≤m k1 +···+kn =p 0≤ki . i1 · · · vn. we can get an explicit formula for the homogenized → → version f of f .. .. − + λm a) = λ1 · · · λm b + v vm 1≤p≤m I1 ∪.2.. Z) = Z m P X1 Xn .p} Ii ∩Ij =∅. .. . → → → → vj E 1. . . .. . Since multiaﬃne maps can be homogenized. generalizing our previous example. ... . . . we know that for any m vectors − = v − +···+v − ∈ − . . . in in ∈In − → w |I1|. . . . . For the homogenized version h of the aﬃne polynomial w h associated with f . a. Z Z . − .... Xn ] is done as follows.. This is very useful in practice. ... If (a... . . .. . . .|In | . − → − → → for some b ∈ F . we obtain the expression for f by homogenizing the polynomials which → are the coeﬃcients of the − |I1 |.. FROM MULTIAFFINE MAPS TO MULTILINEAR MAPS 387 Note that f (x1 ...m} j ∈(I1 ∪.|In| .. we have e en → → f (−1 + λ1 a.4. . . with respect to the basis w → → (−1 . j e1 n. Xn .j≤n i1 ∈I1 v1. i=j 1≤i..∪In ={1.. such that P (X1 . ..|In| . ... . using the characterization of multiaﬃne maps f : E m → F given by lemma 4. Xn . we let Q(X1 . a + − ) = b + v vm 1≤p≤m I1 ∪. In fact. i=j 1≤i. . and since E = E ⊕ Ra.∪In ={1. . .kn ...∪In ) / λj − → w |I1 |. and some − |I1|... . w Remark: Recall that homogenizing a polynomial P (X1 . Xn . . in in ∈In j∈{1.p} Ii ∩Ij =∅.10. when E is of ﬁnite dimension. . (−1 . .3. Xn ) is of total degree p.j≤n i1 ∈I1 v1. In other words. Xn ) ∈ R[X1 .. 1 ) of E.. . Z) of total degree m ≥ p. − )) is an aﬃne frame for e en E.. ..|In | ∈ F . . ...
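The homogenization recipe $Q(X_1, \ldots, X_n, Z) = Z^m\, P(X_1/Z, \ldots, X_n/Z)$ described above can be sketched for a univariate polynomial on coefficient lists (the helper names are ours):

```python
# Sketch: homogenize a univariate polynomial P of degree p <= m to total
# degree m, i.e. Q(X, Z) = Z**m * P(X/Z), with P given as a coefficient
# list (P[k] is the coefficient of X**k).

def homogenize(P, m):
    """Return Q as a dict mapping exponent pairs (i, j), i + j = m, to coefficients."""
    return {(k, m - k): ck for k, ck in enumerate(P)}

def eval_Q(Q, x, z):
    return sum(c * x**i * z**j for (i, j), c in Q.items())

# P(X) = X**2 + 3*X + 5, homogenized to degree 3: X**2*Z + 3*X*Z**2 + 5*Z**3.
Q = homogenize([5.0, 3.0, 1.0], 3)
assert Q == {(0, 3): 5.0, (1, 2): 3.0, (2, 1): 1.0}

# Setting Z = 1 recovers P:
assert eval_Q(Q, 2.0, 1.0) == 2.0**2 + 3 * 2.0 + 5
```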

as − → However. λn ∈ R. but we prefer to be less pedantic. EMBEDDING AN AFFINE SPACE IN A VECTOR SPACE 10. Note that − → δ = 1 = a + 1 − a. . it will be convenient to denote a point in A (and in A. for any a ∈ R. . and we write simply a + 1 − a. of its m-polar form f : Am → E. we will write λ1 a1 + · · · + λn an . In this section. since F agrees with F on A. . Remark: when we write a + 1−a. t = 0. such that λ1 + · · · + λn = 0. and thus. as suggested in section 10. it is also more convenient to denote the vector ab as b − a. since we view A as embedded as a line in A) as a.5 Diﬀerentiating Aﬃne Polynomial Functions Using Their Homogenized Polar Forms. since we view R as embedded in A). we have F (a + tδ) − F (a) = F (a + tδ) − F (a). . t→0. t when t → 0. However. an ∈ E. to distinguish it from the vector a ∈ R (and a ∈ A. or as δ. . In this section. Osculating Flats In this section.1. t if it exists. with t ∈ R. . . One of the major beneﬁts of homogenization is that the derivatives of an aﬃne polynomial function F : A → E can be obtained in a very simple way from the homogenized version f : (A)m → E. . For any a ∈ A. lim F (a + tδ) − F (a) . following Ramshaw. . we need to see what is the limit of F (a + tδ) − F (a) . − → The vector 1 of A will be denoted as 1 . given a1 . the derivative DF (a) is the limit.388 CHAPTER 10. we mean a + 1 −a in A. remember that such combinations are vectors in E (and in E). When dealing with → − derivatives. and λ1 . we assume that E is a normed aﬃne space. t=0 λ1 a1 + · · · + λn an .

t=0 t lim m−1 However. . δ). . m m by multilinearity and symmetry. Since F (a + tδ) = f (a + tδ. .1. m where f is the homogenized version of the polar form f of F . However. . a + tδ ). . where E is an aﬃne space. . m−k k Proof. . . and F is the homogenized version of F . . . . the k-th derivative Dk F (a) can be computed from the homogenized polar form f of F as follows. a) by δ. . . t→0. . . since F (a + tδ) − F (a) = f (a + tδ. . δ). a. . . we have k=m F (a + tδ) − F (a) = m t f (a. and not a point in E. . . . and thus. F (a + tδ) − F (a) = mf (a. . . . . we showed that DF (a) = mf (a. the derivative DF (a) of F at a is − → a vector in E . . . . Given an aﬃne polynomial function F : A → E of polar degree m. δ) + m−1 k=2 m k tk f (a. m−1 This shows that the derivative of F at a ∈ A can be computed by evaluating the homogenized version f of the polar form f of F . . More generally. by replacing just one occurrence of a in f (a. the structure of E takes care of this. δ). we have DF (a) = DF (a). . Lemma 10. . since F (a + tδ) − F (a) is indeed a vector (remember our convention that − is an abbreviation for − ). . . A simple induction on k. a.5. . m−k k and thus. . δ ).10. .5. δ. δ. . . a. where 1 ≤ k ≤ m: Dk F (a) = m(m − 1) · · · (m − k + 1) f (a. a). a. since F extends F on A. DIFFERENTIATING AFFINE POLYNOMIAL FUNCTIONS 389 Recall that since F : A → E. . . . . . a + tδ ) − f (a. . . where E is a normed aﬃne space. we have the following useful lemma. a.

.2. . s. . a. . depends only on the k + 1 . . the previous lemma reads as Dk F (a) = mk f (a. s). m−k by multilinearity and symmetry. Lemma 10. for 0 ≤ k ≤ m. . 0 ≤ i ≤ m. we have Dk F (a) = 0 . . . and with the convention that mk = 0 when k > m. . r. δ ).. i m−i i and we conclude using the fact that f agrees with f on E. for any r. i m−i i s−r . Given an aﬃne polynomial function F : A → E of polar degree m. as mk = m(m − 1) · · · (m − k + 1). with r = s. EMBEDDING AN AFFINE SPACE IN A VECTOR SPACE − → When k > m. .. Since δ= we can expand mk (s − r)k i=k i=0 k (−1)k−i f (r. where 1 ≤ k ≤ m: Dk F (r) = Proof. the k-th derivative Dk F (r) can be computed from the polar form f of F as follows. Using the falling power notation. . the above lemma shows that the k-th derivative Dk F (r) of F at r. . . . s−r s−r s−r . s−r s−r k Dk F (r) = mk f r. s ∈ A. The falling powers mk have some interesting combinatorial properties of their own. . it is useful to introduce the falling power notation. Lemma 10. Since coeﬃcients of the form m(m−1) · · · (m−k + 1) occur a lot when taking derivatives. and gives more insight into what’s really going on. . . with m0 = 1. . It also extends fairly easily to the case when the domain A is replaced by a more general normed aﬃne space (to deﬁne surfaces). we get mk D F (r) = (s − r)k k i=k i=0 k (−1)k−i f (r. where E is a normed aﬃne space. . r..2 is usually derived via more traditional methods involving ﬁnite diﬀerences. We believe that the approach via polar forms is more conceptual. r. following Knuth. . .5. s. . δ. . . . s).5. If F is speciﬁed by the sequence of m + 1 control points bi = f (r m−i s i ). . . .390 CHAPTER 10. . m−k k We also get the following explicit formula in terms of control points. We deﬁne the falling power mk . and by induction on k.

. the formula of lemma 10. . s−r Thus. is determined by the two points b0. In order to see that the tangent at the current point F (t) deﬁned by the parameter t. s)). and it is given by DF (r) = − m m −→ (b1 − b0 ). . . More generally.1. . r) and b1. . . t. . . Later on when we deal with surfaces. = bk . and consider a map F : E → E. m−1 m−1 given by the de Casteljau algorithm. e the tangent at the point bm is the line determined by bm−1 and bm (provided that these points are distinct). . s).10. . δ) = m−1 s−r s−r m (f (t. Similarly. t.5. if b0 = b1 = . . .5. if b0 = b1 . This shows that when b0 and b1 are distinct. m−1 = f (t. the acceleration vector D2 F (r) is given by D2 F (r) = − −→ − m(m − 1) m(m − 1) −→ (b0 b2 − 2b0 b1 ) = (b2 − 2b1 + b0 ). t. Let us assume that E and E are normed aﬃne spaces. . .2 reads as follows: i=k mk k k D F (r) = (−1)k−i bi . e Similarly. 2 (s − r) (s − r)2 the last expression making sense in E. we have basically done all the work already. r) − f (t.m−1 ). we have DF (t) = m (b1. . the above reasoning can be used to show that the tangent at the point b0 is determined by the points b0 and bk+1 . . .m−1 − b0. . . However. Recall from . s−r m−1 m−1 and since f agrees with f on Am . we have justiﬁed the claims about tangents to B´zier curves made in section 5. bk In terms of the control points b0 . . . . . it will be necessary to generalize the above results to directional derivatives. t. . . m−1 = f (t. DIFFERENTIATING AFFINE POLYNOMIAL FUNCTIONS 391 control points b0 . and bk = bk+1 . bk . the tangent to the B´zier curve at the point b0 is the line determined by b0 and b1 . note that since δ= and DF (t) = mf (t. . k i (s − r) i=0 In particular. t. b0 b1 = s−r s−r the last expression making sense in E. then DF (r) is the velocity vector of F at b0 . . . .

392

CHAPTER 10. EMBEDDING AN AFFINE SPACE IN A VECTOR SPACE

− → → → − deﬁnition D.1.2, that if A is any open subset of E, for any a ∈ A, for any − = 0 in E , u → the directional derivative of F at a w.r.t. the vector − , denoted as Du F (a), is the limit, if u it exists, → F (a + t− ) − F (a) u lim , t→0,t∈U,t=0 t → where U = {t ∈ R | a + t− ∈ A}. u If F : E → E is a polynomial function of degree m, with polar form the symmetric multiaﬃne map f : E m → E, then → → F (a + t− ) − F (a) = F (a + t− ) − F (a), u u where F is the homogenized version of F , that is, the polynomial map F : E → E associated with the homogenized version f : (E)m → E of the polar form f : E m → E of F : E → E. Thus, Du F (a) exists iﬀ the limit → F (a + t− ) − F (a) u t→0, t=0 t lim exists, and in this case, this limit is Du F (a) = Du F (a). Furthermore, → → → F (a + t− ) = f (a + t− , . . . , a + t− ), u u u

m

and since

→ → → F (a + t− ) − F (a) = f (a + t− , . . . , a + t− ) − f (a, . . . , a), u u u

m m

**by multilinearity and symmetry, we have → → F (a + t− ) − F (a) = m t f (a, . . . , a, − ) + u u
**

m−1 k=m

k=2

m k

→ → tk f (a, . . . , a, − , . . . , − ), u u

m−k k

and thus,

Du F (a) = lim

→ F (a + t− ) − F (a) u → = mf (a, . . . , a, − ). u t→0, t=0 t

m−1

**However, we showed previously that Du F (a) = Du F (a), and thus, we showed that → u Du F (a) = mf (a, . . . , a, − ).
**

m−1

By a simple, induction, we can prove the following lemma.

10.5. DIFFERENTIATING AFFINE POLYNOMIAL FUNCTIONS

Lemma 10.5.3. Given an affine polynomial function $F : E \to \mathcal{E}$ of polar degree $m$, where $E$ and $\mathcal{E}$ are normed affine spaces, for any $k$ nonzero vectors $\vec u_1, \ldots, \vec u_k \in \vec E$, where $1 \le k \le m$, the $k$-th directional derivative $D_{\vec u_1} \ldots D_{\vec u_k} F(a)$ can be computed from the homogenized polar form $\widehat f$ of $F$ as follows:
$$D_{\vec u_1} \ldots D_{\vec u_k} F(a) = m^{\underline{k}}\, \widehat f(\underbrace{a, \ldots, a}_{m-k}, \vec u_1, \ldots, \vec u_k).$$

Lemma 10.5.3 is a generalization of lemma 10.5.1 to any domain $E$ which is a normed affine space. We are going to make use of this lemma to study local approximations of a polynomial map $F : E \to \mathcal{E}$ in the neighborhood of a point $F(a)$, where $a$ is any point in $E$. In order to be sure that the polar form $f : E^m \to \mathcal{E}$ is continuous, let us now assume that $E$ is of finite dimension. Since by lemma 10.5.3 the directional derivatives $D_{\vec u_1} \ldots D_{\vec u_k} F(a)$ exist for all $\vec u_1, \ldots, \vec u_k$ and all $k \ge 1$, and since $E$ is of finite dimension, all multiaffine maps are continuous, and the derivatives $D^k F$ exist for all $k \ge 1$ and are of class $C^\infty$ (recall that $D^k F(a)$ is a symmetric $k$-linear map, see Lang [48]). Furthermore, we know that
$$D^k F(a)(\vec u_1, \ldots, \vec u_k) = D_{\vec u_1} \ldots D_{\vec u_k} F(a).$$
Thus, lemma 10.5.3 actually shows that
$$D^k F(a)(\vec u_1, \ldots, \vec u_k) = m^{\underline{k}}\, \widehat f(\underbrace{a, \ldots, a}_{m-k}, \vec u_1, \ldots, \vec u_k).$$
This shows that $D^k F(a)$ is the symmetric $k$-linear map
$$(\vec u_1, \ldots, \vec u_k) \mapsto m^{\underline{k}}\, \widehat f(\underbrace{a, \ldots, a}_{m-k}, \vec u_1, \ldots, \vec u_k).$$

Of course, for $k > m$, the derivative $D^k F(a)$ is the null $k$-linear map.

Remark: As usual, for $k = 0$, we agree that $D^0 F(a) = F(a)$. We could also relax the condition that $E$ is of finite dimension, and assume that the polar form $f$ is a continuous map.

Now, let $a$ be any point in $E$. For any $k$, with $0 \le k \le m$, we can truncate the Taylor expansion of $F : E \to \mathcal{E}$ at $a$ at the $(k+1)$-th order, getting the polynomial map $G^k_a : E \to \mathcal{E}$ defined such that, for all $b \in E$,
$$G^k_a(b) = F(a) + \frac{1}{1!}\, D^1 F(a)(\overrightarrow{ab}) + \cdots + \frac{1}{k!}\, D^k F(a)(\overrightarrow{ab}^{\,k}).$$

CHAPTER 10. EMBEDDING AN AFFINE SPACE IN A VECTOR SPACE

The polynomial function $G^k_a$ agrees with $F$ to $k$th order at $a$, which means that
$$D^i G^k_a(a) = D^i F(a),$$
for all $i$, $0 \le i \le k$. We say that $G^k_a$ osculates $F$ to $k$th order at $a$. For example, in the case of a curve $F : \mathbb{A} \to \mathcal{E}$, for $k = 1$, the map $G^1_a$ is simply the affine map determined by the tangent line at $a$, and for $k = 2$, $G^2_a$ is a parabola tangent to the curve $F$ at $a$.

As pointed out by Ramshaw, it is tempting to believe that the polar form $g^k_a$ of $G^k_a$ is simply obtained from the polar form $f$ of $F$ by fixing $m-k$ arguments of $f$ at the point $a$, more precisely, that
$$g^k_a(b_1, \ldots, b_k) = f(b_1, \ldots, b_k, \underbrace{a, \ldots, a}_{m-k}),$$
for all $b_1, \ldots, b_k \in E$. Unfortunately, this is false, even for curves. The problem is a silly one: it has to do with the falling power $m^{\underline{k}}$. For example, if we consider a parabola $F : \mathbb{A} \to \mathbb{A}^2$, it is easy to see from Taylor's formula that
$$G^1_b(a) = 2f(a, b) - f(b, b) \quad\text{and}\quad G^1_a(b) = 2f(a, b) - f(a, a),$$
which means that $f(a, b)$ is both the middle of the two line segments $(f(a, a), G^1_a(b))$ and $(f(b, b), G^1_b(a))$, which happen to be tangent to $F$ at $F(a)$ and $F(b)$. Unfortunately, it is not true that $G^1_a(b) = G^1_b(a) = f(a, b)$.

It is possible to fix this problem and to find the relationship between the polar forms $f$ and $g^k_a$, but this is done most conveniently using symmetric tensors, and will be postponed until section 11.1 (see lemma B.4.3). The annoying coefficients $m^{\underline{k}}$ can also be washed out, if we consider the affine subspaces spanned by the range of $G^k_a$, instead of osculating curves or surfaces.

Definition 10.5.4. Given any two normed affine spaces $E$ and $\mathcal{E}$, where $E$ is of finite dimension, for any polynomial map $F : E \to \mathcal{E}$ of degree $m$, for any $a \in E$, for any $k$, with $0 \le k \le m$, the polynomial map $G^k_a : E \to \mathcal{E}$ is defined such that, for all $b \in E$,
$$G^k_a(b) = F(a) + \frac{1}{1!}\, D^1 F(a)(\overrightarrow{ab}) + \cdots + \frac{1}{k!}\, D^k F(a)(\overrightarrow{ab}^{\,k}).$$
We say that $G^k_a$ osculates $F$ to $k$th order at $a$. The osculating flat $\mathrm{Osc}^k F(a)$ is the affine subspace of $\mathcal{E}$ generated by the range of $G^k_a$.

If $F : \mathbb{A} \to \mathcal{E}$ is a curve, then we say that $F$ is nondegenerate iff $\mathrm{Osc}^k F(a)$ has dimension $k$ for all $a \in \mathbb{A}$. In such a case, the flat $\mathrm{Osc}^1 F(a)$ is the tangent line to $F$ at $a$, and $\mathrm{Osc}^2 F(a)$ is the osculating plane to $F$ at $a$, i.e., the plane determined by the point $F(a)$, the velocity vector $D^1 F(a)$, and the acceleration vector $D^2 F(a)$. The osculating plane is the usual notion used in differential geometry. The osculating plane to the curve $F$ at the point $F(a)$ is the limit of any plane containing the tangent line at $F(a)$ and any other point $F(b)$ on the curve $F$, when $b$ approaches $a$.
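These identities are easy to test numerically. The sketch below (an illustration with a hand-coded blossom, not part of the text's argument) uses the parabola $F(t) = (t, t^2)$, whose polar form is $f(t_1, t_2) = ((t_1+t_2)/2,\, t_1 t_2)$, and checks that $G^1_a(b) = 2f(a,b) - f(a,a)$, that $G^1_a(b) \neq G^1_b(a)$, and that $f(a,b)$ is the midpoint of $f(a,a)$ and $G^1_a(b)$.

```python
# Parabola F(t) = (t, t^2) with polar form f(t1, t2) = ((t1+t2)/2, t1*t2).

def F(t):
    return (t, t*t)

def f(t1, t2):
    return ((t1 + t2) / 2, t1 * t2)

def G1(a, b):
    # First-order Taylor truncation at a: G1_a(b) = F(a) + F'(a)*(b - a),
    # with F'(a) = (1, 2a).
    return (a + (b - a), a*a + 2*a*(b - a))

a, b = 0.5, 2.0

lhs = G1(a, b)
rhs = tuple(2*x - y for x, y in zip(f(a, b), f(a, a)))
print(lhs == rhs)            # True: G1_a(b) = 2 f(a,b) - f(a,a)
print(G1(a, b) == G1(b, a))  # False: the two tangent maps differ

# f(a,b) is the midpoint of f(a,a) and G1_a(b):
mid = tuple((x + y)/2 for x, y in zip(f(a, a), G1(a, b)))
print(mid == f(a, b))        # True
```

The chosen values of $a$ and $b$ are arbitrary; any pair exhibits the same behavior.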

We say that Gk osculates F to kth order at a. The osculating ﬂat Osck F (a) is the aﬃne a subspace of E generated by the range of Gk . a If F : A → E is a curve, then we say that F is nondegenerate iﬀ Osck F (a) has dimension k for all a ∈ A. In such a case, the ﬂat Osc1 F (a) is the tangent line to F at a, and Osc2 F (a) is the osculating plane to F at a, i.e., the plane determined by the point F (a), the velocity vector D1 F (a), and the acceleration vector D2F (a). The osculating plane is the usual notion used in diﬀerential geometry. The osculating plane to the curve F at the point F (a) is the limit of any plane containing the tangent line at F (a) and any other point F (b) on the curve F , when b approaches a.


If $F : \mathcal{P} \to \mathcal{E}$ is a surface, assuming that $(\Omega, (\vec e_1, \vec e_2))$ is an affine frame of $\mathcal{P}$, recall that we denote $D_{\vec e_{j_k}} \ldots D_{\vec e_{j_1}} F(a)$ as
$$\frac{\partial^k F}{\partial x_{j_1} \ldots \partial x_{j_k}}(a).$$
These are the partial derivatives at $a$. Also, letting $b = a + h_1 \vec e_1 + h_2 \vec e_2$, in terms of partial derivatives, the truncated Taylor expansion is written as
$$G^k_a(b) = F(a) + \sum_{1 \le i_1 + i_2 \le k} \frac{h_1^{i_1} h_2^{i_2}}{i_1!\, i_2!} \left(\frac{\partial}{\partial x_1}\right)^{i_1} \left(\frac{\partial}{\partial x_2}\right)^{i_2} F(a).$$
It is not too difficult to show that there are
$$\frac{(k+1)(k+2)}{2} - 1 = \frac{k(k+3)}{2}$$
partial derivatives in the above expression, and we say that the surface $F$ is nondegenerate iff $\mathrm{Osc}^k F(a)$ has dimension $\frac{k(k+3)}{2}$ for all $a \in \mathcal{P}$. For a nondegenerate surface, $\mathrm{Osc}^1 F(a)$ is the tangent plane to $F$ at $a$, and $\mathrm{Osc}^2 F(a)$ is a flat of dimension 5, spanned by the vectors $\frac{\partial F}{\partial x_1}(a)$, $\frac{\partial F}{\partial x_2}(a)$, $\frac{\partial^2 F}{\partial x_1^2}(a)$, $\frac{\partial^2 F}{\partial x_2^2}(a)$, and $\frac{\partial^2 F}{\partial x_1 \partial x_2}(a)$. The flat $\mathrm{Osc}^3 F(a)$ is a flat of dimension 9, and we leave as an exercise to list the vectors spanning it. Thus, plane curves are degenerate in the above sense, except lines, and surfaces in $\mathbb{A}^3$ are degenerate in the above sense, except planes.

There is a simple relationship between osculating flats and polar forms, but it is much more convenient to use tensors to prove it, and we postpone the proof until section 11.1 (see lemma B.4.4). Let us simply mention a useful corollary. Given a polynomial map $F : E \to \mathcal{E}$ of degree $m$ with polar form $f : E^m \to \mathcal{E}$, the affine subspace spanned by the range of the multiaffine map
$$(b_1, \ldots, b_{m-k}) \mapsto f(\underbrace{a, \ldots, a}_{k}, b_1, \ldots, b_{m-k})$$
is the osculating flat $\mathrm{Osc}^{m-k} F(a)$. This leads to a geometric interpretation of polar values. We note in passing that the geometric interpretation of polar forms in terms of osculating flats was investigated by S. Jolles, as early as 1886.

Let $F : \mathbb{A} \to \mathcal{E}$ be a nondegenerate curve of degree 3 (and thus, a space curve). By the previous corollary, the polar value $f(r, s, t)$ is the unique intersection of the three osculating planes $\mathrm{Osc}^2 F(r)$, $\mathrm{Osc}^2 F(s)$, and $\mathrm{Osc}^2 F(t)$. The polar value $f(s, s, t)$ is the intersection of the tangent line $\mathrm{Osc}^1 F(s)$ with the osculating plane $\mathrm{Osc}^2 F(t)$. More generally, given a nondegenerate curve or surface $F : E \to \mathcal{E}$, the polar value $f(b_1, \ldots, b_m)$ is the unique intersection of the osculating flats corresponding to all of the distinct points $a$ that occur in the multiset $\{b_1, \ldots, b_m\}$.
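For the twisted cubic $F(t) = (t, t^2, t^3)$, whose blossom is $f(r,s,t) = \big(\frac{r+s+t}{3}, \frac{rs+rt+st}{3}, rst\big)$, the incidence of $f(r,s,t)$ with the three osculating planes can be checked directly. The sketch below (illustrative; the blossom is hard-coded for this one curve) tests membership in $\mathrm{Osc}^2 F(a)$ with a determinant.

```python
# Twisted cubic F(t) = (t, t^2, t^3); its blossom is
# f(r,s,t) = ((r+s+t)/3, (r*s + r*t + s*t)/3, r*s*t).
# The osculating plane Osc^2 F(a) passes through F(a) and is spanned
# by F'(a) = (1, 2a, 3a^2) and F''(a) = (0, 2, 6a).

def F(a):
    return (a, a*a, a**3)

def blossom(r, s, t):
    return ((r + s + t)/3, (r*s + r*t + s*t)/3, r*s*t)

def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
            - u[1]*(v[0]*w[2] - v[2]*w[0])
            + u[2]*(v[0]*w[1] - v[1]*w[0]))

def on_osculating_plane(q, a):
    # q lies in Osc^2 F(a) iff q - F(a), F'(a), F''(a) are coplanar.
    v = tuple(q[i] - F(a)[i] for i in range(3))
    return abs(det3(v, (1, 2*a, 3*a*a), (0, 2, 6*a))) < 1e-9

r, s, t = 0.2, 1.1, 2.5
p = blossom(r, s, t)
print(all(on_osculating_plane(p, a) for a in (r, s, t)))  # True
```

A generic point, by contrast, fails the test for each of the three planes, which is consistent with $f(r,s,t)$ being their unique intersection for a nondegenerate cubic.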

This interpretation is so nice that one wonders why it was not chosen as a definition of polar forms. Unfortunately, this idea does not work as soon as $F : \mathbb{A} \to \mathcal{E}$ is degenerate, which happens a lot. Indeed, Bézier points are usually not affinely independent. Thus, we had to use a more algebraic definition. In the case of surfaces, osculating flats intersect more often than we would expect. For example, given a nondegenerate cubic surface, we know that it lies in an affine space of dimension 9. But for each $a \in \mathcal{P}$, the osculating flat $\mathrm{Osc}^2 F(a)$ has dimension 5. In general, three 5-flats in a 9-space do not intersect, but any three 5-flats $\mathrm{Osc}^2 F(a)$, $\mathrm{Osc}^2 F(b)$, and $\mathrm{Osc}^2 F(c)$ intersect at the polar value $f(a, b, c)$.

Readers who would like to have an in-depth understanding of the foundations of geometric design and a more conceptual view of the material on curves and surfaces are urged to read the next chapter on tensors. However, skipping this chapter will only have very minor consequences (basically, ignoring the proofs of a few results).

10.6 Problems

Problem 1 (10 pts). Prove that $\widehat E$ as defined in lemma 10.1.1 is indeed a vector space.

Problem 2 (10 pts). Prove lemma 10.1.2.

Problem 3 (10 pts). Fill in the missing details in the proof of lemma 10.2.1.

Problem 4 (10 pts). Fill in the missing details in the proof of lemma 10.5.2.

Problem 5 (10 pts). Give some vectors spanning the flat $\mathrm{Osc}^3 F(a)$ of dimension 9.

**Chapter 11 Tensor Products and Symmetric Tensor Products**

11.1 Tensor Products

This chapter is not absolutely essential and can be omitted by readers who are willing to accept some of the deeper results without proofs (or are willing to go through rather nasty computations!). On the other hand, readers who would like to have an in-depth understanding of the foundations of computer-aided geometric design and a more conceptual view of the material on curves and surfaces should make an effort to read this chapter. We hope that they will find it rewarding! First, tensor products are defined, and some of their basic properties are shown. Next, symmetric tensor products are defined, and some of their basic properties are shown. Symmetric tensor products of affine spaces are also briefly discussed. The machinery of symmetric tensor products is then used to prove some important results of CAGD. For example, an elegant proof of theorem 5.3.2 is given.

We have seen that multilinear maps play an important role. Given a linear map $f : E \to F$, we know that if we have a basis $(\vec u_i)_{i \in I}$ for $E$, then $f$ is completely determined by its values $f(\vec u_i)$ on the basis vectors. For a multilinear map $f : E^n \to F$, we don't know if there is such a nice property, but it would certainly be very useful.

In many respects, tensor products allow us to define multilinear maps in terms of their action on a suitable basis. Once again, as in section 10.1, we linearize, that is, we create a new vector space $E \otimes \cdots \otimes E$, such that the multilinear map $f : E^n \to F$ is turned into a linear map $f_\otimes : E \otimes \cdots \otimes E \to F$, which is equivalent to $f$ in a strong sense. If in addition $f$ is symmetric, then we can define a symmetric tensor product $E \odot \cdots \odot E$, and every symmetric multilinear map $f : E^n \to F$ is turned into a linear map $f_\odot : E \odot \cdots \odot E \to F$, which is equivalent to $f$ in a strong sense. Tensor products can be defined in various ways, some more abstract than others. We tried to stay down to earth, without excess!

Let $K$ be a given field, and let $E_1, \ldots, E_n$ be $n \ge 2$ given vector spaces. First, we define tensor products, and then we prove their existence and uniqueness up to isomorphism.

Definition 11.1.1. A tensor product of $n \ge 2$ vector spaces $E_1, \ldots, E_n$ is a vector space $T$, together with a multilinear map $\varphi : E_1 \times \cdots \times E_n \to T$, such that, for every vector space $F$ and for every multilinear map $f : E_1 \times \cdots \times E_n \to F$, there is a unique linear map $f_\otimes : T \to F$, with
$$f(\vec u_1, \ldots, \vec u_n) = f_\otimes(\varphi(\vec u_1, \ldots, \vec u_n)),$$
for all $\vec u_1 \in E_1, \ldots, \vec u_n \in E_n$, or for short
$$f = f_\otimes \circ \varphi.$$
Equivalently, there is a unique linear map $f_\otimes$ such that the following diagram commutes:
$$\begin{array}{ccc} E_1 \times \cdots \times E_n & \xrightarrow{\;\varphi\;} & T \\ & \searrow_{f} & \downarrow f_\otimes \\ & & F \end{array}$$

First, we show that any two tensor products $(T_1, \varphi_1)$ and $(T_2, \varphi_2)$ for $E_1, \ldots, E_n$ are isomorphic.

Lemma 11.1.2. Given any two tensor products $(T_1, \varphi_1)$ and $(T_2, \varphi_2)$ for $E_1, \ldots, E_n$, there is an isomorphism $h : T_1 \to T_2$ such that $\varphi_2 = h \circ \varphi_1$.

Proof. Focusing on $(T_1, \varphi_1)$, we have a multilinear map $\varphi_2 : E_1 \times \cdots \times E_n \to T_2$, and thus, there is a unique linear map $(\varphi_2)_\otimes : T_1 \to T_2$, with $\varphi_2 = (\varphi_2)_\otimes \circ \varphi_1$. Similarly, focusing now on $(T_2, \varphi_2)$, we have a multilinear map $\varphi_1 : E_1 \times \cdots \times E_n \to T_1$, and thus, there is a unique linear map $(\varphi_1)_\otimes : T_2 \to T_1$, with $\varphi_1 = (\varphi_1)_\otimes \circ \varphi_2$. But then, we get
$$\varphi_1 = (\varphi_1)_\otimes \circ (\varphi_2)_\otimes \circ \varphi_1 \quad\text{and}\quad \varphi_2 = (\varphi_2)_\otimes \circ (\varphi_1)_\otimes \circ \varphi_2.$$
On the other hand, focusing on $(T_1, \varphi_1)$, we have a multilinear map $\varphi_1 : E_1 \times \cdots \times E_n \to T_1$, but the unique linear map $h : T_1 \to T_1$ with $\varphi_1 = h \circ \varphi_1$ is $h = \mathrm{id}$, and since $(\varphi_1)_\otimes \circ (\varphi_2)_\otimes$ is linear, as a composition of linear maps, we must have
$$(\varphi_1)_\otimes \circ (\varphi_2)_\otimes = \mathrm{id}.$$
Similarly, we must have
$$(\varphi_2)_\otimes \circ (\varphi_1)_\otimes = \mathrm{id}.$$
This shows that $(\varphi_1)_\otimes$ and $(\varphi_2)_\otimes$ are inverse linear maps, and thus, $(\varphi_2)_\otimes : T_1 \to T_2$ is an isomorphism between $T_1$ and $T_2$.

Now that we have shown that tensor products are unique up to isomorphism, we give a construction that produces one.

Lemma 11.1.3. Given $n \ge 2$ vector spaces $E_1, \ldots, E_n$, a tensor product $(E_1 \otimes \cdots \otimes E_n, \varphi)$ for $E_1, \ldots, E_n$ can be constructed. Furthermore, denoting $\varphi(\vec u_1, \ldots, \vec u_n)$ as $\vec u_1 \otimes \cdots \otimes \vec u_n$, the tensor product $E_1 \otimes \cdots \otimes E_n$ is generated by the vectors $\vec u_1 \otimes \cdots \otimes \vec u_n$, where $\vec u_1 \in E_1, \ldots, \vec u_n \in E_n$, and for every multilinear map $f : E_1 \times \cdots \times E_n \to F$, the unique linear map $f_\otimes : E_1 \otimes \cdots \otimes E_n \to F$ such that $f = f_\otimes \circ \varphi$ is defined by
$$f_\otimes(\vec u_1 \otimes \cdots \otimes \vec u_n) = f(\vec u_1, \ldots, \vec u_n)$$
on the generators $\vec u_1 \otimes \cdots \otimes \vec u_n$ of $E_1 \otimes \cdots \otimes E_n$.

Proof. First, we apply the construction of definition A.1.11 to the cartesian product $I = E_1 \times \cdots \times E_n$, and we get the free vector space $M = K^{(I)}$ on $I = E_1 \times \cdots \times E_n$. Recall that the family $(\vec e_i)_{i \in I}$ is defined such that $(\vec e_i)_j = 0$ if $j \neq i$ and $(\vec e_i)_i = 1$. It is a basis of the vector space $K^{(I)}$, so that every $\vec w \in K^{(I)}$ can be uniquely written as a finite linear combination of the $\vec e_i$. There is also an injection $\iota : I \to K^{(I)}$ such that $\iota(i) = \vec e_i$ for every $i \in I$. Since every $\vec e_i$ is uniquely associated with some $n$-tuple $i = (\vec u_1, \ldots, \vec u_n) \in E_1 \times \cdots \times E_n$, we will denote $\vec e_i$ as $(\vec u_1, \ldots, \vec u_n)$. Also, by lemma A.2.4, for any vector space $F$ and for any function $f : I \to F$, there is a unique linear map $\overline{f} : K^{(I)} \to F$, such that
$$f = \overline{f} \circ \iota,$$

as in the following diagram:
$$\begin{array}{ccc} E_1 \times \cdots \times E_n & \xrightarrow{\;\iota\;} & K^{(E_1 \times \cdots \times E_n)} \\ & \searrow_{f} & \downarrow \overline{f} \\ & & F \end{array}$$

Next, let $N$ be the subspace of $M$ generated by the vectors of the following type:
$$(\vec u_1, \ldots, \vec u_i + \vec v_i, \ldots, \vec u_n) - (\vec u_1, \ldots, \vec u_i, \ldots, \vec u_n) - (\vec u_1, \ldots, \vec v_i, \ldots, \vec u_n),$$
$$(\vec u_1, \ldots, \lambda \vec u_i, \ldots, \vec u_n) - \lambda(\vec u_1, \ldots, \vec u_i, \ldots, \vec u_n).$$
We let $E_1 \otimes \cdots \otimes E_n$ be the quotient $M/N$ of the free vector space $M$ by $N$, which is well defined, by lemma A.3.1. Let $\pi : M \to M/N$ be the quotient map, and let
$$\varphi = \pi \circ \iota.$$
By construction, $\varphi$ is multilinear, and since $\pi$ is surjective and the $\iota(i) = \vec e_i$ generate $M$, and since every $i$ is of the form $i = (\vec u_1, \ldots, \vec u_n) \in E_1 \times \cdots \times E_n$, the $\varphi(\vec u_1, \ldots, \vec u_n)$ generate $M/N$. Thus, if we denote $\varphi(\vec u_1, \ldots, \vec u_n)$ as $\vec u_1 \otimes \cdots \otimes \vec u_n$, the tensor product $E_1 \otimes \cdots \otimes E_n$ is generated by the vectors $\vec u_1 \otimes \cdots \otimes \vec u_n$, where $\vec u_1 \in E_1, \ldots, \vec u_n \in E_n$.

For every multilinear map $f : E_1 \times \cdots \times E_n \to F$, if a linear map $f_\otimes : E_1 \otimes \cdots \otimes E_n \to F$ exists such that $f = f_\otimes \circ \varphi$, since the vectors $\vec u_1 \otimes \cdots \otimes \vec u_n$ generate $E_1 \otimes \cdots \otimes E_n$, the map $f_\otimes$ is uniquely defined by
$$f_\otimes(\vec u_1 \otimes \cdots \otimes \vec u_n) = f(\vec u_1, \ldots, \vec u_n).$$
On the other hand, because $M = K^{(E_1 \times \cdots \times E_n)}$ is free on $I = E_1 \times \cdots \times E_n$, there is a unique linear map $\overline{f} : K^{(E_1 \times \cdots \times E_n)} \to F$, such that
$$f = \overline{f} \circ \iota,$$
as in the diagram below:
$$\begin{array}{ccc} E_1 \times \cdots \times E_n & \xrightarrow{\;\iota\;} & K^{(E_1 \times \cdots \times E_n)} \\ & \searrow_{f} & \downarrow \overline{f} \\ & & F \end{array}$$

Because $f$ is multilinear, note that we must have $\overline{f}(\vec w) = \vec 0$ for every $\vec w \in N$. But then, $\overline{f} : M \to F$ induces a linear map $h : M/N \to F$, such that
$$f = h \circ \pi \circ \iota,$$
by defining $h([\vec z]) = \overline{f}(\vec z)$, for every $\vec z \in M$, where $[\vec z]$ denotes the equivalence class in $M/N$ of $\vec z \in M$:
$$\begin{array}{ccc} E_1 \times \cdots \times E_n & \xrightarrow{\;\pi \circ \iota\;} & K^{(E_1 \times \cdots \times E_n)}/N \\ & \searrow_{f} & \downarrow h \\ & & F \end{array}$$
Indeed, the fact that $\overline{f}$ vanishes on $N$ ensures that $h$ is well defined on $M/N$, and it is clearly linear by definition. However, we showed that such a linear map $h$ is unique, and thus it agrees with the linear map $f_\otimes$ defined by
$$f_\otimes(\vec u_1 \otimes \cdots \otimes \vec u_n) = f(\vec u_1, \ldots, \vec u_n)$$
on the generators of $E_1 \otimes \cdots \otimes E_n$.

What is important about lemma 11.1.3 is not so much the construction itself, but the fact that a tensor product with the universal property with respect to multilinear maps stated in that lemma holds. Indeed, lemma 11.1.3 yields an isomorphism between the vector space of linear maps $L(E_1 \otimes \cdots \otimes E_n; F)$ and the vector space of multilinear maps $L(E_1, \ldots, E_n; F)$, via the linear map $- \circ \varphi$ defined by
$$h \mapsto h \circ \varphi,$$
where $h \in L(E_1 \otimes \cdots \otimes E_n; F)$. Indeed, $h \circ \varphi$ is clearly multilinear, and since by lemma 11.1.3, for every multilinear map $f \in L(E_1, \ldots, E_n; F)$, there is a unique linear map $f_\otimes \in L(E_1 \otimes \cdots \otimes E_n; F)$ such that $f = f_\otimes \circ \varphi$, the map $- \circ \varphi$ is bijective. As a matter of fact, its inverse is the map $f \mapsto f_\otimes$.

Remark: For $F = K$, the base field, we obtain a natural isomorphism between the vector space $L(E_1 \otimes \cdots \otimes E_n; K)$ and the vector space of multilinear forms $L(E_1, \ldots, E_n; K)$. However, $L(E_1 \otimes \cdots \otimes E_n; K)$ is the dual space $(E_1 \otimes \cdots \otimes E_n)^*$, and thus, the vector space of multilinear forms $L(E_1, \ldots, E_n; K)$ is naturally isomorphic to $(E_1 \otimes \cdots \otimes E_n)^*$. When all the spaces have finite dimension, this yields a (noncanonical) isomorphism between the vector space of multilinear forms $L(E_1, \ldots, E_n; K)$ and $E_1^* \otimes \cdots \otimes E_n^*$.

The fact that the map $\varphi : E_1 \times \cdots \times E_n \to E_1 \otimes \cdots \otimes E_n$ is multilinear can also be expressed as follows:
$$\vec u_1 \otimes \cdots \otimes (\vec v_i + \vec w_i) \otimes \cdots \otimes \vec u_n = (\vec u_1 \otimes \cdots \otimes \vec v_i \otimes \cdots \otimes \vec u_n) + (\vec u_1 \otimes \cdots \otimes \vec w_i \otimes \cdots \otimes \vec u_n),$$
$$\vec u_1 \otimes \cdots \otimes (\lambda \vec u_i) \otimes \cdots \otimes \vec u_n = \lambda(\vec u_1 \otimes \cdots \otimes \vec u_i \otimes \cdots \otimes \vec u_n).$$
Of course, this is just what we wanted! Tensors in $E_1 \otimes \cdots \otimes E_n$ are also called $n$-tensors, and tensors of the form $\vec u_1 \otimes \cdots \otimes \vec u_n$, where $\vec u_i \in E_i$, are called simple (or decomposable) $n$-tensors. Those $n$-tensors that are not simple are often called compound $n$-tensors.

We showed that $E_1 \otimes \cdots \otimes E_n$ is generated by the vectors of the form $\vec u_1 \otimes \cdots \otimes \vec u_n$. However, these vectors are not linearly independent. This situation can be fixed when considering bases, which is the object of the next lemma.

Lemma 11.1.4. Given $n \ge 2$ vector spaces $E_1, \ldots, E_n$, if $(\vec u^k_i)_{i \in I_k}$ is a basis for $E_k$, $1 \le k \le n$, then the family of vectors
$$\left(\vec u^1_{i_1} \otimes \cdots \otimes \vec u^n_{i_n}\right)_{(i_1, \ldots, i_n) \in I_1 \times \ldots \times I_n}$$
is a basis of the tensor product $E_1 \otimes \cdots \otimes E_n$.

Proof. For each $k$, $1 \le k \le n$, every $\vec v^k \in E_k$ can be written uniquely as
$$\vec v^k = \sum_{j \in I_k} v^k_j\, \vec u^k_j,$$

for some family of scalars $(v^k_j)_{j \in I_k}$. Let $F$ be any nontrivial vector space. We will show that for every family
$$(\vec w_{i_1, \ldots, i_n})_{(i_1, \ldots, i_n) \in I_1 \times \ldots \times I_n}$$
of vectors in $F$, there is some linear map $h : E_1 \otimes \cdots \otimes E_n \to F$, such that
$$h(\vec u^1_{i_1} \otimes \cdots \otimes \vec u^n_{i_n}) = \vec w_{i_1, \ldots, i_n}.$$
We define the function $f : E_1 \times \cdots \times E_n \to F$ as follows:
$$f\Big(\sum_{j_1 \in I_1} v^1_{j_1} \vec u^1_{j_1}, \ldots, \sum_{j_n \in I_n} v^n_{j_n} \vec u^n_{j_n}\Big) = \sum_{j_1 \in I_1, \ldots, j_n \in I_n} v^1_{j_1} \cdots v^n_{j_n}\, \vec w_{j_1, \ldots, j_n}.$$
It is immediately verified that $f$ is multilinear, and thus, the linear map $f_\otimes : E_1 \otimes \cdots \otimes E_n \to F$ such that $f = f_\otimes \circ \varphi$ is the desired map $h$. From this, it will follow that the family
$$\left(\vec u^1_{i_1} \otimes \cdots \otimes \vec u^n_{i_n}\right)_{(i_1, \ldots, i_n) \in I_1 \times \ldots \times I_n}$$
is linearly independent. However, since $(\vec u^k_i)_{i \in I_k}$ is a basis for $E_k$, the $\vec u^1_{i_1} \otimes \cdots \otimes \vec u^n_{i_n}$ also generate $E_1 \otimes \cdots \otimes E_n$, and thus, they form a basis of $E_1 \otimes \cdots \otimes E_n$.

As a corollary of lemma 11.1.4, if $(\vec u^k_i)_{i \in I_k}$ is a basis for $E_k$, $1 \le k \le n$, then every tensor $\vec z \in E_1 \otimes \cdots \otimes E_n$ can be written in a unique way as
$$\vec z = \sum_{(i_1, \ldots, i_n) \in I_1 \times \ldots \times I_n} \lambda_{i_1, \ldots, i_n}\, \vec u^1_{i_1} \otimes \cdots \otimes \vec u^n_{i_n},$$
for some unique family of scalars $\lambda_{i_1, \ldots, i_n} \in K$, all zero except for a finite number. In particular, when each $I_k$ is finite and of size $m_k$, we see that the dimension of the tensor product $E_1 \otimes \cdots \otimes E_n$ is $m_1 \cdots m_n$.

We also mention the following useful isomorphisms.

Lemma 11.1.5. Given 3 vector spaces $E, F, G$, there exist unique isomorphisms

(1) $E \otimes F \simeq F \otimes E$

(2) $(E \otimes F) \otimes G \simeq E \otimes (F \otimes G) \simeq E \otimes F \otimes G$

(3) $(E \oplus F) \otimes G \simeq (E \otimes G) \oplus (F \otimes G)$

(4) $K \otimes E \simeq E$

such that respectively
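In the finite-dimensional case this basis is easy to make concrete. Taking $E_1 = \mathbb{R}^2$ and $E_2 = \mathbb{R}^3$, a simple tensor $\vec u \otimes \vec v$ can be represented by the outer-product array $(u_i v_j)$, the basis tensors correspond to the $m_1 m_2 = 6$ "matrix units", and a bilinear map $f$ is recovered as the linear map $f_\otimes$ determined by its values $f(\vec e_i, \vec e_j)$ on that basis. A minimal sketch (an illustration, not the text's construction):

```python
# E1 = R^2, E2 = R^3: represent u (x) v by the outer product, a 2x3 array
# of coordinates over the basis (e_i (x) e_j); dim(E1 (x) E2) = 2*3 = 6.

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

# A bilinear map f(u, v) = sum_{i,j} c[i][j] * u[i] * v[j] ...
c = [[1, -2, 0],
     [3,  5, 7]]

def f(u, v):
    return sum(c[i][j] * u[i] * v[j] for i in range(2) for j in range(3))

# ... induces the linear map f_tensor on E1 (x) E2, determined by its
# values c[i][j] = f(e_i, e_j) on the basis tensors:
def f_tensor(T):
    return sum(c[i][j] * T[i][j] for i in range(2) for j in range(3))

u, v = [2, -1], [1, 4, 0.5]
print(f(u, v) == f_tensor(outer(u, v)))  # True: f = f_tensor o phi
```

The coefficients and test vectors are arbitrary; the point is only that $f$ factors through the coordinates of $\vec u \otimes \vec v$.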

1. $\vec u \otimes \vec v \mapsto \vec v \otimes \vec u$

2. $(\vec u \otimes \vec v) \otimes \vec w \mapsto \vec u \otimes (\vec v \otimes \vec w) \mapsto \vec u \otimes \vec v \otimes \vec w$

3. $(\vec u, \vec v) \otimes \vec w \mapsto (\vec u \otimes \vec w, \vec v \otimes \vec w)$

4. $\lambda \otimes \vec u \mapsto \lambda \vec u$

Proof. These isomorphisms are proved using the universal property of tensor products. We illustrate the proof method on (2). The other cases are similar. Fix some $\vec w \in G$. The map
$$(\vec u, \vec v) \mapsto \vec u \otimes \vec v \otimes \vec w$$
from $E \times F$ to $E \otimes F \otimes G$ is bilinear, and thus, there is a linear map $f_{\vec w} : E \otimes F \to E \otimes F \otimes G$, such that $f_{\vec w}(\vec u \otimes \vec v) = \vec u \otimes \vec v \otimes \vec w$. Next, consider the map
$$(\vec z, \vec w) \mapsto f_{\vec w}(\vec z),$$
from $(E \otimes F) \times G$ into $E \otimes F \otimes G$. It is easily seen to be bilinear, and thus, it induces a linear map $f : (E \otimes F) \otimes G \to E \otimes F \otimes G$, such that $f((\vec u \otimes \vec v) \otimes \vec w) = \vec u \otimes \vec v \otimes \vec w$. Also consider the map
$$(\vec u, \vec v, \vec w) \mapsto (\vec u \otimes \vec v) \otimes \vec w$$
from $E \times F \times G$ to $(E \otimes F) \otimes G$. It is trilinear, and thus, there is a linear map $g : E \otimes F \otimes G \to (E \otimes F) \otimes G$, such that $g(\vec u \otimes \vec v \otimes \vec w) = (\vec u \otimes \vec v) \otimes \vec w$. Clearly, $f \circ g$ and $g \circ f$ are identity maps, and thus, $f$ and $g$ are isomorphisms.

Not only do tensor products act on spaces, but they also act on linear maps (they are functors). Given two linear maps $f : E \to E'$ and $g : F \to F'$, we can define $h : E \times F \to E' \otimes F'$ by
$$h(\vec u, \vec v) = f(\vec u) \otimes g(\vec v).$$
It is immediately verified that $h$ is bilinear, and thus, it induces a unique linear map $f \otimes g : E \otimes F \to E' \otimes F'$, such that
$$(f \otimes g)(\vec u \otimes \vec v) = f(\vec u) \otimes g(\vec v).$$

If we also have linear maps $f' : E' \to E''$ and $g' : F' \to F''$, we can easily verify that the linear maps $(f' \circ f) \otimes (g' \circ g)$ and $(f' \otimes g') \circ (f \otimes g)$ agree on all vectors of the form $\vec u \otimes \vec v \in E \otimes F$. Since these vectors generate $E \otimes F$, we conclude that
$$(f' \circ f) \otimes (g' \circ g) = (f' \otimes g') \circ (f \otimes g).$$
The generalization to the tensor product $f_1 \otimes \cdots \otimes f_n$ of $n \ge 3$ linear maps $f_i : E_i \to F_i$ is immediate, and left to the reader.

Remark: The tensor product $\underbrace{E \otimes \cdots \otimes E}_{m}$ is also denoted as $\bigotimes^m E$, and is called the $m$-th tensor power of $E$ (with $\bigotimes^1 E = E$, and $\bigotimes^0 E = K$). The vector space
$$T(E) = \bigoplus_{m \ge 0} \bigotimes^m E$$
is an interesting object, called the tensor algebra of $E$; however, we would have to define a multiplication operation on $T(E)$, which is easily done. When $E$ is of finite dimension $n$, it corresponds to the algebra of polynomials with coefficients in $K$ in $n$ noncommuting variables.

11.2 Symmetric Tensor Products

We now turn to symmetric tensors. Our goal is to come up with a notion of tensor product that will allow us to treat symmetric multilinear maps as linear maps. First, note that we have to restrict ourselves to a single vector space $E$, rather than $n$ vector spaces $E_1, \ldots, E_n$, so that symmetry makes sense. We could proceed directly as in lemma 11.1.3, and construct symmetric tensor products from scratch. However, since we already have the notion of a tensor product, there is a more economical method. First, we define symmetric tensor products (powers).

Definition 11.2.1. A symmetric $n$-th tensor power (or tensor product) of a vector space $E$, where $n \ge 2$, is a vector space $S$, together with a symmetric multilinear map $\varphi : E^n \to S$, such that, for every vector space $F$ and for every symmetric multilinear map $f : E^n \to F$, there is a unique linear map $f_\odot : S \to F$, with
$$f(\vec u_1, \ldots, \vec u_n) = f_\odot(\varphi(\vec u_1, \ldots, \vec u_n)),$$
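In coordinates, $f \otimes g$ is represented by the Kronecker product of the matrices of $f$ and $g$, and the composition identity $(f' \circ f) \otimes (g' \circ g) = (f' \otimes g') \circ (f \otimes g)$ is the familiar mixed-product property. A small numeric sketch (illustrative; plain nested-list matrices, with arbitrarily chosen entries):

```python
# Matrices as nested lists; kron(A, B) represents f (x) g when A, B
# are the matrices of f and g.

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

A  = [[1, 2], [3, 4]]      # f  : R^2 -> R^2
B  = [[0, 1], [1, 1]]      # g  : R^2 -> R^2
A2 = [[2, 0], [1, 1]]      # f' : R^2 -> R^2
B2 = [[1, 1], [0, 2]]      # g' : R^2 -> R^2

lhs = kron(matmul(A2, A), matmul(B2, B))   # (f' o f) (x) (g' o g)
rhs = matmul(kron(A2, B2), kron(A, B))     # (f' (x) g') o (f (x) g)
print(lhs == rhs)  # True
```

Since the matrices have integer entries, the comparison is exact; with floats, an entrywise tolerance would be the safer test.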

for all $\vec u_1, \ldots, \vec u_n \in E$, or for short
$$f = f_\odot \circ \varphi.$$
Equivalently, there is a unique linear map $f_\odot$ such that the following diagram commutes:
$$\begin{array}{ccc} E^n & \xrightarrow{\;\varphi\;} & S \\ & \searrow_{f} & \downarrow f_\odot \\ & & F \end{array}$$
First, we show that any two symmetric $n$-th tensor powers $(S_1, \varphi_1)$ and $(S_2, \varphi_2)$ for $E$ are isomorphic.

Lemma 11.2.2. Given any two symmetric $n$-th tensor powers $(S_1, \varphi_1)$ and $(S_2, \varphi_2)$ for $E$, there is an isomorphism $h : S_1 \to S_2$ such that $\varphi_2 = h \circ \varphi_1$.

Proof. Replace tensor product by symmetric $n$-th tensor power in the proof of lemma 11.1.2.

We now give a construction that produces a symmetric $n$-th tensor power of a vector space $E$.

Lemma 11.2.3. Given a vector space $E$, a symmetric $n$-th tensor power $(\bigodot^n E, \varphi)$ for $E$ can be constructed ($n \ge 2$). Furthermore, denoting $\varphi(\vec u_1, \ldots, \vec u_n)$ as $\vec u_1 \odot \cdots \odot \vec u_n$, the symmetric tensor power $\bigodot^n E$ is generated by the vectors $\vec u_1 \odot \cdots \odot \vec u_n$, where $\vec u_1, \ldots, \vec u_n \in E$, and for every symmetric multilinear map $f : E^n \to F$, the unique linear map $f_\odot : \bigodot^n E \to F$ such that $f = f_\odot \circ \varphi$ is defined by
$$f_\odot(\vec u_1 \odot \cdots \odot \vec u_n) = f(\vec u_1, \ldots, \vec u_n)$$
on the generators $\vec u_1 \odot \cdots \odot \vec u_n$ of $\bigodot^n E$.

Proof. The tensor power $\bigotimes^n E$ is too big. Thus, we define an appropriate quotient. Let $C$ be the subspace of $\bigotimes^n E$ generated by the vectors of the form
$$\vec u_1 \otimes \cdots \otimes \vec u_n - \vec u_{\pi(1)} \otimes \cdots \otimes \vec u_{\pi(n)},$$
for all $\vec u_i \in E$, and all permutations $\pi : \{1, \ldots, n\} \to \{1, \ldots, n\}$. We claim that the quotient space $(\bigotimes^n E)/C$ does the job. Let $p : \bigotimes^n E \to (\bigotimes^n E)/C$ be the quotient map, and let $\varphi : E^n \to (\bigotimes^n E)/C$ be the map
$$(\vec u_1, \ldots, \vec u_n) \mapsto p(\vec u_1 \otimes \cdots \otimes \vec u_n),$$

or equivalently, $\varphi = p \circ \varphi_0$, where $\varphi_0(\vec u_1, \ldots, \vec u_n) = \vec u_1 \otimes \cdots \otimes \vec u_n$. Let us denote $\varphi(\vec u_1, \ldots, \vec u_n)$ as $\vec u_1 \odot \cdots \odot \vec u_n$. It is clear that $\varphi$ is symmetric. Since the vectors $\vec u_1 \otimes \cdots \otimes \vec u_n$ generate $\bigotimes^n E$, and $p$ is surjective, the vectors $\vec u_1 \odot \cdots \odot \vec u_n$ generate $(\bigotimes^n E)/C$.

Given any symmetric multilinear map $f : E^n \to F$, there is a linear map $f_\otimes : \bigotimes^n E \to F$ such that $f = f_\otimes \circ \varphi_0$, as in the diagram below:
$$\begin{array}{ccc} E^n & \xrightarrow{\;\varphi_0\;} & \bigotimes^n E \\ & \searrow_{f} & \downarrow f_\otimes \\ & & F \end{array}$$
However, since $f$ is symmetric, we have $f_\otimes(\vec w) = \vec 0$ for every $\vec w \in C$. Thus, we get an induced linear map $h : (\bigotimes^n E)/C \to F$, such that $h([\vec z]) = f_\otimes(\vec z)$, where $[\vec z]$ is the equivalence class in $(\bigotimes^n E)/C$ of $\vec z \in \bigotimes^n E$:
$$\begin{array}{ccc} E^n & \xrightarrow{\;p \circ \varphi_0\;} & (\bigotimes^n E)/C \\ & \searrow_{f} & \downarrow h \\ & & F \end{array}$$
However, if a linear map $f_\odot : (\bigotimes^n E)/C \to F$ exists such that $f = f_\odot \circ \varphi$, since the vectors $\vec u_1 \odot \cdots \odot \vec u_n$ generate $(\bigotimes^n E)/C$, we must have
$$f_\odot(\vec u_1 \odot \cdots \odot \vec u_n) = f(\vec u_1, \ldots, \vec u_n),$$
which shows that $h$ and $f_\odot$ agree. Thus, $\bigodot^n E = (\bigotimes^n E)/C$ and $\varphi$ constitute a symmetric $n$-th tensor power of $E$.

Again, the actual construction is not important. What is important is that the symmetric $n$-th power has the universal property with respect to symmetric multilinear maps.

Remark: The notation $\bigodot^n E$ for the symmetric $n$-th tensor power of $E$ is borrowed from Ramshaw, and it may not be standard. Other authors use the notation $\mathrm{Sym}^n(E)$, or $S_n(E)$. We will adopt Ramshaw's notation.

The fact that the map $\varphi : E^n \to \bigodot^n E$ is symmetric and multilinear can also be expressed as follows:
$$\vec u_1 \odot \cdots \odot (\vec v_i + \vec w_i) \odot \cdots \odot \vec u_n = (\vec u_1 \odot \cdots \odot \vec v_i \odot \cdots \odot \vec u_n) + (\vec u_1 \odot \cdots \odot \vec w_i \odot \cdots \odot \vec u_n),$$
$$\vec u_1 \odot \cdots \odot (\lambda \vec u_i) \odot \cdots \odot \vec u_n = \lambda(\vec u_1 \odot \cdots \odot \vec u_i \odot \cdots \odot \vec u_n),$$
$$\vec u_1 \odot \cdots \odot \vec u_n = \vec u_{\pi(1)} \odot \cdots \odot \vec u_{\pi(n)},$$
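Concretely, for $n = 2$ the class $\vec u \odot \vec v$ can be represented by the symmetrized outer product $\frac{1}{2}(\vec u \otimes \vec v + \vec v \otimes \vec u)$: a symmetric bilinear map, being constant on the generators of $C$, takes the same value on $\vec u \otimes \vec v$, on $\vec v \otimes \vec u$, and on their symmetrization. A small sketch (illustrative; values chosen so the float comparisons are exact):

```python
# Represent u (.) v in the symmetric square of R^3 by the
# symmetrized outer product (1/2)(u (x) v + v (x) u).

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def sym2(u, v):
    return [[(u[i]*v[j] + v[i]*u[j]) / 2 for j in range(3)]
            for i in range(3)]

# A symmetric bilinear form f(u, v) = u . M . v, with M symmetric:
M = [[2, 1, 0],
     [1, 3, 4],
     [0, 4, 5]]

def f_lin(T):
    # linear map on 2-tensors induced by the coefficients M[i][j]
    return sum(M[i][j] * T[i][j] for i in range(3) for j in range(3))

u, v = [1.0, 2.0, -1.0], [0.5, 0.0, 3.0]
a = f_lin(outer(u, v))
b = f_lin(outer(v, u))
c = f_lin(sym2(u, v))
print(a == b == c)  # True: symmetric f cannot separate u(x)v from v(x)u
```

A non-symmetric coefficient array would break the equality of `a` and `b`, which is exactly why the quotient by $C$ is needed only for symmetric maps.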

for all permutations $\pi$ on $n$ elements. The last identity shows that the "operation" $\odot$ is commutative. Thus, we can view the symmetric tensor $\vec u_1 \odot \cdots \odot \vec u_n$ as a multiset.

Lemma 11.2.3 yields an isomorphism between the vector space of linear maps $L(\bigodot^n E; F)$ and the vector space of symmetric multilinear maps $S(E^n; F)$, via the linear map $- \circ \varphi$ defined by
$$h \mapsto h \circ \varphi,$$
where $h \in L(\bigodot^n E; F)$. Indeed, $h \circ \varphi$ is clearly symmetric multilinear, and since by lemma 11.2.3, for every symmetric multilinear map $f \in S(E^n; F)$, there is a unique linear map $f_\odot \in L(\bigodot^n E; F)$ such that $f = f_\odot \circ \varphi$, the map $- \circ \varphi$ is bijective. As a matter of fact, its inverse is the map $f \mapsto f_\odot$.

Remark: For $F = K$, the base field, we obtain a natural isomorphism between the vector space $L(\bigodot^n E; K)$ and the vector space of symmetric multilinear forms $S(E^n; K)$. However, $L(\bigodot^n E; K)$ is the dual space $(\bigodot^n E)^*$, and thus, the vector space of symmetric multilinear forms $S(E^n; K)$ is naturally isomorphic to $(\bigodot^n E)^*$. When the space $E$ has finite dimension, this yields a (noncanonical) isomorphism between the vector space of symmetric multilinear forms $S((E^*)^n; K)$ and $\bigodot^n E$.

Symmetric tensors in $\bigodot^n E$ are also called symmetric $n$-tensors, and tensors of the form $\vec u_1 \odot \cdots \odot \vec u_n$, where $\vec u_i \in E$, are called simple (or decomposable) symmetric $n$-tensors. Those symmetric $n$-tensors that are not simple are often called compound symmetric $n$-tensors.

11.3 Affine Symmetric Tensor Products

Ramshaw gives a construction for the symmetric tensor powers of an affine space, using polynomials. We now briefly discuss symmetric tensor powers of affine spaces, since this might also be helpful to those readers who wish to study Ramshaw's paper [65] carefully. The motivation is to be able to deal with symmetric multiaffine maps $f : E^m \to F$ as affine maps $f_\odot : \bigodot^m E \to F$. It is also possible to define tensor products of affine spaces, and symmetric tensor powers of affine spaces. Actually, it turns out that we can easily construct symmetric tensor powers of affine spaces from what we have done so far.

Definition 11.3.1. A symmetric $n$-th tensor power (or tensor product) of an affine space $E$, where $n \ge 2$, is an affine space $S$, together with a symmetric multiaffine map $\varphi : E^n \to S$, such that, for every affine space $F$ and for every symmetric multiaffine map $f : E^n \to F$, there is a unique affine map $f_\odot : S \to F$, with
$$f(a_1, \ldots, a_n) = f_\odot(\varphi(a_1, \ldots, a_n)),$$
for all $a_1, \ldots, a_n \in E$, or for short $f = f_\odot \circ \varphi$.

Equivalently, there is a unique affine map $f_\odot$ such that the following diagram commutes:
$$\begin{array}{ccc} E^n & \xrightarrow{\;\varphi\;} & S \\ & \searrow_{f} & \downarrow f_\odot \\ & & F \end{array}$$
As usual, it is easy to show that symmetric tensor powers of affine spaces are unique up to isomorphism. In order to prove the existence of such tensor products, we can proceed in at least two ways. The first method is to mimic the constructions of this section, but with affine spaces and affine and multiaffine maps. Indeed, the reasons why tensor products of affine spaces can be constructed are as follows:

(1) The notion of an affine space freely generated by a set $I$ makes sense. In fact, it is essentially identical to the construction of the free vector space $K^{(I)}$.

(2) Quotients of affine spaces can be defined, and behave well. Given an affine space $E$ and a subspace $\vec F$ of $\vec E$, an affine congruence on $E$ is defined as a relation $\equiv_{\vec F}$ on $E$, determined such that
$$a \equiv_{\vec F} b \quad\text{iff}\quad \overrightarrow{ab} \in \vec F.$$
Then, it can be shown that $E/\equiv_{\vec F}$ is an affine space with associated vector space $\vec E/\vec F$.

However, there is a more direct approach. It turns out that if we first homogenize the affine space $E$, getting the vector space $\widehat E$, and then construct the symmetric tensor power $\bigodot^n \widehat E$, then the symmetric tensor power $\bigodot^n E$ of the affine space $E$ already sits inside $\bigodot^n \widehat E$ as an affine space! As a matter of fact, the affine space $\bigodot^n E$ consists of all affine combinations of simple $n$-tensors of the form $a_1 \odot \cdots \odot a_n$, where $a_1, \ldots, a_n$ are points in $E$. The following lemma, whose proof is left as an exercise, makes what we just claimed more precise.

Lemma 11.3.2. Given an affine space $E$, the subset $\bigodot^n E$ of the vector space $\bigodot^n \widehat E$ consisting of all affine combinations of simple $n$-tensors of the form $a_1 \odot \cdots \odot a_n$, where $a_1, \ldots, a_n$ are points in $E$, is an affine space. Together with the multiaffine map $\varphi : E^n \to \bigodot^n E$, defined as the restriction to $E^n$ of the multilinear map
$$(\vec u_1, \ldots, \vec u_n) \mapsto \vec u_1 \odot \cdots \odot \vec u_n$$
from $(\widehat E)^n$ to $\bigodot^n \widehat E$, it is a symmetric $n$-th tensor power for $E$. If $\varphi(a_1, \ldots, a_n)$ is denoted as $a_1 \odot \cdots \odot a_n$, the symmetric tensor power $\bigodot^n E$ is spanned by the tensors $a_1 \odot \cdots \odot a_n$, where $a_1, \ldots, a_n$ are points in $E$, and for every symmetric multiaffine map $f : E^n \to F$, the unique affine map $f_\odot : \bigodot^n E \to F$ such that $f = f_\odot \circ \varphi$ is defined by
$$f_\odot(a_1 \odot \cdots \odot a_n) = f(a_1, \ldots, a_n)$$
on the generators $a_1 \odot \cdots \odot a_n$ of $\bigodot^n E$.
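For a Bézier curve, the value $f_\odot(t_1 \odot \cdots \odot t_m)$, i.e. the polar value $f(t_1, \ldots, t_m)$, is computed in practice by the de Casteljau algorithm, using a different parameter at each round of interpolations. A sketch (illustrative; a cubic with arbitrarily chosen control points over the interval $[0,1]$ is assumed):

```python
# Blossom of a degree-m Bezier curve on [0, 1] via de Casteljau,
# using parameter ts[k] at the k-th round of interpolations.

def lerp(p, q, t):
    return tuple((1 - t)*x + t*y for x, y in zip(p, q))

def blossom(control, ts):
    pts = list(control)
    for t in ts:                       # one parameter per round
        pts = [lerp(pts[i], pts[i+1], t) for i in range(len(pts) - 1)]
    return pts[0]

b = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]  # cubic control points

# f(t, t, t) is the point on the curve at t:
p = blossom(b, (0.5, 0.5, 0.5))
print(p)  # (2.0, 1.5)

# symmetry: f(r, s, t) does not depend on the order of the arguments
q1 = blossom(b, (0.1, 0.4, 0.9))
q2 = blossom(b, (0.9, 0.1, 0.4))
print(all(abs(x - y) < 1e-12 for x, y in zip(q1, q2)))  # True

# polar values recover the control points, e.g. b_1 = f(0, 0, 1):
print(blossom(b, (0.0, 0.0, 1.0)) == b[1])  # True
```

The tolerance in the symmetry check only absorbs rounding; mathematically the two polar values are identical.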

3. given any polynomial aﬃne map F : E → E of degree m with polar form f : E m → E. .1. . we deﬁne the set JM of functions η : {1. In the end. such that M(i) = 0 for ﬁnitely many i ∈ I. −n ∈ E. However. which is a ﬁnite set. we just have to show that a symmetric multiaﬃne map is uniquely determined by its behavior on the diagonal. i ∈ dom(M). but also by the simple n-tensors of the form an . By lemma 11. . am ∈ E. am ): view the notation f (a1 · · · am ) as an abbreviation for f⊙ (a1 ⊙ · · · ⊙ am ). it is suﬃcent to show that every linear map n h: E → F is uniquely determined by the tensors of the form an . . an ∈ E. It is worth noting that the aﬃne tensor power n E is spanned not only by the simple n-tensors a1 · · · an . 11. . omitting the symbol ⊙. i∈I . . an ∈ E. but Ramshaw favors the reading n E. PROPERTIES OF SYMMETRIC TENSOR PRODUCTS Proof. We let dom(M) = {i ∈ I | M(i) = 0}. again. . |η −1 (i)| = M(i). The answer is that that they are isomorphic. Following Ramshaw.4. This is because Ramshaw deﬁnes the hat construction as a special case of his construction of the aﬃne symmetric tensor power. For this. u u u u n generate E. where a1 . The existence of the symmetric tensor power E m justiﬁes the multiplicative notation f (a1 · · · am ) for the polar value f (a1 . From lemma 11. Left as an exercise. . this is a consequence of lemma 4. note that the sum i∈I M(i) makes sense. since (I) i∈I M(i) = i∈dom(M ) M(i).4 for symmetric tensor powers. . the vectors −1 ⊙ · · · ⊙ −n . but they are not linearly independent. am ) = f⊙ (a1 ⊙ · · · ⊙ am ). . . the aﬃne blossom of F . . as follows: JM = {η | η : {1. for any n ≥ 2. whereas Ramshaw followed an approach in which aﬃne spaces played a more predominant role. . . For every multiset M ∈ N .11. and dom(M) is ﬁnite. Indeed. . . We personally favor the reading n E. n Given an aﬃne space E. n} → dom(M). and write tensors a1 ⊙ · · · ⊙ am simply as a1 · · · am . . .3. the two approaches are equivalent. . for all a1 . . . 
409 Let E and E be two aﬃne spaces. where a ∈ E. since f is symmetric multiaﬃne and h = f⊙ is the unique aﬃne map such that f (a1 . we call the aﬃne map f⊙ . an ) = h(a1 · · · an ).2. we can associate a unique aﬃne map f⊙ : E m → E. by lemma A. . . for any multiset M ∈ N(I) .4. drop the subscript ⊙ from f⊙ .2. . M(i) = n}.1. if we let f = h ◦ ϕ. . where −1 . n} → dom(M). . and that the set of all multisets over I is denoted as N(I) . We will prove a version of lemma 11. . and this is quite easy to show. one may wonder what is the relationship between E and n E. . .2. . . . recall that a (ﬁnite) multiset over a set I is a function M : I → N. such that f (a1 . and we simply followed an approach closer to traditional linear algebra.5. .4 Properties of Symmetric Tensor Products → → → → Going back to vector spaces. . for all a1 . that is. . Then. .

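The remark that a symmetric power is spanned not only by the simple tensors a1 ⊙ · · · ⊙ an but also by the powers a^⊙n can be checked concretely for n = 2 via the polarization identity a ⊙ b = ((a+b)^⊙2 − a^⊙2 − b^⊙2)/2. A minimal sketch; the representation of a symmetric 2-tensor over R^d as the symmetric matrix (ab^T + ba^T)/2 is our choice of model, not the book's:

```python
def outer(a, b):
    return [[x * y for y in b] for x in a]

def sym2(a, b):
    # a ⊙ b modeled as the symmetric matrix (a b^T + b a^T) / 2
    ab, ba = outer(a, b), outer(b, a)
    d = len(a)
    return [[(ab[i][j] + ba[i][j]) / 2 for j in range(d)] for i in range(d)]

def add(M, N, s=1):
    return [[M[i][j] + s * N[i][j] for j in range(len(M))] for i in range(len(M))]

def scale(M, s):
    return [[s * x for x in row] for row in M]

a, b = [1.0, 2.0, -1.0], [3.0, 0.5, 4.0]
lhs = sym2(a, b)
# polarization identity: a ⊙ b = ((a+b)^⊙2 - a^⊙2 - b^⊙2) / 2
apb = [x + y for x, y in zip(a, b)]
rhs = scale(add(add(sym2(apb, apb), sym2(a, a), -1), sym2(b, b), -1), 0.5)
assert lhs == rhs
```

Every simple 2-tensor is thus a linear combination of squares, which is the n = 2 case of the spanning claim.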
. i2 . we have → h(−1 ui − → 1 vj1 u11 . Then.. . by lemma A. . .. i∈I M (i)=n. ik }. . M(ik ) occurrences of ik .2. any η deﬁnes a “permutation” of the sequence (of length n) i1 . for any family of vectors → (− M )M ∈N(I) . By the universal property of n the symmetric tensor product.. . then the family of u vectors → − ⊙M (i1 ) ⊙ · · · ⊙ − ⊙M (ik ) → u u i1 ik M ∈N(I) .4. The details are left as an exercise. . ik .5 (2). . Using the commutativity of ⊙. The proof is very similar to that of lemma 11. we can also show that these vectors generate n E.. . ik } = dom(M). .. i1 .4.1. {i1 . M(i2 ) occurrences of i2 . . . the linear map f⊙ : E → F such that f = f⊙ ◦ ϕ. TENSOR PRODUCTS AND SYMMETRIC TENSOR PRODUCTS In other words. . if (−i )i∈I is a basis for E. → ⊙ · · · ⊙ −k ui wM → 1 n w vη(1) · · · vη(n) −M . j ⊙M (i1 ) n E → F . .1 any function η ∈ JM speciﬁes a sequence of length n.. . n i∈I M (i)=n. w we show the existence of a symmetric multilinear map h : M ∈ N(I) with i∈I M(i) = n. . consisting of M(i1 ) occurrences of i1 . For any nontrivial vector space F . . . Intuitively. . .ik }=dom(M ) is linearly independent.410 CHAPTER 11. is the desired map h. they form a basis for n E... . it follow that the family − ⊙M (i1 ) ⊙ · · · ⊙ − ⊙M (ik ) → → u i1 u ik M ∈N(I) . i2 . We deﬁne the map f : E n → F as follows: f( j1 ∈I → n − vjn unn ) = j jn ∈I i∈I M ∈N(I) M (i)=n η∈JM It is not diﬃcult to verify that f is symmetric and multilinear.ik }=dom(M ) is a basis of the symmetric n-th tensor power E. M (i1 ) M (i2 ) M (ik ) → Given any k ≥ 1. . i∈I M (i)=n . ik . . . Proof. and thus. . . we denote u − ⊙···⊙− → → u u k →⊙k as − . Given a vector space E. . such that for every →⊙M (ik ) ) = − . . .1. and any − ∈ E. 1 Note that must have k ≤ n. {i1 . if i∈I M(i) = n and dom(M) = {i1 . where {i1 . . → Lemma 11. . u We can now prove the following lemma.. .

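The sets J_M used in the proof can be enumerated directly for small n: the functions η : {1, . . . , n} → dom(M) with |η^{-1}(i)| = M(i) are exactly the distinct arrangements of the multiset M, so |J_M| = n!/(M(i1)! · · · M(ik)!). A small sketch, assuming the index set I consists of integers:

```python
from itertools import product
from math import factorial, prod

def J_M(M, n):
    """All maps eta: {1,...,n} -> dom(M) with |eta^-1(i)| = M(i)."""
    assert sum(M.values()) == n
    dom = [i for i, m in M.items() if m != 0]
    return [eta for eta in product(dom, repeat=n)
            if all(eta.count(i) == M[i] for i in dom)]

M = {0: 2, 1: 1, 2: 1}   # multiset with M(0)=2, M(1)=M(2)=1, so n = 4
n = 4
etas = J_M(M, n)
# |J_M| is the multinomial coefficient n! / prod M(i)!
assert len(etas) == factorial(n) // prod(factorial(m) for m in M.values())
```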
jk ≥ 0. it induces a unique linear map 2 2 2 E ′ by f ⊙ g: such that E→ E ′. Again.1 shows that every symu − ∈ n E can be written in a unique way as → metric tensor z − = → z M ∈N(I) M (i)=n {i1 . . Given two linear maps f : E → E ′ and g : E → E ′ . all zero except for a ﬁnite number. the dimension of n E is n + 1.. where the monomials of total degree n are the symmetric tensors → − ⊙M (i1 ) ⊙ · · · ⊙ − ⊙M (ik ) .ik }=dom(M ) i∈I → →⊙M (i1 ) ⊙ · · · ⊙ − ⊙M (ik ) . u v u u . We leave as an exercise to show p+n−1 p+n−1 that this number is .4. → Given a vector space E and a basis (−i )i∈I for E. which is pn . when I is ﬁnite. we can deﬁne h : E × E → → → → → h(− . In particular. the dimension of n E is the number of ﬁnite multisets (j1 . u ik λM − 1 ui for some unique family of scalars λM ∈ K. → u ik u i1 → in the “indeterminates” −i . This is not a coincidence! Symmetric tensor products are closely related to polynomials (for more on this. This can also be seen directly. lemma 11. and thus. .. u v u v It is immediately veriﬁed that h is bilinear. . → → → → (f ⊙ g)(− ⊙ − ) = f (− ) ⊙ g(− ).11. see the next remark). Thus the dimension of n E is . say of size p. jp ). PROPERTIES OF SYMMETRIC TENSOR PRODUCTS 411 As a consequence. when p = 2. Remark: The number p+n−1 n is also the number of homogeneous monomials j j X1 1 · · · Xp p of total degree n in p variables (we have j1 + · · · + jp = n). Compare n n with the dimension of n E. This looks like a homogeneous polynomial of total degree n. such that j1 +· · ·+jp = n.. this u is not a coincidence. − ) = f (− ) ⊙ g(− ).4. . where i ∈ I (recall that M(i1 ) + · · · + M(ik ) = n).. Polynomials can be deﬁned in terms of symmetric tensors.

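The dimension count for the symmetric power is easy to confirm by machine: multisets of size n over a basis of size p are exactly what `itertools.combinations_with_replacement` enumerates, and their number is the binomial coefficient C(p+n-1, n), which is also the number of monomials X1^{j1} · · · Xp^{jp} of total degree n. A quick sketch:

```python
from itertools import combinations_with_replacement
from math import comb

def sym_power_dim(p, n):
    """Number of multiset basis tensors of the n-th symmetric power of E, dim E = p."""
    return sum(1 for _ in combinations_with_replacement(range(p), n))

for p in range(1, 6):
    for n in range(0, 6):
        assert sym_power_dim(p, n) == comb(p + n - 1, n)

# for p = 2 the dimension is n + 1, as stated in the text
assert sym_power_dim(2, 3) == 4
```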
Actually. However. isn’t it? u v v u We can ﬁnally apply this powerful machinery to CAGD. then they form a basis of n A. . When this operation is associative and has an identity. Let me tease the reader anyway. . and we will continue to do so. 11. un+1 in A can be viewed as symmetric tensors u1 ⊙ · · · ⊙ un+1 .4. (with 1 E = E. and 0 E = K). which is easily done. and if these tensors are linearly independent in n A. TENSOR PRODUCTS AND SYMMETRIC TENSOR PRODUCTS If we also have linear maps f ′ : E ′ → E ′′ and g ′ : E ′ → E ′′ . we have n + 1 tensors uk+1 ⊙ · · · ⊙ un+k . Then. 2 .3. called the symmetric tensor algebra of E. we can easily verify that (f ′ ◦ f ) ⊙ (g ′ ◦ g) = (f ′ ⊙ g ′ ) ◦ (f ⊙ g). then S(E) is obtained as the quotient of T(E). Remark: The vector space m S(E) = m≥0 E. by the subspace of T(E) generated by all vectors in T(E). We could also have deﬁned S(E) from T(E).2 What’s nice about the symmetric tensor algebra.5 Polar Forms Revisited When E = A. . .2. and a ring structure. is an interesting object. u2n . and left to the reader. . . but this would also require deﬁning a multiplication operation (⊗) on T(E).1). is that S(E) provides an intrinsic deﬁnition of a polynomial ring (algebra!) in any set I of variables.412 CHAPTER 11. it can be shown to correspond to the ring of polynomials with coeﬃcients in K in n variables (this can be seen from lemma 11. As a reward to the brave readers who read through this chapter. → → → → of the form − ⊗ − − − ⊗ − . by saying that an algebra over a ﬁeld K (or a ring A) consists basically of a vector space (or module) structure. we recall theorem 5.3. When E is of ﬁnite dimension n. corresponds to the ring of polynomials in inﬁnitely many variables in I. → When E is of inﬁnite dimension and (−i )i∈I is a basis of E. . we give a short proof of the hard part of theorem 5. The generalization to the symmetric tensor product f1 ⊙ · · · ⊙ fn of n ≥ 3 linear maps fi : E → F is immediate. First. 
This elegant proof is due to Ramshaw. If this is done. multisets of size n + 1 consisting of points u1 . we have avoided talking about algebras (even though polynomial rings are a prime example of algebra). we would have to deﬁne a multiplication operation on T(E). together with a multiplication operation which is bilinear.2. the vector space S(E) (also u denoted as Sym(E)). where 0 ≤ k ≤ n. an algebra essentially has both a vector space (or module) structure. Very elegant. given any progressive sequence u1 . .

Theorem 5.3.2. Let u1, . . . , u2m be a progressive sequence of numbers ui ∈ R. Given any sequence of m + 1 points b0, . . . , bm in some affine space E (which is assumed nontrivial!), there is a unique polynomial curve F : A → E of degree m whose polar form f : A^m → E satisfies the conditions

f(u_{k+1}, . . . , u_{m+k}) = b_k,

for every k, 0 ≤ k ≤ m.

Proof. Uniqueness has already been shown: by the first half of theorem 5.3.2, there is at most one polynomial curve of degree m whose polar form satisfies the conditions. We now prove existence. If we can show that there exists a symmetric multilinear map g : (Â)^m → Ê satisfying the conditions g(u_{k+1}, . . . , u_{m+k}) = b_k, for every k, 0 ≤ k ≤ m, we will have succeeded, since we can then define f as the restriction of g to A^m. In turn, it suffices to prove the existence of a linear map

h : ⊙^m Â → Ê

satisfying the conditions h(u_{k+1} ⊙ · · · ⊙ u_{m+k}) = b_k, for every k, 0 ≤ k ≤ m, since then we let g = h ◦ ϕ. Now, given the progressive sequence u1, . . . , u2m, we have m + 1 symmetric tensors u_{k+1} ⊙ · · · ⊙ u_{m+k}, where 0 ≤ k ≤ m, and the dimension of ⊙^m Â is m + 1 (which can also be seen directly). We just have to show that these tensors form a basis of ⊙^m Â.

As we already explained.5 (1). h(uk+1 ⊙ · · · ⊙ um+k ) = bk . the multiaﬃne map f which is the restriction of g = h ◦ ϕ to Am . 0 ≤ k ≤ m. every symmetric tensor θ1 · · · θm can be written as (u1 δ + r1 0) · · · (um δ + rm 0) = r1 · · · rm σi um u1 m−i i .. we know that (1. . omitting the symbol ⊙. 0 ≤ k ≤ m. This leads us to investigate bases of m A. .3. . generate where 0 ≤ k ≤ m. 0 ≤ k ≤ m.. . where 0 ≤ k ≤ m.2. which proves the existence (and also uniqueness!) of a linear map m for every k. . TENSOR PRODUCTS AND SYMMETRIC TENSOR PRODUCTS for every k.. The crucial point of the previous proof. let us denote 1 as δ (this is a vector in R). bases of A.. for notational simplicity. is that the symmetric tensors uk+1 · · · um+k . form a basis of m A. and thus. is the polar form of the desired curve. . Then. this implies that the vectors uk+1 ⊙ · · · ⊙ um+k . .414 CHAPTER 11. since θ = uδ + r0. By lemma A. For example. we have θ1 θ2 θ3 = 2 3 u1 u2 u3δ 3 + (u1 u2 r3 + u1 u3 r2 + u2 u3 r1 )0δ 2 + (u1 r2 r3 + u2 r1 r3 + u3 r1 r2 )0 δ + r1 r2 r3 0 . r1 rm 0≤i≤m where σi (x1 . xm ) is the i-th elementary symmetric function in x1 . h(uk+1 ⊙ · · · ⊙ um+k ) = bk . judicious use of linear and multilinear algebra allowed us to prove the existence of a polynomial curve satisfying the conditions of theorem 5. 0) is a basis for A. 0 δ. h: satisfying the conditions A → E. . As suggested earlier. for every k. For example. m A. xm . without having to go through painful calculations.2. for every θ ∈ A. the symmetric tensors uk+1 ⊙ · · · ⊙ um+k form a basis of m A. Thus. and the dimension of m A is m + 1. . which implies that there is at most one linear map m h: satisfying the conditions A → E. symmetric tensors θ1 ⊙ · · · ⊙ θm will be denoted simply as θ1 · · · θm . Since there are m + 1 vectors uk+1 ⊙ · · · ⊙ um+k . To avoid clashes.

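The expansion of a product such as (u1 δ + r1 0)(u2 δ + r2 0)(u3 δ + r3 0) given above is governed by elementary symmetric functions: the coefficient of 0^{m−i} δ^i is the sum, over all ways of choosing u from i of the factors, of the product of those u's with the r's of the remaining factors. A sketch that computes these coefficients brute-force:

```python
from itertools import combinations
from math import prod

def coeff(us, rs, i):
    """Coefficient of 0^(m-i) δ^i in (u1 δ + r1 0)···(um δ + rm 0):
    choose u from i of the factors and r from the remaining m - i."""
    m = len(us)
    return sum(prod(us[j] for j in S) * prod(rs[j] for j in range(m) if j not in S)
               for S in combinations(range(m), i))

us, rs = [2, 3, 5], [1, 1, 1]
# with all r_j = 1 the coefficients reduce to the elementary symmetric functions σ_i
assert [coeff(us, rs, i) for i in range(4)] == \
    [1, 2 + 3 + 5, 2*3 + 2*5 + 3*5, 2*3*5]
```

For general r_j the i = 2 coefficient reproduces the pattern u1u2r3 + u1u3r2 + u2u3r1 from the text.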
k in A2 . they form a basis of m A.4. for points u1 . of tensors i We could also have used the barycentric basis (r. where r. This is also a direct consequence of lemma 11. . for every θ ∈ A (the condition u + v = 1 holding only iﬀ θ is a point). . if E is an aﬃne space of dimension n. POLAR FORMS REVISITED 415 In particular. . vj r m−k s k . also forms a basis of we call the B´zier basis... δn )) is an aﬃne frame for E. if we use the basis (δ1 . we have u1 u2 u3 = u1 u2 u3 δ 3 + (u1 u2 + u1 u3 + u2 u3 )0δ 2 + (u1 + u2 + u3 )0 δ + 0 . In e general. . is a basis of m E. . and since m A is of dimension m + 1. the family of tensors r m−k s k . s are distinct points in A.m} i∈I j∈J m−i 2 3 σi (u1 . δ2 . we can write θ = ur + vs. . Considering any barycentric aﬃne frame (a0 . For example. an ) for E. an ) is a barycentric aﬃne frame for E. It will also be useful to extend the e notion of weight function deﬁned for the homogenized version E of an aﬃne space E. Thus. s). . then we have a B´zier basis consisting of the tensors r i sj tk .. where δ1 and δ2 denotes the unit vectors (1. . The above makes it clear that the m+1 symmetric tensors 0 δ i generate m A. For reasons of convenience to be revealed in a moment. . um)0 0≤i≤m m−i δi. where i = i1 + · · · + in . . . where 0 ≤ i ≤ m. and Ω the origin of A2 . . s. |J|=k I∪J={1. 1). where 0 ≤ k ≤ m. .5. If we consider a barycentric basis (r. . . then the family of symmetric tensors ai0 ai1 · · · ain . . which It is easy to generalize these bases to aﬃne spaces of larger dimensions. and every symmetric tensor θ1 · · · θm can be written as (u1 r + v1 s) · · · (um r + vm s) = ui I∩J=∅. to the symmetric tensor power m E. (δ1 . e m A. . the symmetric tensor u1 · · · um can be written as (u1 δ + 0) · · · (um δ + 0) = For example. Ω) for A2 . n 0 1 where i0 + i1 + · · · + in = m. Then. um ∈ A. t) i. If (a0 . . and (Ω. . j. . where i + j + k = m. we prefer the family m m−i i 0 δ . where i + j + k = m. 
the family of symmetric tensors i i δ11 · · · δnn Ωm−i . we have the power basis consisting of m i j the tensors δ1 δ2 Ω k .. and call it the power basis. 0) and (0.11. considering the aﬃne plane A2 .1. is a B´zier basis of m E.

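For the affine plane, the two bases just described are easy to count: the Bézier basis r^i s^j t^k and the power basis δ1^i δ2^j Ω^k both range over the exponent triples with i + j + k = m, so each has C(m+2, 2) elements. A sketch checking only this count:

```python
from math import comb

def plane_basis(m):
    """Exponent triples (i, j, k) with i + j + k = m, as for the tensors r^i s^j t^k."""
    return [(i, j, m - i - j) for i in range(m + 1) for j in range(m + 1 - i)]

for m in range(7):
    assert len(plane_basis(m)) == comb(m + 2, 2)
```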
σm (um+1 . . we express every point u as uδ + 0. uk+m)). . dk = (uk+1 δ + 0) · · · (uk+mδ + 0) = 0≤i≤m The m + 1 tensors dk are linearly independent iﬀ the determinant of the (m + 1)(m + 1) matrice M = (mi. . . . iﬀ θ ∈ m E. . as the unique linear map such that. is a B´zier basis of m E. . the tensor power m E is even generated by the simple m-tensors of the form am . .. where 0 ≤ i ≤ m. . and 1 ≤ k ≤ m + 1.. the aﬃne tensor power of E. . . What lemma A. um+2 ) . . we notice that the ﬁrst column contains 1 in the ﬁrst row. .2. 1 1 det(M) = 1 . . and 0 elsewhere.2. . Then. and we get m−i i σi (uk+1 . . n 0 1 where i0 + i1 + · · · + in = m. . . . . . . .. . . ending by subtracting the ﬁrst row from the second. 0) for A. It is immediately seen that ω is independent of the chosen aﬃne frame. or de Casteljau pole system. then the (m − 1)-th row from the m-th. by computing a determinant. . is to imitate what we did in the computation of the Vandermonde determinant. . etc. This is certainly not obvious at ﬁrst glance. where a ∈ E. We will proceed by induction. . It is also worth noting that m E is generated by the simple m-tensors of the form a1 · · · am . which is obvious. . where 2 ≤ i ≤ m + 1 (counting from top down). ω(ai0 ai1 · · · ain ) = 1. . u2m ) (um+i − uj )..3..416 CHAPTER 11. . contain the factor (um+i−1 − ui−1). One way two compute det(M). 0 1 n where i0 + i1 + · · · + in = m. . Actually. . . As earlier. . . 1 σ1 (um+1 . since for any barycentric aﬃne basis (a0 .e. is that the symmetric tensors dk = uk+1 · · · um+k form a basis of m A..k ) = (σi (uk+1. . um+2 ) . the family of tensors ai0 ai1 · · · ain . . . . . . um ) σ1 (u2 . . . . we can factor 2≤i≤m+1 (um+i−1 − ui−1 ) = 1≤i≤m (um+i − ui ).. um+1 ) σ1 (u3 . in view of lemma 11. If we subtract the m-th row from the (m + 1)-th (the last row).5 showed. an ) for E. um ) σm (u2. Thus. where b1 · · · bm ∈ E. u2m ) . We now give another proof that these tensors are linearly independent. . . 
We claim that det(M) = 1≤i≤j≤m σ1 (u1 . . is nonnull.2. . . and that all the other entries on row i. using the basis (δ. We call such a family of tensors a de Casteljau basis. .. . am ∈ E. i. . in view of the remark e just after lemma 11. . um+1 ) σm (u3.3. . . . where a1 . . TENSOR PRODUCTS AND SYMMETRIC TENSOR PRODUCTS m deﬁne the weight function (or ﬂavor function) ω : E → K. . σm (u1. . . it is easily seen that a tensor θ ∈ m E has weight 1 (ﬂavor 1) iﬀ it can be expressed as an aﬃne combination of simple m-tensors of the form b1 · · · bm . uk+m )0 δ. .

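The determinant formula det(M) = ∏_{1≤i≤j≤m} (u_{m+i} − u_j) claimed above is easy to check numerically with exact rational arithmetic. A sketch; the Fraction-based elimination routine is our own helper, not part of the text:

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def sigma(i, xs):
    # i-th elementary symmetric function of xs (σ_0 = 1)
    return sum(prod(c) for c in combinations(xs, i)) if i else 1

def det(A):
    A = [row[:] for row in A]
    n, d = len(A), Fraction(1)
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return d

m = 3
u = [Fraction(x) for x in (0, 1, 2, 3, 4, 5)]   # a progressive sequence u_1, ..., u_2m
# entry (i, k) of M is σ_i(u_{k+1}, ..., u_{k+m})
M = [[sigma(i, u[k:k + m]) for k in range(m + 1)] for i in range(m + 1)]
expected = prod(u[m + i - 1] - u[j - 1]
                for i in range(1, m + 1) for j in range(i, m + 1))
assert det(M) == expected == 108
```

Since the product is nonzero exactly when u_{m+i} ≠ u_j for all 1 ≤ i ≤ j ≤ m, this also recovers the progressivity condition.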
It is also instructive to go through the steps of polarizing, homogenizing, and tensoring, and we now give such an example. Consider the polynomial curve F : A → A3 defined as follows (assuming the standard affine frame (Ω, (e1, e2, e3)) for A3):

F1(u) = 7u^3 + 6u^2 − 3u + 4
F2(u) = 1u^3 + 3u^2 + 9u − 5
F3(u) = 2u^3 − 3u^2 + 12u − 8.

We first polarize F : A → A3, getting f : A × A × A → A3, given by

f1(u1, u2, u3) = 7u1u2u3 + 2u1u2 + 2u1u3 + 2u2u3 − 1u1 − 1u2 − 1u3 + 4
f2(u1, u2, u3) = 1u1u2u3 + 1u1u2 + 1u1u3 + 1u2u3 + 3u1 + 3u2 + 3u3 − 5
f3(u1, u2, u3) = 2u1u2u3 − 1u1u2 − 1u1u3 − 1u2u3 + 4u1 + 4u2 + 4u3 − 8.

Then, taking the basis (δ, 0) for Â and homogenizing, we get

F1(u, r) = 7u^3 + 6u^2 r − 3ur^2 + 4r^3
F2(u, r) = 1u^3 + 3u^2 r + 9ur^2 − 5r^3
F3(u, r) = 2u^3 − 3u^2 r + 12ur^2 − 8r^3
F4(u, r) = 0u^3 + 0u^2 r + 0ur^2 + 1r^3.

The result of homogenizing f : A × A × A → A3 is f : Â × Â × Â → Â3, given by

f1((u1, r1), (u2, r2), (u3, r3)) = 7u1u2u3 + 2u1u2r3 + 2u1u3r2 + 2u2u3r1 − 1u1r2r3 − 1u2r1r3 − 1u3r1r2 + 4r1r2r3
f2((u1, r1), (u2, r2), (u3, r3)) = 1u1u2u3 + 1u1u2r3 + 1u1u3r2 + 1u2u3r1 + 3u1r2r3 + 3u2r1r3 + 3u3r1r2 − 5r1r2r3
f3((u1, r1), (u2, r2), (u3, r3)) = 2u1u2u3 − 1u1u2r3 − 1u1u3r2 − 1u2u3r1 + 4u1r2r3 + 4u2r1r3 + 4u3r1r2 − 8r1r2r3
f4((u1, r1), (u2, r2), (u3, r3)) = 0u1u2u3 + 0u1u2r3 + 0u1u3r2 + 0u2u3r1 + 0u1r2r3 + 0u2r1r3 + 0u3r1r2 + 1r1r2r3.

Note that we could have first homogenized F : A → A3, and then polarized, getting the same trilinear map f : (Â)^3 → Â3.

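The polarization step in this example follows the general recipe for blossoming a cubic c3 u^3 + c2 u^2 + c1 u + c0: replace u^3 by u1u2u3, u^2 by (u1u2 + u1u3 + u2u3)/3, and u by (u1 + u2 + u3)/3. A sketch checking this against the coordinate F1(u) = 7u^3 + 6u^2 − 3u + 4 from the text:

```python
def blossom3(c3, c2, c1, c0):
    """Polar form (blossom) of the cubic c3 u^3 + c2 u^2 + c1 u + c0."""
    def f(u1, u2, u3):
        return (c3 * u1 * u2 * u3
                + c2 * (u1 * u2 + u1 * u3 + u2 * u3) / 3
                + c1 * (u1 + u2 + u3) / 3
                + c0)
    return f

F1 = lambda u: 7 * u**3 + 6 * u**2 - 3 * u + 4
f1 = blossom3(7, 6, -3, 4)

# the blossom is symmetric and agrees with F1 on the diagonal
for u in (0, 1, 2, 0.5):
    assert f1(u, u, u) == F1(u)
assert f1(1, 2, 3) == f1(3, 1, 2)
# matches the polar form in the text: 7u1u2u3 + 2(u1u2+u1u3+u2u3) - (u1+u2+u3) + 4
assert f1(1, 2, 3) == 7*6 + 2*11 - 6 + 4
```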
it is easy to see that the tensors (in this case. vector) (f )⊙ m m−k k 0 δ . assuming 3 the power basis of A.418 CHAPTER 11.. This is because symmetry is built-in. It can be be found in Chapter B. TENSOR PRODUCTS AND SYMMETRIC TENSOR PRODUCTS and then polarized F : A → A3 . Section B. getting the same trilinear map f : (A)3 → A3 .2 (conditions on polar forms for polynomial maps to agree to kth order). . k m . c = (u1 r2 r3 + u2 r1 r3 + u3 r1 r2 ). i. for (u1 δ + r1 0) (u2δ + r2 0) (u3δ + r3 0). are the B´zier control points of the curve F . Note that the four columns happen to be the tensors.4. in the present case vectors. f⊙ carries some very e tangible geometric signiﬁcance. b = (u1 u2 r3 + u1 u3 r2 + u2 u3 r1 ). (f )⊙ (0 ). as abstract as it is. and the last one is a a point. s) for A is used. However. we have a = u1 u2 u3 . (f )⊙ (0δ 2 ).5. (f )⊙ (δ 3 ). this material is quite technical. Thus. and d = r1 r2 r3 . points) f⊙ (r m−k s k ). The columns of the above matrix have a geometric signiﬁcance.e. An advantage of the B´zier basis is that the tensors r m−k s k e are really points. we easily see that (f1 )⊙ (a δ 3 + b 0δ 2 + c 0 δ + d 0 ) = 7a + 2b + −1c + 4d (f2 )⊙ (a δ 3 + b 0δ 2 + c 0 δ + d 0 ) = 1a + 1b + 3c − 5d 2 2 3 3 2 3 2 3 2 3 (f4 )⊙ (a δ 3 + b 0δ 2 + c 0 δ + d 0 ) = 0a + 0b + 0c + 1d. if the aﬃne basis (r. we compute the tensored version (f )⊙ : A → A3 of f : (A)3 → A3 . Note how tensoring eliminates duplicates that polarizing introduced. Some results on osculating ﬂats can are also shown as well as a generalized version of lemma 5. Now if we rescale each tensor m m−k k . For example. we arrive at a geometric interpretation for the columns of the coeﬃcients 0 δ by k of the homogenized map F : the coeﬃcients of uk in the formula for the coordinates of F (u) are precisely the coordinates (except the last one) of the tensor (in this case. (f )⊙ (0 δ). The ﬁrst three are vectors. 3 Finally. 
Assume that we write a symmetric tensor in ⊙^3 Â as a δ^3 + b 0δ^2 + c 0^2 δ + d 0^3. For example, for (u1 δ + r1 0) ⊙ (u2 δ + r2 0) ⊙ (u3 δ + r3 0), we have a = u1u2u3, b = (u1u2r3 + u1u3r2 + u2u3r1), c = (u1r2r3 + u2r1r3 + u3r1r2), and d = r1r2r3. Then, we easily see that

(f1)⊙(a δ^3 + b 0δ^2 + c 0^2 δ + d 0^3) = 7a + 2b − 1c + 4d
(f2)⊙(a δ^3 + b 0δ^2 + c 0^2 δ + d 0^3) = 1a + 1b + 3c − 5d
(f3)⊙(a δ^3 + b 0δ^2 + c 0^2 δ + d 0^3) = 2a − 1b + 4c − 8d
(f4)⊙(a δ^3 + b 0δ^2 + c 0^2 δ + d 0^3) = 0a + 0b + 0c + 1d.

This explains why it is interesting to rescale each basis tensor 0^{3−k} δ^k by the binomial coefficient (m choose k).

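With the Bézier basis (taking r = 0 and s = 1), the control points of a cubic F are the blossom values b_k = f(0^{3−k}, 1^k). A hedged sketch applying this to F1, F2, F3 from the example above; the generic blossom helper is our own construction:

```python
def blossom3(c3, c2, c1, c0):
    # polar form of c3 u^3 + c2 u^2 + c1 u + c0
    return lambda u1, u2, u3: (c3 * u1 * u2 * u3
                               + c2 * (u1 * u2 + u1 * u3 + u2 * u3) / 3
                               + c1 * (u1 + u2 + u3) / 3 + c0)

curves = [(7, 6, -3, 4), (1, 3, 9, -5), (2, -3, 12, -8)]   # F1, F2, F3
fs = [blossom3(*c) for c in curves]

# control points b_k = f(0, ..., 0, 1, ..., 1) with k ones, k = 0..3
controls = [tuple(f(*([0] * (3 - k) + [1] * k)) for f in fs) for k in range(4)]
assert controls == [(4, -5, -8), (3, -2, -4), (4, 2, -1), (14, 8, 3)]
```

Note that b_0 = F(0) and b_3 = F(1), as expected of the endpoints of a Bézier segment over [0, 1].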
Problem 5 (10 pts).6 Problems Problem 1 (10 pts). determined such that → → − − a ≡F b iﬀ ab ∈ F . and g ′ : F ′ → F ′′ . then either f = 0 or − = 0 . Given an aﬃne space E and a − → − → subspace F of E .5. − − → → Prove that E/ ≡F is an aﬃne space with associated vector space E / F . v u u v Problem 4 (20 pts). Prove that α is bilinear and that the unique linear map v u α⊗ : E ∗ ⊗ F → L(E. Problem 7 (10 pts). Prove lemma 11. (3) Prove that tensor products of aﬃne spaces exist. f ′ : E ′ → E ′′ . an aﬃne congruence on E is deﬁned as a relation ≡F on E. − ∈ F . prove that (f ′ ◦ f ) ⊗ (g ′ ◦ g) = (f ′ ⊗ g ′ ) ◦ (f ⊗ g). − )(− ) = f (− )− . Prove that n E and n E are isomorphic. − )(− ) = 0 for all − ∈ E.1. Problem 3 (20 pts). Problem 2 (10 pts). Let α : E ∗ × F → L(E. F ) induced by α is an isomorphism. (2) Quotients of aﬃne spaces can be deﬁned as follows.3. PROBLEMS 419 11. Let E and F be two vector spaces of ﬁnite dimension. v u u v → → for all f ∈ E ∗ . Show that if α(f.2. F ) be the map deﬁned such that → → → → α(f. Prove that the number of homogeneous monomials j j X1 1 · · · Xp p .6. Recall that the set of linear maps from E to F is denoted as L(E. (1) Prove that the notion of an aﬃne space freely generated by a set I makes sense (mimic the construction of the free vector space K (I) ). Given some linear maps f : E → E ′ . and − ∈ E. Fill in the details of the proof of lemma 11. Problem 6 (10 pts). − → → → → → → − Hint. F ). g : F → F ′ .11.

− ) = (x− − y − . i. which is a real vector → → u u space. . where − . We call such a basis a real basis. Show that every vector in C ⊗ E can be written as j → λj ⊗ −j + i u j → µj ⊗ −j . 0 ). Prove that this is also the number of ﬁnite n multisets (j1 . Denoting the above vector space as EC . µj ∈ R. TENSOR PRODUCTS AND SYMMETRIC TENSOR PRODUCTS p+n−1 . − ) = − + i− . u v u v → → → → → − → − Prove that if (− 1 . (− n . we can deﬁne multiplication by a complex scalar z ∈ C as z· i → xi ⊗ −i = u i → zxi ⊗ −i . u → → → → where λj . Hint. . . . we can deﬁne the tensor product C ⊗ E. u u v v u v → → v and the multiplication by a complex scalar z = x + iy deﬁned such that → → → → → → (x + iy) · (− . . − ) = i(− . . . u we can write → → → → (− . .420 CHAPTER 11. u (i) Prove that the above operation makes C ⊗ E into a complex vector space called the complexiﬁcation of E. y − + x− ). . . where xi ∈ C and −i ∈ E. − n ) is any basis of E. v v → → − and thus. (ii) Consider the structure E × E under the addition operation → → → → → → u (−1 . Let E be a real vector space of dimension n.e. as − + i− . u v u v u v Prove that the above structure is a complex vector space. . Since every element of C ⊗ E is of the form i xi ⊗ −i . . identifying E with the subspace of EC consisting of all vectors of the form (− . . jp ) such that j1 + · · · + jp = n. −2 ) = (−1 + −1 . − ∈ R ⊗ E (and recall that R ⊗ E is isomorphic u v u v to E). Prove that the dimension of n E is p+n−1 . Note that − → → → → − ( 0 . 0 ). 0 )) is a basis of e e e e EC . of total degree n in p variables is Problem 8 (10 pts). n Problem 9 (20 pts). jk ≥ 0. then ((− 1 . Viewing C as a vector space (of dimension 2) over R. −2 ) + (−1 . −2 + −2 ). prove that C ⊗ E is isomorphic to EC . 0 ).

(iii) Prove that every linear map f : E → E can be extended to a linear map fC : EC → EC defined such that

fC(u + iv) = f(u) + if(v).

(iv) Prove that every bilinear form ϕ : E × E → R can be extended to a bilinear map ϕC : EC × EC → C defined such that

ϕC(u1 + iv1, u2 + iv2) = ϕ(u1, u2) − ϕ(v1, v2) + i[ϕ(u1, v2) + ϕ(v1, u2)].

Furthermore, if ϕ is symmetric, so is ϕC.


Part IV Appendices 423 .


Appendix A

Linear Algebra

A.1 Vector Spaces

The purpose of the appendices is to gather results of Linear Algebra and Analysis used in our treatment of the algorithmic geometry of curves and surfaces. The appendices contain no original material, except perhaps for the presentation and the point of view. In fact, there is probably more material than really needed, and we advise our readers to proceed "by need". We begin by reviewing some basic properties of vector spaces.

We recommend the following excellent books for an extensive treatment of linear algebra, algebra, and geometry: Linear Algebra and its Applications, by Strang [81]; Algèbre linéaire et géométrie classique, by Bertin [8]; Géométries affines, projectives, et euclidiennes, by Tisseron [83]; Géométrie 1 and 2, by Berger [5, 6]; Algèbre, by Bourbaki [14, 15]; Algèbre Linéaire et Géométrie Elémentaire, by Dieudonné [25]; Algebra, by Lang [47]; Algebra, by Mac Lane and Birkhoff [52]; Algebra, by Michael Artin [1]; Algebra 1, by Van Der Waerden [84]. The first two chapters of Strang [81] are highly recommended. Another useful and rather complete reference is the text Finite-Dimensional Spaces, by Walter Noll [57], but beware, the notation and terminology is a bit strange! The text Geometric Concepts for Geometric Design, by Boehm and Prautzsch [11], is also an excellent reference on geometry geared towards computer aided geometric design.

Given a set A, recall that a family (ai)i∈I of elements of A is simply a function a : I → A. A family (ai)i∈I is finite if I is finite. Given a family (ui)i∈I, a subfamily of (ui)i∈I is a family (uj)j∈J where J is any subset of I. Given a family (ui)i∈I and any element v, we denote as (ui)i∈I ∪k (v) the family (wi)i∈I∪{k} defined such that wi = ui if i ∈ I, and wk = v, where k is any index such that k ∉ I. We can deal with an arbitrary set X by viewing it as the family (Xx)x∈X corresponding to the identity function id : X → X. If A is a ring with additive identity 0, we say that a family (ai)i∈I has finite support iff ai = 0 for all i ∈ I − J, where J is a finite subset of I (the support of the family). We agree that when I = ∅, (ai)i∈I = ∅.

Unless specified otherwise or unless we are dealing with several different fields, in the rest of this chapter we assume that all K-vector spaces are defined with respect to a fixed field K. The definition below is stated for an arbitrary field K, although all definitions and proofs hold for any commutative field K; readers may safely assume that K = R, the field of real numbers. For simplicity, in this chapter it is assumed that all families of scalars have finite support.

Definition A.1.1. Given a field K, a vector space over K (or K-vector space) is a set E (of vectors) together with two operations + : E × E → E (called vector addition) and · : K × E → E (called scalar multiplication), satisfying the following conditions for all α, β ∈ K and all u, v ∈ E:

(V0) E is an abelian group w.r.t. +, with identity element 0;
(V1) α · (u + v) = (α · u) + (α · v);
(V2) (α + β) · u = (α · u) + (β · u);
(V3) (αβ) · u = α · (β · u);
(V4) 1 · u = u.

Given α ∈ K and u ∈ E, the element α · u is also denoted as αu. The field K is often called the field of scalars. From (V0), a vector space always contains the null vector 0, and thus is nonempty. From (V1), we get α · 0 = 0, and α · (−v) = −(α · v). From (V2), we get 0 · v = 0, and (−α) · v = −(α · v).

The field K itself can be viewed as a vector space over itself, addition of vectors being addition in the field, and multiplication by a scalar being multiplication in the field. The symbol + is overloaded, since it denotes both addition in the field K and addition of vectors in E. However, it will always be clear from the type of the arguments which + is intended.

For every n ≥ 1, let Rn be the set of n-tuples x = (x1, . . . , xn). Addition can be extended to Rn as follows:

(x1, . . . , xn) + (y1, . . . , yn) = (x1 + y1, . . . , xn + yn).

We can also define an operation · : R × Rn → Rn as follows:

λ · (x1, . . . , xn) = (λx1, . . . , λxn).

The resulting algebraic structure has some interesting properties, those of a vector space: Rn is a vector space over R. Next, we review the concepts of linear combination and linear independence.

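The Rn operations just defined can be spot-checked against axioms (V1)-(V4). A minimal sketch using plain integer tuples:

```python
def vadd(x, y):
    # componentwise vector addition on R^n
    return tuple(a + b for a, b in zip(x, y))

def smul(l, x):
    # scalar multiplication on R^n
    return tuple(l * a for a in x)

u, v = (1, 2, 3), (4, -1, 0)
a, b = 2, -3

assert smul(a, vadd(u, v)) == vadd(smul(a, u), smul(a, v))   # (V1)
assert smul(a + b, u) == vadd(smul(a, u), smul(b, u))        # (V2)
assert smul(a * b, u) == smul(a, smul(b, u))                 # (V3)
assert smul(1, u) == u                                       # (V4)
```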
Definition A.1.2. Given a vector space E, a vector v ∈ E is a linear combination of a family (ui)i∈I of elements of E iff there is a family (λi)i∈I of scalars in K such that

v = Σ_{i∈I} λi ui.

When I = ∅, we stipulate that v = 0. We say that a family (ui)i∈I is linearly independent iff for every family (λi)i∈I of scalars in K,

Σ_{i∈I} λi ui = 0 implies that λi = 0 for all i ∈ I.

Equivalently, a family (ui)i∈I is linearly dependent iff there is some family (λi)i∈I of scalars in K such that

Σ_{i∈I} λi ui = 0 and λj ≠ 0 for some j ∈ I.

We agree that when I = ∅, the family ∅ is linearly independent.

A family (ui)i∈I is linearly dependent iff some uj in the family can be expressed as a linear combination of the other vectors in the family. Indeed, there is some family (λi)i∈I of scalars in K such that Σ_{i∈I} λi ui = 0 and λj ≠ 0 for some j ∈ I, which implies that

uj = Σ_{i∈(I−{j})} (−λj^{-1} λi) ui.

The notion of a subspace of a vector space is defined as follows.

Definition A.1.3. Given a vector space E, a subset F of E is a linear subspace (or subspace) of E iff F is nonempty and λu + µv ∈ F for all u, v ∈ F, and all λ, µ ∈ K.

It is easy to see that a subspace F of E is closed under arbitrary linear combinations of vectors from F, and that any intersection of subspaces is a subspace. Letting λ = µ = 0, we see that every subspace contains the vector 0. The subspace {0} will be denoted as 0 (with a mild abuse of notation). Given a vector space E and any family (vi)i∈I of vectors in E, the subset V of E consisting of the null vector 0 and of all linear combinations of (vi)i∈I is easily seen to be a subspace of E. Subspaces having such a "generating family" play an important role, and motivate the following definition.

Given any ﬁnite family S = (−i )i∈I generating a vector space E and any u −) → linearly independent subfamily L = ( uj j∈J of S (where J ⊆ I). We prove the above result in the case where a vector space is generated by a ﬁnite family. B is a basis of E such that L ⊆ B ⊆ S. → → by adding v ui i∈I / → → → − Proof. → Lemma A. then the family (− ) ∪ (− ) obtained → → → E.5. v u → → then µ has an inverse (because K is a ﬁeld). a family (−i )i∈I of v − ∈ V spans V or generates V iﬀ for every − ∈ V . say B = (−h )h∈H . → → ing that v ui i∈I − → → → But then. . Consider the set of linearly independent families B such that L ⊆ B ⊆ S. Assume that µ− + i∈I λi −i = 0 . The next theorem also holds in general. It is a standard result of linear algebra that every vector space E has a basis. the family B = ( uh h∈H∪{p} is linearly independent. there is some family (λ ) of → → vectors vi v i i∈I scalars in K such that − = → → λi −i . but the proof is more sophisticated for vector spaces that do not have a ﬁnite set of generators (it uses Zorn’s lemma). and that → → for any two bases (−i )i∈I and (−j )j∈J . LINEAR ALGEBRA → Deﬁnition A. and thus we have − = − i∈I (µ−1 λi )−i . Indeed. We claim that u → B generates E.1. We begin with a crucial lemma. v v − ) . Thus. if B does not generate E.5. we say → → or generated by ( vi i∈I vi i∈I − ) that spans V and is linearly independent is → that V is ﬁnitely generated . there is a basis B of E such that L ⊆ B ⊆ S. showv u − is a linear combination of (− ) and contradicting the hypothesis. Thus. not only for ﬁnitely generated vector spaces. If µ = 0. A family ( ui i∈I called a basis of V . → Theorem A.1. and since the family (− ) is linearly independent. it has some maximal element. Thus. and the integer n is called the dimension of the vector space E. then there is some −p ∈ S that is not u a linear combination of vectors in B (since S generates E). Proof. and since L ⊆ B ⊂ B ′ ⊆ S. 
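For finite families in Rn, linear independence can be decided mechanically: the family is independent iff the matrix having the vectors as rows has full row rank. A sketch using exact rational Gaussian elimination; this decision procedure is our illustration, not part of the text:

```python
from fractions import Fraction

def rank(rows):
    A = [[Fraction(x) for x in r] for r in rows]
    rk, c = 0, 0
    while A and rk < len(A) and c < len(A[0]):
        p = next((r for r in range(rk, len(A)) if A[r][c] != 0), None)
        if p is None:
            c += 1
            continue
        A[rk], A[p] = A[p], A[rk]
        for r in range(rk + 1, len(A)):
            f = A[r][c] / A[rk][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[rk])]
        rk += 1
        c += 1
    return rk

def independent(vectors):
    return rank(vectors) == len(vectors)

assert independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
assert not independent([(1, 2, 3), (2, 4, 6)])          # second = 2 * first
assert independent([(1, 1, 0), (0, 1, 1)])
```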
The proof of theorem A.1.6 uses the following crucial lemma.

Lemma A.1.5. Given a linearly independent family (ui)i∈I of elements of a vector space E, if v ∈ E is not a linear combination of (ui)i∈I, then the family (ui)i∈I ∪k (v) obtained by adding v to the family (ui)i∈I is linearly independent (where k ∉ I).

Proof. Assume that µv + Σ_{i∈I} λi ui = 0, for any family (λi)i∈I of scalars in K. If µ ≠ 0, then µ has an inverse (because K is a field), and thus we have v = −Σ_{i∈I} (µ^{-1} λi) ui, showing that v is a linear combination of (ui)i∈I and contradicting the hypothesis. Thus, µ = 0, and then we have Σ_{i∈I} λi ui = 0; since the family (ui)i∈I is linearly independent, we have λi = 0 for all i ∈ I. Note that the proof of this lemma holds for arbitrary vector spaces, not only for finitely generated vector spaces.

Proof of theorem A.1.6. Consider the set of linearly independent families B such that L ⊆ B ⊆ S. Since this set is nonempty and finite, it has some maximal element, say B = (uh)h∈H. We claim that B generates E. Indeed, if B does not generate E, then there is some up ∈ S that is not a linear combination of vectors in B (since S generates E), with p ∉ H. Then, by lemma A.1.5, the family B′ = (uh)h∈H∪{p} is linearly independent, and since L ⊆ B ⊂ B′ ⊆ S, this contradicts the maximality of B. Thus, B is a basis of E such that L ⊆ B ⊆ S.

The following replacement lemma shows the relationship between finite linearly independent families and finite families of generators of a vector space.

Lemma A.1.8. Given a vector space $E$, let $(\vec{u}_i)_{i \in I}$ be any finite linearly independent family in $E$, where $|I| = m$, and let $(\vec{v}_j)_{j \in J}$ be any finite family such that every $\vec{u}_i$ is a linear combination of $(\vec{v}_j)_{j \in J}$, where $|J| = n$. Then, there exists a set $L$ and an injection $\rho \colon L \to J$ such that $L \cap I = \emptyset$, $|L| = n - m$, and the families $(\vec{u}_i)_{i \in I} \cup (\vec{v}_{\rho(l)})_{l \in L}$ and $(\vec{v}_j)_{j \in J}$ generate the same subspace of $E$. In particular, $m \leq n$.

Proof. We proceed by induction on $|I| = m$. When $m = 0$, the family $(\vec{u}_i)_{i \in I}$ is empty, and the lemma holds trivially with $L = J$ ($\rho$ is the identity). Assume $|I| = m + 1$. Consider the linearly independent family $(\vec{u}_i)_{i \in (I - \{p\})}$, where $p$ is any member of $I$. By the induction hypothesis, there exists a set $L$ and an injection $\rho \colon L \to J$ such that $L \cap (I - \{p\}) = \emptyset$, $|L| = n - m$, and the families $(\vec{u}_i)_{i \in (I - \{p\})} \cup (\vec{v}_{\rho(l)})_{l \in L}$ and $(\vec{v}_j)_{j \in J}$ generate the same subspace of $E$. If $p \in L$, we can replace $L$ by $(L - \{p\}) \cup \{p'\}$, where $p'$ does not belong to $I \cup L$, and replace $\rho$ by the injection $\rho'$ which agrees with $\rho$ on $L - \{p\}$ and such that $\rho'(p') = \rho(p)$. Thus, we can always assume that $L \cap I = \emptyset$. Since $\vec{u}_p$ is a linear combination of $(\vec{v}_j)_{j \in J}$, and the families $(\vec{u}_i)_{i \in (I - \{p\})} \cup (\vec{v}_{\rho(l)})_{l \in L}$ and $(\vec{v}_j)_{j \in J}$ generate the same subspace of $E$, $\vec{u}_p$ is a linear combination of $(\vec{u}_i)_{i \in (I - \{p\})} \cup (\vec{v}_{\rho(l)})_{l \in L}$. Let
$$\vec{u}_p = \sum_{i \in (I - \{p\})} \lambda_i \vec{u}_i + \sum_{l \in L} \lambda_l \vec{v}_{\rho(l)}. \tag{1}$$
If $\lambda_l = 0$ for all $l \in L$, we have
$$\sum_{i \in (I - \{p\})} \lambda_i \vec{u}_i - \vec{u}_p = \vec{0},$$
contradicting the fact that $(\vec{u}_i)_{i \in I}$ is linearly independent.
Thus, $\lambda_l \neq 0$ for some $l \in L$, say $l = q$. Since $\lambda_q \neq 0$, by (1) we have
$$\vec{v}_{\rho(q)} = \sum_{i \in (I - \{p\})} (-\lambda_q^{-1} \lambda_i) \vec{u}_i + \lambda_q^{-1} \vec{u}_p + \sum_{l \in (L - \{q\})} (-\lambda_q^{-1} \lambda_l) \vec{v}_{\rho(l)}. \tag{2}$$
We claim that the families $(\vec{u}_i)_{i \in (I - \{p\})} \cup (\vec{v}_{\rho(l)})_{l \in L}$ and $(\vec{u}_i)_{i \in I} \cup (\vec{v}_{\rho(l)})_{l \in (L - \{q\})}$ generate the same subspace of $E$. Indeed, the second family is obtained from the first by replacing $\vec{v}_{\rho(q)}$ by $\vec{u}_p$, and by (1), $\vec{u}_p$ is a linear combination of $(\vec{u}_i)_{i \in (I - \{p\})} \cup (\vec{v}_{\rho(l)})_{l \in L}$, while by (2), $\vec{v}_{\rho(q)}$ is a linear combination of $(\vec{u}_i)_{i \in I} \cup (\vec{v}_{\rho(l)})_{l \in (L - \{q\})}$, and vice versa. Thus, the families $(\vec{u}_i)_{i \in I} \cup (\vec{v}_{\rho(l)})_{l \in (L - \{q\})}$ and $(\vec{v}_j)_{j \in J}$ generate the same subspace of $E$, and the lemma holds for $L - \{q\}$ and the restriction of the injection $\rho \colon L \to J$ to $L - \{q\}$, since $L \cap I = \emptyset$ and $|L| = n - m$ imply that $(L - \{q\}) \cap I = \emptyset$ and $|L - \{q\}| = n - (m + 1)$.

Putting theorem A.1.6 and lemma A.1.8 together, we obtain the following fundamental theorem.

Theorem A.1.9. Let $E$ be a finitely generated vector space. Any family $(\vec{u}_i)_{i \in I}$ generating $E$ contains a subfamily $(\vec{u}_j)_{j \in J}$ which is a basis of $E$. Furthermore, for every two bases $(\vec{u}_i)_{i \in I}$ and $(\vec{v}_j)_{j \in J}$ of $E$, we have $|I| = |J| = n$ for some fixed integer $n \geq 0$.

Proof. The first part follows immediately by applying theorem A.1.6 with $L = \emptyset$ and $S = (\vec{u}_i)_{i \in I}$. For the second part, assume that $(\vec{u}_i)_{i \in I}$ and $(\vec{v}_j)_{j \in J}$ are bases of $E$. Since $(\vec{u}_i)_{i \in I}$ is linearly independent and $(\vec{v}_j)_{j \in J}$ spans $E$, lemma A.1.8 implies that $|I| \leq |J|$. A symmetric argument yields $|J| \leq |I|$.

Remark: Theorem A.1.9 also holds for vector spaces that are not finitely generated.
When $|I|$ is infinite, this can be shown as follows. Let $(\vec{u}_i)_{i \in I}$ be a basis of $E$, let $(\vec{v}_j)_{j \in J}$ be a generating family of $E$, and assume that $I$ is infinite. For every $j \in J$, let $L_j \subseteq I$ be the finite set
$$L_j = \{\, i \in I \mid \vec{v}_j = \sum_{i \in I} v_i \vec{u}_i,\; v_i \neq 0 \,\}.$$
Let $L = \bigcup_{j \in J} L_j$. Since $(\vec{u}_i)_{i \in I}$ is a basis of $E$, we must have $I = L$, since otherwise $(\vec{u}_i)_{i \in L}$ would be another basis of $E$, and this would contradict the fact that $(\vec{u}_i)_{i \in I}$ is linearly independent. Furthermore, $J$ must be infinite, since otherwise, because the $L_j$ are finite, $I$ would be finite. But then, since $I = \bigcup_{j \in J} L_j$ with $J$ infinite and the $L_j$ finite, by a standard result of set theory, $|I| \leq |J|$. If $(\vec{v}_j)_{j \in J}$ is also a basis, then by a symmetric argument, we obtain $|J| \leq |I|$, and thus, $|I| = |J|$ for any two bases $(\vec{u}_i)_{i \in I}$ and $(\vec{v}_j)_{j \in J}$ of $E$. The dimension $|I|$ is a cardinal number which depends only on the vector space $E$. If $E$ has no finite basis, we say that $E$ is of infinite dimension. The dimension of a vector space $E$ is denoted as $\dim(E)$.

Let $(\vec{u}_i)_{i \in I}$ be a basis of a vector space $E$. For any vector $\vec{v} \in E$, since the family $(\vec{u}_i)_{i \in I}$ generates $E$, there is a family $(\lambda_i)_{i \in I}$ of scalars in $K$ such that
$$\vec{v} = \sum_{i \in I} \lambda_i \vec{u}_i.$$
A very important fact is that the family $(\lambda_i)_{i \in I}$ is unique.

Lemma A.1.10. Given a vector space $E$, let $(\vec{u}_i)_{i \in I}$ be a family of vectors in $E$. Let $\vec{v} \in E$, and assume that $\vec{v} = \sum_{i \in I} \lambda_i \vec{u}_i$. Then, the family $(\lambda_i)_{i \in I}$ of scalars such that $\vec{v} = \sum_{i \in I} \lambda_i \vec{u}_i$ is unique iff $(\vec{u}_i)_{i \in I}$ is linearly independent.

Proof. First, assume that $(\vec{u}_i)_{i \in I}$ is linearly independent. If $(\mu_i)_{i \in I}$ is another family of scalars in $K$ such that $\vec{v} = \sum_{i \in I} \mu_i \vec{u}_i$, then we have
$$\sum_{i \in I} (\lambda_i - \mu_i) \vec{u}_i = \vec{0},$$
and since $(\vec{u}_i)_{i \in I}$ is linearly independent, we must have $\lambda_i - \mu_i = 0$ for all $i \in I$, that is, $\lambda_i = \mu_i$ for all $i \in I$. The converse is shown by contradiction. If $(\vec{u}_i)_{i \in I}$ was linearly dependent, there would be a family $(\mu_i)_{i \in I}$ of scalars not all null such that
$$\sum_{i \in I} \mu_i \vec{u}_i = \vec{0}$$
and $\mu_j \neq 0$ for some $j \in I$. But then,
$$\vec{v} = \sum_{i \in I} \lambda_i \vec{u}_i + \vec{0} = \sum_{i \in I} \lambda_i \vec{u}_i + \sum_{i \in I} \mu_i \vec{u}_i = \sum_{i \in I} (\lambda_i + \mu_i) \vec{u}_i,$$
with $\lambda_j \neq \lambda_j + \mu_j$ since $\mu_j \neq 0$, contradicting the assumption that $(\lambda_i)_{i \in I}$ is the unique family such that $\vec{v} = \sum_{i \in I} \lambda_i \vec{u}_i$.

If $(\vec{u}_i)_{i \in I}$ is a basis of a vector space $E$, for any vector $\vec{v} \in E$, if $(v_i)_{i \in I}$ is the unique family of scalars in $K$ such that
$$\vec{v} = \sum_{i \in I} v_i \vec{u}_i,$$
each $v_i$ is called the component (or coordinate) of index $i$ of $\vec{v}$ with respect to the basis $(\vec{u}_i)_{i \in I}$.

Clearly, if the field $K$ itself is viewed as a vector space, then every family $(a)$, where $a \in K$ and $a \neq 0$, is a basis. Thus, $\dim(K) = 1$. Given a field $K$ and any (nonempty) set $I$, we can form a vector space $K^{(I)}$ which, in some sense, is the standard vector space of dimension $|I|$.
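By lemma A.1.10, the coordinates of a vector with respect to a basis are unique, and computing them amounts to solving one linear system. A minimal sketch over $K = \mathbb{Q}$ follows; the function name is ours, and it assumes its first argument really is a basis (otherwise the system may be inconsistent or underdetermined).

```python
from fractions import Fraction

def coordinates(basis, v):
    """Solve sum_i lam[i] * basis[i] = v by Gauss-Jordan elimination.
    The basis vectors are the columns of the augmented system."""
    m, n = len(v), len(basis)
    A = [[Fraction(basis[j][i]) for j in range(n)] + [Fraction(v[i])]
         for i in range(m)]
    row, pivots = 0, []
    for col in range(n):
        piv = next((r for r in range(row, m) if A[r][col] != 0), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        A[row] = [x / A[row][col] for x in A[row]]  # normalize pivot row
        for r in range(m):
            if r != row and A[r][col] != 0:  # clear the rest of the column
                c = A[r][col]
                A[r] = [x - c * y for x, y in zip(A[r], A[row])]
        pivots.append(col)
        row += 1
    lam = [Fraction(0)] * n
    for r, col in enumerate(pivots):
        lam[col] = A[r][n]
    return lam
```

For example, with the basis $\vec{u}_1 = (1,0)$, $\vec{u}_2 = (1,1)$ of $\mathbb{Q}^2$, `coordinates([[1, 0], [1, 1]], [3, 2])` returns `[1, 2]`, since $(3,2) = 1\cdot(1,0) + 2\cdot(1,1)$, and by the lemma no other pair of scalars works.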

Definition A.1.11. Given a field $K$ and any (nonempty) set $I$, let $K^{(I)}$ be the subset of the cartesian product $K^I$ consisting of families $(\lambda_i)_{i \in I}$ of scalars in $K$ with finite support.² We define addition and multiplication by a scalar as follows:
$$(\lambda_i)_{i \in I} + (\mu_i)_{i \in I} = (\lambda_i + \mu_i)_{i \in I},$$
and
$$\lambda \cdot (\mu_i)_{i \in I} = (\lambda \mu_i)_{i \in I}.$$

It is immediately verified that addition and multiplication by a scalar are well defined. Thus, $K^{(I)}$ is a vector space. Furthermore, because families with finite support are considered, the family $(\vec{e}_i)_{i \in I}$ of vectors $\vec{e}_i$ defined such that $(e_i)_j = 0$ if $j \neq i$ and $(e_i)_i = 1$, is clearly a basis of the vector space $K^{(I)}$. Thus, $\dim(K^{(I)}) = |I|$. When $I = \{1, \ldots, n\}$, we denote $K^{(I)}$ as $K^n$. When $I$ is a finite set, $K^{(I)} = K^I$, but this is false when $I$ is infinite; in fact, $\dim(K^I)$ is strictly greater when $I$ is infinite.

² Where $K^I$ denotes the set of all functions from $I$ to $K$.

A.2 Linear Maps

A function between two vector spaces that preserves the vector space structure is called a homomorphism of vector spaces, or linear map. Linear maps formalize the concept of linearity of a function.

Definition A.2.1. Given two vector spaces $E$ and $F$, a linear map between $E$ and $F$ is a function $f \colon E \to F$ satisfying the following two conditions:
$$f(\vec{x} + \vec{y}) = f(\vec{x}) + f(\vec{y}) \quad \text{for all } \vec{x}, \vec{y} \in E;$$
$$f(\lambda \vec{x}) = \lambda f(\vec{x}) \quad \text{for all } \lambda \in K \text{ and all } \vec{x} \in E.$$

Setting $\vec{x} = \vec{y} = \vec{0}$ in the first identity, we get $f(\vec{0}) = \vec{0}$. The basic property of linear maps is that they transform linear combinations into linear combinations. Given a family $(\vec{u}_i)_{i \in I}$ of vectors in $E$, given any family $(\lambda_i)_{i \in I}$ of scalars in $K$, we have
$$f\Big(\sum_{i \in I} \lambda_i \vec{u}_i\Big) = \sum_{i \in I} \lambda_i f(\vec{u}_i).$$
The above identity is shown by induction on the size of the support of the family $(\lambda_i \vec{u}_i)_{i \in I}$, using the properties of definition A.2.1.

Given a linear map $f \colon E \to F$, we define its image (or range) $\mathrm{Im}\, f = f(E)$, as the set
$$\mathrm{Im}\, f = \{\, \vec{y} \in F \mid f(\vec{x}) = \vec{y}, \text{ for some } \vec{x} \in E \,\},$$
and its kernel (or nullspace) $\mathrm{Ker}\, f = f^{-1}(\vec{0})$, as the set
$$\mathrm{Ker}\, f = \{\, \vec{x} \in E \mid f(\vec{x}) = \vec{0} \,\}.$$

Lemma A.2.2. Given a linear map $f \colon E \to F$, the set $\mathrm{Im}\, f$ is a subspace of $F$ and the set $\mathrm{Ker}\, f$ is a subspace of $E$. The linear map $f \colon E \to F$ is injective iff $\mathrm{Ker}\, f = 0$ (where $0$ is the trivial subspace $\{\vec{0}\}$).

Proof. Given any $\vec{x}, \vec{y} \in \mathrm{Im}\, f$, there are some $\vec{u}, \vec{v} \in E$ such that $\vec{x} = f(\vec{u})$ and $\vec{y} = f(\vec{v})$, and for all $\lambda, \mu \in K$, we have
$$f(\lambda \vec{u} + \mu \vec{v}) = \lambda f(\vec{u}) + \mu f(\vec{v}) = \lambda \vec{x} + \mu \vec{y},$$
and thus, $\lambda \vec{x} + \mu \vec{y} \in \mathrm{Im}\, f$, showing that $\mathrm{Im}\, f$ is a subspace of $F$. Given any $\vec{x}, \vec{y} \in \mathrm{Ker}\, f$, we have $f(\vec{x}) = \vec{0}$ and $f(\vec{y}) = \vec{0}$, and thus,
$$f(\lambda \vec{x} + \mu \vec{y}) = \lambda f(\vec{x}) + \mu f(\vec{y}) = \vec{0},$$
that is, $\lambda \vec{x} + \mu \vec{y} \in \mathrm{Ker}\, f$, showing that $\mathrm{Ker}\, f$ is a subspace of $E$. Note that $f(\vec{x}) = f(\vec{y})$ iff $f(\vec{x} - \vec{y}) = \vec{0}$. Thus, $f$ is injective iff $\mathrm{Ker}\, f = 0$.

A fundamental property of bases in a vector space is that they allow the definition of linear maps as unique homomorphic extensions, as shown in the following lemma.

Lemma A.2.3. Given any two vector spaces $E$ and $F$, given any basis $(\vec{u}_i)_{i \in I}$ of $E$, given any other family of vectors $(\vec{v}_i)_{i \in I}$ in $F$, there is a unique linear map $f \colon E \to F$ such that $f(\vec{u}_i) = \vec{v}_i$ for all $i \in I$. Furthermore, $f$ is injective iff $(\vec{v}_i)_{i \in I}$ is linearly independent, and $f$ is surjective iff $(\vec{v}_i)_{i \in I}$ generates $F$.

Proof. If such a linear map $f \colon E \to F$ exists, since $(\vec{u}_i)_{i \in I}$ is a basis of $E$, every vector $\vec{x} \in E$ can be written uniquely as a linear combination
$$\vec{x} = \sum_{i \in I} x_i \vec{u}_i,$$
and by linearity, we must have
$$f(\vec{x}) = \sum_{i \in I} x_i f(\vec{u}_i) = \sum_{i \in I} x_i \vec{v}_i.$$

This shows that $f$ is unique if it exists. Conversely, define the function $f \colon E \to F$ by letting
$$f(\vec{x}) = \sum_{i \in I} x_i \vec{v}_i$$
for every $\vec{x} = \sum_{i \in I} x_i \vec{u}_i$. It is easy to verify that $f$ is indeed linear, it is unique by the previous reasoning, and obviously, $f(\vec{u}_i) = \vec{v}_i$.

Now, assume that $f$ is injective. Let $(\lambda_i)_{i \in I}$ be any family of scalars, and assume that $\sum_{i \in I} \lambda_i \vec{v}_i = \vec{0}$. Since $\vec{v}_i = f(\vec{u}_i)$ for every $i \in I$, we have
$$f\Big(\sum_{i \in I} \lambda_i \vec{u}_i\Big) = \sum_{i \in I} \lambda_i f(\vec{u}_i) = \sum_{i \in I} \lambda_i \vec{v}_i = \vec{0}.$$
Since $f$ is injective iff $\mathrm{Ker}\, f = 0$, we have $\sum_{i \in I} \lambda_i \vec{u}_i = \vec{0}$, and since $(\vec{u}_i)_{i \in I}$ is a basis, we have $\lambda_i = 0$ for all $i \in I$, and $(\vec{v}_i)_{i \in I}$ is linearly independent. Conversely, assume that $(\vec{v}_i)_{i \in I}$ is linearly independent. If
$$f\Big(\sum_{i \in I} \lambda_i \vec{u}_i\Big) = \vec{0},$$
then
$$\sum_{i \in I} \lambda_i \vec{v}_i = \sum_{i \in I} \lambda_i f(\vec{u}_i) = f\Big(\sum_{i \in I} \lambda_i \vec{u}_i\Big) = \vec{0},$$
and $\lambda_i = 0$ for all $i \in I$, since $(\vec{v}_i)_{i \in I}$ is linearly independent. Since $(\vec{u}_i)_{i \in I}$ is a basis of $E$, we just showed that $\mathrm{Ker}\, f = 0$, and $f$ is injective. The part where $f$ is surjective is left as a simple exercise.

By the second part of lemma A.2.3, an injective linear map $f \colon E \to F$ sends a basis $(\vec{u}_i)_{i \in I}$ to a linearly independent family $(f(\vec{u}_i))_{i \in I}$ of $F$, which is also a basis when $f$ is bijective. Also, when $E$ and $F$ have the same finite dimension $n$, $(\vec{u}_i)_{i \in I}$ is a basis of $E$, and $f \colon E \to F$ is injective, then $(f(\vec{u}_i))_{i \in I}$ is a basis of $F$ (by lemma A.1.7).

We can now show that the vector space $K^{(I)}$ of definition A.1.11 has a universal property which amounts to saying that $K^{(I)}$ is the vector space freely generated by $I$. Recall that $\iota \colon I \to K^{(I)}$, such that $\iota(i) = \vec{e}_i$ for every $i \in I$, is an injection from $I$ to $K^{(I)}$.
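Lemma A.2.3 says that a linear map is freely determined by the images of basis vectors. For $E = K^n$ with its standard basis $(\vec{e}_i)$, the unique extension is completely explicit, as in this small sketch (the function name is ours, not part of the text):

```python
def linear_extension(images):
    """The unique linear map f with f(e_i) = images[i] on the standard
    basis (e_i) of K^n: by lemma A.2.3, f(x) = sum_i x_i * images[i]."""
    def f(x):
        out = [0] * len(images[0])
        for x_i, v_i in zip(x, images):
            out = [o + x_i * w for o, w in zip(out, v_i)]
        return out
    return f
```

For example, `f = linear_extension([[1, 0], [0, 2]])` gives `f([3, 4]) == [3, 8]`; the columns of the matrix of $f$ are exactly the prescribed images, so no other linear map agrees with it on the basis.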

Lemma A.2.4. Given any set $I$, for any vector space $F$, and for any function $f \colon I \to F$, there is a unique linear map $\overline{f} \colon K^{(I)} \to F$ such that $f = \overline{f} \circ \iota$, as in the diagram
$$I \xrightarrow{\;\iota\;} K^{(I)} \xrightarrow{\;\overline{f}\;} F, \qquad f = \overline{f} \circ \iota.$$

Proof. If such a linear map $\overline{f} \colon K^{(I)} \to F$ exists, since $f = \overline{f} \circ \iota$, we must have
$$f(i) = \overline{f}(\iota(i)) = \overline{f}(\vec{e}_i)$$
for every $i \in I$. However, the family $(\vec{e}_i)_{i \in I}$ is a basis of $K^{(I)}$, and $(f(i))_{i \in I}$ is a family of vectors in $F$, and by lemma A.2.3, there is a unique linear map $\overline{f} \colon K^{(I)} \to F$ such that $\overline{f}(\vec{e}_i) = f(i)$ for every $i \in I$, which proves the existence and uniqueness of a linear map $\overline{f}$ such that $f = \overline{f} \circ \iota$.

The following simple lemma will be needed later when we study spline curves.

Lemma A.2.5. Given any two vector spaces $E$ and $F$, with $F$ nontrivial, given any family $(\vec{u}_i)_{i \in I}$ of vectors in $E$, the following properties hold:
(1) The family $(\vec{u}_i)_{i \in I}$ generates $E$ iff for every family of vectors $(\vec{v}_i)_{i \in I}$ in $F$, there is at most one linear map $f \colon E \to F$ such that $f(\vec{u}_i) = \vec{v}_i$ for all $i \in I$.
(2) The family $(\vec{u}_i)_{i \in I}$ is linearly independent iff for every family of vectors $(\vec{v}_i)_{i \in I}$ in $F$, there is some linear map $f \colon E \to F$ such that $f(\vec{u}_i) = \vec{v}_i$ for all $i \in I$.

Proof. (1) If there is any linear map $f \colon E \to F$ such that $f(\vec{u}_i) = \vec{v}_i$ for all $i \in I$, since $(\vec{u}_i)_{i \in I}$ generates $E$, every vector $\vec{x} \in E$ can be written as some linear combination
$$\vec{x} = \sum_{i \in I} x_i \vec{u}_i,$$
and by linearity, we must have
$$f(\vec{x}) = \sum_{i \in I} x_i f(\vec{u}_i) = \sum_{i \in I} x_i \vec{v}_i.$$
This shows that $f$ is unique if it exists. Conversely, assume that $(\vec{u}_i)_{i \in I}$ does not generate $E$. Since $F$ is nontrivial, there is some vector $\vec{y} \in F$ such that $\vec{y} \neq \vec{0}$. Since $(\vec{u}_i)_{i \in I}$ does not generate $E$, there is some vector $\vec{w} \in E$ which is not in the subspace generated by $(\vec{u}_i)_{i \in I}$.

By lemma A.1.6, there is a linearly independent subfamily $(\vec{u}_i)_{i \in I_0}$ of $(\vec{u}_i)_{i \in I}$ generating the same subspace, and since $\vec{w}$ is not in that subspace, there is a basis $(\vec{e}_j)_{j \in I_0 \cup J}$ of $E$ such that $\vec{e}_i = \vec{u}_i$ for all $i \in I_0$, and $\vec{e}_{j_0} = \vec{w}$ for some $j_0 \in J$. Letting $(\vec{v}_i)_{i \in I}$ be the family in $F$ such that $\vec{v}_i = \vec{0}$ for all $i \in I$, and defining $f \colon E \to F$ to be the constant linear map with value $\vec{0}$, we have a linear map such that $f(\vec{u}_i) = \vec{0}$ for all $i \in I$. By lemma A.2.3, there is a unique linear map $g \colon E \to F$ such that $g(\vec{w}) = \vec{y}$, and $g(\vec{e}_j) = \vec{0}$ for all $j \in (I_0 \cup J) - \{j_0\}$. By definition of the basis $(\vec{e}_j)_{j \in I_0 \cup J}$ of $E$, we have $g(\vec{u}_i) = \vec{0}$ for all $i \in I$, and since $f \neq g$, this contradicts the fact that there is at most one such map.

(2) If the family $(\vec{u}_i)_{i \in I}$ is linearly independent, then the conclusion follows by extending it to a basis of $E$ and applying lemma A.2.3 (sending the remaining basis vectors to $\vec{0}$). Conversely, assume that $(\vec{u}_i)_{i \in I}$ is linearly dependent. Then, there is some family $(\lambda_i)_{i \in I}$ of scalars (not all zero) such that
$$\sum_{i \in I} \lambda_i \vec{u}_i = \vec{0}.$$
By the assumption, for every $i \in I$, there is some linear map $f_i \colon E \to F$ such that $f_i(\vec{u}_i) = \vec{y}$, and $f_i(\vec{u}_j) = \vec{0}$, for $j \in I - \{i\}$. Then, we would get
$$\vec{0} = f_i\Big(\sum_{i \in I} \lambda_i \vec{u}_i\Big) = \sum_{i \in I} \lambda_i f_i(\vec{u}_i) = \lambda_i \vec{y},$$
and since $\vec{y} \neq \vec{0}$, this implies $\lambda_i = 0$ for every $i \in I$, a contradiction. Thus, $(\vec{u}_i)_{i \in I}$ is linearly independent.
Although in this course, we will not have many occasions to use quotient spaces, they are fundamental in algebra, and they are needed to define tensor products, which are useful to provide nice conceptual proofs of certain properties of splines. The next section may be omitted until needed.

A.3 Quotient Spaces

Let $E$ be a vector space, and let $M$ be any subspace of $E$. The subspace $M$ induces a relation $\equiv_M$ on $E$, defined as follows: for all $\vec{u}, \vec{v} \in E$,
$$\vec{u} \equiv_M \vec{v} \quad \text{iff} \quad \vec{u} - \vec{v} \in M.$$
We have the following simple lemma.

Lemma A.3.1. Given any vector space $E$ and any subspace $M$ of $E$, the relation $\equiv_M$ is an equivalence relation with the following two congruential properties:
1. If $\vec{u}_1 \equiv_M \vec{v}_1$ and $\vec{u}_2 \equiv_M \vec{v}_2$, then $\vec{u}_1 + \vec{u}_2 \equiv_M \vec{v}_1 + \vec{v}_2$.
2. If $\vec{u} \equiv_M \vec{v}$, then $\lambda \vec{u} \equiv_M \lambda \vec{v}$.

Proof. It is obvious that $\equiv_M$ is an equivalence relation. Note that $\vec{u}_1 \equiv_M \vec{v}_1$ and $\vec{u}_2 \equiv_M \vec{v}_2$ are equivalent to $\vec{u}_1 - \vec{v}_1 = \vec{w}_1$ and $\vec{u}_2 - \vec{v}_2 = \vec{w}_2$, with $\vec{w}_1, \vec{w}_2 \in M$, and thus,
$$(\vec{u}_1 + \vec{u}_2) - (\vec{v}_1 + \vec{v}_2) = \vec{w}_1 + \vec{w}_2,$$
and $\vec{w}_1 + \vec{w}_2 \in M$, since $M$ is a subspace of $E$. Thus, $\vec{u}_1 + \vec{u}_2 \equiv_M \vec{v}_1 + \vec{v}_2$. If $\vec{u} - \vec{v} = \vec{w}$, with $\vec{w} \in M$, then
$$\lambda \vec{u} - \lambda \vec{v} = \lambda \vec{w},$$
and $\lambda \vec{w} \in M$, since $M$ is a subspace of $E$, and thus, $\lambda \vec{u} \equiv_M \lambda \vec{v}$.

Lemma A.3.1 shows that we can define addition and multiplication by a scalar on the set $E/M$ of equivalence classes of the equivalence relation $\equiv_M$.
Definition A.3.2. Given any vector space $E$ and any subspace $M$ of $E$, we define the following operations of addition and multiplication by a scalar on the set $E/M$ of equivalence classes of the equivalence relation $\equiv_M$: for any two equivalence classes $[\vec{u}], [\vec{v}] \in E/M$, we have
$$[\vec{u}] + [\vec{v}] = [\vec{u} + \vec{v}],$$
$$\lambda [\vec{u}] = [\lambda \vec{u}].$$

By lemma A.3.1, the above operations do not depend on the specific choice of representatives in the equivalence classes $[\vec{u}], [\vec{v}] \in E/M$. It is also immediate to verify that $E/M$ is a vector space. The vector space $E/M$ is called the quotient space of $E$ by the subspace $M$. The function $\pi \colon E \to E/M$, defined such that $\pi(\vec{u}) = [\vec{u}]$ for every $\vec{u} \in E$, is a surjective linear map called the natural projection of $E$ onto $E/M$. Given any linear map $f \colon E \to F$, we know that $\mathrm{Ker}\, f$ is a subspace of $E$, and it is immediately verified that $\mathrm{Im}\, f$ is isomorphic to the quotient space $E/\mathrm{Ker}\, f$.

A.4 Direct Sums

Before considering linear forms and hyperplanes, we define the notion of direct sum and prove some simple lemmas. There is a subtle point, which is that if we attempt to define the direct sum $E \oplus F$ of two vector spaces using the cartesian product $E \times F$, we don't quite get the right notion because elements of $E \times F$ are ordered pairs, but we want $E \oplus F = F \oplus E$. Thus, we want to think of the elements of $E \oplus F$ as multisets of two elements. It is possible to do so by considering the direct sum of a family $(E_i)_{i \in \{1,2\}}$, and more generally of a family $(E_i)_{i \in I}$. For simplicity, we begin by considering the case where $I = \{1, 2\}$.

Definition A.4.1. Given two vector spaces $E_1$ and $E_2$, we define the (external) direct sum $E_1 \oplus E_2$ of $E_1$ and $E_2$ as the set
$$E_1 \oplus E_2 = \{\, \{\langle 1, \vec{u} \rangle, \langle 2, \vec{v} \rangle\} \mid \vec{u} \in E_1,\ \vec{v} \in E_2 \,\},$$
with addition
$$\{\langle 1, \vec{u}_1 \rangle, \langle 2, \vec{v}_1 \rangle\} + \{\langle 1, \vec{u}_2 \rangle, \langle 2, \vec{v}_2 \rangle\} = \{\langle 1, \vec{u}_1 + \vec{u}_2 \rangle, \langle 2, \vec{v}_1 + \vec{v}_2 \rangle\},$$
and scalar multiplication
$$\lambda \{\langle 1, \vec{u} \rangle, \langle 2, \vec{v} \rangle\} = \{\langle 1, \lambda \vec{u} \rangle, \langle 2, \lambda \vec{v} \rangle\}.$$
We define the injections $in_1 \colon E_1 \to E_1 \oplus E_2$ and $in_2 \colon E_2 \to E_1 \oplus E_2$ as the linear maps defined such that
$$in_1(\vec{u}) = \{\langle 1, \vec{u} \rangle, \langle 2, \vec{0} \rangle\},$$
and
$$in_2(\vec{v}) = \{\langle 1, \vec{0} \rangle, \langle 2, \vec{v} \rangle\}.$$

Note that
$$E_2 \oplus E_1 = \{\, \{\langle 2, \vec{v} \rangle, \langle 1, \vec{u} \rangle\} \mid \vec{v} \in E_2,\ \vec{u} \in E_1 \,\} = E_1 \oplus E_2.$$
Thus, every member $\{\langle 1, \vec{u} \rangle, \langle 2, \vec{v} \rangle\}$ of $E_1 \oplus E_2$ can be viewed as an unordered pair consisting of the two vectors $\vec{u}$ and $\vec{v}$, that is, as a multiset consisting of two elements. This is not to be confused with the cartesian product $E_1 \times E_2$. The vector space $E_1 \times E_2$ is the set of all ordered pairs $\langle \vec{u}, \vec{v} \rangle$, where $\vec{u} \in E_1$ and $\vec{v} \in E_2$, with addition and multiplication by a scalar defined such that
$$\langle \vec{u}_1, \vec{v}_1 \rangle + \langle \vec{u}_2, \vec{v}_2 \rangle = \langle \vec{u}_1 + \vec{u}_2, \vec{v}_1 + \vec{v}_2 \rangle,$$
$$\lambda \langle \vec{u}, \vec{v} \rangle = \langle \lambda \vec{u}, \lambda \vec{v} \rangle.$$
Of course, we can define $E_1 \times \cdots \times E_n$ for any number of vector spaces, and when $E_1 = \cdots = E_n$, we denote this product as $E^n$. In fact, $E_1 \oplus E_2$ is just the product $\prod_{i \in \{1,2\}} E_i$ of the family $(E_i)_{i \in \{1,2\}}$. There is a bijection between $\prod_{i \in \{1,2\}} E_i$ and $E_1 \times E_2$, but as we just saw, elements of $\prod_{i \in \{1,2\}} E_i$ are unordered pairs!

The following property holds.

Lemma A.4.2. Given two vector spaces $E_1$ and $E_2$, $E_1 \oplus E_2$ is a vector space. For every pair of linear maps $f \colon E_1 \to G$ and $g \colon E_2 \to G$, there is a unique linear map $f + g \colon E_1 \oplus E_2 \to G$ such that $(f + g) \circ in_1 = f$ and $(f + g) \circ in_2 = g$.

Proof. Define
$$(f + g)(\{\langle 1, \vec{u} \rangle, \langle 2, \vec{v} \rangle\}) = f(\vec{u}) + g(\vec{v}),$$
for every $\vec{u} \in E_1$ and $\vec{v} \in E_2$. It is immediately verified that $f + g$ is the unique linear map with the required properties.

Lemma A.4.3. Given two vector spaces $E_1$ and $E_2$, if we define the projections $\pi_1 \colon E_1 \oplus E_2 \to E_1$ and $\pi_2 \colon E_1 \oplus E_2 \to E_2$ such that
$$\pi_1(\{\langle 1, \vec{u} \rangle, \langle 2, \vec{v} \rangle\}) = \vec{u} \quad \text{and} \quad \pi_2(\{\langle 1, \vec{u} \rangle, \langle 2, \vec{v} \rangle\}) = \vec{v},$$
then for every pair of linear maps $f \colon D \to E_1$ and $g \colon D \to E_2$, there is a unique linear map $f \times g \colon D \to E_1 \oplus E_2$ such that $\pi_1 \circ (f \times g) = f$ and $\pi_2 \circ (f \times g) = g$.

Proof. Define
$$(f \times g)(\vec{w}) = \{\langle 1, f(\vec{w}) \rangle, \langle 2, g(\vec{w}) \rangle\},$$
for every $\vec{w} \in D$. It is immediately verified that $f \times g$ is the unique linear map with the required properties.

It is a peculiarity of linear algebra that sums and products of finite families are isomorphic. However, this is no longer true for products and sums of infinite families. It is also convenient to define the sum $U + V$ of two subspaces $U$ and $V$.

Definition A.4.4. Let $E$ be a vector space. Given any two subspaces $U$ and $V$ of $E$, we define the sum $U + V$ of $U$ and $V$ as the set
$$U + V = \{\, \vec{w} \in E \mid \vec{w} = \vec{u} + \vec{v}, \text{ for some } \vec{u} \in U \text{ and some } \vec{v} \in V \,\}.$$
It is immediately verified that $U + V$ is the least subspace of $E$ containing $U$ and $V$. Note that by definition, $U + V = V + U$, and $U \oplus V = V \oplus U$. We say that $E$ is the (internal) direct sum of $U$ and $V$, denoted as $E = U \oplus V$,³ iff for every $\vec{x} \in E$, there exist unique vectors $\vec{u} \in U$ and $\vec{v} \in V$ such that $\vec{x} = \vec{u} + \vec{v}$. When $E$ is the internal direct sum of $U$ and $V$, letting $i_1 \colon U \to E$ and $i_2 \colon V \to E$ be the inclusion maps, $U \oplus V$ is isomorphic to $E$ under the map $i_1 + i_2$ given by lemma A.4.2. The following two simple lemmas hold.

Lemma A.4.5. Let $E$ be a vector space, and let $U, V$ be any subspaces of $E$. The following properties are equivalent:
(1) $E = U \oplus V$;
(2) $E = U + V$ and $U \cap V = 0$.

Proof. First, assume that $E$ is the direct sum of $U$ and $V$. If $\vec{x} \in U \cap V$ and $\vec{x} \neq \vec{0}$, since $\vec{x}$ can be written both as $\vec{x} + \vec{0}$ and $\vec{0} + \vec{x}$, we have a contradiction. Thus, $U \cap V = 0$.

³ With a slight abuse of notation, since $E$ and $U \oplus V$ are only isomorphic.
Conversely, assume that $E = U + V$ and $U \cap V = 0$, and that $\vec{x} = \vec{u} + \vec{v}$ and $\vec{x} = \vec{u}' + \vec{v}'$, where $\vec{u}, \vec{u}' \in U$ and $\vec{v}, \vec{v}' \in V$. Then,
$$\vec{v}' - \vec{v} = \vec{u} - \vec{u}',$$
where $\vec{v}' - \vec{v} \in V$ and $\vec{u} - \vec{u}' \in U$. Since $U \cap V = 0$, we must have $\vec{u}' = \vec{u}$ and $\vec{v}' = \vec{v}$, and thus $E = U \oplus V$.

Lemma A.4.6. Let $E$ be a vector space with $E = U \oplus V$. Let $(\vec{u}_i)_{i \in I}$ be a basis of $U$ and let $(\vec{v}_j)_{j \in J}$ be a basis of $V$, where $I$ and $J$ are disjoint. Then, clearly, $(\vec{u}_i)_{i \in I} \cup (\vec{v}_j)_{j \in J}$ is a basis of $E$, and thus, $\dim(E) = \dim(U) + \dim(V)$.
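Lemma A.4.5 characterizes a direct sum by the unique decomposition $\vec{x} = \vec{u} + \vec{v}$. In $K^2$, with $U = K\vec{u}_0$ and $V = K\vec{v}_0$, this decomposition can be computed by Cramer's rule, as in the following sketch (a hypothetical helper, not part of the text; it assumes the two directions are linearly independent, i.e., $K^2 = U \oplus V$):

```python
from fractions import Fraction

def decompose2(u_dir, v_dir, x):
    """In K^2 = K u_dir (+) K v_dir, write x = alpha*u_dir + beta*v_dir.
    Solve the 2x2 system whose columns are u_dir and v_dir by Cramer's
    rule; the solution exists and is unique iff the determinant is nonzero,
    which is exactly the direct-sum condition of lemma A.4.5."""
    a, c = u_dir
    b, d = v_dir
    det = Fraction(a * d - b * c)
    assert det != 0, "directions must be linearly independent"
    alpha = Fraction(x[0] * d - b * x[1]) / det
    beta = Fraction(a * x[1] - c * x[0]) / det
    return [alpha * a, alpha * c], [beta * b, beta * d]
```

For instance, with $U = K(1,0)$ and $V = K(1,1)$, the vector $(3,2)$ splits uniquely as $(1,0) + (2,2)$.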

We now give the definition of a direct sum for any arbitrary nonempty index set $I$. First, let us recall the notion of the product of a family $(E_i)_{i \in I}$. Given a family of sets $(E_i)_{i \in I}$, its product $\prod_{i \in I} E_i$ is the set of all functions $f \colon I \to \bigcup_{i \in I} E_i$ such that $f(i) \in E_i$ for all $i \in I$. It is one of the many versions of the axiom of choice that, if $E_i \neq \emptyset$ for every $i \in I$, then $\prod_{i \in I} E_i \neq \emptyset$. A member $f \in \prod_{i \in I} E_i$ is often denoted as $(f_i)_{i \in I}$, where $f_i = f(i)$. For every $i \in I$, we have the projection $\pi_i \colon \prod_{i \in I} E_i \to E_i$, defined such that $\pi_i((f_i)_{i \in I}) = f_i$. We now define direct sums.

Definition A.4.7. Let $I$ be any nonempty set, and let $(E_i)_{i \in I}$ be a family of vector spaces. The (external) direct sum $\bigoplus_{i \in I} E_i$ of the family $(E_i)_{i \in I}$ is defined as follows: $\bigoplus_{i \in I} E_i$ consists of all $f \in \prod_{i \in I} E_i$ which have finite support, and addition and multiplication by a scalar are defined as follows:
$$(f_i)_{i \in I} + (g_i)_{i \in I} = (f_i + g_i)_{i \in I},$$
$$\lambda (f_i)_{i \in I} = (\lambda f_i)_{i \in I}.$$
We also have injection maps $in_i \colon E_i \to \bigoplus_{i \in I} E_i$, defined such that $in_i(x) = (f_i)_{i \in I}$, where $f_i = x$, and $f_j = 0$ for all $j \in (I - \{i\})$.

Remark: When $E_i = E$ for all $i \in I$, we denote $\bigoplus_{i \in I} E_i$ as $E^{(I)}$. In particular, when $E_i = K$ for all $i \in I$, we find the vector space $K^{(I)}$ of definition A.1.11.

Lemma A.4.8. Let $I$ be any nonempty set, let $(E_i)_{i \in I}$ be a family of vector spaces, and let $G$ be any vector space. The direct sum $\bigoplus_{i \in I} E_i$ is a vector space, and for every family $(h_i)_{i \in I}$ of linear maps $h_i \colon E_i \to G$, there is a unique linear map
$$\Big(\sum_{i \in I} h_i\Big) \colon \bigoplus_{i \in I} E_i \to G,$$
such that $(\sum_{i \in I} h_i) \circ in_i = h_i$ for all $i \in I$.

We also have the following basic lemma about injective or surjective linear maps.

Lemma A.4.9. Let $E$ and $F$ be vector spaces, and let $f \colon E \to F$ be a linear map. If $f \colon E \to F$ is injective, then there is a surjective linear map $r \colon F \to E$ called a retraction, such that $r \circ f = \mathrm{id}_E$. If $f \colon E \to F$ is surjective, then there is an injective linear map $s \colon F \to E$ called a section, such that $f \circ s = \mathrm{id}_F$.

Proof. Let $(\vec{u}_i)_{i \in I}$ be a basis of $E$. Since $f \colon E \to F$ is an injective linear map, by lemma A.2.3, $(f(\vec{u}_i))_{i \in I}$ is linearly independent in $F$. By theorem A.1.6, there is a basis $(\vec{v}_j)_{j \in J}$ of $F$, where $I \subseteq J$, and where $\vec{v}_i = f(\vec{u}_i)$ for all $i \in I$. By lemma A.2.3, a linear map $r \colon F \to E$ can be defined such that $r(\vec{v}_i) = \vec{u}_i$ for all $i \in I$, and $r(\vec{v}_j) = \vec{w}$ for all $j \in (J - I)$, where $\vec{w}$ is any given vector in $E$.

Since $r(f(\vec{u}_i)) = \vec{u}_i$ for all $i \in I$, by lemma A.2.3, we have $r \circ f = \mathrm{id}_E$. Now, assume that $f \colon E \to F$ is surjective. Let $(\vec{v}_j)_{j \in J}$ be a basis of $F$. Since $f \colon E \to F$ is surjective, for every $\vec{v}_j \in F$, there is some $\vec{u}_j \in E$ such that $f(\vec{u}_j) = \vec{v}_j$. By lemma A.2.3, there is a unique linear map $s \colon F \to E$ such that $s(\vec{v}_j) = \vec{u}_j$ for all $j \in J$. Also, since $f(s(\vec{v}_j)) = \vec{v}_j$, and since $(\vec{v}_j)_{j \in J}$ is a basis of $F$, by lemma A.2.3 (again), we must have $f \circ s = \mathrm{id}_F$.

The converse of lemma A.4.9 is obvious. We now have the following fundamental lemma.

Lemma A.4.10. Let $E$, $F$ and $G$ be three vector spaces, $f \colon E \to F$ an injective linear map, $g \colon F \to G$ a surjective linear map, and assume that $\mathrm{Im}\, f = \mathrm{Ker}\, g$. Then, the following properties hold.
(a) For any section $s \colon G \to F$ of $g$, we have $F = \mathrm{Ker}\, g \oplus \mathrm{Im}\, s$, and the linear map $f + s \colon E \oplus G \to F$ is an isomorphism.⁴
(b) For any retraction $r \colon F \to E$ of $f$, we have $F = \mathrm{Im}\, f \oplus \mathrm{Ker}\, r$.⁵

Proof. (a) Since $s \colon G \to F$ is a section of $g$, we have $g \circ s = \mathrm{id}_G$, and for every $\vec{u} \in F$,
$$g(\vec{u} - s(g(\vec{u}))) = g(\vec{u}) - g(s(g(\vec{u}))) = g(\vec{u}) - g(\vec{u}) = \vec{0}.$$
Thus, $\vec{u} - s(g(\vec{u})) \in \mathrm{Ker}\, g$, and we have $F = \mathrm{Ker}\, g + \mathrm{Im}\, s$. On the other hand, if $\vec{u} \in \mathrm{Ker}\, g \cap \mathrm{Im}\, s$, then $\vec{u} = s(\vec{v})$ for some $\vec{v} \in G$ because $\vec{u} \in \mathrm{Im}\, s$, and $g(\vec{u}) = \vec{0}$ because $\vec{u} \in \mathrm{Ker}\, g$, and so,
$$g(\vec{u}) = g(s(\vec{v})) = \vec{v} = \vec{0},$$
because $g \circ s = \mathrm{id}_G$, which shows that $\vec{u} = s(\vec{v}) = \vec{0}$. Thus, $F = \mathrm{Ker}\, g \oplus \mathrm{Im}\, s$, and since by assumption, $\mathrm{Im}\, f = \mathrm{Ker}\, g$, we have $F = \mathrm{Im}\, f \oplus \mathrm{Im}\, s$. But then, since $f$ and $s$ are injective, $f + s \colon E \oplus G \to F$ is an isomorphism. The proof of (b) is very similar.

⁴ The existence of a section $s \colon G \to F$ of $g$ follows from lemma A.4.9.
⁵ The existence of a retraction $r \colon F \to E$ of $f$ follows from lemma A.4.9.
Given a sequence of linear maps $E \xrightarrow{f} F \xrightarrow{g} G$, when $\mathrm{Im}\, f = \mathrm{Ker}\, g$, we say that the sequence $E \xrightarrow{f} F \xrightarrow{g} G$ is exact at $F$. If in addition to being exact at $F$, $f$ is injective and $g$ is surjective, we say that we have a short exact sequence, and this is denoted as
$$0 \longrightarrow E \xrightarrow{\;f\;} F \xrightarrow{\;g\;} G \longrightarrow 0.$$
The property of a short exact sequence given by lemma A.4.10 is often described by saying that $0 \longrightarrow E \xrightarrow{f} F \xrightarrow{g} G \longrightarrow 0$ is a (short) split exact sequence. As a corollary of lemma A.4.10, we have the following result.

Lemma A.4.11. Let $E$ and $F$ be vector spaces, and let $f \colon E \to F$ be a linear map. Then, $E$ is isomorphic to $\mathrm{Ker}\, f \oplus \mathrm{Im}\, f$, and thus, $\dim(E) = \dim(\mathrm{Ker}\, f) + \dim(\mathrm{Im}\, f)$.

Proof. Consider
$$\mathrm{Ker}\, f \xrightarrow{\;i\;} E \xrightarrow{\;f'\;} \mathrm{Im}\, f,$$
where $\mathrm{Ker}\, f \xrightarrow{i} E$ is the inclusion map, and $E \xrightarrow{f'} \mathrm{Im}\, f$ is the surjection associated with $E \xrightarrow{f} F$. Then, we apply lemma A.4.10 to any section $\mathrm{Im}\, f \xrightarrow{s} E$ of $f'$ to get an isomorphism between $E$ and $\mathrm{Ker}\, f \oplus \mathrm{Im}\, f$, and to get $\dim(E) = \dim(\mathrm{Ker}\, f) + \dim(\mathrm{Im}\, f)$.

The following lemma will also be useful.

Lemma A.4.12. Let $E$ be a vector space. If $E = U \oplus V$ and $E = U \oplus W$, then there is an isomorphism $f \colon V \to W$ between $V$ and $W$.

Proof. Let $R$ be the relation between $V$ and $W$, defined such that
$$\langle \vec{v}, \vec{w} \rangle \in R \quad \text{iff} \quad \vec{w} - \vec{v} \in U.$$
We claim that $R$ is a functional relation that defines a linear isomorphism $f \colon V \to W$ between $V$ and $W$, where $f(\vec{v}) = \vec{w}$ iff $\langle \vec{v}, \vec{w} \rangle \in R$ ($R$ is the graph of $f$). If $\vec{w} - \vec{v} \in U$ and $\vec{w}' - \vec{v} \in U$, then $\vec{w}' - \vec{w} \in U$, and since $U \oplus W$ is a direct sum, $U \cap W = 0$, and thus $\vec{w}' - \vec{w} = \vec{0}$, that is, $\vec{w}' = \vec{w}$. Thus, $R$ is functional. Similarly, if $\vec{w} - \vec{v} \in U$ and $\vec{w} - \vec{v}' \in U$, then $\vec{v}' - \vec{v} \in U$, and since $U \oplus V$ is a direct sum, $U \cap V = 0$, and $\vec{v}' = \vec{v}$. Thus, $f$ is injective. Since $E = U \oplus V$, for every $\vec{w} \in W$, there exists a unique pair $\langle \vec{u}, \vec{v} \rangle \in U \times V$, such that $\vec{w} = \vec{u} + \vec{v}$. Then, $\vec{w} - \vec{v} \in U$, and $f(\vec{v}) = \vec{w}$, and $f$ is surjective. We also need to verify that $f$ is linear. If
$$\vec{w} - \vec{v} = \vec{u} \quad \text{and} \quad \vec{w}' - \vec{v}' = \vec{u}',$$
where $\vec{u}, \vec{u}' \in U$, then we have
$$(\vec{w} + \vec{w}') - (\vec{v} + \vec{v}') = \vec{u} + \vec{u}',$$
where $\vec{u} + \vec{u}' \in U$. Similarly, if
$$\vec{w} - \vec{v} = \vec{u},$$
where $\vec{u} \in U$, then we have
$$\lambda \vec{w} - \lambda \vec{v} = \lambda \vec{u},$$
where $\lambda \vec{u} \in U$. Thus, $f$ is linear.
Given a vector space $E$ and any subspace $U$ of $E$, lemma A.4.12 shows that the dimension of any subspace $V$ such that $E = U \oplus V$ depends only on $U$. We call $\dim(V)$ the codimension of $U$, and we denote it as $\mathrm{codim}(U)$. A subspace $U$ of codimension 1 is called a hyperplane. If $U$ is a hyperplane, then $E = U \oplus V$ for some subspace $V$ of dimension 1. Clearly, a subspace $V$ of dimension 1 is generated by any nonzero vector $\vec{v} \in V$, and thus we denote $V$ as $K\vec{v}$, and we write $E = U \oplus K\vec{v}$. More is true: let $\vec{x} \in E$ be a vector such that $\vec{x} \notin U$ (and thus, $\vec{x} \neq \vec{0}$). We claim that $E = U \oplus K\vec{x}$. Indeed, since $U$ is a hyperplane, we have $E = U \oplus K\vec{v}$ for some $\vec{v} \notin U$ (with $\vec{v} \neq \vec{0}$). Then, $\vec{x} \in E$ can be written in a unique way as $\vec{x} = \vec{u} + \lambda \vec{v}$, where $\vec{u} \in U$, and since $\vec{x} \notin U$, we must have $\lambda \neq 0$, and thus,
$$\vec{v} = -\lambda^{-1} \vec{u} + \lambda^{-1} \vec{x}.$$
Since $E = U \oplus K\vec{v}$, this shows that $E = U + K\vec{x}$. Since $\vec{x} \notin U$, we have $U \cap K\vec{x} = 0$, and thus $E = U \oplus K\vec{x}$. This argument shows that a hyperplane is a maximal proper subspace $H$ of $E$.

The notion of rank of a linear map or of a matrix is often needed.

Definition A.4.13. Given two vector spaces $E$ and $F$ and a linear map $f \colon E \to F$, the rank $\mathrm{rk}(f)$ of $f$ is the dimension $\dim(\mathrm{Im}\, f)$ of the image subspace $\mathrm{Im}\, f$ of $F$.

We have the following simple lemma.

Lemma A.4.14. Given a linear map $f \colon E \to F$, the following properties hold:
(i) $\mathrm{rk}(f) = \mathrm{codim}(\mathrm{Ker}\, f)$;
(ii) $\mathrm{rk}(f) + \dim(\mathrm{Ker}\, f) = \dim(E)$;
(iii) $\mathrm{rk}(f) \leq \min(\dim(E), \dim(F))$.

Proof. Since by lemma A.4.11, $\dim(E) = \dim(\mathrm{Ker}\, f) + \dim(\mathrm{Im}\, f)$, and by definition, $\mathrm{rk}(f) = \dim(\mathrm{Im}\, f)$, we have $\mathrm{rk}(f) = \mathrm{codim}(\mathrm{Ker}\, f)$. Since $\mathrm{rk}(f) = \dim(\mathrm{Im}\, f)$, (ii) follows from $\dim(E) = \dim(\mathrm{Ker}\, f) + \dim(\mathrm{Im}\, f)$. As for (iii), since $\mathrm{Im}\, f$ is a subspace of $F$, we have $\mathrm{rk}(f) \leq \dim(F)$, and since $\mathrm{rk}(f) + \dim(\mathrm{Ker}\, f) = \dim(E)$, we have $\mathrm{rk}(f) \leq \dim(E)$.

The rank of a matrix is defined as follows.

Definition A.4.15. Given an $m \times n$-matrix $A = (a_{ij})$ over the field $K$, the rank $\mathrm{rk}(A)$ of the matrix $A$ is the maximum number of linearly independent columns of $A$ (viewed as vectors in $K^m$).

In view of lemma A.1.7, the rank of a matrix $A$ is the dimension of the subspace of $K^m$ generated by the columns of $A$. Let $E$ and $F$ be two vector spaces, let $(\vec{u}_1, \ldots, \vec{u}_n)$ be a basis of $E$, and $(\vec{v}_1, \ldots, \vec{v}_m)$ a basis of $F$. For any linear map $f \colon E \to F$, let $M(f)$ be its matrix w.r.t. the bases $(\vec{u}_1, \ldots, \vec{u}_n)$ and $(\vec{v}_1, \ldots, \vec{v}_m)$. Since the rank $\mathrm{rk}(f)$ of $f$ is the dimension of $\mathrm{Im}\, f$, which is generated by $(f(\vec{u}_1), \ldots, f(\vec{u}_n))$, the rank of $f$ is the maximum number of linearly independent vectors in $(f(\vec{u}_1), \ldots, f(\vec{u}_n))$, which is equal to the number of linearly independent columns of $M(f)$, since $F$ and $K^m$ are isomorphic. Thus, we have $\mathrm{rk}(f) = \mathrm{rk}(M(f))$, for every matrix representing $f$. It can be shown using duality that the rank of a matrix $A$ is also equal to the maximal number of linearly independent rows of $A$.

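The rank-nullity identity $\mathrm{rk}(f) + \dim(\mathrm{Ker}\, f) = \dim(E)$ and the equality $\mathrm{rk}(f) = \mathrm{rk}(M(f))$ are easy to check numerically. Here is a small sketch using NumPy; the particular matrix is an arbitrary illustration, not taken from the text:

```python
import numpy as np

# A represents a linear map f : R^4 -> R^3 w.r.t. the standard bases.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # third row = first row + second row

rank = np.linalg.matrix_rank(A)        # maximum number of independent columns
dim_E = A.shape[1]                     # dim(E) = 4
null_dim = dim_E - rank                # dim(Ker f), by lemma A.4.14 (ii)

print(rank, null_dim)                  # 2 2, so rank-nullity reads 2 + 2 = 4

# By duality, rk(A) also equals the number of independent rows.
assert np.linalg.matrix_rank(A.T) == rank
```

Note that `matrix_rank` computes the rank from the singular values of $A$, so it is robust to small numerical perturbations of the entries.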
In the next section, we shall see that hyperplanes are precisely the kernels of nonnull linear maps $f\colon E \to K$, called linear forms.

A.5 Hyperplanes and Linear Forms

Given a vector space $E$ over a field $K$, a linear map $f\colon E \to K$ is called a linear form. The set of all linear forms $f\colon E \to K$ is a vector space called the dual space of $E$, and denoted as $E^*$. We now prove that hyperplanes are precisely the kernels of nonnull linear forms.

Lemma A.5.1 Let $E$ be a vector space. The following properties hold:

(a) Given any nonnull linear form $f \in E^*$, its kernel $H = \mathrm{Ker}\, f$ is a hyperplane.

(b) For any hyperplane $H$ in $E$, there is a (nonnull) linear form $f \in E^*$ such that $H = \mathrm{Ker}\, f$.

(c) Given any hyperplane $H$ in $E$ and any (nonnull) linear form $f \in E^*$ such that $H = \mathrm{Ker}\, f$, for every linear form $g \in E^*$, $H = \mathrm{Ker}\, g$ iff $g = \lambda f$ for some $\lambda \neq 0$ in $K$.

Proof. (a) If $f \in E^*$ is nonnull, there is some vector $v_0 \in E$ such that $f(v_0) \neq 0$. Let $H = \mathrm{Ker}\, f$. For every $v \in E$, we have
$$f\left(v - \frac{f(v)}{f(v_0)}\, v_0\right) = f(v) - \frac{f(v)}{f(v_0)}\, f(v_0) = f(v) - f(v) = 0.$$
Thus, $v - \frac{f(v)}{f(v_0)}\, v_0 = h \in H$, and
$$v = h + \frac{f(v)}{f(v_0)}\, v_0,$$
that is, $E = H + K v_0$. Also, since $f(v_0) \neq 0$, we have $v_0 \notin H$, and thus $H \cap K v_0 = 0$. Thus, $E = H \oplus K v_0$, and $H$ is a hyperplane.

(b) If $H$ is a hyperplane, $E = H \oplus K v_0$ for some $v_0 \notin H$. Then, every $v \in E$ can be written in a unique way as $v = h + \lambda v_0$, where $h \in H$ and $\lambda \in K$. Thus, there is a well defined function $f\colon E \to K$, such that $f(v) = \lambda$, for every $v = h + \lambda v_0$.

We leave as a simple exercise the verification that $f$ is a linear form. Since $f(v_0) = 1$, the linear form $f$ is nonnull. Also, by definition, it is clear that $\lambda = 0$ iff $v \in H$, that is, $\mathrm{Ker}\, f = H$.

(c) Let $H$ be a hyperplane in $E$, and let $f \in E^*$ be any (nonnull) linear form such that $H = \mathrm{Ker}\, f$. Clearly, if $g = \lambda f$ for some $\lambda \neq 0$, then $H = \mathrm{Ker}\, g$. Conversely, assume that $H = \mathrm{Ker}\, g$ for some nonnull linear form $g$. From (a), we have $E = H \oplus K v_0$, for some $v_0$ such that $f(v_0) \neq 0$ and $g(v_0) \neq 0$. Then, observe that
$$g - \frac{g(v_0)}{f(v_0)}\, f$$
is a linear form which vanishes on $H$, since both $f$ and $g$ vanish on $H$, but also vanishes on $K v_0$. Thus, $g = \lambda f$, with
$$\lambda = \frac{g(v_0)}{f(v_0)}. \qquad\square$$

If $E$ is a vector space of finite dimension $n$ and $(u_1,\ldots,u_n)$ is a basis of $E$, then for any linear form $f \in E^*$ and every $x = x_1 u_1 + \cdots + x_n u_n \in E$, we have
$$f(x) = \lambda_1 x_1 + \cdots + \lambda_n x_n,$$
where $\lambda_i = f(u_i) \in K$, for every $i$, $1 \le i \le n$. Thus, with respect to the basis $(u_1,\ldots,u_n)$, $f(x)$ is a linear combination of the coordinates of $x$, as expected.

We leave as an exercise the fact that every subspace $V \neq E$ of a vector space $E$ is the intersection of all hyperplanes that contain $V$.

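Lemma A.5.1 can be checked concretely on $\mathbb{R}^3$: pick a nonnull linear form, and split an arbitrary vector across $E = H \oplus K v_0$ as in the proof of (a). A minimal sketch (the particular form $f$ and the vectors are arbitrary choices):

```python
import numpy as np

# A nonnull linear form f(x) = c . x on R^3 (coefficients chosen arbitrarily).
c = np.array([2.0, -1.0, 3.0])
f = lambda x: float(c @ x)

v0 = np.array([1.0, 0.0, 0.0])         # any vector with f(v0) != 0
assert f(v0) != 0.0

# Split an arbitrary v across E = H + K v0, with H = Ker f:
v = np.array([4.0, 5.0, -2.0])
lam = f(v) / f(v0)                     # the K v0 component
h = v - lam * v0                       # lies in the hyperplane H = Ker f
assert abs(f(h)) < 1e-12

# Part (c): a form with the same kernel is a scalar multiple of f.
g = lambda x: float((5.0 * c) @ x)     # g = 5 f, so Ker g = Ker f
assert abs(g(v) - 5.0 * f(v)) < 1e-12
print(lam, h)
```

The decomposition $v = h + \lambda v_0$ is exactly the one used in the proof that $H$ is a hyperplane.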
Appendix B

Complements of Affine Geometry

B.1 Affine and Multiaffine Maps

This section provides missing proofs of various results stated earlier. We begin with lemma 2.7.2.

Lemma 2.7.2 Given an affine map $f\colon E \to E'$, there is a unique linear map $\overrightarrow{f}\colon \overrightarrow{E} \to \overrightarrow{E'}$, such that
$$f(a + v) = f(a) + \overrightarrow{f}(v),$$
for every $a \in E$ and every $v \in \overrightarrow{E}$.

Proof. Let $a \in E$ be any point in $E$. We claim that the map defined such that
$$\overrightarrow{f}(v) = \overrightarrow{f(a)\,f(a+v)}$$
for every $v \in \overrightarrow{E}$ is a linear map $\overrightarrow{f}\colon \overrightarrow{E} \to \overrightarrow{E'}$. If we recall that $x = \sum_{i\in I}\lambda_i a_i$ is the barycenter of a family $((a_i,\lambda_i))_{i\in I}$ of weighted points (with $\sum_{i\in I}\lambda_i = 1$) iff
$$\overrightarrow{bx} = \sum_{i\in I}\lambda_i\, \overrightarrow{ba_i} \quad\text{for every } b \in E,$$
then, since $a + \lambda v = a + \lambda\,\overrightarrow{a(a+v)} + (1-\lambda)\,\overrightarrow{aa}$, we can write
$$a + \lambda v = \lambda(a+v) + (1-\lambda)a,$$
and since $f$ preserves barycenters, we get
$$f(a + \lambda v) = \lambda f(a+v) + (1-\lambda)f(a).$$

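Lemma 2.7.2 can be illustrated numerically: for an affine map of $\mathbb{R}^2$, the candidate linear part $\overrightarrow{f}(v) = f(a+v) - f(a)$ is independent of the base point $a$. A small sketch (the matrix $A$ and vector $b$ are arbitrary choices):

```python
import numpy as np

# An affine map f : R^2 -> R^2, f(x) = A x + b.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([3.0, -1.0])
f = lambda x: A @ x + b

# The candidate linear part of lemma 2.7.2: the vector from f(a) to f(a + v).
def fbar(a, v):
    return f(a + v) - f(a)

v = np.array([1.0, 1.0])
a1 = np.array([0.0, 0.0])
a2 = np.array([5.0, -7.0])

# The definition does not depend on the base point, and the linear part is A.
assert np.allclose(fbar(a1, v), fbar(a2, v))
assert np.allclose(fbar(a1, v), A @ v)
print(fbar(a1, v))
```

The two assertions correspond to the two halves of the proof: independence of the choice of $a$, and linearity of $\overrightarrow{f}$.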
Thus,
$$\overrightarrow{f(a)\,f(a+\lambda v)} = \lambda\,\overrightarrow{f(a)\,f(a+v)} + (1-\lambda)\,\overrightarrow{f(a)\,f(a)} = \lambda\,\overrightarrow{f(a)\,f(a+v)},$$
showing that $\overrightarrow{f}(\lambda v) = \lambda\overrightarrow{f}(v)$. Since
$$a + u + v = a + \overrightarrow{a(a+u)} + \overrightarrow{a(a+v)} - \overrightarrow{aa},$$
we have $a + u + v = (a+u) + (a+v) - a$, and since $f$ preserves barycenters, we get
$$f(a+u+v) = f(a+u) + f(a+v) - f(a),$$
from which we get
$$\overrightarrow{f(a)\,f(a+u+v)} = \overrightarrow{f(a)\,f(a+u)} + \overrightarrow{f(a)\,f(a+v)},$$
showing that $\overrightarrow{f}(u+v) = \overrightarrow{f}(u) + \overrightarrow{f}(v)$. Consequently, $\overrightarrow{f}$ is a linear map. For any other point $b \in E$, since
$$b + v = a + \overrightarrow{ab} + v = a + \overrightarrow{a(a+v)} - \overrightarrow{aa} + \overrightarrow{ab},$$
we have $b + v = (a+v) - a + b$, and since $f$ preserves barycenters, we get
$$f(b+v) = f(a+v) - f(a) + f(b),$$
which implies that
$$\overrightarrow{f(b)\,f(b+v)} = \overrightarrow{f(b)\,f(a+v)} - \overrightarrow{f(b)\,f(a)} + \overrightarrow{f(b)\,f(b)} = \overrightarrow{f(a)\,f(a+v)}.$$
Thus, $\overrightarrow{f(b)\,f(b+v)} = \overrightarrow{f(a)\,f(a+v)}$, which shows that the definition of $\overrightarrow{f}$ does not depend on the choice of $a \in E$. The fact that $\overrightarrow{f}$ is unique is obvious: we must have $\overrightarrow{f}(v) = \overrightarrow{f(a)\,f(a+v)}$. $\square$

Lemma 4.1.3 For every $m$-affine map $f\colon E^m \to F$, there are $2^m - 1$ unique multilinear maps $f_S\colon \overrightarrow{E}^{\,k} \to \overrightarrow{F}$, where $S \subseteq \{1,\ldots,m\}$, $S \neq \emptyset$, $k = |S|$, such that
$$f(a_1+v_1,\ldots,a_m+v_m) = f(a_1,\ldots,a_m) + \sum_{\substack{S\subseteq\{1,\ldots,m\},\ S=\{i_1,\ldots,i_k\}\\ k=|S|,\ k\ge1}} f_S(v_{i_1},\ldots,v_{i_k}),$$
for all $a_1,\ldots,a_m \in E$ and all $v_1,\ldots,v_m \in \overrightarrow{E}$.

Proof. First, we show that we can restrict our attention to multiaffine maps $f\colon E^m \to \overrightarrow{F}$, where $\overrightarrow{F}$ is a vector space. Pick any $b \in F$, and define $h\colon E^m \to \overrightarrow{F}$, where $h(a) = \overrightarrow{b\,f(a)}$ for every $a = (a_1,\ldots,a_m) \in E^m$, so that $f(a) = b + h(a)$. We claim that $h$ is multiaffine. For every $i$, $1 \le i \le m$, let $f_i\colon E \to F$ be the map $a_i \mapsto f(a_1,\ldots,a_{i-1},a_i,a_{i+1},\ldots,a_m)$,

and let $h_i\colon E \to \overrightarrow{F}$ be the map $a_i \mapsto h(a_1,\ldots,a_{i-1},a_i,a_{i+1},\ldots,a_m)$, where $a = (a_1,\ldots,a_m) \in E^m$. Since $f$ is multiaffine, each $f_i$ is an affine map, and since
$$h_i(a_i+u) = \overrightarrow{b\,f_i(a_i+u)} = \overrightarrow{b\,\big(f_i(a_i)+\overrightarrow{f_i}(u)\big)} = \overrightarrow{b\,f_i(a_i)} + \overrightarrow{f_i}(u) = h_i(a_i) + \overrightarrow{f_i}(u),$$
where $\overrightarrow{f_i}$ is the linear map associated with $f_i$, this shows that $h_i$ is an affine map with associated linear map $\overrightarrow{f_i}$. Thus, $h$ is multiaffine, and from now on, we assume that $f\colon E^m \to \overrightarrow{F}$, where $\overrightarrow{F}$ is a vector space.

Given an $m$-affine map $f\colon E^m \to \overrightarrow{F}$, for every $(v_1,\ldots,v_m) \in (\overrightarrow{E})^m$, we define $\Delta_{v_m}\Delta_{v_{m-1}}\cdots\Delta_{v_1}f$ inductively as follows: for every $a = (a_1,\ldots,a_m) \in E^m$,
$$\Delta_{v_1}f(a) = f(a_1+v_1,a_2,\ldots,a_m) - f(a_1,\ldots,a_m),$$
and, for $1 \le k \le m-1$,
$$\Delta_{v_{k+1}}\Delta_{v_k}\cdots\Delta_{v_1}f(a) = \Delta_{v_k}\cdots\Delta_{v_1}f(a_1,\ldots,a_k,a_{k+1}+v_{k+1},a_{k+2},\ldots,a_m) - \Delta_{v_k}\cdots\Delta_{v_1}f(a).$$
We claim that the following properties hold:

(1) Each $\Delta_{v_k}\cdots\Delta_{v_1}f(a)$ is $k$-linear in $v_1,\ldots,v_k$, and $(m-k)$-affine in $a_{k+1},\ldots,a_m$.

(2) We have
$$\Delta_{v_m}\cdots\Delta_{v_1}f(a) = \sum_{k=0}^{m}(-1)^{m-k}\sum_{1\le i_1<\cdots<i_k\le m} f(a_1,\ldots,a_{i_1}+v_{i_1},\ldots,a_{i_k}+v_{i_k},\ldots,a_m).$$

Properties (1) and (2) are proved by induction on $k$. We prove (1), leaving (2) as an easy exercise. Since $f$ is $m$-affine,
$$\Delta_{v_1}f(a) = f(a_1+v_1,a_2,\ldots,a_m) - f(a_1,\ldots,a_m)$$
is a linear map in $v_1$, and since it is the difference of two multiaffine maps in $a_2,\ldots,a_m$, it is $(m-1)$-affine in $a_2,\ldots,a_m$.

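The difference operators $\Delta_v$ just defined, and property (1), are easy to illustrate in the simplest case $E = \mathbb{R}$, $m = 2$. A minimal sketch (the particular biaffine map is an arbitrary choice):

```python
# A biaffine map f : R x R -> R (affine in each argument separately).
f = lambda a1, a2: a1 * a2 + a1 + 1.0

def delta1(f, v1):
    # First difference, translating the first argument by v1.
    return lambda a1, a2: f(a1 + v1, a2) - f(a1, a2)

def delta2(f, v2):
    # Second difference, translating the second argument by v2.
    return lambda a1, a2: f(a1, a2 + v2) - f(a1, a2)

v1, v2 = 3.0, 5.0
d = delta2(delta1(f, v1), v2)      # the iterated difference

# Property (1): the iterated difference is bilinear in (v1, v2) and no
# longer depends on the base point (a1, a2): both values equal v1 * v2.
print(d(0.0, 0.0), d(7.0, -2.0))
```

Here the bilinear part $f_{\{1,2\}}(v_1,v_2) = v_1 v_2$ of the expansion of lemma 4.1.3 is recovered by the iterated difference, independently of the base point.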
Assuming that $\Delta_{v_k}\cdots\Delta_{v_1}f(a)$ is $k$-linear in $v_1,\ldots,v_k$ and $(m-k)$-affine in $a_{k+1},\ldots,a_m$, consider
$$\Delta_{v_{k+1}}\Delta_{v_k}\cdots\Delta_{v_1}f(a) = \Delta_{v_k}\cdots\Delta_{v_1}f(a_1,\ldots,a_k,a_{k+1}+v_{k+1},\ldots,a_m) - \Delta_{v_k}\cdots\Delta_{v_1}f(a).$$
It is linear in $v_{k+1}$, since $\Delta_{v_k}\cdots\Delta_{v_1}f(a)$ is affine in $a_{k+1}$; since it is the difference of two $k$-linear maps in $v_1,\ldots,v_k$, it is $(k+1)$-linear in $v_1,\ldots,v_{k+1}$; and since it is the difference of two $(m-k-1)$-affine maps in $a_{k+2},\ldots,a_m$, it is $(m-k-1)$-affine in $a_{k+2},\ldots,a_m$. This concludes the induction.

As a consequence of (1), $\Delta_{v_m}\cdots\Delta_{v_1}f$ is an $m$-linear map. In view of (2), we can write
$$f(a_1+v_1,\ldots,a_m+v_m) = \Delta_{v_m}\cdots\Delta_{v_1}f(a) + \sum_{k=0}^{m-1}(-1)^{m-k-1}\sum_{1\le i_1<\cdots<i_k\le m} f(a_1,\ldots,a_{i_1}+v_{i_1},\ldots,a_{i_k}+v_{i_k},\ldots,a_m),$$
and since every term $f(a_1,\ldots,a_{i_1}+v_{i_1},\ldots,a_{i_k}+v_{i_k},\ldots,a_m)$ in the above sum contains at most $m-1$ of the $v_1,\ldots,v_m$, we can apply the induction hypothesis (on the number of translated arguments). This gives us sums of $k$-linear maps, and $2^m - 1$ terms of the form $(-1)^{m-k-1}f(a_1,\ldots,a_m)$, which all cancel out except for a single $f(a_1,\ldots,a_m)$. This proves the existence of multilinear maps $f_S$ such that
$$f(a_1+v_1,\ldots,a_m+v_m) = f(a_1,\ldots,a_m) + \sum_{\substack{S\subseteq\{1,\ldots,m\},\ S=\{i_1,\ldots,i_k\}\\ k=|S|,\ k\ge1}} f_S(v_{i_1},\ldots,v_{i_k}),$$
for all $a_1,\ldots,a_m \in E$ and all $v_1,\ldots,v_m \in \overrightarrow{E}$.
We still have to prove the uniqueness of the linear maps in the sum. This can be done using the maps $\Delta_{v_m}\cdots\Delta_{v_1}f$. We claim the following slightly stronger property, which can be shown by induction on $m$: if
$$f(a_1+v_1,\ldots,a_m+v_m) = f(a_1,\ldots,a_m) + \sum_{\substack{S\subseteq\{1,\ldots,m\},\ S=\{i_1,\ldots,i_k\}\\ k=|S|,\ k\ge1}} f_S(v_{i_1},\ldots,v_{i_k}),$$
then, for every subset $\{j_1,\ldots,j_n\} \subseteq \{1,\ldots,m\}$, where $j_1 < \cdots < j_n$, we have
$$\Delta_{v_{j_n}}\cdots\Delta_{v_{j_1}}f(a) = f_{\{j_1,\ldots,j_n\}}(v_{j_1},\ldots,v_{j_n}).$$
Indeed, in view of (2), applying the full iterated difference to the expansion of $f$, every term $f_S$ with $S \neq \{1,\ldots,m\}$ involves at most $m-1$ of the $v_i$'s and cancels out in the alternating sum, so that
$$\Delta_{v_m}\cdots\Delta_{v_1}f = f_{\{1,\ldots,m\}},$$
which shows that $f_{\{1,\ldots,m\}}$ is uniquely determined by $f$. Subtracting the term $f_{\{1,\ldots,m\}}(v_1,\ldots,v_m)$ from the expansion leaves a sum of the above form over subsets of size at most $n$, where $n = m-1$, so we can apply the induction hypothesis and recover every $f_{\{j_1,\ldots,j_n\}}$ as an iterated difference of $f$. Since the maps $\Delta_{v_{j_n}}\cdots\Delta_{v_{j_1}}f$ are determined by $f$ alone, we conclude the uniqueness of all the $f_S$. $\square$

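The next lemma gives an explicit inclusion–exclusion formula for the polar form of a polynomial function $h$ of degree $m$. That formula is easy to check numerically for a univariate real polynomial; a small sketch (the helper `polar_form` is our own illustration, not from the text):

```python
from itertools import combinations
from math import factorial

def polar_form(h, m):
    """Polar form of a degree-m polynomial function h : R -> R, via the
    inclusion-exclusion formula:
    f(a1..am) = 1/m! * sum over nonempty H of (-1)^(m-|H|) |H|^m h(mean of H).
    """
    def f(*a):
        assert len(a) == m
        total = 0.0
        for k in range(1, m + 1):
            for H in combinations(range(m), k):
                s = sum(a[i] for i in H)
                total += (-1) ** (m - k) * k ** m * h(s / k)
        return total / factorial(m)
    return f

h = lambda t: t ** 3            # degree-3 polynomial; its polar form is t1*t2*t3
f = polar_form(h, 3)
print(f(1.0, 2.0, 3.0))         # 6.0, i.e. 1*2*3
print(f(2.0, 2.0, 2.0), h(2.0)) # the diagonal recovers h: 8.0 8.0
```

The two printed checks illustrate the two defining properties of the polar form: it is the symmetric multiaffine map whose diagonal is $h$.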
Lemma 4.4.1 Given two affine spaces $E$ and $F$, for any polynomial function $h$ of degree $m$, the polar form $f\colon E^m \to F$ of $h$ is unique, and is given by the following expression:
$$f(a_1,\ldots,a_m) = \frac{1}{m!}\sum_{\substack{H\subseteq\{1,\ldots,m\}\\ k=|H|,\ k\ge1}} (-1)^{m-k}\, k^m\, h\!\left(\frac{\sum_{i\in H}a_i}{k}\right).$$

Proof. Let
$$C = \{\eta\colon\{1,\ldots,m\}\to\{0,1\} \mid \eta(i)\neq 0 \text{ for some } i,\ 1\le i\le m\}$$
be the set of characteristic functions of all nonempty subsets of $\{1,\ldots,m\}$. Then, the expression
$$E = \sum_{\substack{H\subseteq\{1,\ldots,m\}\\ k=|H|,\ k\ge1}} (-1)^{k}\, k^m\, h\!\left(\frac{\sum_{i\in H}a_i}{k}\right)$$
can be written as
$$E = \sum_{\eta\in C} (-1)^{\eta(1)+\cdots+\eta(m)}\,(\eta(1)+\cdots+\eta(m))^m\, h\!\left(\frac{\eta(1)a_1+\cdots+\eta(m)a_m}{\eta(1)+\cdots+\eta(m)}\right).$$
Since
$$\frac{\eta(1)}{\eta(1)+\cdots+\eta(m)}+\cdots+\frac{\eta(m)}{\eta(1)+\cdots+\eta(m)} = 1,$$
the argument of $h$ is a barycentric combination, and since $f$ is the polar form of $h$, we have
$$h\!\left(\frac{\eta(1)a_1+\cdots+\eta(m)a_m}{\eta(1)+\cdots+\eta(m)}\right) = f\!\left(\frac{\eta(1)a_1+\cdots+\eta(m)a_m}{\eta(1)+\cdots+\eta(m)},\ \ldots,\ \frac{\eta(1)a_1+\cdots+\eta(m)a_m}{\eta(1)+\cdots+\eta(m)}\right).$$
Then, since $f$ is multiaffine, we have
$$(\eta(1)+\cdots+\eta(m))^m\, h\!\left(\frac{\eta(1)a_1+\cdots+\eta(m)a_m}{\eta(1)+\cdots+\eta(m)}\right) = \sum_{(i_1,\ldots,i_m)\in\{1,\ldots,m\}^m} \eta(i_1)\cdots\eta(i_m)\, f(a_{i_1},\ldots,a_{i_m}).$$

Thus, we have
$$E = \sum_{(i_1,\ldots,i_m)\in\{1,\ldots,m\}^m}\left(\sum_{\eta\in C}(-1)^{\eta(1)+\cdots+\eta(m)}\,\eta(i_1)\cdots\eta(i_m)\right) f(a_{i_1},\ldots,a_{i_m}).$$
If $(i_1,\ldots,i_m)$ is a permutation of $(1,\ldots,m)$, then $\eta(i_1)\cdots\eta(i_m) \neq 0$ iff $\eta(i) = 1$ for every $i \in \{1,\ldots,m\}$, in which case
$$\sum_{\eta\in C}(-1)^{\eta(1)+\cdots+\eta(m)}\,\eta(i_1)\cdots\eta(i_m) = (-1)^m.$$
Since $f$ is symmetric, in this case $f(a_{i_1},\ldots,a_{i_m}) = f(a_1,\ldots,a_m)$. If $(i_1,\ldots,i_m)$ is not a permutation of $(1,\ldots,m)$, there is some $j \in \{1,\ldots,m\}$ such that $j \neq i_1,\ldots,i_m$. Let $J = \{\eta\in C \mid \eta(j)=0\}$, and for every $\eta\in J$, let $\eta^*$ be defined such that $\eta^*(i) = \eta(i)$ for every $i \neq j$, and $\eta^*(j) = 1$. Note that $\eta^*(1)+\cdots+\eta^*(m) = \eta(1)+\cdots+\eta(m)+1$, and since $j \neq i_1,\ldots,i_m$,
$$(-1)^{\eta^*(1)+\cdots+\eta^*(m)}\,\eta^*(i_1)\cdots\eta^*(i_m) = -(-1)^{\eta(1)+\cdots+\eta(m)}\,\eta(i_1)\cdots\eta(i_m).$$
Splitting the sum over $C$ into the sum over $J$ and the sum over the functions taking the value 1 at $j$ (the only function with $\eta(j)=1$ not of the form $\eta^*$ for some $\eta\in J$ is the characteristic function of $\{j\}$, whose term vanishes since $j \neq i_1,\ldots,i_m$), the terms cancel in pairs, and in this case
$$\sum_{\eta\in C}(-1)^{\eta(1)+\cdots+\eta(m)}\,\eta(i_1)\cdots\eta(i_m) = 0.$$

Thus, since there are $m!$ permutations of $(1,\ldots,m)$, we get
$$(-1)^m\, m!\, f(a_1,\ldots,a_m) = \sum_{\substack{H\subseteq\{1,\ldots,m\}\\ k=|H|,\ k\ge1}} (-1)^k\, k^m\, h\!\left(\frac{\sum_{i\in H}a_i}{k}\right).$$
Since $(-1)^m(-1)^m = 1$ and $(-1)^{m+k} = (-1)^{m-k}$, the above identity implies that
$$f(a_1,\ldots,a_m) = \frac{1}{m!}\sum_{\substack{H\subseteq\{1,\ldots,m\}\\ k=|H|,\ k\ge1}} (-1)^{m-k}\, k^m\, h\!\left(\frac{\sum_{i\in H}a_i}{k}\right),$$
which concludes the proof. $\square$

B.2 Homogenizing Multiaffine Maps

This section contains the proof of lemma 10.2.1.

Lemma 10.2.1 Given any affine space $E$ and any vector space $\overrightarrow{F}$, for any $m$-affine map $f\colon E^m \to \overrightarrow{F}$, there is a unique $m$-linear map $\widehat{f}\colon (\widehat{E})^m \to \overrightarrow{F}$ extending $f$, such that, if
$$f(a_1+v_1,\ldots,a_m+v_m) = f(a_1,\ldots,a_m) + \sum_{\substack{S\subseteq\{1,\ldots,m\},\ S=\{i_1,\ldots,i_k\}\\ k=|S|,\ k\ge1}} f_S(v_{i_1},\ldots,v_{i_k}),$$
for all $a_1,\ldots,a_m \in E$ and all $v_1,\ldots,v_m \in \overrightarrow{E}$, where the $f_S$ are uniquely determined multilinear maps (by lemma 4.1.3), then
$$\widehat{f}(v_1+\lambda_1 a_1,\ldots,v_m+\lambda_m a_m) = \lambda_1\cdots\lambda_m\, f\big(a_1+\lambda_1^{-1}v_1,\ldots,a_m+\lambda_m^{-1}v_m\big),$$
for all $a_1,\ldots,a_m \in E$, all $v_1,\ldots,v_m \in \overrightarrow{E}$, and all $\lambda_1,\ldots,\lambda_m \in \mathbb{R}$ with $\lambda_i \neq 0$, $1\le i\le m$. Furthermore, for arbitrary $\lambda_1,\ldots,\lambda_m \in \mathbb{R}$ (including $\lambda_i = 0$),
$$\widehat{f}(v_1+\lambda_1 a_1,\ldots,v_m+\lambda_m a_m) = \lambda_1\cdots\lambda_m\, f(a_1,\ldots,a_m) + \sum_{\substack{S\subseteq\{1,\ldots,m\},\ S=\{i_1,\ldots,i_k\}\\ k=|S|,\ k\ge1}} \left(\prod_{j\notin S}\lambda_j\right) f_S(v_{i_1},\ldots,v_{i_k}).$$

Proof. Let us assume that $\widehat{f}$ exists. We first prove by induction on $k$, $1\le k\le m$, that $\widehat{f}$, applied to arguments which are the points $a_l$ except at the $k$ positions of a subset $S = \{i_1,\ldots,i_k\}$, where it is applied to the vectors $v_{i_1},\ldots,v_{i_k}$, yields $f_S(v_{i_1},\ldots,v_{i_k})$.

For $k = 1$, for any $a_1 \in E$ and any $v_1 \in \overrightarrow{E}$, since $\widehat{f}$ is $m$-linear and extends $f$, we have
$$f(a_1+v_1,a_2,\ldots,a_m) = \widehat{f}(a_1+v_1,a_2,\ldots,a_m) = f(a_1,\ldots,a_m) + \widehat{f}(v_1,a_2,\ldots,a_m),$$
but since, by lemma 4.1.3,
$$f(a_1+v_1,a_2,\ldots,a_m) = f(a_1,\ldots,a_m) + f_{\{1\}}(v_1),$$
we get $\widehat{f}(v_1,a_2,\ldots,a_m) = f_{\{1\}}(v_1)$, and similarly for every other position. Assume that the induction hypothesis holds for all $l < k+1$, and let $S = \{i_1,\ldots,i_{k+1}\}$, with $k+1 = |S|$; for simplicity of notation, assume that $i_1 = 1$. On one hand, by lemma 4.1.3, expanding over the subsets $T \subseteq S$, we have
$$f(a_1+v_1,\ldots,a_{i_2}+v_{i_2},\ldots,a_{i_{k+1}}+v_{i_{k+1}},\ldots,a_m) = f(a_1,\ldots,a_m) + \sum_{\substack{T\subseteq S,\ T=\{j_1,\ldots,j_l\}\\ 1\le l\le k+1}} f_T(v_{j_1},\ldots,v_{j_l}).$$
On the other hand, since $\widehat{f}$ is $m$-linear and extends $f$, splitting the first argument $a_1+v_1$, we have
$$f(a_1+v_1,\ldots,a_{i_2}+v_{i_2},\ldots,a_m) = f(a_1,\ldots,a_{i_2}+v_{i_2},\ldots,a_m) + \widehat{f}(v_1,a_2,\ldots,a_{i_2}+v_{i_2},\ldots,a_{i_{k+1}}+v_{i_{k+1}},\ldots,a_m).$$
The first term expands, by lemma 4.1.3, over the subsets $T \subseteq S - \{1\}$. The second term expands, by multilinearity of $\widehat{f}$ in the arguments $a_{i_j}+v_{i_j}$, into a sum of values of $\widehat{f}$ on points and vectors; by the induction hypothesis, every such value with vectors in fewer than $k+1$ positions equals the corresponding $f_T$, for $T \subset S$ with $1 \in T$. Comparing the two expansions, all the terms $f_T$ with $T \subset S$ cancel out, and we are left with
$$\widehat{f}(v_1,a_2,\ldots,v_{i_2},\ldots,v_{i_{k+1}},\ldots,a_m) = f_S(v_1,v_{i_2},\ldots,v_{i_{k+1}}),$$
which establishes the induction step.

Thus, $\widehat{f}$ is uniquely defined on arguments which are points of $E$ or vectors of $\overrightarrow{E}$, and since every element of $\widehat{E}$ is of the form $v + \lambda a$, by $m$-linearity, $\widehat{f}$ is uniquely defined on $(\widehat{E})^m$. Conversely, the expression
$$\widehat{f}(v_1+\lambda_1a_1,\ldots,v_m+\lambda_ma_m) = \lambda_1\cdots\lambda_m\, f(a_1,\ldots,a_m) + \sum_{\substack{S\subseteq\{1,\ldots,m\},\ S=\{i_1,\ldots,i_k\}\\ k=|S|,\ k\ge1}} \left(\prod_{j\notin S}\lambda_j\right) f_S(v_{i_1},\ldots,v_{i_k})$$
clearly defines an $m$-linear map, and it extends $f$ (take $v_i = 0$ and $\lambda_i = 1$). Now, assume that $\lambda_i \neq 0$, $1\le i\le m$. Since $v_i + \lambda_i a_i = \lambda_i(a_i+\lambda_i^{-1}v_i)$, and since $\widehat{f}$ is $m$-linear and extends $f$, we get
$$\widehat{f}(v_1+\lambda_1a_1,\ldots,v_m+\lambda_ma_m) = \widehat{f}\big(\lambda_1(a_1+\lambda_1^{-1}v_1),\ldots,\lambda_m(a_m+\lambda_m^{-1}v_m)\big) = \lambda_1\cdots\lambda_m\, f(a_1+\lambda_1^{-1}v_1,\ldots,a_m+\lambda_m^{-1}v_m).$$
We can expand the right-hand side using the $f_S$:
$$\lambda_1\cdots\lambda_m\, f(a_1+\lambda_1^{-1}v_1,\ldots,a_m+\lambda_m^{-1}v_m) = \lambda_1\cdots\lambda_m\, f(a_1,\ldots,a_m) + \lambda_1\cdots\lambda_m\sum_{S} \lambda_{i_1}^{-1}\cdots\lambda_{i_k}^{-1}\, f_S(v_{i_1},\ldots,v_{i_k}),$$
and since $\lambda_1\cdots\lambda_m\,\lambda_{i_1}^{-1}\cdots\lambda_{i_k}^{-1} = \prod_{j\notin S}\lambda_j$, this agrees with the previous expression; moreover, the previous expression remains valid when $\lambda_i = 0$ for some of the $\lambda_i$. Thus, the lemma is proved. $\square$

B.3 Intersection and Direct Sums of Affine Spaces

In this section, we take a closer look at the intersection of affine subspaces, and at the notion of direct sum of affine spaces, which will be needed in the section on differentiation. First, we need a result of linear algebra. Given a vector space $E$ and any two subspaces $M$ and $N$, there are several interesting linear maps. We have the canonical injections $i\colon M \to M+N$ and $j\colon N \to M+N$, and the canonical injections $in_1\colon M \to M\oplus N$ and $in_2\colon N \to M\oplus N$. Then, we have the maps $f+g\colon M\cap N \to M\oplus N$, where $f$ is the composition of the inclusion map from $M\cap N$ to $M$ with $in_1$, and $g$ is the composition of the inclusion map from $M\cap N$ to $N$ with $in_2$, and $i-j\colon M\oplus N \to M+N$.

Lemma B.3.1 Given a vector space $E$ and any two subspaces $M$ and $N$, with the definitions above,
$$0 \longrightarrow M\cap N \xrightarrow{\ f+g\ } M\oplus N \xrightarrow{\ i-j\ } M+N \longrightarrow 0$$
is a short exact sequence. As a consequence, we have the Grassmann relation:
$$\dim(M) + \dim(N) = \dim(M+N) + \dim(M\cap N).$$

Proof. It is obvious that $i-j$ is surjective and that $f+g$ is injective. Assume that $(i-j)(u+v) = 0$, where $u \in M$ and $v \in N$. Then, $i(u) = j(v)$, and thus, by definition of $i$ and $j$, there is some $w \in M\cap N$ such that $i(u) = j(v) = w \in M\cap N$. By definition of $f$ and $g$, $u = f(w)$ and $v = g(w)$, and thus $\mathrm{Im}\,(f+g) = \mathrm{Ker}\,(i-j)$, as desired. The second part of the lemma follows by applying lemma A.4.12 to $i-j$:
$$\dim(M\oplus N) = \dim(\mathrm{Ker}\,(i-j)) + \dim(\mathrm{Im}\,(i-j)) = \dim(M\cap N) + \dim(M+N). \qquad\square$$

We now prove a simple lemma about the intersection of affine subspaces.

Lemma B.3.2 Given any affine space $E$, for any two nonempty affine subspaces $M$ and $N$, the following facts hold:

(1) $M\cap N \neq \emptyset$ iff $\overrightarrow{ab} \in \overrightarrow{M}+\overrightarrow{N}$ for some $a \in M$ and some $b \in N$.

(2) $M\cap N$ consists of a single point iff $\overrightarrow{ab} \in \overrightarrow{M}+\overrightarrow{N}$ for some $a \in M$ and some $b \in N$, and $\overrightarrow{M}\cap\overrightarrow{N} = \{0\}$.

(3) If $S$ is the least affine subspace containing $M$ and $N$, then $\overrightarrow{S} = \overrightarrow{M}+\overrightarrow{N}+\mathbb{R}\,\overrightarrow{ab}$.

Proof. (1) Pick any $a \in M$ and any $b \in N$, which is possible since $M$ and $N$ are nonempty. Since $\overrightarrow{M} = \{\overrightarrow{ax} \mid x\in M\}$ and $\overrightarrow{N} = \{\overrightarrow{by} \mid y\in N\}$, if $M\cap N \neq \emptyset$, then for any $c \in M\cap N$ we have $\overrightarrow{ab} = \overrightarrow{ac} - \overrightarrow{bc}$, with $\overrightarrow{ac} \in \overrightarrow{M}$ and $\overrightarrow{bc} \in \overrightarrow{N}$, and thus $\overrightarrow{ab} \in \overrightarrow{M}+\overrightarrow{N}$. Conversely, assume

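The Grassmann relation of lemma B.3.1 is easy to check numerically: compute the dimensions of two spanned subspaces and of their sum, and read off $\dim(M\cap N)$ from the relation. A minimal sketch (the subspaces of $\mathbb{R}^4$ are an arbitrary choice):

```python
import numpy as np

def dim_span(vectors):
    # Dimension of the subspace spanned by the given list of vectors.
    return int(np.linalg.matrix_rank(np.array(vectors, dtype=float)))

# Two subspaces of R^4, given by spanning sets.
M = [[1, 0, 0, 0], [0, 1, 0, 0]]
N = [[0, 1, 0, 0], [0, 0, 1, 0]]

dim_M, dim_N = dim_span(M), dim_span(N)
dim_sum = dim_span(M + N)          # M + N is spanned by the union of the sets

# Grassmann relation: dim(M) + dim(N) = dim(M + N) + dim(M ∩ N).
dim_inter = dim_M + dim_N - dim_sum
print(dim_M, dim_N, dim_sum, dim_inter)    # 2 2 3 1
```

Here the intersection is the line spanned by $e_2$, consistent with the computed $\dim(M\cap N) = 1$.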
that $\overrightarrow{ab} \in \overrightarrow{M}+\overrightarrow{N}$ for some $a\in M$ and some $b\in N$, that is, $\overrightarrow{ab} = \overrightarrow{ax}+\overrightarrow{by}$ for some $x\in M$ and some $y\in N$. But we also have
$$\overrightarrow{ab} = \overrightarrow{ax}+\overrightarrow{xy}+\overrightarrow{yb},$$
and thus we get $0 = \overrightarrow{xy}+\overrightarrow{yb}-\overrightarrow{by}$, that is, $\overrightarrow{xy} = 2\,\overrightarrow{by}$. Thus, $b$ is the middle of the segment $[x,y]$, and since $\overrightarrow{yx} = 2\,\overrightarrow{yb}$, $x = 2b-y$ is the barycenter of the weighted points $(b,2)$ and $(y,-1)$. Thus, $x$ also belongs to $N$, since $N$ being an affine subspace, it is closed under barycenters. Thus, $x \in M\cap N$, and $M\cap N \neq \emptyset$.

(2) Note that in general, if $M\cap N \neq \emptyset$, then
$$\overrightarrow{M\cap N} = \overrightarrow{M}\cap\overrightarrow{N},$$
because
$$\overrightarrow{M\cap N} = \{\overrightarrow{ab} \mid a,b\in M\cap N\} = \{\overrightarrow{ab} \mid a,b\in M\}\cap\{\overrightarrow{ab} \mid a,b\in N\} = \overrightarrow{M}\cap\overrightarrow{N}.$$
Since $M\cap N = c + \overrightarrow{M\cap N}$ for any $c\in M\cap N$, we have
$$M\cap N = c + \overrightarrow{M}\cap\overrightarrow{N} \quad\text{for any } c\in M\cap N.$$
From this, it follows that if $M\cap N \neq \emptyset$, then $M\cap N$ consists of a single point iff $\overrightarrow{M}\cap\overrightarrow{N} = \{0\}$. This fact together with what we proved in (1) proves (2).

(3) It is left as an easy exercise. $\square$

Remarks:

(1) The proof of lemma B.3.2 shows that if $M\cap N \neq \emptyset$, then $\overrightarrow{ab} \in \overrightarrow{M}+\overrightarrow{N}$ for all $a\in M$ and all $b\in N$.

(2) Lemma B.3.2 implies that for any two nonempty affine subspaces $M$ and $N$, if $\overrightarrow{E} = \overrightarrow{M}\oplus\overrightarrow{N}$, then $M\cap N$ consists of a single point. Indeed, if $\overrightarrow{E} = \overrightarrow{M}\oplus\overrightarrow{N}$, then $\overrightarrow{ab} \in \overrightarrow{E}$ for all $a\in M$ and all $b\in N$, and since $\overrightarrow{M}\cap\overrightarrow{N} = \{0\}$, the result follows from part (2) of the lemma.

We can now state the following lemma.

Lemma B.3.3 Given an affine space $E$ and any two nonempty affine subspaces $M$ and $N$, if $S$ is the least affine subspace containing $M$ and $N$, then the following properties hold:

(1) If $M\cap N \neq \emptyset$, then
$$\dim(S) = \dim(M)+\dim(N)-\dim(\overrightarrow{M}\cap\overrightarrow{N}).$$

(2) If $M\cap N = \emptyset$, then
$$\dim(M)+\dim(N) < \dim(E)+\dim(\overrightarrow{M}+\overrightarrow{N}),$$
and
$$\dim(S) = \dim(M)+\dim(N)+1-\dim(\overrightarrow{M}\cap\overrightarrow{N}).$$

Proof. It is not difficult, using lemma B.3.1 and lemma B.3.2, but we leave it as an exercise. $\square$

We now consider direct sums of affine spaces. Given an indexed family $(E_i)_{i\in I}$ of affine spaces (where $I$ is nonempty), where each $E_i$ is really an affine space $\langle E_i,\overrightarrow{E_i},+_i\rangle$, we would like to define an affine space $E$ whose associated vector space is the direct sum $\bigoplus_{i\in I}\overrightarrow{E_i}$ of the family $(\overrightarrow{E_i})_{i\in I}$. When $I$ is infinite, there is a difficulty if we take $E$ to be the direct product $\prod_{i\in I}E_i$, because axiom (AF3) may not hold. Thus, we define the direct sum of the family $(E_i)_{i\in I}$ of affine spaces relative to a fixed choice of points (one point in each $E_i$); for different choices of points, we get isomorphic affine spaces, and thus the same affine space is obtained no matter which points are chosen.

Definition B.3.4 Let $\langle E_i,\overrightarrow{E_i},+_i\rangle_{i\in I}$ be a family of affine spaces (where $I$ is nonempty). For each $i\in I$, let $a_i$ be some chosen point in $E_i$, and let $a = (a_i)_{i\in I}$. We define the direct sum $\bigoplus_{i\in I}(E_i,a_i)$ of the family $(E_i)_{i\in I}$ relative to $a$, as the affine space $\langle E_a,\overrightarrow{E},+_a\rangle$, where
$$E_a = \Big\{(x_i)_{i\in I} \in \prod_{i\in I}E_i \ \Big|\ x_i \neq a_i \text{ for finitely many } i\in I\Big\},$$
$$\overrightarrow{E} = \bigoplus_{i\in I}\overrightarrow{E_i},$$
and $+_a\colon E_a\times\overrightarrow{E} \to E_a$ is defined such that
$$(x_i)_{i\in I} + (u_i)_{i\in I} = (x_i+u_i)_{i\in I}.$$
We define the injections $in^a_i\colon E_i \to \bigoplus_{i\in I}(E_i,a_i)$ such that $in^a_i(y) = (x_i)_{i\in I}$, where $x_i = y$, and $x_j = a_j$ for all $j\in I-\{i\}$. We also have functions $p^a_i\colon \bigoplus_{i\in I}(E_i,a_i) \to E_i$, where $p^a_i$ is the restriction to $\bigoplus_{i\in I}(E_i,a_i)$ of the projection $\pi_i\colon \prod_{i\in I}E_i \to E_i$, so that $p^a_i((x_i)_{i\in I}) = x_i$, and we also call $p^a_i$ a projection.

It is easy to verify that $\bigoplus_{i\in I}(E_i,a_i)$ is an affine space. Let us check (AF3). Given any two points $x,y \in \bigoplus_{i\in I}(E_i,a_i)$, we have $x = (x_i)_{i\in I}$ and $y = (y_i)_{i\in I}$, where $x_i \neq a_i$ on a finite subset $I'$ of $I$, and $y_i \neq a_i$ on a finite subset $I''$ of $I$. Then, by definition of $\bigoplus_{i\in I}\overrightarrow{E_i}$, it is clear that the only vector $w \in \bigoplus_{i\in I}\overrightarrow{E_i}$ such that $y = x + w$ is $w = (w_i)_{i\in I}$, where $w_i = \overrightarrow{x_iy_i}$ for $i\in I'\cup I''$, and $w_i = 0$ for all $i\in I-(I'\cup I'')$. The verification of the other axioms is obvious.

Clearly, $E_a$ and $+_a$ do not depend on the choice of $a$, but the injections $in^a_i$ do. This is because for every $y\in E_i$, we have
$$in^a_i(y) = a + in_i(\overrightarrow{a_iy}),$$
where $in_i\colon \overrightarrow{E_i} \to \bigoplus_{i\in I}\overrightarrow{E_i}$ is the injection from the vector space $\overrightarrow{E_i}$ into $\bigoplus_{i\in I}\overrightarrow{E_i}$. In particular, the injections $in^a_i$ are affine maps.

Remark: Observe that the intersection of the affine subspaces $in^a_i(E_i)$ reduces to the single point $a$. This is consistent with remark (2) following lemma B.3.2. We leave as an exercise to prove, for a direct sum of affine spaces and affine maps, the analog of the corresponding lemma for direct sums of vector spaces and linear maps.

When $I$ is finite and $I = \{1,\ldots,m\}$, we write $\bigoplus_{i\in I}(E_i,a_i)$ as $(E_1,a_1)\oplus\cdots\oplus(E_m,a_m)$, and we denote each $(x_i)_{i\in I}$ as $(x_1,\ldots,x_m)$; the order of the factors $(E_1,a_1)\oplus\cdots\oplus(E_m,a_m)$ is irrelevant. Sometimes, we have use for the finite (ordered) product of affine spaces, instead of the unordered concept of the finite direct sum of affine spaces $(E_1,a_1)\oplus\cdots\oplus(E_m,a_m)$. Given any $m$ affine spaces $\langle E_i,\overrightarrow{E_i},+_i\rangle$, $m\ge1$, we define the affine space $\langle E,\overrightarrow{E},+\rangle$, called the (finite) product of the affine spaces $\langle E_i,\overrightarrow{E_i},+_i\rangle$, as follows: $E = E_1\times\cdots\times E_m$, $\overrightarrow{E} = \overrightarrow{E_1}\times\cdots\times\overrightarrow{E_m}$, and $+\colon E\times\overrightarrow{E}\to E$ is defined such that
$$(a_1,\ldots,a_m)+(u_1,\ldots,u_m) = (a_1+u_1,\ldots,a_m+u_m).$$
The affine space $\langle E,\overrightarrow{E},+\rangle$ is also denoted simply as $E_1\times\cdots\times E_m$. Clearly, $E_1\times\cdots\times E_m$ and $(E_1,a_1)\oplus\cdots\oplus(E_m,a_m)$ are isomorphic, except that the order of the factors in $(E_1,a_1)\oplus\cdots\oplus(E_m,a_m)$ is irrelevant, and that designated origins are picked in each $E_i$. It is also obvious that
$$in_1\circ\pi_1+\cdots+in_m\circ\pi_m = \mathrm{id}.$$
If $E$ is an affine space and $F_1,\ldots,F_m$ are $m$ affine spaces, for any function $f\colon E \to (F_1,b_1)\oplus\cdots\oplus(F_m,b_m)$, letting $f_i = p_i\circ f$, we see immediately that
$$f(a) = (f_1(a),\ldots,f_m(a)),$$
for every $a\in E$. If the $F_i$ are vector spaces, then for every $a\in E$ we have
$$f(a) = in_1(f_1(a))+\cdots+in_m(f_m(a)),$$
and in this case, we write $f = in_1\circ f_1+\cdots+in_m\circ f_m$.

B.4 Osculating Flats Revisited

This section contains various results on osculating flats, as well as a generalized version of the lemma giving conditions on polar forms for polynomial maps to agree to $k$th order. First, given an affine space $\mathcal{E}$ and a subset $S$ of $\mathcal{E}$, we denote as $\mathrm{Span}(S)$ the affine subspace (the flat) of $\mathcal{E}$ generated by $S$. Given a polynomial map $F\colon E\to\mathcal{E}$ of degree $m$ with polar form $f\colon E^m\to\mathcal{E}$, we relate $\mathrm{Span}(F(E))$ to the range of the affine blossom $f_\odot$ of $F$.

Lemma B.4.1 Given any two affine spaces $E$ and $\mathcal{E}$, for every polynomial map $F\colon E\to\mathcal{E}$ of degree $m$ with polar form $f\colon E^m\to\mathcal{E}$, if $f_\odot$ is the affine blossom of $F$, that is, the unique affine map defined on the affine symmetric tensor power $\bigodot^m E$ such that $f(a_1,\ldots,a_m) = f_\odot(a_1\cdots a_m)$, then
$$\mathrm{Span}(F(E)) = f_\odot\Big(\bigodot{}^{m}E\Big).$$

Proof. First, $F(a) = f(a,\ldots,a) = f_\odot(a^m)$ for every $a\in E$, and since the image of the flat $\bigodot^m E$ under the affine map $f_\odot$ is a flat containing $F(E)$, we have
$$\mathrm{Span}(F(E)) \subseteq f_\odot\Big(\bigodot{}^{m}E\Big).$$
Conversely, since the tensors of the form $\theta_1\cdots\theta_m$, where $\theta_1,\ldots,\theta_m\in E$, generate $\bigodot^m E$ (every element of $\bigodot^m E$ is an affine combination of such simple tensors), and since $f(\theta_1,\ldots,\theta_m) = f_\odot(\theta_1\cdots\theta_m)$, if we can show that $f(\theta_1,\ldots,\theta_m)\in\mathrm{Span}(F(E))$, the proof will be complete. But this is an immediate consequence of lemma 4.4.1, which expresses the polar form value $f(\theta_1,\ldots,\theta_m)$ as an affine combination of values of $F$. $\square$

Remark: We can easily prove a version of lemma B.4.1 for the symmetric tensor power $\bigodot^m\widehat{E}$ of $\widehat{E}$, in terms of the homogenized blossom.

Lemma B.4.2 Given an affine polynomial function $F\colon E\to\mathcal{E}$ of polar degree $m$, where $E$ and $\mathcal{E}$ are normed affine spaces, for any $k$ nonzero vectors $u_1,\ldots,u_k\in\overrightarrow{E}$, where $1\le k\le m$, the $k$-th directional derivative $D_{u_1}\ldots D_{u_k}F(a)$ can be computed from the homogenized polar form $\widehat{f}$ of $F$ as follows: for any $a\in E$,
$$D_{u_1}\ldots D_{u_k}F(a) = m_k\, f_\odot\big(a^{m-k}\,u_1\cdots u_k\big),$$
where $m_k = m(m-1)\cdots(m-k+1)$.

We can now fix the problem encountered in relating the polar form $g^k_a$ of the polynomial map $G^k_a$ osculating $F$ to $k$th order at $a$, with the polar form $f$ of $F$.

Lemma B.4.3 Given an affine polynomial function $F\colon E\to\mathcal{E}$ of polar degree $m$, where $E$ and $\mathcal{E}$ are normed affine spaces, and $E$ is of finite dimension $n$, for any $a\in E$, and any $k$ with $0\le k\le m$, the polar form $g\colon E^k\to\mathcal{E}$ of the polynomial map $G^k_a\colon E\to\mathcal{E}$ that osculates $F$ to $k$th order at $a$ is determined such that
$$g_\odot(\theta) = f_\odot\big(\kappa(\theta)\,a^{m-k}\big),$$
for all $\theta\in\bigodot^k\widehat{E}$, where $\kappa\colon\bigodot^k\widehat{E}\to\bigodot^k\widehat{E}$ is a bijective weight-preserving linear map, defined as follows: since the tensors
$$\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{k-i},\quad i = i_1+\cdots+i_n,\ 0\le i\le k,$$
form a basis of $\bigodot^k\widehat{E}$, where $(\delta_1,\ldots,\delta_n)$ is a basis of $\overrightarrow{E}$, $\kappa$ is the unique linear map such that
$$\kappa\big(\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{k-i}\big) = \frac{m_i}{k_i}\,\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{k-i},$$
where $m_i = m(m-1)\cdots(m-i+1)$ and $k_i = k(k-1)\cdots(k-i+1)$.

Proof. Letting $b = a+h_1\delta_1+\cdots+h_n\delta_n$, we have
$$G^k_a(b) = F(a) + \sum_{1\le i_1+\cdots+i_n\le k} \frac{h_1^{i_1}\cdots h_n^{i_n}}{i_1!\cdots i_n!}\left(\frac{\partial}{\partial x_1}\right)^{i_1}\!\!\cdots\left(\frac{\partial}{\partial x_n}\right)^{i_n}F(a).$$
By lemma B.4.2, on one hand, we have
$$\left(\frac{\partial}{\partial x_1}\right)^{i_1}\!\!\cdots\left(\frac{\partial}{\partial x_n}\right)^{i_n}F(a) = m_i\, f_\odot\big(\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{m-i}\big),$$
and on the other hand,
$$\left(\frac{\partial}{\partial x_1}\right)^{i_1}\!\!\cdots\left(\frac{\partial}{\partial x_n}\right)^{i_n}G^k_a(a) = k_i\, g_\odot\big(\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{k-i}\big),$$
where $i = i_1+\cdots+i_n$.

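The derivative formula of lemma B.4.2 can be checked in the simplest case, $F(t) = t^3$ on the affine line. Here the polar form $f(t_1,t_2,t_3) = t_1 t_2 t_3$ is already multilinear, so on the homogenized line (points $(t,1)$, vectors $(v,0)$) the blossom is simply the product of the first coordinates. A small sketch:

```python
# F(t) = t^3 on the affine line E = R has polar degree m = 3 and polar form
# f(t1,t2,t3) = t1*t2*t3; on the homogenized space, points are (t, 1) and
# vectors are (v, 0), and the multilinear extension is x1*x2*x3.
m = 3
f_hat = lambda p1, p2, p3: p1[0] * p2[0] * p3[0]

a = 2.0
point = (a, 1.0)
u = (1.0, 0.0)                               # a direction vector

# Lemma B.4.2 with m_k = m(m-1)...(m-k+1):
first = m * f_hat(point, point, u)           # k = 1: m_1 = 3, gives 3*a^2
second = m * (m - 1) * f_hat(point, u, u)    # k = 2: m_2 = 6, gives 6*a

# Exact derivatives of F(t) = t^3 at a = 2: F'(2) = 12, F''(2) = 12.
print(first, second)
```

Both values match the classical derivatives, illustrating how repeated arguments $a$ are traded for direction vectors in the blossom.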
Since $G^k_a$ osculates $F$ to $k$th order at $a$, these partial derivatives agree for $0\le i\le k$, and we conclude that
$$k_i\, g_\odot\big(\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{k-i}\big) = m_i\, f_\odot\big(\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{m-i}\big),$$
that is,
$$g_\odot\big(\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{k-i}\big) = f_\odot\Big(\frac{m_i}{k_i}\,\delta_1^{i_1}\cdots\delta_n^{i_n}\,a^{k-i}\,a^{m-k}\Big).$$
Thus, defining the bijective linear map $\kappa\colon\bigodot^k\widehat{E}\to\bigodot^k\widehat{E}$ as in the statement of the lemma, we have
$$g_\odot(\theta) = f_\odot\big(\kappa(\theta)\,a^{m-k}\big),$$
for all $\theta\in\bigodot^k\widehat{E}$. Clearly, $\kappa$ is weight-preserving. $\square$

At last, we obtain the relationship between osculating flats and polar forms stated without proof in section 5.4.

Lemma B.4.4 Given an affine polynomial function $F\colon E\to\mathcal{E}$ of polar degree $m$, where $E$ and $\mathcal{E}$ are normed affine spaces, and $E$ is of finite dimension $n$, for any $a\in E$, and any $k$ with $0\le k\le m$, the affine subspace spanned by the range of the multiaffine map
$$(b_1,\ldots,b_k)\ \mapsto\ f(\underbrace{a,\ldots,a}_{m-k},b_1,\ldots,b_k)$$
is the osculating flat $\mathrm{Osc}^k F(a)$.

Proof. Let $G = G^k_a$ be the polynomial map osculating $F$ to $k$th order at $a$, with polar form $g$. From lemma B.4.1, we know that
$$\mathrm{Osc}^k F(a) = \mathrm{Span}(G(E)) = \{g_\odot(\theta) \mid \theta \text{ is an affine combination of simple $k$-tensors of points in } E\}.$$
From lemma B.4.3, we have $g_\odot(\theta) = f_\odot(\kappa(\theta)\,a^{m-k})$, and since $\kappa$ is bijective and weight-preserving, we conclude that
$$\mathrm{Osc}^k F(a) = \{f_\odot(\theta\,a^{m-k}) \mid \theta \text{ is an affine combination of simple $k$-tensors of points in } E\}.$$
Clearly, since $f_\odot(b_1\cdots b_k\,a^{m-k}) = f(a,\ldots,a,b_1,\ldots,b_k)$, this is precisely the affine subspace spanned by the range of the multiaffine map $(b_1,\ldots,b_k)\mapsto f(a,\ldots,a,b_1,\ldots,b_k)$. $\square$

We can also give a short proof of a generalized version of the lemma on agreement of polynomial maps to $k$th order.

Lemma B.4.5 Given any two affine spaces $E$ and $\mathcal{E}$, where $E$ is of finite dimension, two polynomial maps $F\colon E\to\mathcal{E}$ and $G\colon E\to\mathcal{E}$ of polar degree $m$ agree to $k$th order at $a$, i.e.
$$D_{u_1}\ldots D_{u_i}F(a) = D_{u_1}\ldots D_{u_i}G(a)$$
for all $u_1,\ldots,u_i\in\overrightarrow{E}$, where $0\le i\le k$, iff their polar forms $f\colon E^m\to\mathcal{E}$ and $g\colon E^m\to\mathcal{E}$ agree on all multisets of points that contain at least $m-k$ copies of $a$, that is, iff
$$f(u_1,\ldots,u_k,\underbrace{a,\ldots,a}_{m-k}) = g(u_1,\ldots,u_k,\underbrace{a,\ldots,a}_{m-k}),$$
for all $u_1,\ldots,u_k\in E$.

Proof. Assume that the polar forms agree as stated, that is,
$$f_\odot(u_1\cdots u_k\,a^{m-k}) = g_\odot(u_1\cdots u_k\,a^{m-k}),$$
for all $u_1,\ldots,u_k\in E$. This can be restated as
$$f_\odot(\theta\,a^{m-k}) = g_\odot(\theta\,a^{m-k}),$$
for all simple $k$-tensors $\theta\in\bigodot^k\widehat{E}$ of the form $u_1\cdots u_k$, where $u_1,\ldots,u_k\in E$. We have shown that these tensors generate $\bigodot^k\widehat{E}$, and thus, by linearity,
$$f_\odot(\theta\,a^{m-k}) = g_\odot(\theta\,a^{m-k}),$$
for all $k$-tensors $\theta\in\bigodot^k\widehat{E}$. In particular, for every $i$ with $0\le i\le k$, letting $\theta = a^{k-i}\,u_1\cdots u_i$, where $u_1,\ldots,u_i\in\overrightarrow{E}$, we get
$$f_\odot(a^{m-i}\,u_1\cdots u_i) = g_\odot(a^{m-i}\,u_1\cdots u_i),$$
and since
$$D_{u_1}\ldots D_{u_i}F(a) = m_i\,f_\odot(a^{m-i}\,u_1\cdots u_i)\quad\text{and}\quad D_{u_1}\ldots D_{u_i}G(a) = m_i\,g_\odot(a^{m-i}\,u_1\cdots u_i),$$
where $m_i = m(m-1)\cdots(m-i+1)$, we conclude that
$$D_{u_1}\ldots D_{u_i}F(a) = D_{u_1}\ldots D_{u_i}G(a),$$
for all $u_1,\ldots,u_i\in\overrightarrow{E}$, $0\le i\le k$.

464 APPENDIX B. COMPLEMENTS OF AFFINE GEOMETRY

Conversely, assume that F and G agree to kth order at a, that is,

Du1 . . . Dui F(a) = Du1 . . . Dui G(a),

for all vectors u1, . . . , ui, where 0 ≤ i ≤ k. By lemma B.4.2, we have

Du1 . . . Dui F(a) = m_i f⊙(a^(m−i) ⊙ u1 ⊙ · · · ⊙ ui)

and

Du1 . . . Dui G(a) = m_i g⊙(a^(m−i) ⊙ u1 ⊙ · · · ⊙ ui),

and since by assumption these derivatives agree, we have

f⊙(a^(m−i) ⊙ u1 ⊙ · · · ⊙ ui) = g⊙(a^(m−i) ⊙ u1 ⊙ · · · ⊙ ui).

Letting u1 ⊙ · · · ⊙ ui = δ1^i1 · · · δn^in, where i = i1 + · · · + in, and letting θ = δ1^i1 · · · δn^in ⊙ a^(k−i), since

a^(m−k) ⊙ θ = δ1^i1 · · · δn^in ⊙ a^(m−i) = u1 ⊙ · · · ⊙ ui ⊙ a^(m−i),

we have shown that

f⊙(a^(m−k) ⊙ θ) = g⊙(a^(m−k) ⊙ θ),

for all basis tensors θ of ⊙^k Ê. We have shown that the family of tensors δ1^i1 · · · δn^in ⊙ a^(k−i), where i = i1 + · · · + in, forms a basis of ⊙^k Ê, and thus

f⊙(a^(m−k) ⊙ θ) = g⊙(a^(m−k) ⊙ θ),

for all θ ∈ ⊙^k Ê. In particular, this is true for k-tensors of the form θ = u1 ⊙ · · · ⊙ uk, where u1, . . . , uk ∈ E, and thus

f⊙(u1 ⊙ · · · ⊙ uk ⊙ a^(m−k)) = g⊙(u1 ⊙ · · · ⊙ uk ⊙ a^(m−k)),

for all u1, . . . , uk ∈ E.

For those readers who got hooked on tensors, we mention that Ramshaw's paper [65] contains a fascinating study of the algebra and geometry of the tensor powers ⊙^m A and ⊙^m Â for m = 2, 3, and much more. The Möbius strip even manages to show its tail!

To close this chapter, we mention that there is a third version of the tensor product, the exterior (or alternating) tensor product (or Grassmann algebra). Roughly speaking, this tensor product has the property that

u1 ∧ · · · ∧ ui ∧ · · · ∧ uj ∧ · · · ∧ un = 0

whenever ui = uj, for i ≠ j. It follows that the exterior product changes sign when two factors are permuted. The corresponding algebra ∧(E) plays an important role in differential

B.4. OSCULATING FLATS REVISITED 465

geometry. The exterior algebra ∧(E) can also be obtained as the quotient of T(E) by the subspace of T(E) generated by all vectors in T(E) of the form u ⊗ u (this requires defining a multiplication operation (⊗) on T(E)). Determinants can also be defined in an intrinsic manner. For an extensive treatment (in fact, very extensive!) of tensors, tensor algebras, etc., the reader is urged to consult Bourbaki ([14] Chapter III, Chapter IV), and [15].
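The sign change of the exterior product under transposition of two factors, mentioned above, follows in one line from the vanishing property. This is a standard argument, spelled out here for convenience:

```latex
0 = (u + v) \wedge (u + v)
  = \underbrace{u \wedge u}_{0} \;+\; u \wedge v \;+\; v \wedge u \;+\; \underbrace{v \wedge v}_{0}
  = u \wedge v + v \wedge u,
\qquad \text{hence} \qquad v \wedge u = -\,u \wedge v .
```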


Appendix C

Topology

C.1 Metric Spaces and Normed Vector Spaces

This appendix contains a review of basic topological concepts. First, metric spaces are defined. Next, normed vector spaces are defined. Closed and open sets are defined, and their basic properties are stated. The chapter ends with the definition of a normed affine space. We recommend the following texts for a thorough treatment of topology and analysis: Topology, a First Course, by Munkres [54], Undergraduate Analysis, by Lang [48], and the Analysis Courses Analyse I-IV, by Schwartz [70, 71, 72, 73].

Most spaces considered in this book have a topological structure given by a metric or a norm, and we first review these notions. We begin with metric spaces. Recall that R+ = {x ∈ R | x ≥ 0}. Also recall that the absolute value |x| of a real number x ∈ R is defined such that |x| = x if x ≥ 0, |x| = −x if x < 0, and for a complex number x = a + ib, as |x| = √(a² + b²).

Definition C.1.1. A metric space is a set E together with a function d : E × E → R+, called a metric, or distance, assigning a nonnegative real number d(x, y) to any two points x, y ∈ E, and satisfying the following conditions for all x, y, z ∈ E:

(D1) d(x, y) = d(y, x). (symmetry)

(D2) d(x, y) ≥ 0, and d(x, y) = 0 iff x = y. (positivity)

(D3) d(x, z) ≤ d(x, y) + d(y, z). (triangular inequality)

Geometrically, condition (D3) expresses the fact that in a triangle with vertices x, y, z, the length of any side is bounded by the sum of the lengths of the other two sides. From (D3), we immediately get

|d(x, y) − d(y, z)| ≤ d(x, z).

Let us give some examples of metric spaces.

467

468 APPENDIX C. TOPOLOGY

Example 1: Let E = R, and d(x, y) = |x − y|, the absolute value of x − y. This is the so-called natural metric on R.

Example 2: Let E = Rn (or E = Cn). We have the Euclidean metric

d2(x, y) = (|x1 − y1|² + · · · + |xn − yn|²)^(1/2),

the distance between the points (x1, . . . , xn) and (y1, . . . , yn).

Example 3: For every set E, we can define the discrete metric, defined such that d(x, y) = 1 iff x ≠ y, and d(x, x) = 0.

Example 4: For any a, b ∈ R such that a < b, we define the following sets:

[a, b] = {x ∈ R | a ≤ x ≤ b}, (closed interval)
]a, b[ = {x ∈ R | a < x < b}, (open interval)
[a, b[ = {x ∈ R | a ≤ x < b}, (interval closed on the left, open on the right)
]a, b] = {x ∈ R | a < x ≤ b}, (interval open on the left, closed on the right)

Let E = [a, b], and d(x, y) = |x − y|. Then, ([a, b], d) is a metric space.

We will need to define the notion of proximity in order to define convergence of limits and continuity of functions. For this, we introduce some standard "small neighborhoods".

Definition C.1.2. Given a metric space E with metric d, for every a ∈ E, for every ρ ∈ R, with ρ > 0, the set

B(a, ρ) = {x ∈ E | d(a, x) ≤ ρ}

is called the closed ball of center a and radius ρ, the set

B0(a, ρ) = {x ∈ E | d(a, x) < ρ}

is called the open ball of center a and radius ρ, and the set

S(a, ρ) = {x ∈ E | d(a, x) = ρ}

is called the sphere of center a and radius ρ. It should be noted that ρ is finite (i.e. not +∞). Clearly, B(a, ρ) = B0(a, ρ) ∪ S(a, ρ). A subset X of a metric space E is bounded if there is a closed ball B(a, ρ) such that X ⊆ B(a, ρ).

In E = R with the distance |x − y|, an open ball of center a and radius ρ is the open interval ]a − ρ, a + ρ[. In E = R2 with the Euclidean metric, an open ball of center a and radius ρ is the set of points inside the disk of center a and radius ρ, excluding the boundary points on the circle. In E = R3 with the Euclidean metric, an open ball of center a and radius ρ is the set of points inside the sphere of center a and radius ρ, excluding the boundary points on the sphere.

One should be aware that intuition can be misleading in forming a geometric image of a closed (or open) ball. For example, if d is the discrete metric, a closed ball of center a and radius ρ < 1 consists only of its center a, and a closed ball of center a and radius ρ ≥ 1 consists of the entire space!
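These definitions are easy to experiment with on small finite examples. The sketch below (the function names are ours, not the book's) implements the discrete and Euclidean metrics and confirms the remark above about closed balls in the discrete metric:

```python
import math

def discrete_metric(x, y):
    # d(x, y) = 1 iff x != y, and d(x, x) = 0
    return 0 if x == y else 1

def euclidean_metric(x, y):
    # d2(x, y) = (|x1 - y1|^2 + ... + |xn - yn|^2)^(1/2)
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def closed_ball(points, d, a, rho):
    # B(a, rho) = {x in E | d(a, x) <= rho}
    return {x for x in points if d(a, x) <= rho}

E = {"a", "b", "c", "d"}
# with the discrete metric, a closed ball of radius rho < 1 is just {a} ...
print(closed_ball(E, discrete_metric, "a", 0.5))   # {'a'}
# ... while a closed ball of radius rho >= 1 is the entire space
print(closed_ball(E, discrete_metric, "a", 1) == E)   # True
```

The Euclidean metric behaves as geometric intuition suggests; only exotic metrics such as the discrete one defeat the mental picture of a "round" ball.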

C.1. METRIC SPACES AND NORMED VECTOR SPACES 469

If E = [a, b], and d(x, y) = |x − y|, as in example 4, an open ball B0(a, ρ), with ρ < b − a, is in fact the interval [a, a + ρ[, which is closed on the left.

We now consider a very important special case of metric spaces, normed vector spaces.

Definition C.1.3. Let E be a vector space over a field K, where K is either the field R of reals, or the field C of complex numbers. A norm on E is a function ‖·‖ : E → R+, assigning a nonnegative real number ‖u‖ to any vector u ∈ E, and satisfying the following conditions for all x, y, z ∈ E:

(N1) ‖x‖ ≥ 0, and ‖x‖ = 0 iff x = 0. (positivity)

(N2) ‖λx‖ = |λ| ‖x‖. (scaling)

(N3) ‖x + y‖ ≤ ‖x‖ + ‖y‖. (convexity inequality)

A vector space E together with a norm ‖·‖ is called a normed vector space.

From (N3), we easily get

| ‖x‖ − ‖y‖ | ≤ ‖x − y‖.

Given a normed vector space E, if we define d such that

d(x, y) = ‖x − y‖,

it is easily seen that d is a metric. Thus, every normed vector space is immediately a metric space. Note that the metric associated with a norm is invariant under translation, that is,

d(x + u, y + u) = d(x, y).

For this reason, we can restrict ourselves to open or closed balls of center 0.

Let us give some examples of normed vector spaces.

Example 5: Let E = R, and ‖x‖ = |x|, the absolute value of x. The associated metric is |x − y|, as in example 1.

Example 6: Let E = Rn (or E = Cn). There are three standard norms. For every (x1, . . . , xn) ∈ E, we have the norm ‖x‖1, defined such that

‖x‖1 = |x1| + · · · + |xn|,

470 APPENDIX C. TOPOLOGY

we have the Euclidean norm ‖x‖2, defined such that

‖x‖2 = (|x1|² + · · · + |xn|²)^(1/2),

and the sup-norm ‖x‖∞, defined such that

‖x‖∞ = max{|xi| | 1 ≤ i ≤ n}.

Some work is required to show the convexity inequality for the Euclidean norm, but this can be found in any standard text. Note that the Euclidean distance is the distance associated with the Euclidean norm. The following lemma is easy to show.

Lemma C.1.4. The following inequalities hold for all x ∈ Rn (or x ∈ Cn):

‖x‖∞ ≤ ‖x‖1 ≤ n ‖x‖∞,
‖x‖∞ ≤ ‖x‖2 ≤ √n ‖x‖∞,
‖x‖2 ≤ ‖x‖1 ≤ √n ‖x‖2.

In a normed vector space, we define a closed ball or an open ball of radius ρ as a closed ball or an open ball of center 0. We may use the notation B(ρ) and B0(ρ).

We will now define the crucial notions of open sets and closed sets, and of a topological space.

Definition C.1.5. Let E be a metric space with metric d. A subset U ⊆ E is an open set in E iff either U = ∅, or for every a ∈ U, there is some open ball B0(a, ρ) such that, B0(a, ρ) ⊆ U.¹ A subset F ⊆ E is a closed set in E iff its complement E − F is open in E.

The set E itself is open, since for every a ∈ E, every open ball of center a is contained in E. In E = Rn, given n intervals [ai, bi], with ai < bi, we can define the closed n-cube

{(x1, . . . , xn) ∈ E | ai ≤ xi ≤ bi, 1 ≤ i ≤ n},

which is a closed set. Similarly, it is easy to show that the open n-cube

{(x1, . . . , xn) ∈ E | ai < xi < bi, 1 ≤ i ≤ n}

is an open set. In fact, it is possible to find a metric for which such open n-cubes are open balls! The open sets satisfy some important properties that lead to the definition of a topological space.

¹ Recall that ρ > 0.
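The inequalities of lemma C.1.4 relating the three standard norms are easy to confirm numerically. The following sketch (our own illustration, with our own function names) computes the three norms on a sample vector and checks each bound:

```python
import math
import random

def norm1(x):
    # || x ||_1 = |x1| + ... + |xn|
    return sum(abs(xi) for xi in x)

def norm2(x):
    # || x ||_2 = (|x1|^2 + ... + |xn|^2)^(1/2)
    return math.sqrt(sum(xi * xi for xi in x))

def norminf(x):
    # || x ||_inf = max |xi|
    return max(abs(xi) for xi in x)

random.seed(0)
n = 5
x = [random.uniform(-10.0, 10.0) for _ in range(n)]
eps = 1e-12  # guard against floating-point rounding

assert norminf(x) <= norm1(x) <= n * norminf(x) + eps
assert norminf(x) <= norm2(x) <= math.sqrt(n) * norminf(x) + eps
assert norm2(x) <= norm1(x) + eps <= math.sqrt(n) * norm2(x) + eps
print("all inequalities of lemma C.1.4 hold on the sample vector")
```

Since the three norms bound one another within constant factors, they induce the same open sets, which is why the choice among them is immaterial in the topological statements that follow.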

C.2. CONTINUOUS FUNCTIONS, LIMITS 471

Lemma C.1.6. Given a metric space E with metric d, the family O of open sets defined in definition C.1.5 satisfies the following properties:

(O1) For every finite family (Ui)1≤i≤n of sets Ui ∈ O, we have U1 ∩ · · · ∩ Un ∈ O, i.e. O is closed under finite intersections.

(O2) For every arbitrary family (Ui)i∈I of sets Ui ∈ O, we have ∪i∈I Ui ∈ O, i.e. O is closed under arbitrary unions.

(O3) ∅ ∈ O, and E ∈ O, i.e. ∅ and E belong to O.

Furthermore, for any two distinct points a ≠ b in E, there exist two open sets Ua and Ub such that, a ∈ Ua, b ∈ Ub, and Ua ∩ Ub = ∅.

Proof. It is straightforward. For the last point, letting ρ = d(a, b)/3 (in fact ρ = d(a, b)/2 works too), we can pick Ua = B0(a, ρ) and Ub = B0(b, ρ). By the triangle inequality, we must have Ua ∩ Ub = ∅.

One should be careful that in general, the family of open sets is not closed under infinite intersections. For example, in R under the metric |x − y|, letting Un = ]−1/n, +1/n[, each Un is open, but ∩n Un = {0}, which is not open.

The above lemma leads to the very general concept of a topological space. The reader is referred to standard texts on topology, such as Munkres [54].

If each (Ei, ‖·‖i) is a normed vector space, there are three natural norms that can be defined on E1 × · · · × En:

‖(x1, . . . , xn)‖1 = ‖x1‖1 + · · · + ‖xn‖n,
‖(x1, . . . , xn)‖2 = (‖x1‖1² + · · · + ‖xn‖n²)^(1/2),
‖(x1, . . . , xn)‖∞ = max{‖x1‖1, . . . , ‖xn‖n}.

It is easy to show that they all define the same topology, i.e. the same set of open sets, which is the product topology. One can also verify that when Ei = R, with the standard topology induced by |x − y|, the product topology on Rn is the standard topology induced by the Euclidean norm.

C.2 Continuous Functions, Limits

If E and F are metric spaces defined by metrics d1 and d2, f is continuous at a iff

for every ε > 0, there is some η > 0, such that, for every x ∈ E, if d1(a, x) ≤ η, then d2(f(a), f(x)) ≤ ε.

472 APPENDIX C. TOPOLOGY

Similarly, if E and F are normed vector spaces defined by norms ‖·‖1 and ‖·‖2, f is continuous at a iff

for every ε > 0, there is some η > 0, such that, for every x ∈ E, if ‖x − a‖1 ≤ η, then ‖f(x) − f(a)‖2 ≤ ε.

When E is a metric space with metric d, a sequence (xn)n∈N converges to some a ∈ E iff

for every ε > 0, there is some n0 ≥ 0, such that, for all n ≥ n0, d(xn, a) ≤ ε.

When E is a normed vector space with norm ‖·‖, a sequence (xn)n∈N converges to some a ∈ E iff

for every ε > 0, there is some n0 ≥ 0, such that, for all n ≥ n0, ‖xn − a‖ ≤ ε.

The following lemma is useful for showing that real-valued functions are continuous.

Lemma C.2.2. If E is a topological space, and (R, |x − y|) the reals under the standard topology, for any two functions f : E → R and g : E → R, for any a ∈ E, for any λ ∈ R, if f and g are continuous at a, then f + g, λf, f · g, are continuous at a, and f /g is continuous at a if g(a) ≠ 0.

Proof. Left as an exercise.

Using lemma C.2.2, we can show easily that every real polynomial function is continuous.

C.3 Normed Affine Spaces

For geometric applications, we will need to consider affine spaces (E, E⃗) where the associated space of translations E⃗ is a vector space equipped with a norm.

Definition C.3.1. Given an affine space (E, E⃗), where the space of translations E⃗ is a vector space over R or C, we say that (E, E⃗) is a normed affine space iff E⃗ is a normed vector space with norm ‖·‖.

Given a normed affine space, there is a natural metric on E itself, defined such that

d(a, b) = ‖ab‖,

where ab denotes the vector from a to b. Observe that this metric is invariant under translation, that is,

d(a + u, b + u) = d(a, b).

C.3. NORMED AFFINE SPACES 473

Also, for every fixed a ∈ E and λ > 0, if we consider the map h : E → E, defined such that,

h(x) = a + λ ax,

then d(h(x), h(y)) = λ d(x, y).

Note that the map (a, b) → ab from E × E to E⃗ is continuous, and similarly for the map a → a + u from E × E⃗ to E. In fact, for every fixed a ∈ E, the map u → a + u is a homeomorphism from E⃗ to Ea.

Of course, Rn is a normed affine space under the Euclidean metric, and it is also complete.

If an affine space E is a finite direct sum (E1, a1) ⊕ · · · ⊕ (Em, am), and each Ei is also a normed affine space with norm ‖·‖i, we make (E1, a1) ⊕ · · · ⊕ (Em, am) into a normed affine space, by giving it the norm

‖(x1, . . . , xm)‖ = max(‖x1‖1, . . . , ‖xm‖m).

Similarly, the finite product E1 × · · · × Em is made into a normed affine space, under the same norm.

We are now ready to define the derivative (or differential) of a map between two normed affine spaces. This will lead to tangent spaces to curves and surfaces (in normed affine spaces).


Appendix D

Differential Calculus

D.1 Directional Derivatives, Total Derivatives

This appendix contains a review of basic notions of differential calculus. First, we review the definition of the derivative of a function f : R → R. Next, we define directional derivatives and the total derivative of a function f : E → F between normed affine spaces. Basic properties of derivatives are shown, including the chain rule. We show how derivatives are represented by Jacobian matrices. A thorough treatment of differential calculus can be found in Munkres [55], Lang [48], Schwartz [71], Cartan [16], and Avez [2].

We first review the notion of derivative of a real-valued function whose domain is an open subset of R. Let f : A → R, where A is a nonempty open subset of R, and consider any a ∈ A. The main idea behind the concept of the derivative of f at a, denoted as f′(a), is that locally around a (that is, in some small open set U ⊆ A containing a), the function f is approximated linearly by the map

x → f(a) + f′(a)(x − a).

Part of the difficulty in extending this idea to more complex spaces, is to give an adequate notion of linear approximation. Of course, we will use linear maps! Let us now review the formal definition of the derivative of a real-valued function.

Definition D.1.1. Let A be any nonempty open subset of R, and let a ∈ A. For any function f : A → R, the derivative of f at a ∈ A is the limit (if it exists)

lim (h→0, h∈U) (f(a + h) − f(a)) / h,

where U = {h ∈ R | a + h ∈ A, h ≠ 0}. This limit is denoted as f′(a), or Df(a), or (df/dx)(a). If f′(a) exists for every a ∈ A, we say that f is differentiable on A. In this case, the map a → f′(a) is denoted as f′, or Df, or df/dx.

475

476 APPENDIX D. DIFFERENTIAL CALCULUS

Note that since A is assumed to be open, A − {a} is also open, and since the function h → a + h is continuous and U is the inverse image of A − {a} under this function, U is indeed open and the definition makes sense.

We can also define f′(a) as follows: there is some function ε(h), such that,

f(a + h) = f(a) + f′(a) · h + ε(h)h,

whenever a + h ∈ A, where ε(h) is defined for all h such that a + h ∈ A, and

lim (h→0, h∈U) ε(h) = 0.

Remark: We can also define the notion of derivative of f at a on the left, and derivative of f at a on the right. For example, we say that the derivative of f at a on the left, denoted as f′(a−), is the limit (if it exists)

lim (h→0, h∈U) (f(a + h) − f(a)) / h,

where U = {h ∈ R | a + h ∈ A, h < 0}.

Remark: A function f has a derivative f′(a) at a iff the derivative of f on the left at a and the derivative of f on the right at a exist, and if they are equal. Also, if the derivative of f on the left at a exists, then f is continuous on the left at a (and similarly on the right).

If a function f as in definition D.1.1 has a derivative f′(a) at a, then it is continuous at a. If f is differentiable on A, then f is continuous on A. The composition of differentiable functions is differentiable.

We would like to extend the notion of derivative to functions f : A → F, where E and F are normed affine spaces, and A is some nonempty open subset of E. The first difficulty is to make sense of the quotient

(f(a + h) − f(a)) / h.

If E and F are normed affine spaces, it will be notationally convenient to assume that the vector space associated with E is denoted as E⃗, and that the vector space associated with F is denoted as F⃗.

Since F is a normed affine space, making sense of f(a + h) − f(a) is easy: we can define this as the vector f(a)f(a + h), the unique vector translating f(a) to f(a + h). We should note however, that this quantity is a vector and not a point. Nevertheless, in the rest of this chapter, it is notationally more pleasant to denote the vector f(a)f(a + h) as f(a + h) − f(a). Thus, in the rest of this chapter, the vector ab will be denoted as b − a. But now, how do we define the quotient by a vector? Well, we don't!
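The one-sided derivatives in the remark above can be estimated with one-sided difference quotients. The short sketch below (our own illustration) shows that f(x) = |x| has left derivative −1 and right derivative +1 at 0, so the two one-sided limits differ and f′(0) does not exist:

```python
def left_derivative(f, a, h=1e-7):
    # difference quotient taken with a negative increment (h < 0 side)
    return (f(a - h) - f(a)) / (-h)

def right_derivative(f, a, h=1e-7):
    # difference quotient taken with a positive increment (h > 0 side)
    return (f(a + h) - f(a)) / h

f = abs
print(left_derivative(f, 0.0))   # ~ -1.0
print(right_derivative(f, 0.0))  # ~ +1.0
# the one-sided limits disagree, so f'(0) does not exist
```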

D.1. DIRECTIONAL DERIVATIVES, TOTAL DERIVATIVES 477

A first possibility is to consider the directional derivative with respect to a vector u ≠ 0 in E⃗. The idea is that in E, the points of the form a + tu, where t ∈ R (or t ∈ C), form a line, and that the image of this line defines a curve in F, a curve which lies on the image f(E) of E under the map f : E → F (more exactly, for t in some small closed interval [r, s] in A containing a, the points of the form a + tu form a line segment, and the image of this line segment defines a small curve segment on f(A)). This curve (segment) is defined by the map t → f(a + tu), from R to F (more exactly from [r, s] to F). We can consider the vector f(a + tu) − f(a), and the quotient

(f(a + tu) − f(a)) / t

makes sense. Since the map t → a + tu is continuous, and since A − {a} is open, the inverse image U of A − {a} under the above map is open, and the definition of the limit in definition D.1.2 makes sense.

Definition D.1.2. Let E and F be two normed affine spaces, let A be a nonempty open subset of E, and let f : A → F be any function. For any a ∈ A, for any u ≠ 0 in E⃗, the directional derivative of f at a w.r.t. the vector u, denoted as Du f(a), is the limit (if it exists)

lim (t→0, t∈U) (f(a + tu) − f(a)) / t,

where U = {t ∈ R | a + tu ∈ A, t ≠ 0} (or U = {t ∈ C | a + tu ∈ A, t ≠ 0}).

Since the notion of limit is purely topological, the existence and value of a directional derivative is independent of the choice of norms in E and F, as long as they are equivalent norms. The directional derivative is sometimes called the Gâteaux derivative.

In the special case where E = R and F = R, and we let u = 1 (i.e. the real number 1, viewed as a vector), it is immediately verified that D1 f(a) = f′(a), in the sense of definition D.1.1. When E = R (or E = C) and F is any normed vector space, the derivative D1 f(a), also denoted as f′(a), provides a suitable generalization of the notion of derivative.

However, when E has dimension ≥ 2, directional derivatives present a serious problem, which is that their definition is not sufficiently uniform. Indeed, there is no reason to believe that the directional derivatives w.r.t. all nonnull vectors u share something in common. As a consequence, a function can have all directional derivatives at a, and yet not be continuous at a. Two functions may have all directional derivatives in some open sets, and yet their composition may not. Thus, we introduce a more uniform notion.

Definition D.1.3. Let E and F be two normed affine spaces, let A be a nonempty open subset of E, and let f : A → F be any function. For any a ∈ A, we say that f is differentiable

478 APPENDIX D. DIFFERENTIAL CALCULUS

at a ∈ A iff there is a linear continuous map L : E⃗ → F⃗ and a function ε(h), such that

f(a + h) = f(a) + L(h) + ε(h) ‖h‖

for every a + h ∈ A, where ε(h) is defined for every h such that a + h ∈ A, and

lim (h→0, h∈U) ε(h) = 0,

where U = {h ∈ E⃗ | a + h ∈ A, h ≠ 0}. The linear map L is denoted as Df(a), or Dfa, or df(a), or dfa, or f′(a), and it is called the Fréchet derivative, or derivative, or total derivative, or total differential, or differential, of f at a.

Since the map h → a + h from E⃗ to E is continuous, and since A is open in E, the inverse image U of A − {a} under the above map is open in E⃗, and it makes sense to say that

lim (h→0, h∈U) ε(h) = 0.

Note that for every h ∈ U, since h ≠ 0, ε(h) is uniquely determined since

ε(h) = (f(a + h) − f(a) − L(h)) / ‖h‖,

and that the value ε(0) plays absolutely no role in this definition. The condition for f to be differentiable at a amounts to the fact that the right-hand side of the above expression approaches 0 as h ≠ 0 approaches 0, when a + h ∈ A. However, it does no harm to assume that ε(0) = 0, and we will assume this from now on.

Note that the continuous linear map L is unique, if it exists. In fact, the next lemma implies this as a corollary. Again, since the notion of limit is purely topological, the existence and value of a derivative is independent of the choice of norms in E and F, as long as they are equivalent norms.

The following lemma shows that our new definition is consistent with the definition of the directional derivative, and that the derivative Df(a) of f at a provides an affine approximation of f, locally around a.

Lemma D.1.4. Let E and F be two normed affine spaces, let A be a nonempty open subset of E, and let f : A → F be any function. For any a ∈ A, if Df(a) is defined, then f is continuous at a, and f has a directional derivative Du f(a) for every u ≠ 0 in E⃗, and furthermore,

Du f(a) = Df(a)(u).
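The defining condition of the total derivative can be probed numerically: with L the analytic derivative, the ratio ‖f(a + h) − f(a) − L(h)‖ / ‖h‖ should shrink as ‖h‖ does. The sketch below (an illustration of ours, for the specific map f(x, y) = (x²y, x + y)) displays this ratio for decreasing step sizes:

```python
import math

def f(p):
    x, y = p
    return (x * x * y, x + y)

def L(a, h):
    # analytic derivative of f at a = (x, y): the matrix with rows (2xy, x^2) and (1, 1)
    x, y = a
    return (2 * x * y * h[0] + x * x * h[1], h[0] + h[1])

a = (1.0, 2.0)
for s in (1e-1, 1e-2, 1e-3):
    h = (s, -s)
    fa, fah, Lh = f(a), f((a[0] + h[0], a[1] + h[1])), L(a, h)
    err = math.hypot(fah[0] - fa[0] - Lh[0], fah[1] - fa[1] - Lh[1])
    norm_h = math.hypot(*h)
    print(s, err / norm_h)   # the ratio tends to 0 with s
```

For this particular f the remainder is exactly −s³ in the first component, so the printed ratio decreases quadratically in s.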

D.1. DIRECTIONAL DERIVATIVES, TOTAL DERIVATIVES 479

Proof. For any u ≠ 0 in E⃗, since A is open and a ∈ A, we have a + tu ∈ A for |t| ∈ R small enough (where t ∈ R or t ∈ C), and letting h = tu, we have

f(a + tu) = f(a) + tL(u) + ε(tu) |t| ‖u‖,

and for t ≠ 0, we obtain

(f(a + tu) − f(a)) / t = L(u) + ε(tu) (|t| / t) ‖u‖,

and the limit when t ≠ 0 approaches 0 is indeed Du f(a). If h ≠ 0 approaches 0, ε(h) ‖h‖ approaches 0, and since L is continuous, f is continuous at a. The uniqueness of L follows from lemma D.1.4.

It is important to note that the derivative Df(a) of f at a is a continuous linear map from the vector space E⃗ to the vector space F⃗, and not a function from the affine space E to the affine space F.

If Df(a) exists for every a ∈ A, we get a map Df : A → L(E⃗, F⃗), called the derivative of f on A, and also denoted as df.

Lemma D.1.5. Given two normed affine spaces E and F, for every function f : E → F, if f is a constant function, then Df(a) = 0, for every a ∈ E. If f : E → F is a continuous affine map, then Df(a) = f⃗, the linear map associated with f, for every a ∈ E.

Proof. Straightforward.

Note that if E is of finite dimension, it is easily shown that every linear map is continuous, and this assumption is then redundant.

When E is of finite dimension n, for any frame (a0, (u1, . . . , un)) of E, where (u1, . . . , un) is a basis of E⃗, we can define the directional derivatives with respect to the vectors in the basis (u1, . . . , un) (actually, we can also do it for an infinite frame). This way, we obtain the definition of partial derivatives, as follows.

Definition D.1.6. For any two normed affine spaces E and F, if E is of finite dimension n, for every frame (a0, (u1, . . . , un)) for E, for every a ∈ E, for every function f : E → F, the directional derivatives Duj f(a), if they exist, are called the partial derivatives of f with respect to the frame (a0, (u1, . . . , un)). The partial derivative Duj f(a) is also denoted as ∂j f(a), or (∂f/∂xj)(a).

The notation (∂f/∂xj)(a) for a partial derivative, although customary and going back to Leibnitz, is a "logical obscenity", since the variable xj really has nothing to do with the formal definition. This is just another of these situations where tradition is just too hard to overthrow!

We now consider a number of standard results about derivatives.
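Directional and partial derivatives are easy to estimate with difference quotients. The sketch below (our own example, not from the text) approximates Du f(a) for f(x, y) = x² + 3xy, and checks it against the value ∂1 f(a) u1 + ∂2 f(a) u2 that lemma D.1.4 predicts when the total derivative exists:

```python
def f(p):
    x, y = p
    return x * x + 3 * x * y

def directional_derivative(f, a, u, t=1e-6):
    # (f(a + t u) - f(a)) / t for small t
    shifted = tuple(ai + t * ui for ai, ui in zip(a, u))
    return (f(shifted) - f(a)) / t

a = (1.0, 2.0)
u = (3.0, -1.0)
# partial derivatives of f at (x, y) are (2x + 3y, 3x); at a = (1, 2) they are (8, 3)
expected = 8.0 * u[0] + 3.0 * u[1]   # = 21
approx = directional_derivative(f, a, u)
print(approx, expected)   # the two values agree up to rounding
```

Along this particular direction f happens to be affine in t, so even the crude one-sided quotient is accurate to floating-point precision.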

480 APPENDIX D. DIFFERENTIAL CALCULUS

Lemma D.1.7. Given two normed affine spaces E and F, for any two functions f, g : E → F, for every a ∈ E, if Df(a) and Dg(a) exist, then D(f + g)(a) and D(λf)(a) exist, and

D(f + g)(a) = Df(a) + Dg(a),
D(λf)(a) = λ Df(a).

Proof. Straightforward.

Lemma D.1.8. Given three normed vector spaces E1, E2, and F, for any continuous bilinear map f : E1 × E2 → F, for every (a, b) ∈ E1 × E2, Df(a, b) exists, and for every u ∈ E1 and v ∈ E2, we have

Df(a, b)(u, v) = f(u, b) + f(a, v).

Proof. Straightforward.

We now state the very useful chain rule.

Lemma D.1.9. Given three normed affine spaces E, F, and G, let A be an open set in E, and let B an open set in F. For any functions f : A → F and g : B → G, such that f(A) ⊆ B, for any a ∈ A, if Df(a) exists and Dg(f(a)) exists, then D(g ◦ f)(a) exists, and

D(g ◦ f)(a) = Dg(f(a)) ◦ Df(a).

Proof. It is not difficult, but more involved than the previous two.

Lemma D.1.9 has many interesting consequences. We mention two corollaries.

Lemma D.1.10. Given three normed affine spaces E, F, and G, for any open subset A in E, for any a ∈ A, let f : A → F such that Df(a) exists, and let g : F → G be a continuous affine map. Then, D(g ◦ f)(a) exists, and

D(g ◦ f)(a) = g⃗ ◦ Df(a),

where g⃗ is the linear map associated with the affine map g.

Lemma D.1.11. Given two normed affine spaces E and F, let A be some open subset in E, let B some open subset in F, let f : A → B be a bijection from A to B, and assume that Df exists on A and that Df⁻¹ exists on B. Then, for every a ∈ A,

Df⁻¹(f(a)) = (Df(a))⁻¹.
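The chain rule of lemma D.1.9 can be checked numerically with finite-difference Jacobians. The sketch below (our own illustration, with our own helper names) verifies J(g ∘ f)(a) ≈ J(g)(f(a)) · J(f)(a) for two maps R² → R²:

```python
import math

def jacobian(F, a, h=1e-6):
    # numeric Jacobian: column j is (F(a + h e_j) - F(a)) / h
    fa = F(a)
    cols = []
    for j in range(len(a)):
        ap = list(a)
        ap[j] += h
        cols.append([(yi - fi) / h for yi, fi in zip(F(ap), fa)])
    # transpose the columns into rows
    return [[cols[j][i] for j in range(len(a))] for i in range(len(fa))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

f = lambda p: (p[0] * p[1], p[0] + p[1])
g = lambda p: (math.sin(p[0]), p[0] * p[1])
a = (0.5, 1.5)

lhs = jacobian(lambda p: g(f(p)), a)             # J(g o f)(a)
rhs = matmul(jacobian(g, f(a)), jacobian(f, a))  # J(g)(f(a)) . J(f)(a)
for row_l, row_r in zip(lhs, rhs):
    assert all(abs(x - y) < 1e-3 for x, y in zip(row_l, row_r))
print("chain rule verified numerically")
```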

D.1. DIRECTIONAL DERIVATIVES, TOTAL DERIVATIVES 481

Lemma D.1.11 has the remarkable consequence that the two vector spaces E⃗ and F⃗ have the same dimension. In other words, a local property, the existence of a bijection f between an open set A of E and an open set B of F, such that f is differentiable on A and f⁻¹ is differentiable on B, implies a global property, that the two vector spaces E⃗ and F⃗ have the same dimension.

We now consider the situation where the normed affine space F is a finite direct sum F = (F1, b1) ⊕ · · · ⊕ (Fm, bm).

Lemma D.1.12. Given normed affine spaces E and F = (F1, b1) ⊕ · · · ⊕ (Fm, bm), given any open subset A of E, for any a ∈ A, for any function f : A → F, letting f = (f1, . . . , fm), Df(a) exists iff every Dfi(a) exists, and

Df(a) = in1 ◦ Df1(a) + · · · + inm ◦ Dfm(a),

for some obvious injections ini.

Proof. Observe that f(a + h) − f(a) = (f(a + h) − b) − (f(a) − b), where b = (b1, . . . , bm), and thus, as far as dealing with derivatives, Df(a) is equal to Dfb(a), where fb : E → F⃗ is defined such that, for every x ∈ E, fb(x) = f(x) − b. Thus, we can work with the vector space F⃗ instead of the affine space F.

In the special case where F is a normed affine space of finite dimension m, for any frame (b0, (v1, . . . , vm)) of F, where (v1, . . . , vm) is a basis of F⃗, every point x ∈ F can be expressed uniquely as

x = b0 + x1 v1 + · · · + xm vm,

where (x1, . . . , xm) ∈ K^m, the coordinates of x in the frame (b0, (v1, . . . , vm)) (where K = R or K = C). Thus, letting Fi be the standard normed affine space K with its natural structure, we note that F is isomorphic to the direct sum F = (K, 0) ⊕ · · · ⊕ (K, 0). The following lemma is an immediate corollary of lemma D.1.12.

Lemma D.1.13. For any two normed affine spaces E and F, if F is of finite dimension m, for any frame (b0, (v1, . . . , vm)) of F, where (v1, . . . , vm) is a basis of F⃗, a function f : E → F is differentiable at a iff each fi is differentiable at a, and

Df(a)(u) = Df1(a)(u) v1 + · · · + Dfm(a)(u) vm,

for every u ∈ E⃗, where every function fi : E → K (where K = R or K = C) is defined such that, for every x ∈ E,

f(x) = b0 + f1(x) v1 + · · · + fm(x) vm.

when E is of dimension n. . Df (c) = Dfa (c − a). . an ) and a normed aﬃne space F .5 is really that of the vector derivative. for every − ∈ E . u u for some obvious (Ej . an ). .r. an ). If D(f ◦ic )(cj ) exists. and a normed aﬃne space F . (−1 . − )). deﬁning − → → → → → − fa : E → F such that. The notion ∂j f (c) introduced in deﬁnition D. . DIFFERENTIAL CALCULUS We now consider the situation where E is a ﬁnite direct sum. This notion is a generalization of the notion deﬁned in deﬁnition D. Since every c ∈ E can be written as c = a + c − a. we have functions f ◦ ic : Ej → F . . . Given a normed aﬃne space E = (E1 . an ). The deﬁnition of ic and of Dj f (c) also makes sense for j a ﬁnite product E1 × · · · × En of aﬃne spaces Ei . . Note that Dj f (c) ∈ L(Ej .1. for any function f : A → F . Lemma D. . . .1. . u u u u → → − for every −i ∈ Ei . . we can work with the function fa whose domain is the vector space E . whereas Dj f (c) is the corresponding linear map. Although perhaps confusing. we call it the partial derivative of f w. .5. The lemma is then a simple application of lemma D. . aj ) (as explained just after lemma lemma D. D. In fact. . The following lemma holds. we identify the two notions. a1 ) ⊕ · · · ⊕ (En . . given any open subset A of E. u u u − → and thus. and E has a frame (a0 . such that. → → and a frame (a0 . . . j ic (x) = (c1 . F ).482 APPENDIX D. We also denote this derivative as Dj f (c). cj+1 .14. j For any function f : A → F . we deﬁne the continuous functions ic : Ej → E. cn ).9. . we can write E = (E1 . . its jth argument.12). 1 ≤ i ≤ n. −n )) has been chosen. (−1 . . . . and then → Dj f (c)(λ−j ) = λ∂j f (c). j − − → → at c. . cj−1. given any open subset A of E. . . We will use freely the notation ∂j f (c) instead of Dj f (c). every function f : E → F is determined by m functions fi : E → R v vm (or fi : E → C). .1.2 Jacobian Matrices → → If both E and F are of ﬁnite dimension. −n ) = D1 f (c)(−1 ) + · · · + Dn f (c)(−n ). . 
where → → f (x) = b0 + f1 (x)−1 + · · · + fm (x)− . u Proof. .t. u and the two notions are consistent. a1 ) ⊕ · · · ⊕ (En . . Given a normed aﬃne space E = (E1 .1. cn ) ∈ A. a1 ) ⊕ · · · ⊕ (En . The same result holds for the ﬁnite product E1 × · · · × En . x. which j j contains cj . for any c = (c1 . and → → → → Df (c)(−1 . . v vm . clearly. then each Dj f (c) exists. for every c ∈ A. (−1 .if Df (c) exists. . deﬁned on (ic )−1 (A). −n )) and F has a u u → → frame (b0 . fa (− ) = f (a + − ). where a = (a1 .1.
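The decomposition of the derivative into partial derivatives along each argument can be checked numerically. The following sketch (plain Python; the helper `partial` and the sample function are illustrative, not from the text) differentiates t ↦ f(c_1, ..., c_j + t, ..., c_n) at t = 0, which is exactly the derivative of f ∘ i_j^c at c_j:

```python
def partial(f, c, j, eps=1e-6):
    """Central-difference approximation of the partial derivative D_j f(c):
    freeze every argument except the j-th one and differentiate the
    resulting one-variable function (i.e. f composed with i_j^c)."""
    cp, cm = list(c), list(c)
    cp[j] += eps
    cm[j] -= eps
    return (f(cp) - f(cm)) / (2.0 * eps)

# Illustration: f(x1, x2) = x1^2 * x2, so that D_1 f(c) = 2 c1 c2
# and D_2 f(c) = c1^2 exactly.
f = lambda c: c[0] ** 2 * c[1]
c = [3.0, 2.0]
d1 = partial(f, c, 0)   # close to 2 * 3 * 2 = 12
d2 = partial(f, c, 1)   # close to 3^2 = 9
```

In the notation of the lemma, Df(c)(u_1, u_2) is then recovered as d1·u_1 + d2·u_2.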

Since f(x) = b_0 + f_1(x)v_1 + ··· + f_m(x)v_m, we have

\[ Df(a)(u_j) = Df_1(a)(u_j)\, v_1 + \cdots + Df_i(a)(u_j)\, v_i + \cdots + Df_m(a)(u_j)\, v_m, \]

and since, by definition D.1.5, Df(a)(u_j) = D_{u_j} f(a) = ∂_j f(a), we have

\[ Df(a)(u_j) = \partial_j f_1(a)\, v_1 + \cdots + \partial_j f_i(a)\, v_i + \cdots + \partial_j f_m(a)\, v_m. \]

Since the jth column of the m × n matrix representing Df(a) w.r.t. the bases (u_1, ..., u_n) and (v_1, ..., v_m) consists of the components of the vector Df(a)(u_j) over the basis (v_1, ..., v_m), the linear map Df(a) is determined by the m × n matrix J(f)(a) = (∂_j f_i(a)) (or J(f)(a) = (∂f_i/∂x_j (a))):

\[
J(f)(a) =
\begin{pmatrix}
\partial_1 f_1(a) & \partial_2 f_1(a) & \cdots & \partial_n f_1(a) \\
\partial_1 f_2(a) & \partial_2 f_2(a) & \cdots & \partial_n f_2(a) \\
\vdots & \vdots & \ddots & \vdots \\
\partial_1 f_m(a) & \partial_2 f_m(a) & \cdots & \partial_n f_m(a)
\end{pmatrix}
\quad\text{or}\quad
J(f)(a) =
\begin{pmatrix}
\dfrac{\partial f_1}{\partial x_1}(a) & \dfrac{\partial f_1}{\partial x_2}(a) & \cdots & \dfrac{\partial f_1}{\partial x_n}(a) \\
\dfrac{\partial f_2}{\partial x_1}(a) & \dfrac{\partial f_2}{\partial x_2}(a) & \cdots & \dfrac{\partial f_2}{\partial x_n}(a) \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial f_m}{\partial x_1}(a) & \dfrac{\partial f_m}{\partial x_2}(a) & \cdots & \dfrac{\partial f_m}{\partial x_n}(a)
\end{pmatrix}
\]

This matrix is called the Jacobian matrix of Df at a. When m = n, its determinant det(J(f)(a)) is called the Jacobian of Df(a). From a previous remark, we know that this determinant in fact only depends on Df(a), and not on specific bases. However, partial derivatives give a means for computing it.

When E = R^n and F = R^m, for any function f: R^n → R^m, it is easy to compute the partial derivatives ∂f_i/∂x_j (a). We simply treat the function f_i: R^n → R as a function of its jth argument, leaving the others fixed, and compute the derivative as in definition D.1.5, that is, as the usual derivative. For example, consider the function f: R^2 → R^2, defined such that

\[ f(r, \theta) = (r \cos\theta,\; r \sin\theta). \]
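The polar-coordinates example can be checked numerically. The sketch below (plain Python; the helper `numerical_jacobian` is a made-up name built on central differences) approximates J(f)(r, θ) column by column and compares its determinant with the exact value r derived above:

```python
import math

def numerical_jacobian(f, a, eps=1e-6):
    """Central-difference approximation of the Jacobian matrix J(f)(a).
    Column j holds the partial derivatives of the components of f
    with respect to the j-th variable."""
    m, n = len(f(a)), len(a)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        ap, am = list(a), list(a)
        ap[j] += eps
        am[j] -= eps
        fp, fm = f(ap), f(am)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    return J

# The map f(r, theta) = (r cos(theta), r sin(theta)) from the text.
def polar(p):
    r, theta = p
    return [r * math.cos(theta), r * math.sin(theta)]

r, theta = 2.0, 0.7
J = numerical_jacobian(polar, [r, theta])
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
# det(J(f)(r, theta)) agrees with r up to the finite-difference error
```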

Then we have

\[
J(f)(r, \theta) =
\begin{pmatrix}
\cos\theta & -r\sin\theta \\
\sin\theta & r\cos\theta
\end{pmatrix}
\]

and the Jacobian (determinant) has value det(J(f)(r, θ)) = r.

In the case where E = R (or E = C), for any function f: R → F (or f: C → F), the Jacobian matrix of Df(a) is a column vector. In fact, this column vector is just D_1 f(a). Then, for every λ ∈ R (or λ ∈ C),

\[ Df(a)(\lambda) = \lambda\, D_1 f(a). \]

This case is sufficiently important to warrant a definition.

Definition D.2.1. Given a function f: R → F (or f: C → F), where F is a normed affine space, the vector Df(a)(1) = D_1 f(a) is called the vector derivative, or velocity vector (in the real case), at a.

We usually identify Df(a) with its Jacobian matrix D_1 f(a), which is the column vector corresponding to D_1 f(a). By abuse of notation, we also let Df(a) denote the vector Df(a)(1) = D_1 f(a).

When E = R, the physical interpretation is that f defines a (parametric) curve which is the trajectory of some particle moving in R^m as a function of time, and the vector D_1 f(a) is the velocity of the moving particle f(t) at t = a. For example, when E = [0, 1] and F = R^3, a function f: [0, 1] → R^3 defines a (parametric) curve in R^3. Letting f = (f_1, f_2, f_3), its Jacobian matrix at a ∈ R is

\[
J(f)(a) =
\begin{pmatrix}
\dfrac{\partial f_1}{\partial t}(a) \\
\dfrac{\partial f_2}{\partial t}(a) \\
\dfrac{\partial f_3}{\partial t}(a)
\end{pmatrix}
\]

When E = R^2 and F = R^3, a function φ: R^2 → R^3 defines a parametric surface. Letting φ = (f, g, h), its Jacobian matrix at a ∈ R^2 is

\[
J(\varphi)(a) =
\begin{pmatrix}
\dfrac{\partial f}{\partial u}(a) & \dfrac{\partial f}{\partial v}(a) \\
\dfrac{\partial g}{\partial u}(a) & \dfrac{\partial g}{\partial v}(a) \\
\dfrac{\partial h}{\partial u}(a) & \dfrac{\partial h}{\partial v}(a)
\end{pmatrix}
\]

It is often useful to consider functions f: [a, b] → F from a closed interval [a, b] ⊆ R to a normed affine space F, and their derivatives on [a, b], even though [a, b] is not open. In this case, as in the case of a real-valued function, we define the right derivative D_1 f(a+) at a and the left derivative D_1 f(b−) at b, and we assume their existence.
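As an illustration of definition D.2.1, the velocity vector of a parametric curve can be approximated by a one-variable difference quotient applied componentwise. The curve below (a circular helix) is a made-up example chosen because its exact velocity is known:

```python
import math

def velocity(curve, t, eps=1e-6):
    """Approximate the vector derivative D_1 f(t) componentwise by a
    central difference; this is the velocity of the moving point curve(t)."""
    fp, fm = curve(t + eps), curve(t - eps)
    return [(p - q) / (2.0 * eps) for p, q in zip(fp, fm)]

# A curve f: [0, 1] -> R^3 (a circular helix), whose exact velocity is
# D_1 f(t) = (-sin t, cos t, 1).
def helix(t):
    return [math.cos(t), math.sin(t), t]

v = velocity(helix, 0.5)
```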

When E = R^3 and F = R, for a function f: R^3 → R, the Jacobian matrix at a ∈ R^3 is the row vector

\[ J(f)(a) = \begin{pmatrix} \dfrac{\partial f}{\partial x}(a) & \dfrac{\partial f}{\partial y}(a) & \dfrac{\partial f}{\partial z}(a) \end{pmatrix}. \]

More generally, when f: R^n → R, the Jacobian matrix at a ∈ R^n is the row vector

\[ J(f)(a) = \begin{pmatrix} \dfrac{\partial f}{\partial x_1}(a) & \cdots & \dfrac{\partial f}{\partial x_n}(a) \end{pmatrix}. \]

Its transpose is a column vector called the gradient of f at a, denoted as grad f(a) or ∇f(a). Then, given any v = (v_1, ..., v_n) ∈ R^n, note that

\[ Df(a)(v) = \frac{\partial f}{\partial x_1}(a)\, v_1 + \cdots + \frac{\partial f}{\partial x_n}(a)\, v_n = \mathrm{grad} f(a) \cdot v, \]

the scalar product of grad f(a) and v.

When E, F, and G have finite dimensions, (a_0, (u_1, ..., u_p)) is an affine frame for E, (b_0, (v_1, ..., v_n)) is an affine frame for F, and (c_0, (w_1, ..., w_m)) is an affine frame for G, if A is an open subset of E, B is an open subset of F, and f: A → F and g: B → G are functions such that f(A) ⊆ B, then for any a ∈ A, letting b = f(a) and h = g ∘ f, if Df(a) exists and Dg(b) exists, then by the chain rule, the Jacobian matrix J(h)(a) = J(g ∘ f)(a) w.r.t. the bases (u_1, ..., u_p) and (w_1, ..., w_m) is the product of the Jacobian matrices J(g)(b) w.r.t. the bases (v_1, ..., v_n) and (w_1, ..., w_m), and J(f)(a) w.r.t. the bases (u_1, ..., u_p) and (v_1, ..., v_n):

\[
J(h)(a) =
\begin{pmatrix}
\dfrac{\partial g_1}{\partial y_1}(b) & \dfrac{\partial g_1}{\partial y_2}(b) & \cdots & \dfrac{\partial g_1}{\partial y_n}(b) \\
\dfrac{\partial g_2}{\partial y_1}(b) & \dfrac{\partial g_2}{\partial y_2}(b) & \cdots & \dfrac{\partial g_2}{\partial y_n}(b) \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial g_m}{\partial y_1}(b) & \dfrac{\partial g_m}{\partial y_2}(b) & \cdots & \dfrac{\partial g_m}{\partial y_n}(b)
\end{pmatrix}
\begin{pmatrix}
\dfrac{\partial f_1}{\partial x_1}(a) & \dfrac{\partial f_1}{\partial x_2}(a) & \cdots & \dfrac{\partial f_1}{\partial x_p}(a) \\
\dfrac{\partial f_2}{\partial x_1}(a) & \dfrac{\partial f_2}{\partial x_2}(a) & \cdots & \dfrac{\partial f_2}{\partial x_p}(a) \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial f_n}{\partial x_1}(a) & \dfrac{\partial f_n}{\partial x_2}(a) & \cdots & \dfrac{\partial f_n}{\partial x_p}(a)
\end{pmatrix}
\]

Thus, we have the familiar formula

\[ \frac{\partial h_i}{\partial x_j}(a) = \sum_{k=1}^{n} \frac{\partial g_i}{\partial y_k}(b)\, \frac{\partial f_k}{\partial x_j}(a). \]
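The matrix form of the chain rule, J(g ∘ f)(a) = J(g)(b) J(f)(a), can be verified numerically on a small example. The maps f and g below are arbitrary smooth illustrations (not from the text), and both sides are approximated by central differences:

```python
import math

def jacobian(f, a, eps=1e-6):
    """Central-difference approximation of the Jacobian matrix J(f)(a)."""
    m, n = len(f(a)), len(a)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        ap, am = list(a), list(a)
        ap[j] += eps
        am[j] -= eps
        fp, fm = f(ap), f(am)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    return J

def matmul(A, B):
    """Plain matrix product of two nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

f = lambda p: [p[0] * math.cos(p[1]), p[0] * math.sin(p[1])]  # R^2 -> R^2
g = lambda q: [q[0] * q[1], q[0] + q[1], q[0] ** 2]           # R^2 -> R^3
h = lambda p: g(f(p))                                          # h = g o f

a = [1.5, 0.3]
lhs = jacobian(h, a)                              # J(g o f)(a)
rhs = matmul(jacobian(g, f(a)), jacobian(f, a))   # J(g)(b) J(f)(a)
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(2))
# err is tiny: the two sides agree up to the finite-difference error
```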

Given two normed affine spaces E and F of finite dimension, given an open subset A of E, if a function f: A → F is differentiable at a ∈ A, then its Jacobian matrix is well defined.

One should be warned that the converse is false. There are functions such that all the partial derivatives exist at some a ∈ A, but yet, the function is not differentiable at a, and not even continuous at a. For example, consider the function f: R^2 → R, defined such that f(0, 0) = 0, and

\[ f(x, y) = \frac{x^2 y}{x^4 + y^2} \quad \text{if } (x, y) \neq (0, 0). \]

For any u ≠ 0, letting u = (h, k), we have

\[ \frac{f(0 + t u) - f(0)}{t} = \frac{h^2 k}{t^2 h^4 + k^2}, \]

so that

\[ D_u f(0) = \begin{cases} \dfrac{h^2}{k} & \text{if } k \neq 0, \\ 0 & \text{if } k = 0. \end{cases} \]

Thus, D_u f(0) exists for all u ≠ 0. On the other hand, if Df(0) existed, it would be a linear map Df(0): R^2 → R represented by a row matrix (α β), and we would have D_u f(0) = Df(0)(u) = αh + βk, but the explicit formula for D_u f(0) is not linear. As a matter of fact, the function f is not even continuous at (0, 0). For example, on the parabola y = x^2, f(x, x^2) = 1/2, and when we approach the origin on this parabola, the limit is 1/2, when in fact f(0, 0) = 0.

However, there are sufficient conditions on the partial derivatives for Df(a) to exist, namely continuity of the partial derivatives. If f is differentiable on A, then f defines a function Df: A → L(E, F). It turns out that the continuity of the partial derivatives on A is a necessary and sufficient condition for Df to exist and to be continuous on A.

We first state a lemma which plays an important role in the proof of several major results of differential calculus. If E is an affine space (over R or C), given any two points a, b ∈ E, the closed segment [a, b] is the set of all points a + λ(b − a), where 0 ≤ λ ≤ 1, and the open segment ]a, b[ is the set of all points a + λ(b − a), where 0 < λ < 1.

Lemma D.2.2. Let E and F be two normed affine spaces, let A be an open subset of E, and let f: A → F be a continuous function on A. Given any a ∈ A and any h ≠ 0 in E, if the closed segment [a, a + h] is contained in A, if f: A → F is differentiable at every point of the open segment ]a, a + h[, and if

\[ \max_{x \in ]a, a+h[} \|Df(x)\| \leq M \]

for some M ≥ 0, then

\[ \|f(a + h) - f(a)\| \leq M \|h\|. \]
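The pathology of the counterexample f(x, y) = x²y/(x⁴ + y²) is easy to observe numerically. The sketch below evaluates the difference quotient defining D_u f(0) and samples f along the parabola y = x²: every directional derivative exists, yet the values along the parabola stay at 1/2 while f(0, 0) = 0:

```python
def f(x, y):
    # f(0,0) = 0, and f(x,y) = x^2 y / (x^4 + y^2) elsewhere
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * x * y / (x ** 4 + y * y)

def quotient(h, k, t):
    # (f(t*(h,k)) - f(0,0)) / t, which tends to D_u f(0) as t -> 0
    return f(t * h, t * k) / t

# For u = (2, 1), the formula D_u f(0) = h^2 / k predicts the value 4.
d = quotient(2.0, 1.0, 1e-6)

# Along y = x^2 the function is identically 1/2 away from the origin,
# so f cannot be continuous at (0, 0) even though every D_u f(0) exists.
on_parabola = [f(x, x * x) for x in (1e-1, 1e-3, 1e-6)]
```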

As a corollary, if L: E → F is a continuous linear map, then

\[ \|f(a + h) - f(a) - L(h)\| \leq M \|h\|, \]

where M = max_{x ∈ ]a, a+h[} ‖Df(x) − L‖.

The above lemma is sometimes called the "mean-value theorem". Lemma D.2.2 can be used to show the following important result.

Lemma D.2.3. Given two normed affine spaces E and F, where E is of finite dimension n and (a_0, (u_1, ..., u_n)) is a frame of E, given any open subset A of E, given any function f: A → F, the derivative Df: A → L(E, F) is defined and continuous on A iff every partial derivative ∂_j f (or ∂f/∂x_j) is defined and continuous on A, for all j, 1 ≤ j ≤ n. As a corollary, if F is of finite dimension m and (b_0, (v_1, ..., v_m)) is a frame of F, the derivative Df: A → L(E, F) is defined and continuous on A iff every partial derivative ∂_j f_i (or ∂f_i/∂x_j) is defined and continuous on A, for all i, j, 1 ≤ i ≤ m, 1 ≤ j ≤ n.

Lemma D.2.3 gives a necessary and sufficient condition for the existence and continuity of the derivative of a function on an open set. It should be noted that a more general version of lemma D.2.3 holds, assuming that E = (E_1, a_1) ⊕ ··· ⊕ (E_n, a_n), or E = E_1 × ··· × E_n, and using the more general partial derivatives D_j f introduced before lemma D.1.14.

Definition D.2.4. Given two normed affine spaces E and F, and an open subset A of E, we say that a function f: A → F is of class C^0 on A, or a C^0-function on A, iff f is continuous on A. We say that f: A → F is of class C^1 on A, or a C^1-function on A, iff Df exists and is continuous on A.

Since the existence of the derivative on an open set implies continuity, a C^1-function is of course a C^0-function. Lemma D.2.3 gives a necessary and sufficient condition for a function f to be a C^1-function (when E is of finite dimension). It is easy to show that the composition of C^1-functions (on appropriate open sets) is a C^1-function. It is also possible to define higher-order derivatives. For a complete treatment of higher-order derivatives and of various versions of the Taylor formula, see Lang [48].


Bibliography

[1] Michael Artin. Algebra. Prentice Hall, first edition, 1991.

[2] A. Avez. Calcul Différentiel. Masson, first edition, 1983.

[3] A.A. Ball and D.J.T. Storry. Conditions for tangent plane continuity over recursively generated B-spline surfaces. ACM Transactions on Computer Graphics, 7(2):83–102, 1988.

[4] Richard H. Bartels, John C. Beatty, and Brian A. Barsky. An Introduction to Splines for Use in Computer Graphics and Geometric Modelling. Morgan Kaufmann, first edition, 1987.

[5] Marcel Berger. Géométrie 1. Nathan, 1990. English edition: Geometry 1, Universitext, Springer Verlag, 1990.

[6] Marcel Berger. Géométrie 2. Nathan, 1990. English edition: Geometry 2, Universitext, Springer Verlag, 1990.

[7] Marcel Berger and Bernard Gostiaux. Géométrie différentielle: variétés, courbes et surfaces. Collection Mathématiques. Puf, second edition, 1992. English edition: Differential geometry, manifolds, curves, and surfaces, GTM No. 115, Springer Verlag, 1988.

[8] J.E. Bertin. Algèbre linéaire et géométrie classique. Masson, first edition, 1981.

[9] W. Boehm. Subdividing multivariate splines. Computer Aided Design, 15:345–352, 1983.

[10] W. Boehm and G. Farin. Concerning subdivision of Bézier triangles, letter to the editor. Computer Aided Design, 15(5):260–261, 1983.

[11] W. Boehm, G. Farin, and J. Kahman. A survey of curve and surface methods in CAGD. Computer Aided Geometric Design, 1:1–60, 1984.

[12] Wolfgang Böhm and Hartmut Prautzsch. Geometric Concepts for Geometric Design. AK Peters, first edition, 1994.

[13] J.-D. Boissonnat and M. Yvinec. Géométrie Algorithmique. Ediscience International, first edition, 1995.

[14] Nicolas Bourbaki. Algèbre, Chapitres 1-3. Eléments de Mathématiques. Hermann, 1970.

[15] Nicolas Bourbaki. Algèbre, Chapitres 4-7. Eléments de Mathématiques. Masson, 1981.

[16] Henri Cartan. Cours de Calcul Différentiel. Collection Méthodes. Hermann, 1990.

[17] E. Catmull and J. Clark. Recursively generated B-spline surfaces on arbitrary topological meshes. Computer Aided Design, 10(6):350–355, 1978.

[18] G. Chaikin. An algorithm for high-speed curve generation. Computer Graphics and Image Processing, 3:346–349, 1974.

[19] P.G. Ciarlet. Introduction to Numerical Matrix Analysis and Optimization. Cambridge University Press, first edition, 1989. French edition: Masson, 1994.

[20] E. Cohen, T. Lyche, and R. Riesenfeld. Discrete B-splines and subdivision techniques in computer aided geometric design and computer graphics. Computer Graphics and Image Processing, 14(2):87–111, 1980.

[21] Philip J. Davis. Circulant Matrices. AMS Chelsea Publishing, second edition, 1994.

[22] Carl de Boor. A Practical Guide to Splines. Springer Verlag, first edition, 1978.

[23] Paul de Faget de Casteljau. Shape Mathematics and CAD. Hermes, first edition, 1986.

[24] Tony DeRose, Michael Kass, and Tien Truong. Subdivision surfaces in character animation. In Computer Graphics (SIGGRAPH '98 Proceedings), pages 85–94. ACM, 1998.

[25] Jean Dieudonné. Algèbre Linéaire et Géométrie Elémentaire. Hermann, second edition, 1965.

[26] Manfredo P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice Hall, 1976.

[27] D. Doo. A subdivision algorithm for smoothing down irregularly shaped polyhedrons. In Proceedings on Interactive Techniques in computer aided design, pages 157–165, Bologna, Italy, 1978. IEEE.

[28] D. Doo and M. Sabin. Behavior of recursive division surfaces near extraordinary points. Computer Aided Design, 10(6):356–360, 1978.

[29] D.W.H. Doo. A recursive subdivision algorithm for fitting quadratic surfaces to irregular polyhedrons. PhD thesis, Brunel University, Oxbridge, England, 1978.

[30] G. Farin. Triangular Bernstein-Bézier patches. Computer Aided Geometric Design, 3:83–127, 1986.

[31] Gerald Farin. Curves and Surfaces for CAGD. Academic Press, fourth edition, 1998.

[32] Gerald Farin. NURB Curves and Surfaces, from Projective Geometry to practical use. AK Peters, first edition, 1995.

[33] Olivier Faugeras. Three-Dimensional Computer Vision, A geometric Viewpoint. the MIT Press, first edition, 1993.

[34] D. Filip. Adaptive subdivision algorithms for a set of Bézier triangles. Computer Aided Design, 18:74–78, 1986.

[35] J.-C. Fiorot and P. Jeannin. Courbes et Surfaces Rationelles. RMA 12. Masson, first edition, 1989.

[36] J.-C. Fiorot and P. Jeannin. Courbes Splines Rationelles. RMA 24. Masson, first edition, 1992.

[37] William Fulton. Algebraic Curves. Advanced Book Classics. Addison Wesley, first edition, 1989.

[38] R.N. Goldman. Subdivision algorithms for Bézier triangles. Computer Aided Design, 15:159–166, 1983.

[39] A. Gray. Modern Differential Geometry of Curves and Surfaces. CRC Press, second edition, 1997.

[40] Donald T. Greenwood. Principles of Dynamics. Prentice Hall, second edition, 1988.

[41] Joe Harris. Algebraic Geometry, A first course. GTM No. 133. Springer Verlag, first edition, 1992.

[42] D. Hilbert and S. Cohn-Vossen. Geometry and the Imagination. Chelsea Publishing Co., 1952.

[43] Christoph M. Hoffman. Geometric and Solid Modeling. Morgan Kaufmann, first edition, 1989.

[44] H. Hoppe, T. DeRose, T. Duchamp, M. Halstead, H. Jin, J. McDonald, J. Schweitzer, and W. Stuetzle. Piecewise smooth surface reconstruction. In Computer Graphics (SIGGRAPH '94 Proceedings), pages 295–302. ACM, 1994.

[45] J. Hoschek and D. Lasser. Fundamentals of Computer Aided Geometric Design. AK Peters, first edition, 1993.

[46] Jan J. Koenderink. Solid Shape. MIT Press, first edition, 1990.

[47] Serge Lang. Algebra. Addison Wesley, third edition, 1993.

[48] Serge Lang. Undergraduate Analysis. UTM. Springer Verlag, second edition, 1997.

[49] Aaron Lee, Wim Sweldens, Peter Schröder, Lawrence Cowsar, and David Dobkin. MAPS: Multiresolution adaptive parameterization of surfaces. In Computer Graphics (SIGGRAPH '98 Proceedings), pages 95–104. ACM, 1998.

[50] Charles Loop. Smooth Subdivision Surfaces based on triangles. Master's thesis, University of Utah, Department of Mathematics, Salt Lake City, Utah, 1987.

[51] Charles Loop. A G1 triangular spline surface of arbitrary topological type. Computer Aided Geometric Design, 11:303–330, 1994.

[52] Saunders Mac Lane and Garrett Birkhoff. Algebra. Macmillan, first edition, 1967.

[53] Dimitris N. Metaxas. Physics-Based Deformable Models. Kluwer Academic Publishers, first edition, 1997.

[54] James R. Munkres. Topology, a First Course. Prentice Hall, first edition, 1975.

[55] James R. Munkres. Analysis on Manifolds. Addison Wesley, 1991.

[56] A. Nasri. Polyhedral subdivision methods for free-form surfaces. ACM Transactions on Computer Graphics, 6(1):29–73, 1987.

[57] Walter Noll. Finite-Dimensional Spaces. Kluwer Academic Publishers Group, first edition, 1987.

[58] Joseph O'Rourke. Computational Geometry in C. Cambridge University Press, second edition, 1998.

[59] Dan Pedoe. Geometry, A comprehensive course. Dover, first edition, 1988.

[60] J. Peters and U. Reif. The simplest subdivision scheme for smoothing polyhedra. ACM Transactions on Graphics, 16(4), 1997.

[61] J. Peters and U. Reif. Analysis of generalized B-spline subdivision algorithms. SIAM Journal of Numerical Analysis, 35(2), 1998.

[62] Les Piegl and Wayne Tiller. The NURBS Book. Monograph in Visual Communications. Springer Verlag, first edition, 1995.

[63] Helmut Prautzsch. Unterteilungsalgorithmen für multivariate splines. PhD thesis, TU Braunschweig, Braunschweig, Germany, 1984.

[64] F.P. Preparata and M.I. Shamos. Computational Geometry: An Introduction. Springer Verlag, first edition, 1988.

[65] Lyle Ramshaw. Blossoming: A connect-the-dots approach to splines. Technical Report 19, Digital SRC, Palo Alto, CA 94301, 1987.

[66] U. Reif. A unified approach to subdivision algorithms near extraordinary vertices. Computer Aided Geometric Design, 12, 1995.

[67] R.F. Riesenfeld. On Chaikin's algorithm. IEEE Computer Graphics and Image Processing, 4:304–310, 1975.

[68] J.-J. Risler. Mathematical Methods for CAD. Masson, first edition, 1992.

[69] Pierre Samuel. Projective Geometry. Undergraduate Texts in Mathematics. Springer Verlag, first edition, 1988.

[70] Laurent Schwartz. Analyse I. Théorie des Ensembles et Topologie. Collection Enseignement des Sciences. Hermann, 1991.

[71] Laurent Schwartz. Analyse II. Calcul Différentiel et Equations Différentielles. Collection Enseignement des Sciences. Hermann, 1992.

[72] Laurent Schwartz. Analyse III. Calcul Intégral. Collection Enseignement des Sciences. Hermann, 1993.

[73] Laurent Schwartz. Analyse IV. Applications à la Théorie de la Mesure. Collection Enseignement des Sciences. Hermann, 1993.

[74] Thomas Sederberg, Jianmin Zheng, David Sewell, and Malcolm Sabin. Non-uniform recursive subdivision surfaces. In Computer Graphics (SIGGRAPH '98 Proceedings), pages 387–394. ACM, 1998.

[75] H.P. Seidel. A general subdivision theorem for Bézier triangles. In Tom Lyche and Larry Schumaker, editors, Mathematical Methods in Computer Aided Geometric Design, pages 573–581. Academic Press, Boston, Mass., 1989.

[76] J.-C. Sidler. Géométrie Projective. InterEditions, first edition, 1993.

[77] Ernst Snapper and Robert J. Troyer. Metric Affine Geometry. Dover, first edition, 1989.

[78] Jos Stam. Exact evaluation of Catmull-Clark subdivision surfaces at arbitrary parameter values. In Computer Graphics (SIGGRAPH '98 Proceedings), pages 395–404. ACM, 1998.

[79] Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for Computer Graphics: Theory and Applications. Morgan Kaufmann, first edition, 1996.

[80] Gilbert Strang. Introduction to Applied Mathematics. Wellesley-Cambridge Press, first edition, 1986.

[81] Gilbert Strang. Linear Algebra and its Applications. Saunders HBJ, third edition, 1988.

[82] Gilbert Strang and Nguyen Truong. Wavelets and Filter Banks. Wellesley-Cambridge Press, second edition, 1997.

[83] Claude Tisseron. Géométries affines, projectives, et euclidiennes. Hermann, first edition, 1994.

[84] B.L. Van Der Waerden. Algebra, Vol. 1. Ungar, seventh edition, 1973.

[85] O. Veblen and J. W. Young. Projective Geometry, Vol. 1. Ginn, second edition, 1938.

[86] O. Veblen and J. W. Young. Projective Geometry, Vol. 2. Ginn, first edition, 1946.

[87] Hermann Weyl. The Classical Groups, Their Invariants and Representations. Princeton Mathematical Series, No. 1. Princeton University Press, second printing edition, 1946.

[88] D. Zorin, P. Schröder, and W. Sweldens. Interpolation subdivision for meshes with arbitrary topology. In Computer Graphics (SIGGRAPH '96 Proceedings), pages 189–192. ACM, 1996.

[89] Denis Zorin. Subdivision and Multiresolution surface representation. PhD thesis, Caltech, Pasadena, California, 1997.