Biological Cybernetics 67, 479-489 (1992)
© Springer-Verlag 1992
From basis functions to basis fields: vector field approximation from sparse data
Ferdinando A. Mussa-Ivaldi
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Received June 1, 1992 / Accepted in revised form June 1, 1992
Abstract. Recent investigations (Poggio and Girosi 1990b) have pointed out the equivalence between a wide class of learning problems and the reconstruction of a real-valued function from a sparse set of data. However, in order to process sensory information and to generate purposeful actions, living organisms must deal not only with real-valued functions but also with vector-valued mappings. Examples of such vector-valued mappings range from the optical flow fields associated with visual motion to the fields of mechanical forces produced by neuromuscular activation. In this paper, I discuss the issue of vector-field processing from a broad computational perspective. A variety of vector patterns can be efficiently represented by a combination of linearly independent vector fields that I call "basis fields". Basis fields offer in some cases a better alternative to treating each component of a vector as an independent scalar entity. In spite of its apparent simplicity, such a component-based representation is bound to change with any change of coordinates. In contrast, vector-valued primitives such as basis fields generate vector field representations that are invariant under coordinate transformations.
1 Introduction

From a mathematical perspective, learning a task is equivalent to establishing an associative map between input and output patterns from a collection of elements (or examples) of this mapping. Often, the learning process has been considered as starting from tabula rasa, that is, from a completely unstructured network of computational elements, each element implementing some elementary form of input-output relation (Hopfield 1982; Rumelhart et al. 1986). However, by focusing exclusively on such elementary computational modules as the McCulloch and Pitts neurons, one may miss one of the most relevant features of any learning system, that is the ability to take advantage of past experience as a building ground for developing new skills. Past experience may lead to the development of functional units which implement relatively complex input-output maps and which should be considered in their own right as computational primitives.

The idea of learning associative mappings by combining complex computational primitives has been developed into a formal theory by Poggio and Girosi (1990a, b). Their approach is based upon the equivalence between regularizing an ill-posed problem (Tikhonov and Arsenin 1977; Poggio et al. 1985) and reconstructing (or approximating) a surface by combining local basis functions (Micchelli 1986; Powell 1987). From this theory, Poggio and Girosi derived a class of three-layer networks whose hidden units implement a set of basis functions. Here, I propose to extend the same paradigm to the representation of vectorial maps. This is important in a wide range of computational tasks involving the reconstruction of a continuous vector field from a sparse set of examples.

Usually, in the processing of vector-field information the data are available as physical coordinates, measured with respect to some reference system. In contrast, the most relevant information about a vector field may often be stated in terms of coordinate-invariant expressions involving relations among vector components, such as the curl and the divergence. In this case, the goal of vector field reconstruction is to derive these relations independently of the particular system of coordinates in which the data have been measured and encoded.

Poggio (1990) suggested that the paradigm of surface approximation by combining local basis functions may be relevant to biological information processing. However, in order to elaborate sensory information and to generate purposeful actions, living organisms must deal not only with real-valued functions but also with vector-valued mappings.
A well-known example of a vector mapping in the domain of natural and artificial vision is the "optical flow" (Gibson 1950; Horn 1986), that is the field of velocity vectors associated with the motion of a scene relative to an observer. Similarly, in the study of motor control, there is now evidence that

neural activities code and control a field of elastic forces generated by muscle viscoelastic properties (Hogan 1985; Mussa-Ivaldi et al. 1985; Bizzi et al. 1991; Shadmehr et al. 1992).

Given the relevance of vector-field processing to diverse computational contexts, here I discuss this issue from the broad perspective established by the problem of approximating a spatial pattern of vectors. In the next sections I show that a variety of vector patterns is efficiently represented by a combination of linearly independent vector fields. I call these fields "basis fields" because they are directly related to the scalar basis functions used for reconstructing scalar maps from numerical examples. Basis fields offer an alternative to the direct approach in which each component of a vector is seen as a scalar entity that is represented by a combination of scalar basis functions. This alternative is particularly useful when one is concerned with different coordinate representations of the same vector field. In spite of their apparent simplicity, component-based representations are bound to change with any change of coordinates. In contrast, by using vector-valued primitives such as basis fields one may derive vector field representations that are invariant under coordinate transformations. A particular application to motor control of the concepts presented in this paper can be found in Mussa-Ivaldi and Giszter (1992). This paper is only concerned with the representation and approximation of continuous and differentiable vector fields (Footnote 3).

2 The vector-field approximation problem

Generally speaking, a vector field on a subset E of R^N is a map that assigns to each point x ∈ E an element of the tangent space of R^N at x (Spivak 1965) (Footnote 1). Here (Footnote 2), I will describe a vector field on R^N as a collection, F(x) = (F_1(x), F_2(x), ..., F_N(x)), of N real-valued maps, with the implicit but important assumption that such a collection belongs to a vector space: given two fields, F^1(x) and F^2(x), and a real number, a, the operations of addition and scalar multiplication are defined, so that

F^1(x) + F^2(x) = (F^1_1(x) + F^2_1(x), ..., F^1_N(x) + F^2_N(x))
aF(x) = (aF_1(x), ..., aF_N(x))

There are important instances in which an array of real-valued maps cannot be described as a vector field. For example, a set of graphical features defined over an image may be expressed as an array of real-valued maps (e.g. brightness, contrast, etc.). For such an array, the addition and scalar multiplication defined above may not be meaningful: the array does not belong to a vector space, although someone might still call it (improperly) a "feature vector". Similar arguments apply to the array of joint configuration variables of a limb. In a non-redundant manipulator, each joint angle is a scalar inverse-kinematics map over the space of endpoint locations. However, vector addition is not properly defined for the array of joint angles because finite rotations in general do not add commutatively. Therefore the array of joint configurations does not have the structure of a vector field.

Let E be a region in R^N. Suppose that M vectors, v^1, v^2, ..., v^M, are given at M distinct points, x^1, x^2, ..., x^M, in E (Fig. 1). Each vector, v^i = (v^i_1, v^i_2, ..., v^i_N), is a collection of N real numbers defined with respect to some coordinate system. We may regard these vectors as samples of an unknown field, that is a vector-valued map F = {v | v_i = F_i(x), i = 1, ..., N, x ∈ E}. Let F(x | c_1, c_2, ..., c_K) be a known vector-valued function of x and of K real-valued parameters. The goal of vector-field approximation is formally identical to the goal of the ordinary approximation of scalar maps (Rice 1964): given an appropriate definition of distance between two fields, find a set of parameter values which minimizes the distance between F(x) and F(x | c_1, ..., c_K).

Fig. 1. Vector field approximation. 8 vectors (the "data") are presented at 8 distinct points. The square boundary delimits the approximation domain.

Footnote 1: Given a point x of a manifold M, the tangent space of M at x is the vector space of all tangents to M at x. The set, TM, of tangent spaces at all points of M is called the tangent bundle of M. In this context, a vector field is a map M ↦ TM.
Footnote 2: Here, I adopt the convention of using superscript indices to indicate distinct multidimensional objects (points, vectors and matrices), and subscript indices to spell the components of a single multidimensional object.
Footnote 3: The central ideas and results can be applied without much of a change to the broader domain of tensor fields.
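To make this problem statement concrete, here is a minimal numerical sketch of the setup just described. The data, the candidate field and the distance measure are illustrative choices of mine, not taken from the paper's figures.

```python
import numpy as np

# The setting of Sect. 2: M vectors v^i sampled at M distinct points x^i
# in a region of R^N (here M = 8 two-dimensional vectors, as in Fig. 1).
rng = np.random.default_rng(0)
M, N = 8, 2
X = rng.uniform(-1.0, 1.0, (M, N))   # sampling points x^1, ..., x^M
V = rng.uniform(-1.0, 1.0, (M, N))   # sample vectors  v^1, ..., v^M

def distance(F, X, V):
    """One possible distance between a candidate field F and the data:
    the sum of squared errors at the sampling points."""
    return sum(np.sum((F(x) - v) ** 2) for x, v in zip(X, V))

# A trivial candidate field (identically zero), for comparison:
zero_field = lambda x: np.zeros_like(x)
err = distance(zero_field, X, V)
```

The approximation problem then consists of choosing the parameters of F so as to make such a distance as small as possible.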
Following the approach suggested by Poggio and Girosi (1990a, b), I will address the issue of reconstructing and representing a vector field as an approximation problem. Poggio and Girosi have shown that a set of basis functions can be determined from what is known a priori about the smoothness properties of the unknown mapping. With a set of real-valued data, y_1, ..., y_M, defined at a set of points, x^1, ..., x^M, in R^N, scalar approximation involves finding a function, f(x): R^N -> R, which minimizes a functional

H[f] = Σ_{i=1}^{M} (y_i - f(x^i))^2 + λ ||Pf||^2    (1)

The first term on the right side of the above expression represents the "interpolation goal": the function f should pass as close as possible to the data. The second term represents an additional "regularization goal": f should also minimize a cost functional, ||Pf||, which represents what is known a priori about the unknown mapping. P is a linear differential operator and || · || is the L2 norm. The regularization parameter, λ, establishes a trade-off between these two goals. Poggio and Girosi showed that a general solution to the approximation problem can be expressed as a sum of Green's functions, G(x, x^i), of the self-adjoint operator P̂P (P̂ is the adjoint of P) centered at the data points x^i:

f(x) = Σ_{i=1}^{M} c_i G(x, x^i)    (2)

These Green's functions form a basis for an m-dimensional subspace of the space of smooth functions. More generally, an approximating function may be represented as a weighted sum of predefined basis functions, g_i(x):

f(x | c_1, c_2, ..., c_K) = Σ_{i=1}^{K} c_i g_i(x)

where the centers of the basis functions may be different from the sampling points; here we will consider only a fixed set of such centers. For simplicity, I will assume that the metric is Euclidean both in the input and in the output space of the approximation, that is:

||v|| = (Σ_i v_i^2)^{1/2},   ||x|| = (Σ_i x_i^2)^{1/2}

The main results can be applied with some precautions to the more general Riemannian metric. The purpose of this work is to extend the same approach to the approximation of vector data.

3 From basis functions to basis fields: irrotational fields

A natural way to perform the transition from scalar to vector approximation is to begin by assuming that the vectorial data originate from an irrotational field. A vector field, v(x), is said to be irrotational if and only if a scalar potential function, U(x), is defined over E such that v_i(x) = ∂U/∂x_i |_x. An equivalent condition is that

curl(v) = ∇ ∧ v = 0

The symbols ∧ and ∇ indicate respectively the external product (Footnote 4) and the differential operator (∂/∂x_1, ..., ∂/∂x_N). Applying this condition to the approximating field, F(x), corresponds to imposing a set of N(N - 1)/2 differential constraints:

∂F_i(x)/∂x_j = ∂F_j(x)/∂x_i    (i, j = 1, ..., N)

If these constraints are fulfilled, the approximating field is the gradient of a scalar map:

F(x) = ∇U(x)    (3)

Then, one may approximate the potential U(x) by a weighted sum of scalar basis functions:

U(x) = Σ_{i=1}^{K} c_i G(x, t^i)    (4)

Equation (4) cannot be used directly for solving the approximation problem, since the data are given not as scalar values of U(x) but as vector values for F(x). However, the gradient is a linear operator and one can combine Eqs. (3) and (4) to obtain

F(x) = Σ_{i=1}^{K} c_i φ(x, t^i)    (5)

with φ(x, t^i) = ∇G(x, t^i). I call the gradient fields, φ(x, t^i), "basis fields". Unlike Eq. (4), Eq. (5) can be used directly for the field approximation by relating the values of F at the data points, F(x^j), to the vector data v^j. The use of basis fields in the representation of the net vector field (5) corresponds to the use of basis functions in the representation of the potential (4). However, one should stress that this correspondence is formally correct only if Eq. (4) is an exact representation for the potential, that is, if the potential function falls within the linear span of the basis functions G(x, t^i). Otherwise, the potential may be represented as

U(x) = Σ_{i=1}^{K} c_i G(x, t^i) + e(x)    (6)

where e(x) is an error function which we may assume to be continuous and differentiable. Taking the gradient of the above expression leads to the corresponding representation of the unknown field:

F(x) = Σ_{i=1}^{K} c_i φ(x, t^i) + η(x)    (7)

with η(x) = ∇e(x). Therefore, the best approximation achieved by Eq. (5) is not necessarily consistent with the best approximation achieved by Eq. (4): the minimization of e(x) does not imply the minimization of η(x) and vice versa. A good approximation for U(x) is likely to result in a bad approximation for F(x) if the error function e(x) oscillates, as is typically the case with polynomial interpolation. Concerning this question, one may informally observe that the amplitude of the vectorial error, η(x), is directly related to the smoothness of e(x). If the Green's functions correspond to the optimization of smoothness (as is the case with spline interpolation), then it seems reasonable to expect that the error field η(x) remains small. Girosi and Poggio (1990) have pointed out that with a sufficiently large number of Green's functions it is possible to approximate arbitrarily well any continuous multivariate function on a compact domain. This implies that as K -> ∞, e(x) converges uniformly to zero, and η(x) will also converge to zero. It remains an open and interesting question under which conditions on F(x) and φ(x, t^i) Eq. (5) is a good approximation given that Eq. (4) is.

A particular instance of basis fields is derived by taking the gradient of radial basis functions (RBFs). A generic RBF has the form

G(x, t^i) = h(||x - t^i||)

and the corresponding gradient field is

φ(x, t^i) = ((x - t^i)/||x - t^i||) h'(||x - t^i||)

Note that the components of φ(x, t^i) are not rotationally invariant: they are not RBFs. However, the basis fields obtained by taking the gradients of RBFs have central symmetry. For example, with the Gaussian potential

G(x, t^i) = exp(-(x - t^i)^T (x - t^i)/σ^2)

the corresponding gradient field is

φ(x, t^i) = -(2/σ^2)(x - t^i) G(x, t^i)

Note that the basis field φ(x, t^i) is directed toward the "center", t^i, and that the vector φ(x, t^i) vanishes at the center t^i. Thus, in contrast with scalar approximation, where a Gaussian basis function provides the most significant contribution precisely at its center, this field cannot provide any contribution to the approximation of a vector placed at its center.

A set of central basis fields is shown in Fig. 2. In this example, the centers of 16 basis fields are placed at the intersections of a 4 x 4 square grid which covers the domain of the approximation. The variance, σ, has been determined in relation to the spacing, Δ, between the centers, so as to ensure a significant overlap between neighbouring fields:

σ = √2 Δ    (8)

With this variance, the maximum vector amplitude of each basis field is reached at the four nearest surrounding centers.

Fig. 2. Irrotational basis fields. A Four central fields. Each field is the gradient of a Gaussian potential; four isopotential lines surround the center of each Gaussian. B Approximation grid. The crosses indicate the centers of 16 basis fields covering the input domain.

Footnote 4: The external product of two N-dimensional vectors, a, b, is a two-index antisymmetric tensor, c, with c_{i,j} = a_i b_j - a_j b_i (here we do not make a distinction between covariant and contravariant indices). In the particular case of N = 3 there are only three distinct absolute values, and the external product can also be represented as a vector, c_i = a_j b_k - a_k b_j, with (i, j, k) an even permutation of (1, 2, 3).
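The Gaussian potential and its gradient basis field can be sketched in a few lines; the function names and the test point below are mine. The sketch also makes visible the property noted above: the field vanishes at its own center and elsewhere points toward it.

```python
import numpy as np

# A minimal sketch of an irrotational basis field: the gradient of a
# Gaussian potential,
#   G(x, t) = exp(-||x - t||^2 / sigma^2)
#   phi(x, t) = grad_x G = -(2 / sigma^2) (x - t) G(x, t)
def G(x, t, sigma):
    d = x - t
    return np.exp(-(d @ d) / sigma**2)

def phi(x, t, sigma):
    return -(2.0 / sigma**2) * (x - t) * G(x, t, sigma)

t = np.array([0.0, 0.0])         # center of the field
sigma = 1.0

at_center = phi(t, t, sigma)     # the field vanishes at its own center
x = np.array([0.5, 0.5])
v = phi(x, t, sigma)             # elsewhere it points toward the center
```

Because φ(t, t) = 0, a basis field centered at t contributes nothing to a vector datum placed exactly at t, which is the point made in the text.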
4 Approximation algebra

The expression for the approximating field (Eq. 5) is formally equivalent to the expansion of a scalar function into a sum of basis functions. This equivalence becomes more rigorous if the fields φ^i(x) are linearly independent. The concept of linear independence for vector fields can be defined operationally as follows. Let us consider a set of M vectors, v^i, taken at M locations, x^i. Solving the interpolation problem is equivalent to finding a set of parameters, c_1, c_2, ..., c_K, such that

F(x^j) = Σ_{i=1}^{K} c_i φ(x^j, t^i) = v^j    for j = 1, ..., M

This constitutes a set of M vectorial equations which may be expanded into a system of MN scalar equations, one for each vector component. In vector/matrix notation, this expanded system can be written as

Φ c = v    (9)

where we have introduced the (unknown) parameter vector

c = (c_1, c_2, ..., c_K)^T

the MN x K matrix of basis-field components

Φ = [φ_i(x^j, t^l)]

(the row index runs over the data points, j = 1, ..., M, and their components, i = 1, ..., N; the column index runs over the basis fields, l = 1, ..., K) and the "data vector"

v = (v^1_1, ..., v^1_N, ..., v^M_1, ..., v^M_N)^T    (10)

If K = MN, the matrix Φ is square and Eq. (9) can be solved exactly and uniquely for c, provided that Φ is nonsingular. It remains an open issue to establish under which conditions the matrix Φ is guaranteed to be nonsingular for any choice of sampling points. When this condition is fulfilled by any set of distinct points {x^i} ∈ E, we may say that the K fields φ^1, ..., φ^K are linearly independent and form a basis for a K-dimensional functional space. This definition is a direct extension of the concept of basis functions used for scalar multivariable interpolation (Powell 1987; Poggio and Girosi 1990b). In this case it is strictly correct to characterize the fields φ^i(x) as basis fields. If the number of basis fields is insufficient to interpolate the data, that is if K < MN, then the Moore-Penrose pseudoinverse (Footnote 6) of Φ,

Φ⁺ = (Φ^T Φ)^{-1} Φ^T    (11)

generates a least-squares approximation.

Figure 3 shows the interpolation of two different sets of data obtained with the basis fields described in Fig. 2. Each data set (Fig. 3, left) consists of 8 randomly generated 2-dimensional vectors. The 16 basis fields are sufficient to generate an exact interpolation of eight 2D sampled vectors: there are 8 x 2 = 16 equations in 16 unknown coefficients. The interpolating fields are plotted on the right panels of Fig. 3, together with a set of isopotential lines. Each isopotential line corresponds to setting

Σ_{i=1}^{K} c_i G(x, t^i) = constant

These isopotential lines are orthogonal to the data vectors because we are dealing with an exact interpolation. The two integrability conditions, ∂F_1/∂x_2 = ∂F_2/∂x_1, are implicitly satisfied by having chosen a set of basis fields with zero curl. Therefore, by using the gradients of basis functions as basis fields, one achieves in a single step the double goal of approximating and integrating (Footnote 5) a vector field: Equations (4) and (5) show that the same numerical coefficients appear in the representations of the vector field and of the potential function.

The two sets of vector data (top and bottom of Fig. 3) are different. However, the two sets of sampling points are the same. Therefore, the two interpolating fields have been obtained by using the same inverse matrix: the matrix Φ of Eq. (9) depends numerically only upon the locations of the data. The quality of the interpolation, in terms of the smoothness of the interpolating field, depends critically upon the spatial relation between the locations of the data and the centers of the basis fields. The high degree of smoothness displayed by these examples is contingent upon the matching between an equally-spaced set of centers and the relatively uniform distribution of the sampling points.

Fig. 3. Approximation results. Left: Input vector patterns. Right: Approximating fields with isopotential lines. Two sets of 8 samples (top and bottom) are presented at the same sampling points.

Footnote 5: A potential function is the integral of its own gradient.
Footnote 6: If a set of K basis fields is linearly independent with respect to the interpolation problem (i.e. if Φ is nonsingular when K = MN), then with MN > K the same set of basis fields leads to a nonsingular approximation: the Moore-Penrose pseudoinverse can be computed from Eq. (11), since (Φ^T Φ) is guaranteed to have full rank.
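The interpolation algebra of Eqs. (9)-(11) can be sketched with 16 Gaussian basis fields on a 4 x 4 grid and 8 random two-dimensional data vectors, so that K = MN = 16. The grid, the data and the random seed are illustrative, and I assume, as the text notes is generically the case, that Φ is nonsingular for these sampling points.

```python
import numpy as np

# Stack the M vectorial equations F(x^j) = v^j into MN scalar equations
# Phi c = v: one row per data component, one column per basis field.
rng = np.random.default_rng(1)

def phi(x, t, sigma):                      # gradient of a Gaussian potential
    d = x - t
    return -(2.0 / sigma**2) * d * np.exp(-(d @ d) / sigma**2)

# 16 centers on a 4 x 4 grid covering [-1, 1]^2; sigma = sqrt(2) * spacing (Eq. 8)
g = np.linspace(-1.0, 1.0, 4)
centers = np.array([[a, b] for a in g for b in g])
sigma = np.sqrt(2.0) * (g[1] - g[0])

M, N = 8, 2                                # 8 two-dimensional data vectors
X = rng.uniform(-1.0, 1.0, (M, N))
V = rng.uniform(-1.0, 1.0, (M, N))

Phi = np.array([[phi(x, t, sigma)[i] for t in centers]
                for x in X for i in range(N)])      # MN x K = 16 x 16
v = V.reshape(-1)                          # the "data vector" of Eq. (10)

# Least-squares solve; when Phi is nonsingular this coincides with the
# exact, unique solution of Eq. (9).
c = np.linalg.lstsq(Phi, v, rcond=None)[0]

def F(x):                                  # interpolating field, Eq. (5)
    return sum(ci * phi(x, t, sigma) for ci, t in zip(c, centers))
```

With K = MN the interpolation reproduces each datum exactly; the same matrix Φ would serve any data set sampled at the same points, as noted in the discussion of Fig. 3.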
Figure 4 illustrates the effect of changing the number of basis fields on the approximation/interpolation of a given vector pattern. The data used in these examples are the same as in Fig. 3 (top): there are 8 vectors at 8 locations, inducing a set of 16 algebraic equations. The approximating domain is covered by one of four different grids of basis-field centers (top-left to bottom-right: 2 x 2, 3 x 3, 5 x 5, 20 x 20). For each set of Gaussian basis fields, the variance was determined according to Eq. (8), so as to ensure the same degree of overlap across the four examples.

The two panels at the top of Fig. 4 show the approximating fields (and potentials) obtained by computing the pseudoinverse (11) with 4 and 9 basis fields. Using this pseudoinverse corresponds to minimizing the square error norm

Σ_{i=1}^{M} ||F(x^i) - v^i||^2    (12)

In both cases there is a visible approximation error, since the degrees of freedom are not sufficient to reproduce exactly the 16 components of the data. As the number of basis fields decreases from 9 to 4, the approximation becomes smoother (as indicated by the isopotential lines) and the data are "corrected" so as to enforce a common central tendency.

If the number of basis fields is larger than the number of data components (i.e. K > MN), then the interpolation problem admits infinite solutions. In this case the Moore-Penrose pseudoinverse

Φ⁺ = Φ^T (Φ Φ^T)^{-1}    (13)

minimizes the norm of the coefficient vector. However, as the set of basis fields becomes increasingly overcomplete (Fig. 4, bottom), the smoothness of the Moore-Penrose interpolation (13) decreases. With 400 basis fields (bottom-right), the interpolation is reminiscent of a lookup table: the field vanishes outside a small region surrounding each data vector. The data are reproduced exactly by attractive/repulsive potential pairs, with a good local accuracy, but the approximating field does not capture at all the smoothness which underlies the sampled pattern.

Fig. 4. Approximation results with different sets of basis fields. The same data shown in Fig. 3 (top) are approximated/interpolated by four different sets of basis fields. From top-left to bottom-right: 4 basis fields in a 2 x 2 grid, 9 basis fields in a 3 x 3 grid, 25 basis fields in a 5 x 5 grid, 400 basis fields in a 20 x 20 grid. In each grid, the variance of the Gaussian potentials has been established so as to ensure a fixed degree of overlap between neighboring basis fields.

5 From basis functions to basis fields: field decomposition

Given a finite number of vectors taken at a set of distinct locations, it is always possible to use the methods described in the previous sections for deriving an irrotational field that interpolates the data exactly. To this end, one needs only to use a sufficiently large number of irrotational basis fields. However, it may well be that the vector field from which the data have been sampled is not irrotational. In this case, what would be the form of a "wrong" irrotational approximation? Let us consider the set of vectors shown at the bottom of Fig. 5. This set was obtained by taking 20 samples of the field shown in the middle panel. This field is the vector sum of the irrotational field (top-left)

F^I(x) = [ -K  0 ; 0  -K ] (x - x̄)    (14)

and the circulating field (top-right)

F^C(x) = [ 0  K ; -K  0 ] (x - x̄)    (15)

with curl(F^C) = ∂F^C_2/∂x_1 - ∂F^C_1/∂x_2 = -2K.

Figure 6 shows the results of the approximation obtained by combining 36 equally-spaced irrotational basis fields. With 20 vector data, this system of basis fields generates an overdetermined system of 40 equations in 36 unknowns, and the pseudoinverse (11) generates a least-squares approximation. We have achieved the goal of reproducing the data with a good local accuracy. However, the approximating field is remarkably different from the original field: since the sampled field is not irrotational, the interpolating vector field is doomed to lack the smoothness of the "true" rotational field, regardless of the fact that the interpolating potential may optimize a smoothness functional (established by the operator P in Eq. (1)). Intuitively, while it is clearly possible to draw a set of isopotential lines (or even a single isopotential line) which are locally orthogonal to a finite number of vectors, the potential landscape which interpolates a set of circulating vectors is necessarily tortuous and has multiple valleys and peaks. This is a direct consequence of having chosen a set of linearly independent potential functions for generating the basis fields. As the number of basis fields is reduced, smoothness may be restored at the expense of a rapidly increasing value of the quadratic error.

Fig. 5. A set of vectorial examples (bottom) derived from a non-irrotational continuous field (middle). This field is the sum of the two fields shown at the top.

Fig. 6. Approximation of the samples shown in Fig. 5 (bottom) by a combination of 36 irrotational basis fields.
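The two analytic fields of Eqs. (14)-(15) and the curl computation can be checked numerically; K, the center x̄ and the test point are illustrative values, and the curl is estimated by central differences.

```python
import numpy as np

# The test field of Fig. 5: the sum of a convergent irrotational field
# (Eq. 14) and a circulating field (Eq. 15).
K = 1.0
x_bar = np.array([0.0, 0.0])

def F_irr(x):    # Eq. (14): convergent and curl-free
    return -K * (x - x_bar)

def F_circ(x):   # Eq. (15): circulating, with constant curl -2K
    d = x - x_bar
    return K * np.array([d[1], -d[0]])

def curl2d(F, x, h=1e-5):
    """Numerical curl dF2/dx1 - dF1/dx2 by central differences."""
    e1, e2 = np.array([h, 0.0]), np.array([0.0, h])
    dF2_dx1 = (F(x + e1)[1] - F(x - e1)[1]) / (2 * h)
    dF1_dx2 = (F(x + e2)[0] - F(x - e2)[0]) / (2 * h)
    return dF2_dx1 - dF1_dx2

x = np.array([0.3, -0.7])
c_irr = curl2d(F_irr, x)     # should be ~ 0
c_circ = curl2d(F_circ, x)   # should be ~ -2K
```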
More generally, in order to achieve an acceptable approximation of a rotational field, the computational primitives (i.e. the basis fields) must include among them a number of fields with nonzero curl. In this case one has to give up the idea of reducing the vector approximation to an equivalent approximation of a scalar potential. Thanks to a well-known theorem of potential theory (Kellogg 1953), however, it is still possible to take advantage of the known properties of scalar basis functions. This theorem states that any continuous vector field, F(x), can be decomposed into the sum of two fields:

F(x) = C(x) + S(x)

The first field is irrotational (curl(C) = ∇ ∧ C(x) = 0), whereas the second one is solenoidal, that is it has zero divergence (div(S) = ∇ · S(x) = 0). This decomposition is not unique: two decompositions of F(x) may differ by the gradient of a harmonic function (Footnote 7).

The theorem establishes a simple way to generate a set of solenoidal basis fields from a set of irrotational ones. It is not difficult to prove that, in the Euclidean metric, a solenoidal field is obtained from an irrotational field when the latter is multiplied by any antisymmetric matrix (Footnote 8). To this end it is sufficient to multiply each irrotational basis field by an antisymmetric matrix, A. In two dimensions, an obvious choice for the solenoidal basis fields is

ψ(x, t^i) = A φ(x, t^i)    (16)

that is, ψ(x, t^i) = (ψ_1(x, t^i), ψ_2(x, t^i)) = (φ_2(x, t^i), -φ_1(x, t^i)). Since φ is irrotational, div(ψ) = curl_3(φ) = 0, while curl_3(ψ) = -div(φ) is in general nonzero. Note that the components of the solenoidal fields obtained by the transformation (16) are essentially the same functions of x as the components of the original irrotational fields. Therefore, the solenoidal fields form a linearly independent set, and the linear algebra described in Sect. 4 can be applied to derive both groups of coefficients. From a single set of K irrotational fields it is possible to generate K_S (≤ K) solenoidal fields by using different antisymmetric matrices (in 2D, K_S = K, since all antisymmetric matrices differ at most by a scaling factor). With the addition of the solenoidal fields, the representation of a generic continuous field becomes:

F(x) = Σ_{i=1}^{K} c_i φ(x, t^i) + Σ_{i=1}^{K_S} d_i ψ(x, t^i)    (17)

Figure 7 shows a set of solenoidal fields obtained in this way from the irrotational fields of Fig. 2. Intuitively, for each center there are now two basis fields and two coefficients, c_i (irrotational) and d_i (solenoidal).

Fig. 7. Four solenoidal basis fields.

Footnote 7: A harmonic function is defined by the Laplace equation, Δψ(x) = 0, where Δψ = div(grad(ψ)). The gradient of a harmonic function is solenoidal and irrotational at the same time: it can be added to either component of the decomposition without altering the curl of the first or the divergence of the second.
Footnote 8: A matrix, A = [a_{i,j}], is said to be antisymmetric if a_{i,j} = -a_{j,i}. In two dimensions, an antisymmetric matrix corresponds, up to a scaling factor, to a 90° rotation operator.
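A short sketch of this construction: multiplying the gradient of a Gaussian potential by the 2D antisymmetric matrix A yields a field whose divergence vanishes, while the divergence of the original irrotational field generally does not. The center, variance and test point are my choices.

```python
import numpy as np

# In 2D, psi = A phi with A = [[0, 1], [-1, 0]] (a 90-degree rotation)
# turns an irrotational basis field phi into a solenoidal one.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = np.array([0.2, -0.1])
sigma = 1.0

def phi(x):                                # gradient of a Gaussian potential
    d = x - t
    return -(2.0 / sigma**2) * d * np.exp(-(d @ d) / sigma**2)

def psi(x):                                # solenoidal companion field
    return A @ phi(x)

def div2d(F, x, h=1e-5):
    """Numerical divergence dF1/dx1 + dF2/dx2 by central differences."""
    e1, e2 = np.array([h, 0.0]), np.array([0.0, h])
    return ((F(x + e1)[0] - F(x - e1)[0]) +
            (F(x + e2)[1] - F(x - e2)[1])) / (2 * h)

x = np.array([0.8, 0.5])
div_phi = div2d(phi, x)   # generally nonzero
div_psi = div2d(psi, x)   # ~ 0: psi has zero divergence
```

Since ψ has the same components as φ up to reordering and sign, a set of such fields inherits the linear independence of the original set, as noted in the text.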
A set of 32 basis fields (16 irrotational and 16 solenoidal, over a 4 x 4 grid) has been used to approximate the same 20 vectors of Fig. 5. The net approximating field is shown in Fig. 8 (middle). The combination of solenoidal and irrotational basis fields has clearly captured the main features of the field underlying the pattern of samples. This field has an obviously simpler (i.e. smoother) structure than the irrotational approximation of the same data (Fig. 6). Once the coefficients have been determined, the two sums of (17) can be plotted separately (Fig. 8, bottom). In this case, the approximation has achieved three distinct goals: 1) reproducing the data; 2) decomposing the field into a convergent irrotational component and a rotational component; and 3) deriving a potential function for the irrotational component.

This type of field decomposition may be relevant to computational tasks such as the determination of the center of expansion in an optical flow field. The field underlying the data pattern of Figs. 5, 6 and 8 can indeed be regarded as an optical flow: the observer moves away from a point (the center of the irrotational field (14)) while rotating about the direction of translation. In this case, the irrotational component of the approximating field provides a fairly accurate estimate of the center of expansion (Fig. 8, bottom-left).

Fig. 8. Approximation results with a combination of irrotational and solenoidal basis fields. Top: The pattern of samples. Middle: Approximating field. Bottom left: Irrotational component with isopotential lines. Bottom right: Solenoidal component.

6 Coordinate transformations

Instead of using basis fields, one could represent each component of the approximating field as a combination of scalar basis functions, g_1(x), ..., g_K(x), that is:

F_i(x) = Σ_{m=1}^{K} c_{i,m} g_m(x)    i = 1, ..., N    (18)

In this case, the approximation of a pattern of vectors is reduced to a set of scalar equations of the type

v^j_i = Σ_{m=1}^{K} c_{i,m} g_m(x^j)    i = 1, ..., N;  j = 1, ..., M    (19)

(M = number of data). Unlike true scalars (Footnote 9), vector variables such as forces and velocities are not merely collections of numbers: they are transformed by a change of coordinates. Consider, for example, a continuous, differentiable and invertible coordinate transformation:

x̄ = T(x)    (20)

Footnote 9: We remind the reader that a scalar is not simply a single number but a number that remains invariant in a coordinate transformation.
Such a transformation is characterized locally by the Jacobian matrix J_{i,j}(x) = \partial T_i / \partial x_j. According to standard tensor algebra (Levi-Civita 1977), a vector (or, more precisely, a 1-fold system) can either be contravariant, if it transforms according to Eq. (21), or covariant, if it transforms according to the inverse law. For example, in Newtonian mechanics velocity is contravariant and force is covariant. Velocity is contravariant because it transforms in the same way as a displacement (dx' = J(x) dx). The covariance of force arises from the fact that power, the linear form obtained from the inner product of force and velocity, is invariant under coordinate transformations; as a consequence, F = J(x)^T F'.

Let the data be of the contravariant type (e.g. velocity vectors). (The arguments developed in this section apply to covariant vectors without any substantial change.) In this case each of the data follows the transformation rule:

    v'_j = \sum_{i=1}^{N} J_{j,i}(x) v_i .                                   (21)

In the original coordinate system the approximation problem is expressed by the set of equations (19). Since the same scalar basis functions may be used in all coordinate systems, g_l(x') = g_l(T^{-1}(x')), the approximation in the new coordinate system takes the form

    v'_j = \sum_{l=1}^{K} c'_{j,l} g_l(x')                                   (22)

with

    c'_{j,l} = \sum_{i=1}^{N} J_{j,i}(x) c_{i,l} .                           (23)

It is not difficult to see that, with the representation "by components" (18), the set of coefficients must change in order to approximate the same vectors in the new coordinate system. Moreover, since the transformed coefficients (23) depend upon position through the Jacobian, in a generic coordinate transformation the representation (18) is bound to be destroyed: both the coefficients and the basis functions must change in order to preserve the approximation after a change of coordinates.

In contrast, let us consider the representation of the approximating field as a combination of basis fields, F(x) = \sum_{m=1}^{K} c_m \phi_m(x). Let us assume that these basis fields follow the same transformation law as the data, that is, to approximate contravariant vectors we use contravariant basis fields, whose components transform as:

    \phi'_{j,m}(x') = \sum_{i=1}^{N} J_{j,i}(T^{-1}(x')) \phi_{i,m}(T^{-1}(x')) .   (24)

Then, by combining Eq. (21) with this representation, one obtains the approximation of the vector data in the new coordinate system:

    v'_j = \sum_{i=1}^{N} J_{j,i}(x) \sum_{m=1}^{K} c_m \phi_{i,m}(x) .      (25)

After carrying out the first sum and taking into account the transformation law (24), the above expression becomes

    v'_j = \sum_{m=1}^{K} c_m \phi'_{j,m}(x') .                              (26)

Note that here the coefficients have survived the coordinate transformation (20): unlike Eqs. (22) and (23), Eq. (26) contains a single set of coefficients for all the field components. In the new coordinate system the net vector field is a linear combination of the transformed basis fields with the same set of coefficients. The computational burden associated with the existence of different coordinate systems has been entirely placed in the transformation of the basis fields from one system to another.

The basis fields in the new coordinate system may be a distorted version of the original ones; clearly this does not occur with the component-based representation (18). At first, this may seem to be a disadvantage of the basis-field approach. However, in many instances it seems desirable for the computational primitives used in functional approximation to share the same type of coordinate dependence as the data they represent.

A final remark on coordinate transformations regards the smoothness of the approximating field. One should keep in mind that smoothness is inherently a coordinate-dependent property: since the transformed basis fields may be distorted, some of their original smoothness properties may be lost in the transformation. Flash and Hogan (1985) have used the coordinate dependence of smoothness as a symmetry principle to infer, from the approximation of measured arm trajectories, that multijoint arm movements are likely to be planned in hand coordinates rather than in joint coordinates.

7 Conclusions

The problem of learning an associative mapping from a set of examples has been recently cast into the mathematical framework of multivariate approximation (Poggio and Girosi 1990a). A particular relevance has been attributed to the possibility of expressing the approximating surface as a weighted sum of local basis functions, such as multivariate Gaussians. Each basis function provides a significant contribution only within a limited region, in a way that is consistent with the notion of a receptive field. By combining basis functions one can reconstruct a scalar function, that is a function which associates each point of its domain with a real number.
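The invariance argument of Sect. 6 can be illustrated numerically. In this sketch the linear map, the Gaussian width, the centers, and the coefficients are arbitrary choices, not values from the paper. For a linear transformation x' = Ax the Jacobian is the constant matrix A, so transforming contravariant basis fields by A, as in Eq. (24), reproduces the transformed data with the untouched coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (5, 2))            # sample locations x
A = np.array([[2.0, 0.5], [0.0, 1.0]])      # linear T(x) = A x, so J(x) = A

def phi(x, center, sigma=0.8):
    # one contravariant (gradient-of-Gaussian) basis field; center is illustrative
    d = x - center
    return -d / sigma**2 * np.exp(-(d @ d) / (2 * sigma**2))

centers = np.array([[0.0, 0.0], [0.5, -0.5], [-0.5, 0.5]])
c = np.array([1.0, -2.0, 0.5])              # one scalar coefficient per field

# approximating field in the original coordinates: V(x) = sum_m c_m phi_m(x)
V = np.array([sum(cm * phi(x, t) for cm, t in zip(c, centers)) for x in pts])

pts_new = pts @ A.T                          # x' = T(x)
V_new = V @ A.T                              # contravariant data: v' = J v
Ainv = np.linalg.inv(A)

# transformed basis fields as in Eq. (24): phi'_m(x') = J phi_m(T^{-1}(x')),
# recombined with the SAME coefficients c_m
V_rebuilt = np.array([sum(cm * (A @ phi(Ainv @ xp, t))
                          for cm, t in zip(c, centers)) for xp in pts_new])

print(np.allclose(V_new, V_rebuilt))         # the coefficients survive the change
```

With a nonlinear T the same check goes through by evaluating the Jacobian pointwise; only the basis fields absorb the transformation, never the coefficients.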
In many important cases of biological relevance, the examples are vectors and the underlying function is a vector-valued map. Instead of using basis fields, one may be tempted to treat each component of the approximating map as an independent scalar entity to be represented as a combination of local basis functions. In this way, it is possible to reduce the approximation of continuous vector fields to the mathematical tools of scalar-function approximation. However, a disadvantage of this approach is that the values assumed by a vector component are contingent upon the arbitrary choice of a coordinate system. In contrast, most if not all the interesting properties of a vector field do not depend on such a choice. For example, a vector field may contain a point attractor or it may be characterized by a circulating pattern. These interesting properties involve relations among vector components, such as the relations which define the curl and the divergence of a field: they are invariant under coordinate transformations. One way to capture the invariant properties expressed by these relations is to use vector fields instead of scalar functions as computational primitives.

A number of experiments have strongly suggested that in order to process vectorial information the central nervous system takes advantage of functional modules expressing specific vector-field patterns. For instance, some investigators (Saito et al. 1986; Tanaka et al. 1989; Andersen et al. 1990) have found neurons in the medial superior temporal area of the visual cortex that responded selectively to specifically structured vector patterns corresponding to expanding/contracting, circulating and translational flows. In a completely different area of research, Bizzi et al. (1991) have suggested that the premotor layers of the frog's spinal cord are organized into distinct functional modules. When activated by microstimulation, each module generates a particular field of forces at the limb's endpoint. A central question shared by researchers in vision and motor control concerns how these elementary vector fields may be combined to analyze or to generate a wider variety of vector patterns.

In this paper I have addressed this question by advocating the use of basis fields as the vectorial counterparts of basis functions. I started by considering the particular case in which the field to be represented is irrotational and can be expressed as the gradient of a scalar potential. In this case, a representation of the potential as a sum of basis functions corresponds to a representation of the field as a sum of irrotational basis fields obtained as gradients of the same basis functions. I showed that by using a set of irrotational basis fields it is possible to approximate a variety of vector patterns with a different degree of smoothness and accuracy, depending on the number of basis fields relative to the number of examples. However, by using a set of irrotational basis fields one can only generate a total field with zero curl, while a number of important vector patterns (like the one shown in Fig. 5) are directly related to the presence of a significant curl in the underlying field. Therefore, in order to achieve a general representation for continuous fields it is necessary to complement the set of irrotational basis fields with a set of fields with a non-zero curl. Remarkably, this additional set of fields can be obtained by a simple antisymmetric transformation of the irrotational basis fields. A similar approach has been developed by Wahba (1982), who introduced the concept of vector splines on the sphere with the purpose of estimating the vorticity and divergence of the atmosphere from sampled wind-vector data.

Acknowledgements. I wish to thank T. Poggio for inspiring this work and for providing many useful comments. I also took advantage of a number of stimulating discussions with many other people. In particular I wish to thank E. Bizzi, E. Fasse, T. Flash, F. Gandolfo, F. Girosi, S. Giszter, N. Hogan, M. Jordan, P. Morasso, T. Sanger, R. Shadmehr and V. Tagliasco. This research was supported by NIH grants NS09343 and AR26710 and by ONR grant N00014/88/K/0372.

References

Andersen RA, Snowden RJ, Treue S, Graziano M (1990) Hierarchical processing of motion in the visual cortex of monkey. Cold Spring Harbor Symp Quant Biol 55:741-748
Bizzi E, Mussa-Ivaldi FA, Giszter SF (1991) Computations underlying the execution of movement: a biological perspective. Science 253:287-291
Flash T, Hogan N (1985) The coordination of arm movements: an experimentally confirmed mathematical model. J Neurosci 5:1688-1703
Gibson JJ (1950) The perception of the visual world. Houghton-Mifflin, Boston
Girosi F, Poggio T (1990) Networks and the best approximation property. Biol Cybern 63:169-176
Hogan N (1985) The mechanics of multi-joint posture and movement control. Biol Cybern 52:315-331
Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79:2554-2558
Horn BKP (1986) Robot vision. MIT Press, Cambridge
Kellogg OD (1953) Foundations of potential theory. Dover, New York
Levi-Civita T (1977) The absolute differential calculus (calculus of tensors). Dover, New York (first English edition: Blackie and Son, Glasgow 1926)
Micchelli CA (1986) Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approx 2:11-22
Mussa-Ivaldi FA, Giszter SF (1992) Vector field approximation: a computational paradigm for motor control and learning. Biol Cybern 67:491-500
Mussa-Ivaldi FA, Hogan N, Bizzi E (1985) Neural, mechanical and geometrical factors subserving arm posture in humans. J Neurosci 5:2732-2743
Poggio T (1990) A theory of how the brain might work. Cold Spring Harbor Symp Quant Biol 55:899-910
Poggio T, Girosi F (1990a) Networks for approximation and learning. Proc IEEE 78:1481-1497
Poggio T, Girosi F (1990b) A theory of networks for learning. Science 247:978-982
Poggio T, Torre V, Koch C (1985) Computational vision and regularization theory. Nature 317:314-319
Powell MJD (1987) Radial basis functions for multivariable interpolation: a review. In: Mason JC, Cox MG (eds) Algorithms for approximation. Clarendon Press, Oxford
Rice JR (1964) The approximation of functions. Addison-Wesley, Reading
Rumelhart DE, McClelland JL, the PDP Research Group (1986) Parallel distributed processing: explorations in the microstructure of cognition. MIT Press, Cambridge
Saito H, Yukie M, Tanaka K, Hikosaka K, Fukada Y, Iwai E (1986) Integration of direction signals of image motion in the superior temporal sulcus of the macaque monkey. J Neurosci 6:145-157
Shadmehr R, Mussa-Ivaldi FA, Bizzi E (1992) Postural force fields of the human arm and their role in generating multijoint movements. J Neurosci (in press)
Spivak M (1965) Calculus on manifolds. Benjamin/Cummings, Menlo Park
Tanaka K, Fukada Y, Saito H (1989) Underlying mechanisms of the response specificity of the expansion/contraction and rotation cells in the dorsal part of the medial superior temporal area of the macaque monkey. J Neurophysiol 62:642-656
Tikhonov AN, Arsenin VY (1977) Solutions of ill-posed problems. Winston, Washington
Wahba G (1982) Vector splines on the sphere, with application to the estimation of vorticity and divergence from discrete noisy data. In: Schempp W, Zeller K (eds) Multivariate approximation theory. Birkhauser, Basel Boston Stuttgart, pp 407-429