
Lecture Notes in

Mathematics
A collection of informal reports and seminars
Edited by A. Dold, Heidelberg and B. Eckmann, Zürich

104

George H. Pimbley, Jr.


University of California
Los Alamos Scientific Laboratory, Los Alamos, New Mexico

Eigenfunction Branches of
Nonlinear Operators,
and their Bifurcations

Springer-Verlag
Berlin · Heidelberg · New York 1969
Work performed under the auspices of the U. S. Atomic
Energy Commission

All rights reserved. No part of this book may be translated or reproduced in any form without written permission from Springer-Verlag. © by Springer-Verlag Berlin · Heidelberg 1969. Library of Congress Catalog Card Number 70-97958. Printed in Germany. Title No. 3710.

TABLE OF CONTENTS

Introduction

1. An Example

2. The Extension of Branches of Solutions for Nonlinear Equations in Banach Spaces . . . 11

3. Development of Branches of Solutions for Nonlinear Equations near an Exceptional Point. Bifurcation Theory . . . 18

4. Solution of the Bifurcation Equation in the Case n = 1; Bifurcation at the Origin . . . 29

5. The Eigenvalue Problem; Hammerstein Operators; Sublinear and Superlinear Operators; Oscillation Kernels . . . 43

6. On the Extension of Branches of Eigenfunctions; Conditions Preventing Secondary Bifurcation of Branches . . . 58

7. Extension of Branches of Eigenfunctions of Hammerstein Operators . . . 80

8. The Example of Section 1, Reconsidered

9. A Two-Point Boundary Value Problem

10. Summary; Collection of Hypotheses; Unsettled Questions . . . 102

Bibliography . . . 114

Additional References . . . 116

Appendix: Another Bifurcation Method; the Example of Section 1, Reconsidered Again . . . 120

INTRODUCTION

The series of lectures on nonlinear operators covered by these

lecture notes was given at the Battelle Memorial Institute Advanced

Studies Center in Geneva, Switzerland during the period June 27 -

August 5, 1968, at the invitation of Dr. Norman W. Bazley of the Bat-

telle Research Center in Geneva. The material is taken from the re-

sults of approximately seven years of work on the part of the author

at the Los Alamos Scientific Laboratory of the University of California,

Los Alamos, New Mexico. Much of this material had previously been pub-

lished in the open literature (see the Bibliography). This effort was

generated by the need for a nonlinear theory observed in connection with

actual problems in physics at Los Alamos.

In deriving nonlinear theory, abstract formulation is perhaps a desired end; but in the newer parts of the theory, as with secondary bifurcation in these notes, progress seems to be made more easily with concrete assumptions, as with our preoccupation with Hammerstein operators with oscillation kernels.

The entire lecture series had to do with the eigenvalue problem λx = T(x), where T(x) is a bounded nonlinear operator. Other authors, with a view to applications in nonlinear differential equations with appropriate use of Sobolev spaces to render the operators bounded, have preferred to study eigenvalue problems of the form (L₁+N₁)u = λ(L₂+N₂)u, where L₁, L₂ are linear and N₁, N₂ are nonlinear. Such is the case with M. S. Berger [ref. 4]. In these notes we had the less ambitious goal of understanding nonlinear integral equations, whence we concentrated on the


1. An Example

So as to illustrate the type of problems considered in these notes, we present an eigenvalue problem for a nonlinear operator which can be attacked by elementary methods. Namely, we solve the following integral equation

λφ(s) = (2/π) ∫₀^π [a sin s sin t + b sin 2s sin 2t][φ(t) + φ³(t)] dt,   (1.1)

which has a second-rank kernel. We suppose that 0 < b < a. Because of

the form of the kernel, any solution of eq. (1.1) is necessarily of the form φ(s) = A sin s + B sin 2s with undetermined constants A, B (which will turn out to be functions of the real parameter λ). Substituting in eq. (1.1), we have

λ[A sin s + B sin 2s] = (2/π) ∫₀^π [a sin s sin t + b sin 2s sin 2t] · [(A sin t + B sin 2t) + (A sin t + B sin 2t)³] dt

= (2/π) a sin s [A ∫₀^π sin²t dt + A³ ∫₀^π sin⁴t dt + 3AB² ∫₀^π sin²t sin²2t dt]

+ (2/π) b sin 2s [B ∫₀^π sin²2t dt + 3A²B ∫₀^π sin²t sin²2t dt + B³ ∫₀^π sin⁴2t dt]

= a sin s [A + (3/4)A³ + (3/2)AB²] + b sin 2s [B + (3/4)B³ + (3/2)A²B],

where use has been made of the following values of integrals:

∫₀^π sin²t dt = ∫₀^π sin²2t dt = π/2,   ∫₀^π sin⁴t dt = ∫₀^π sin⁴2t dt = 3π/8,

∫₀^π sin²t sin²2t dt = π/4,   ∫₀^π sin t sin 2t dt = ∫₀^π sin³t sin 2t dt = ∫₀^π sin t sin³2t dt = 0.

Equating coefficients of sin s and sin 2s, we obtain a pair of nonlinear

simultaneous algebraic equations:

λA = aA + (3/4)aA³ + (3/2)aAB²
λB = bB + (3/4)bB³ + (3/2)bA²B.   (1.2)

There are four kinds of solutions of equations (1.2):

1) A = B = 0; this gives the trivial solution of eq. (1.1).

2) A ≠ 0, B = 0; only the first equation is nontrivial. We cancel A ≠ 0 to obtain

λ = a + (3/4)aA²,

whence

A = ±(2/√3)·√(λ/a − 1).

The corresponding solution of eq. (1.1) is φ₁(s,λ) = ±(2/√3)·√(λ/a − 1)·sin s, defined and real for λ ≥ a.

3) A = 0, B ≠ 0; only the second equation is nontrivial. We cancel B ≠ 0 to obtain

λ = b + (3/4)bB²,

whence

B = ±(2/√3)·√(λ/b − 1).

The corresponding solution of eq. (1.1) is φ₂(s,λ) = ±(2/√3)·√(λ/b − 1)·sin 2s, defined and real for λ ≥ b, where we recall that b < a.

4) A ≠ 0, B ≠ 0; here both A and B may be cancelled in eq. (1.2). We obtain the two ellipses:

(3/4)A² + (3/2)B² = λ/a − 1,
(3/2)A² + (3/4)B² = λ/b − 1.   (1.3)

Solutions of eq. (1.2) are given by intersections of these ellipses. Solving, we get

A² = (4/9)[(2a−b)λ/(ab) − 1],   B² = (4/9)[(2b−a)λ/(ab) − 1],

so that we have the following solutions of eq. (1.1):

φ₃(s,λ) = ±(2/3)·√((2a−b)λ/(ab) − 1)·sin s ± (2/3)·√((2b−a)λ/(ab) − 1)·sin 2s.   (1.4)

Clearly 2a−b > 0 since we assumed that b < a. Hence the question of whether or not solutions of the form (1.4) can be real hinges upon whether or not 2b−a > 0, or b > a/2. We have the following cases:

Case I: b ≤ a/2; φ₃(s,λ) is real for no real λ.

Case II: b > a/2; φ₃(s,λ) is real for λ ≥ max(ab/(2a−b), ab/(2b−a)). Since a > b, this means that φ₃(s,λ) is real when λ ≥ ab/(2b−a).

Under Case I above, i.e., when b/a ≤ 1/2, the only real solutions of eq. (1.1) are the trivial solution φ(s,λ) ≡ 0, and the two main branches:

φ₁(s,λ) = ±(2/√3)·√(λ/a − 1)·sin s,
φ₂(s,λ) = ±(2/√3)·√(λ/b − 1)·sin 2s.
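The reduction of eq. (1.1) to the algebraic system (1.2) can be checked numerically. The following minimal sketch (the values a = 2, b = 1.3, λ = 3 are illustrative choices for the check, not from the text) evaluates the right-hand side of eq. (1.1) by quadrature on the branch φ₁ and compares it with λφ₁(s):

```python
import numpy as np

# Illustrative check: on the branch phi_1(s) = A sin s with
# A = (2/sqrt(3)) * sqrt(lam/a - 1), the right-hand side of eq. (1.1)
# should reproduce lam * phi_1(s).  The constants a, b, lam below are
# arbitrary choices for the check; they require lam >= a so A is real.
a, b, lam = 2.0, 1.3, 3.0
A = (2.0 / np.sqrt(3.0)) * np.sqrt(lam / a - 1.0)

t = np.linspace(0.0, np.pi, 4001)        # quadrature grid on [0, pi]
phi = A * np.sin(t)

residuals = []
for s0 in (0.3, 1.0, 2.2):               # sample points s
    kernel = a * np.sin(s0) * np.sin(t) + b * np.sin(2*s0) * np.sin(2*t)
    rhs = (2.0 / np.pi) * np.trapz(kernel * (phi + phi**3), t)
    residuals.append(abs(rhs - lam * A * np.sin(s0)))
```

The same check passes on φ₂, and, when b > a/2, on the branches (1.4).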

The solutions φ₁ and φ₂ branch away from the trivial solution φ ≡ 0 at the eigenvalues a, b of the linearization of eq. (1.1) at the origin:

λh(s) = (2/π) ∫₀^π [a sin s sin t + b sin 2s sin 2t] h(t) dt.   (1.5)

We can represent this situation pictorially in two ways:

[FIG. 1.1a. The mode amplitudes A, B (the components along sin s and sin 2s) plotted against λ, for the case b/a ≤ 1/2.]

[FIG. 1.1b. The norm ‖φ‖ = √(A² + B²) plotted against λ, for the case b/a ≤ 1/2.]

In Fig. 1.1a we simply plot the two normal modes vs. the parameter λ, which serves well for eq. (1.1). In more general problems, as for example if we were to add to the rank of the kernel, a plot such as Fig. 1.1b must be used, where some norm of the solution is plotted vs. the parameter λ.

Under Case II above, i.e., when b/a > 1/2, we again have the trivial solution φ(s,λ) ≡ 0, and the two main branches

φ₁(s,λ) = ±(2/√3)·√(λ/a − 1)·sin s,
φ₂(s,λ) = ±(2/√3)·√(λ/b − 1)·sin 2s,

which bifurcate from φ ≡ 0 at the primary bifurcation points, which are the eigenvalues a, b of linearized eq. (1.5). Moreover, for λ ≥ ab/(2b−a) > a, a third type of solution branch appears, namely that in eq. (1.4).

Note that as λ → ab/(2b−a), λ ≥ ab/(2b−a), the coefficients ±(2/3)·√((2b−a)λ/(ab) − 1) → 0 and ±(2/3)·√((2a−b)λ/(ab) − 1) → ±(2/√3)·√((a−b)/(2b−a)). On the other hand note that (2/√3)·√(λ/a − 1) → (2/√3)·√((a−b)/(2b−a)) as λ → ab/(2b−a). Thus as λ → ab/(2b−a), we see that φ₃⁺(s,λ) → (2/√3)·√((a−b)/(2b−a))·sin s = φ₁⁺(s, ab/(2b−a)). Therefore at λ = ab/(2b−a), the sub-branch (twig)

φ₃⁺(s,λ) = +(2/3)·√((2a−b)λ/(ab) − 1)·sin s ± (2/3)·√((2b−a)λ/(ab) − 1)·sin 2s

joins the main branch, i.e., φ₃⁺(s, ab/(2b−a)) = φ₁⁺(s, ab/(2b−a)), while the sub-branch (twig)

φ₃⁻(s,λ) = −(2/3)·√((2a−b)λ/(ab) − 1)·sin s ± (2/3)·√((2b−a)λ/(ab) − 1)·sin 2s

joins the negative part of the main branch, i.e., φ₃⁻(s, ab/(2b−a)) = φ₁⁻(s, ab/(2b−a)).
b 1
We have here, under Case II, when b/a > 1/2, the phenomenon of "secondary bifurcation," i.e., the formation of sub-branches or twigs which bifurcate from the main branches. The main branches bifurcate from the trivial solution at the eigenvalues of the linearization, eq. (1.5), while the twigs bifurcate from the main branches. We can represent the situation again in two ways:

[FIG. 1.2a. The mode amplitudes A, B plotted against λ, showing the twigs, for the case b/a > 1/2.]

[FIG. 1.2b. The norm ‖φ‖ = √(A² + B²) plotted against λ, showing the twigs, for the case b/a > 1/2.]

Thus solutions of the nonlinear equation (1.1) exist as continuous loci in (λ, sin s, sin 2s) space. There are two main branches: φ₁(s,λ) splits off from the trivial solution φ ≡ 0 at λ = a, and its two parts φ₁⁺, φ₁⁻ differ only in sign; φ₂(s,λ) joins the trivial solution at λ = b, and its two parts φ₂⁺, φ₂⁻ differ only in sign. a and b on the λ axis are the primary bifurcation points for the main branches.

If b/a > 1/2, i.e., Case II, two sub-branches or twigs split away from φ₁(s,λ) at λ = ab/(2b−a), which is known as a secondary bifurcation point.

The question of whether or not secondary bifurcation of the eigensolutions of eq. (1.1) takes place therefore hinges on whether we have b/a > 1/2 or b/a ≤ 1/2. The condition b/a ≤ 1/2 in this simple problem is a "condition preventing secondary bifurcation." Much interest attaches generally to the question of whether we have secondary bifurcation of a given branch of eigensolutions, or of any branch of eigensolutions of a nonlinear eigenvalue problem, and to the derivation of conditions preventing or allowing secondary bifurcation. The occurrence of secondary bifurcation clearly has a marked effect on the matter of multiplicity of solutions, over considerable ranges of the real parameter λ, as this simple example shows.
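The effect on multiplicity can be made concrete by counting, from the closed forms above, the real solution pairs (A, B) of eqs. (1.2) for a given λ. A minimal sketch (the values a = 2, b = 1.3 are illustrative choices with b/a > 1/2, not from the text):

```python
import math

def count_real_solutions(lam, a, b):
    """Count the real solution pairs (A, B) of eqs. (1.2), one count per
    kind of solution; assumes 0 < b < a and b/a > 1/2 (Case II)."""
    n = 1                            # kind 1: the trivial solution A = B = 0
    if lam > a:
        n += 2                       # kind 2: (+-A, 0), the branch phi_1
    if lam > b:
        n += 2                       # kind 3: (0, +-B), the branch phi_2
    if lam > a * b / (2*b - a):
        n += 4                       # kind 4: the four twig solutions (1.4)
    return n

a, b = 2.0, 1.3
lam_star = a * b / (2*b - a)         # secondary bifurcation point, 13/3 here
counts = [count_real_solutions(lam, a, b) for lam in (1.0, 1.5, 3.0, 5.0)]

# At lam_star the sin 2s coefficient in (1.4) vanishes while the sin s
# coefficient equals that of phi_1, so the twigs join the main branch there.
c3 = (2.0/3.0) * math.sqrt((2*a - b) * lam_star / (a*b) - 1.0)
c1 = (2.0/math.sqrt(3.0)) * math.sqrt(lam_star / a - 1.0)
```

The counts 1, 3, 5, 9 at λ = 1, 1.5, 3, 5 show the multiplicity jumping as λ crosses b, a, and ab/(2b−a).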

The example of this section is such that the solutions can be completely worked out by elementary methods. In the next sections we present the methods of nonlinear functional analysis which must be employed to study bifurcations and solution branches in the general theory of nonlinear eigenvalue problems. There is, however, much qualitative similarity between the structure of solutions of problem (1.1) of this section and that of more general cases.



2. The Extension of Branches of Solutions for Nonlinear Equations in Banach Spaces.

In this section we consider general bounded continuously Fréchet-differentiable transformations T(x) of a real Banach space X into itself: x ∈ X, T(x) ∈ X. We assume that T(θ) = θ, where θ is the null element. Let us suppose that the equation

λx = T(x) + f,   (2.1)

where λ is a real parameter and f ∈ X is a fixed element, has a solution x₀ ∈ X for a value λ₀ of the parameter; i.e., suppose that λ₀x₀ = T(x₀) + f. We pose the problem of finding a nearby solution x₀ + h for a nearby value λ = λ₀ + δ. Thus we solve the following equation for h, δ:

T(x₀+h) + f = (λ₀+δ)(x₀+h).   (2.2)

Using the definition of the Fréchet derivative T′(x₀) [ref. 15, p. 183], we can write eq. (2.2) in the form

T(x₀) + T′(x₀)h + R₁(x₀,h) + f = λ₀x₀ + λ₀h + δx₀ + δh,

where ‖R₁(x₀,h)‖/‖h‖ → 0 as ‖h‖ → 0. Using the assumption that x₀, λ₀ satisfy eq. (2.1), we have

[λ₀I − T′(x₀)]h = −δx₀ − δh + R₁(x₀,h).   (2.3)

Since T′(x₀) is a bounded linear transformation such that T′(x₀)h ∈ X if h ∈ X, let us assume that λ₀ ∈ ρ(T′(x₀)); other complementary assumptions will be discussed in the next section. Thus λ₀I − T′(x₀) has a continuous inverse M. Then from eq. (2.3) we write

h = [λ₀I − T′(x₀)]⁻¹{−δx₀ − δh + R₁(x₀,h)} = MF_δ(h).   (2.4)

We now prove a preliminary result about F_δ(h) defined in eq. (2.4):

Lemma 2.1: The function F_δ(h) = −δx₀ − δh + R₁(x₀,h) satisfies a Lipschitz condition

‖F_δ(h₁) − F_δ(h₂)‖ ≤ A(δ,h₁,h₂)·‖h₁−h₂‖,

with A(δ,h₁,h₂) > 0, and A(δ,h₁,h₂) → 0 as |δ| → 0, ‖h₁‖ → 0, ‖h₂‖ → 0.

Proof: By definition of the Fréchet derivative,

R₁(x₀,h) = T(x₀+h) − T(x₀) − T′(x₀)h.

Hence

R₁(x₀,h₁) − R₁(x₀,h₂) = T(x₀+h₁) − T(x₀+h₂) − T′(x₀)(h₁−h₂)
= T(x₀+h₂+[h₁−h₂]) − T(x₀+h₂) − T′(x₀)(h₁−h₂)
= T′(x₀+h₂)(h₁−h₂) + R₁(x₀+h₂, h₁−h₂) − T′(x₀)(h₁−h₂),

so that

‖R₁(x₀,h₁) − R₁(x₀,h₂)‖ ≤ {‖T′(x₀+h₂) − T′(x₀)‖ + ‖R₁(x₀+h₂,h₁−h₂)‖/‖h₁−h₂‖}·‖h₁−h₂‖.

The quantity

{‖T′(x₀+h₂) − T′(x₀)‖ + ‖R₁(x₀+h₂,h₁−h₂)‖/‖h₁−h₂‖} → 0

as ‖h₁‖ → 0 and ‖h₂‖ → 0. Now we have

‖F_δ(h₁) − F_δ(h₂)‖ ≤ |δ|·‖h₁−h₂‖ + ‖R₁(x₀,h₁) − R₁(x₀,h₂)‖,

and the lemma immediately follows.

The following result depends upon the previous lemma:

Theorem 2.2: There exist positive constants c, d such that for |δ| < c, the mapping h* = MF_δ(h) carries the ball ‖h‖ ≤ d into itself, and is contracting thereon.

Proof: We have

‖h*‖ ≤ ‖M‖·{|δ|·‖x₀‖ + |δ|·‖h‖ + (‖R₁(x₀,h)‖/‖h‖)·‖h‖}.

First let us take d₁ > 0 small enough that

‖R₁(x₀,h)‖/‖h‖ < 1/(2‖M‖) for ‖h‖ ≤ d₁.

Next we can find c₁ > 0 so small that

|δ|·‖x₀‖ + |δ|·‖h‖ ≤ |δ|·(‖x₀‖+d₁) < d₁/(2‖M‖) for |δ| < c₁.

Then ‖h*‖ ≤ (1/2)d₁ + (1/2)d₁ = d₁, which shows that for |δ| < c₁, MF_δ(h) maps the ball ‖h‖ ≤ d₁ into itself.

Again,

‖h₁* − h₂*‖ ≤ ‖M‖·‖F_δ(h₁) − F_δ(h₂)‖ ≤ ‖M‖·A(δ,h₁,h₂)·‖h₁−h₂‖,

where we have used the Lipschitz condition satisfied by F_δ(h). Employing Lemma 2.1, we can take positive constants c₂, d₂ small enough that ‖M‖·A(δ,h₁,h₂) < 1/2 when |δ| < c₂, ‖h₁‖ ≤ d₂, ‖h₂‖ ≤ d₂. Then ‖h₁* − h₂*‖ ≤ (1/2)‖h₁−h₂‖ for |δ| < c₂, ‖h₁‖ ≤ d₂, ‖h₂‖ ≤ d₂, which shows that for |δ| < c₂, MF_δ(h) is contracting on the ball ‖h‖ ≤ d₂.

Now if we take d = min(d₁, d₂) and c = min(c₁, c₂), then MF_δ(h) maps ‖h‖ ≤ d into itself and is also contracting thereon, provided |δ| < c. This proves the theorem.

From the above result, we get the fundamental theorem on extension of solutions:

Theorem 2.3: Suppose that [λ₀I − T′(x₀)]⁻¹ exists and is bounded, where x₀, λ₀ (x₀ ∈ X) is a solution pair for eq. (2.1). Then there exist positive constants c, d, and a solution h ∈ X of eq. (2.2), unique in the ball ‖h‖ ≤ d, provided |δ| < c. Thus the pair x₀+h, λ₀+δ solves eq. (2.1). The constants c, d can be taken so that the operator (λ₀+δ)I − T′(x₀+h) has a bounded inverse for ‖h‖ ≤ d, |δ| < c. The function h(δ) solving eq. (2.2) is continuous, and lim_{δ→0} h(δ) = θ, where θ is the null element.

Proof: Let c₁, d₁ be the constants of Theorem 2.2. Use of Theorem 2.2 and the Contraction Mapping Principle [ref. 19, p. 27] yields the existence and uniqueness of h(δ) ∈ X, |δ| < c₁, i.e., the solution of eq. (2.3). Since eq. (2.2) and eq. (2.3) are equivalent, the pair δ, h solves eq. (2.2) and the pair x₀+h, λ₀+δ solves eq. (2.1). By a known result [ref. 15, p. 92, Th. 3′] there exist positive constants c₂, d₂ such that (λ₀+δ)I − T′(x₀+g) has a bounded inverse provided |δ| < c₂, ‖g‖ ≤ d₂. If we take c = min(c₁, c₂) and d = min(d₁, d₂), then for |δ| < c there exists a solution pair x₀+h(δ), λ₀+δ satisfying eq. (2.1), unique in the ball ‖h‖ ≤ d, and such that (λ₀+δ)I − T′(x₀+h(δ)) has a bounded inverse.

Given two values δ, δ* with |δ| < c, |δ*| < c, we write

‖h(δ) − h(δ*)‖ = ‖MF_δ(h(δ)) − MF_δ*(h(δ*))‖
≤ ‖MF_δ(h(δ)) − MF_δ*(h(δ))‖ + ‖MF_δ*(h(δ)) − MF_δ*(h(δ*))‖
≤ ‖MF_δ(h(δ)) − MF_δ*(h(δ))‖ + (1/2)‖h(δ) − h(δ*)‖

by the Lipschitz condition derived in the proof of Theorem 2.2 and certainly valid here. Then

‖h(δ) − h(δ*)‖ ≤ 2‖M‖·‖F_δ(h(δ)) − F_δ*(h(δ))‖ ≤ 2‖M‖·(‖x₀‖+d)·|δ−δ*|,

whence h(δ), |δ| < c, is continuous. The theorem is now proven.
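In finite dimensions the construction in eq. (2.4) can be carried out directly. The sketch below is a hypothetical two-dimensional example, not from the text: with T(x) = (x₁³ + x₂/2, x₂³), it continues a known solution of λx = T(x) + f from λ₀ to λ₀ + δ by iterating h ← MF_δ(h):

```python
import numpy as np

def T(x):                       # a hypothetical smooth map of R^2 into itself
    return np.array([x[0]**3 + 0.5*x[1], x[1]**3])

def Tprime(x):                  # its Frechet derivative, here the Jacobian
    return np.array([[3*x[0]**2, 0.5], [0.0, 3*x[1]**2]])

lam0 = 2.0
x0 = np.array([0.5, 0.4])
f = lam0*x0 - T(x0)             # choose f so that (x0, lam0) solves lam x = T(x) + f

delta = 0.1                     # small change of the parameter
M = np.linalg.inv(lam0*np.eye(2) - Tprime(x0))   # bounded inverse of lam0 I - T'(x0)

h = np.zeros(2)
for _ in range(100):            # contraction iteration h = M F_delta(h), eq. (2.4)
    R1 = T(x0 + h) - T(x0) - Tprime(x0) @ h      # Taylor remainder R1(x0, h)
    h = M @ (-delta*x0 - delta*h + R1)

x = x0 + h
residual = np.linalg.norm((lam0 + delta)*x - T(x) - f)
```

The fixed point of the iteration satisfies eq. (2.3) exactly, so the residual of eq. (2.1) at (x₀+h, λ₀+δ) shrinks to rounding error.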
We now study how a solution of eq. (2.1), x(λ), valid in a neighborhood of x₀ = x(λ₀) by Theorem 2.3, might be extended into the large. We introduce notions developed by T. H. Hildebrandt and L. M. Graves, [ref. 13, sec. 18, p. 151].

We consider the cartesian product W = X × R, where R is the real number system; for w ∈ W, we denote the norm by ‖w‖ = ‖x‖ + |λ|, where x ∈ X, λ ∈ R are the respective components. A neighborhood N_a(w₀) of w₀ ∈ W consists of points such that ‖x−x₀‖ + |λ−λ₀| < a, while a neighborhood N_b(λ₀) of λ₀ ∈ R comprises points such that |λ−λ₀| < b.

A set W⁽⁰⁾ ⊂ W of points w ∈ W is called a "sheet of points" if it has the following properties:

1) For all w₀ ∈ W⁽⁰⁾, there exist positive constants a and b ≤ a such that no two points w₁, w₂ ∈ N_a(w₀) have the same projection on R, i.e., if w₁ = (x₁,λ₁), w₂ = (x₂,λ₂), w₁, w₂ ∈ N_a(w₀), then λ₁ ≠ λ₂, and every point λ ∈ N_b(λ₀), where w₀ = (x₀,λ₀), is the projection of a point w ∈ W⁽⁰⁾ contained in N_a(w₀).

2) W⁽⁰⁾ is a connected set.

A boundary point w_B of W⁽⁰⁾ is a point not belonging to W⁽⁰⁾ but such that every neighborhood contains points of W⁽⁰⁾, i.e., w_B ∉ W⁽⁰⁾ but N_ε(w_B) ∩ W⁽⁰⁾ ≠ ∅, ε > 0. Thus W⁽⁰⁾ contains only interior points.

A point w ∈ W, w = (x,λ), is called an ordinary point with respect to the nonlinear transformation T if λI − T′(x) has a bounded inverse; here T′(x) is the Fréchet derivative of T at x, [ref. 15, p. 183]. Otherwise w is called an exceptional point of T.

W⁽⁰⁾ is called a sheet of solutions of the equation (2.1): λx = T(x)+f, x ∈ X, f ∈ X, λ ∈ R, if every w = (x,λ) in W⁽⁰⁾ satisfies λx = T(x)+f.

The following theorem is due essentially to Hildebrandt and Graves [ref. 13, p. 152].

Theorem 2.4: If w₀ = (x₀,λ₀) is an ordinary point with respect to the continuous nonlinear transformation T, i.e., λ₀I − T′(x₀) has a bounded inverse, and if w₀ = (x₀,λ₀) is a solution of eq. (2.1), i.e., λ₀x₀ = T(x₀)+f, then there exists a unique sheet W⁽⁰⁾ of solutions with the following properties:

a) W⁽⁰⁾ contains w₀.

b) Every point of W⁽⁰⁾ is an ordinary point of T.

c) The only boundary points (x_B,λ_B) of the sheet W⁽⁰⁾ are exceptional points of T, i.e., λ_B I − T′(x_B) does not have a bounded inverse.

Proof: According to Theorem 2.3, there exists at least one sheet of solutions W⁽¹⁾ having properties a) and b). Let W⁽⁰⁾ be the "least common superclass" of all such sheets W⁽¹⁾. Evidently W⁽⁰⁾ is a connected set of solutions satisfying a) and b). That W⁽⁰⁾ is a sheet of solutions of eq. (2.1) follows from Theorem 2.3 and property b).

To show that W⁽⁰⁾ satisfies property c), let w₁ = (x₁,λ₁) be a boundary point of W⁽⁰⁾ and an ordinary point of T. Since T would then be continuous at x₁, λ₁x₁ = T(x₁)+f, i.e., w₁ is a solution of equation (2.1). Then however by Theorem 2.3, we could extend W⁽⁰⁾ to include w₁ in such a way that the newly extended sheet satisfies a) and b), contradicting the definition of W⁽⁰⁾.

Now suppose there is a second sheet W⁽²⁾ of solutions of equation (2.1) having properties a), b) and c). Then W⁽²⁾ ⊂ W⁽⁰⁾ and there exists an element w₁ ∈ W⁽⁰⁾ with w₁ ∉ W⁽²⁾. Since W⁽⁰⁾ is connected, there exists a continuous function F on R to W⁽⁰⁾ such that F(r₀) = w₀, F(r₁) = w₁, and r₀ < r₁. By property a) of W⁽²⁾, F(r₀) ∈ W⁽²⁾. Let r₂ = l.u.b.{r | r₀ ≤ r ≤ r₁, F(r) ∈ W⁽²⁾}. Then F(r₂) is a boundary point of W⁽²⁾. But since F(r₂) ∈ W⁽⁰⁾, it is an ordinary point of T, which contradicts property c) of W⁽²⁾. This completes the proof.

Every sheet of solutions determines a single valued function x(λ) in a neighborhood of each of its points. By Theorem 2.3 these functions are continuous.

The sheet of solutions of Theorem 2.4 is called the "unique maximal sheet" of solutions of eq. (2.1) passing through w₀ = (x₀,λ₀). As indicated, the only way for a process of continuation of a branch of solutions x(λ) to come to an end is in an approach to a point x_B, λ_B where λ_B I − T′(x_B) has no bounded inverse; this is merely an alternative way of saying that any boundary point w_B possessed by a unique maximal sheet of solutions of eq. (2.1) is an exceptional point.
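The alternative just stated can be watched in a computation. A minimal sketch, using a hypothetical scalar example not from the text: with T(x) = x³ and f = 1, eq. (2.1) reads λx = x³ + 1, and a point (x, λ) is exceptional when λ − T′(x) = λ − 3x² = 0; eliminating x between the two relations places the exceptional point at λ* = 3/2^(2/3). Natural continuation of the branch by Newton's method shows λ − 3x² shrinking as λ decreases toward λ*:

```python
import numpy as np

# Exceptional point of lam*x = x**3 + 1: lam = 3x**2 together with
# lam*x - x**3 = 1 gives 2x**3 = 1, hence lam* = 3 / 2**(2/3).
lam_star = 3.0 / 2.0**(2.0/3.0)

def continue_branch(lam_values, x):
    """Natural continuation: warm-started Newton solve at each lam."""
    xs = []
    for lam in lam_values:
        for _ in range(50):
            x = x - (lam*x - x**3 - 1.0) / (lam - 3*x**2)
        xs.append(x)
    return np.array(xs)

lams = np.linspace(3.0, 1.95, 200)   # stop short of the exceptional point
xs = continue_branch(lams, 0.35)     # branch through x ~ 0.3473 at lam = 3
gaps = lams - 3*xs**2                # lam - T'(x) along the branch
```

The gap λ − 3x² stays positive on the computed range but decreases monotonically toward zero, which is where continuation (and the sheet) must end.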



3. Development of Branches of Solutions for Nonlinear Equations near an Exceptional Point. Bifurcation Theory.

Again, as in Section 2, we consider the general bounded continuously Fréchet-differentiable transformation T(x): X → X, with T(θ) = θ, θ ∈ X being the null element. Again we consider solutions of eq. (2.1). Since X is a real space, we stress that we seek real solutions.

Let x₀ ∈ X be a solution of eq. (2.1) corresponding to λ = λ₀, and consider again the matter of finding nearby solutions; we are immediately led to eqs. (2.2) and (2.3) to be solved for the increment h ∈ X, for given δ. Now however we assume that λ₀ is an exceptional point of T; i.e., λ₀ ∈ Pσ(T′(x₀)). (See [ref. 21, p. 292] for the spectral notations σ, Cσ, Rσ, and Pσ.)

At the present state of the art, we cannot speak on the behavior of solutions of eq. (2.1) in a neighborhood of x₀ if λ₀ ∈ Cσ(T′(x₀)) or if λ₀ ∈ Rσ(T′(x₀)). We are equipped only to handle the case λ₀ ∈ Pσ(T′(x₀)). Therefore it helps at this point to make the following assumption:

H-1: T′(x₀) is a compact linear operator.

Actually if T(x) is compact and continuous on X (i.e., completely continuous), then by a known theorem [ref. 14, p. 135, Lemma 4.1] the Fréchet derivative T′(x) is also compact, x ∈ X. Thus H-1 is quite a natural assumption.

With T′(x₀) compact, the eigenvalue λ₀ is of finite index ν, the generalized nullspaces 𝔑_n(x₀) ⊂ 𝔑_{n+1}(x₀), n = 0, 1, ..., ν−1, are of finite dimension, and the generalized range R_ν(x₀) is such that X = 𝔑_ν(x₀) ⊕ R_ν(x₀), [ref. 19, p. 183, p. 217]. Thus the null space 𝔑₁(x₀) and range R₁(x₀) of λ₀I − T′(x₀) each admit the projections E and Ẽ respectively, [ref. 16, problem 1, p. 72].



Since λ₀ ∈ Pσ(T′(x₀)), λ₀I − T′(x₀) has no inverse; nevertheless, because of the existence of the projection E of X on 𝔑₁(x₀) and the fact that λ₀I − T′(x₀) has a closed range R₁(x₀) (Ẽ exists), we do have a pseudo-inverse. A pseudo-inverse is a bounded right inverse defined on the range: [λ₀I − T′(x₀)]Mx = x, x ∈ R₁(x₀).

We state and prove the following lemma about the pseudo-inverse, which is applicable here [ref. 16, p. 72]:

Lemma 3.1: Let A be a closed linear operator and suppose R(A) is closed. If 𝔑(A) admits a projection E then A has a pseudo-inverse. Conversely, if 𝔇(A) = X and A has a pseudo-inverse, then 𝔑(A) admits a projection. Here of course R(A), 𝔑(A) and 𝔇(A) stand for the range of A, nullspace of A and domain of A respectively.

Proof: The operator Â induced by A on X/𝔑(A) is 1:1 onto R(A), and thus has a bounded inverse. Therefore ‖Â[x]‖ ≥ γ‖[x]‖ = γ inf_{z∈[x]} ‖z‖ for any [x] ∈ X/𝔑(A), where γ is the minimum modulus of Â, [ref. 10, p. 96]. Hence given y ∈ R(A) there exists an element x ∈ [x] with y = Ax such that ‖x‖ ≤ c‖Â[x]‖ = c‖y‖, where c = 2/γ.

Now define M on R(A) as follows: put My = (I−E)x where y ∈ R(A), y = Ax, and E projects on 𝔑(A). M is well defined; indeed if y = Ax₁ = Ax₂, then x₁−x₂ ∈ 𝔑(A), whence (I−E)x₁ = (I−E)x₂. Also AM = I since AMy = A(I−E)x = Ax = y, y ∈ R(A), and M is bounded: ‖My‖ = ‖(I−E)x‖ ≤ K₁‖x‖ ≤ cK₁‖y‖, using a proper choice of x.

On the other hand, if 𝔇(A) = X, let M be the given pseudo-inverse. A is bounded by the Closed Graph theorem. Therefore E = I − MA is bounded. Since AEx = θ, R(E) ⊂ 𝔑(A). If x ∈ 𝔑(A) then Ex = (I−MA)x = x. Hence E is the projection on 𝔑(A), and the lemma is proven.
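For a matrix, i.e., a bounded operator on a finite-dimensional space, the construction in the proof can be written out explicitly. A minimal sketch, with a hypothetical 3×3 nilpotent matrix A chosen so that 𝔑(A) and R(A) are easy to read off, and with E, Ẽ the orthogonal projections onto them:

```python
import numpy as np

# Hypothetical example: A maps (x1, x2, x3) -> (x2, x3, 0), so the
# nullspace N(A) is spanned by e1 and the range R(A) by e1, e2.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
E  = np.diag([1., 0., 0.])      # projection of X onto N(A)
Et = np.diag([1., 1., 0.])      # projection of X onto R(A)  ("E tilde")

# Extended pseudo-inverse: for each basis vector, solve A x = Et e_i and
# discard the nullspace component, as in the proof: M~ e_i = (I - E) x.
I = np.eye(3)
Mt = np.column_stack([(I - E) @ np.linalg.lstsq(A, Et @ I[:, i], rcond=None)[0]
                      for i in range(3)])

# The defining identities of eq. (3.1):  A M~ = Et  and  M~ A = I - E.
check1 = np.allclose(A @ Mt, Et)
check2 = np.allclose(Mt @ A, I - E)
```

Here `lstsq` supplies some particular solution of Ax = y; subtracting its E-component makes the result independent of which solution was found, exactly as in the well-definedness argument above.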



Henceforth, let M(x₀) be the pseudo-inverse of λ₀I − T′(x₀) given by the lemma. We have

[λ₀I − T′(x₀)]M(x₀) = I on R₁(x₀),
M(x₀)[λ₀I − T′(x₀)] = I − E.

We extend the pseudo-inverse M(x₀) to the entire space X by writing M̃(x₀) = M(x₀)Ẽ. Then

[λ₀I − T′(x₀)]M̃(x₀) = Ẽ,   M̃(x₀)[λ₀I − T′(x₀)] = I − E.   (3.1)

With the aid of the extended pseudo-inverse, let us study the following equation to be solved for h:

h = M̃(x₀)F_δ(h) + u,   u ∈ 𝔑₁(x₀),   (3.2)

where as before (see eq. (2.4))

F_δ(h) = −δx₀ − δh + R₁(x₀,h).   (3.3)

If h ∈ X satisfies eq. (2.3) for given x₀, λ₀, δ, then F_δ(h) ∈ R₁(x₀). Using eq. (3.1) we see that u = h − M̃(x₀)F_δ(h) ∈ 𝔑₁(x₀), so that the same h satisfies eq. (3.2) with this u. Therefore we are motivated to prove an existence theorem for eq. (3.2):

Theorem 3.2: There exist positive constants c, d, e such that for |δ| < c and ‖u‖ ≤ e, u ∈ 𝔑₁(x₀), eq. (3.2) has a solution h(δ,u) unique in the ball ‖h‖ ≤ d. The solution is continuous in δ.

Proof: We study the mapping h* = M̃(x₀)F_δ(h) + u of X into itself, u ∈ 𝔑₁(x₀). We have

‖h*‖ ≤ ‖M̃‖·{|δ|·‖x₀‖ + |δ|·‖h‖ + (‖R₁(x₀,h)‖/‖h‖)·‖h‖} + ‖u‖

according to our definition (3.3) of F_δ(h). First we can take d₁ so small that ‖R₁(x₀,h)‖/‖h‖ < 1/(3‖M̃‖) for ‖h‖ ≤ d₁. With d₁ thus fixed, we can find c₁ such that

|δ|·‖x₀‖ + |δ|·‖h‖ ≤ |δ|·(‖x₀‖+d₁) ≤ d₁/(3‖M̃‖) for |δ| < c₁.

Next we take ‖u‖ ≤ d₁/3; then ‖h*‖ = ‖M̃F_δ(h)+u‖ ≤ d₁ if |δ| < c₁. Thus if |δ| < c₁ and ‖u‖ ≤ d₁/3, the map carries the ball ‖h‖ ≤ d₁ into itself.

In view of Lemma 2.1 we can find d₂, c₂ small enough in order to have ‖M̃‖·A(δ,h₁,h₂) < 1/2 for |δ| < c₂, ‖h₁‖ ≤ d₂, ‖h₂‖ ≤ d₂. Thus ‖h₁* − h₂*‖ = ‖M̃F_δ(h₁) − M̃F_δ(h₂)‖ ≤ (1/2)‖h₁−h₂‖ provided |δ| < c₂, ‖h₁‖ ≤ d₂, ‖h₂‖ ≤ d₂.

Take c = min(c₁, c₂), d = min(d₁, d₂), and e = d/3. Then the map carries the ball ‖h‖ ≤ d into itself and is contracting thereon, provided |δ| < c and ‖u‖ ≤ e. Therefore if |δ| < c and ‖u‖ ≤ e, the iterations h_{n+1} = M̃F_δ(h_n) + u converge to a solution of eq. (3.2) unique in the ball ‖h‖ ≤ d. The continuity is obtained in a way similar to that used in Theorem 2.3. This ends the proof.

The existence and local uniqueness of a solution of eq. (3.2) given in Theorem 3.2 sets the stage for the following result:

Theorem 3.3: Let T′(x₀) satisfy H-1 (which can be arranged by assuming that T(x) is everywhere compact and continuous). Then the condition that h = V_δ(u), the solution of eq. (3.2), be at the same time a solution of eq. (2.3) is that

(I − Ẽ)F_δ(V_δ(u)) = θ,   (3.4)

where of course Ẽ is the projection of X onto the range R₁(x₀). Conversely, if h, δ satisfy eq. (2.3), then F_δ(h) ∈ R₁(x₀), eq. (3.4) is satisfied, and h, δ also satisfy eq. (3.2).

Proof: Since T′(x₀) is compact, λ₀I − T′(x₀) has closed range, and as we have seen, the null space 𝔑₁(x₀) admits the projection E. Thus by Lemma 3.1 the pseudo-inverse M and the extended pseudo-inverse M̃ = MẼ exist, where Ẽ projects on the range R₁(x₀). We have 𝔇(M̃) = X. Let δ and u ∈ 𝔑₁(x₀) be such that eq. (3.4) is satisfied, where V_δ(u) is the solution of eq. (3.2). Then F_δ(V_δ(u)) ∈ R₁(x₀) = 𝔇(M). Premultiplication of eq. (3.2) with λ₀I − T′(x₀) and use of eq. (3.1) give

(λ₀I − T′(x₀))h = (λ₀I − T′(x₀))M̃F_δ(h) + θ = ẼF_δ(h) = F_δ(h),

which is just eq. (2.3). On the other hand, if h, δ satisfies eq. (2.3), then F_δ(h) ∈ R₁(x₀). Let u₀ = h − M̃F_δ(h); then

[λ₀I − T′(x₀)]u₀ = [λ₀I − T′(x₀)]h − [λ₀I − T′(x₀)]M̃F_δ(h) = F_δ(h) − ẼF_δ(h) = θ

by eq. (3.1). Thus h = M̃F_δ(h) + u₀ with u₀ ∈ 𝔑₁(x₀), which is eq. (3.2). Then h = V_δ(u₀), and since F_δ(h) ∈ R₁(x₀), we have (I − Ẽ)F_δ(V_δ(u₀)) = θ, which is eq. (3.4). This ends the proof.

In the proof of Theorem 3.2, the solution of eq. (3.2) was produced by the method of contraction mappings under the assumption that |δ| < c and ‖u‖ ≤ e, where c and e are positive constants. Hence h = V_δ(u) is the unique limit of the iterations h_{n+1} = M̃F_δ(h_n) + u, which converge in norm. We compose these iterates to get the nonlinear expansion:

V_δ(u) = u + M̃F_δ(u + M̃F_δ(u + M̃F_δ(u + ··· etc.))).   (3.5)

By Theorems 3.2 and 3.3, in order to study small solutions h, δ of eq. (2.3), it is well to study small solutions of eq. (3.4) and eq. (3.2). Eq. (3.4) is to be solved for u ∈ 𝔑₁(x₀), where 𝔑₁(x₀) is of course a finite dimensional subspace of X. By a known result [ref. 21, p. 28, Th. 1], the annihilator of the range R₁(x₀) is the null space 𝔑₁*(x₀) of the adjoint λ₀I − T′(x₀)*. Thus by choosing bases in 𝔑₁(x₀) and 𝔑₁*(x₀), eq. (3.4) may be regarded as a finite system of nonlinear scalar equations in an equal finite number of scalar unknowns, parameterized by the real scalar δ. Indeed if u₁, ..., u_n and u₁*, ..., u_n* are bases respectively for 𝔑₁(x₀) and 𝔑₁*(x₀), then eq. (3.4) has the representation

u_i* F_δ(V_δ(ξ₁u₁ + ··· + ξ_n u_n)) = 0,   i = 1, 2, ..., n.   (3.6)

This system of nonlinear scalar equations is to be solved for the scalar unknowns ξ_j(δ), j = 1, 2, ..., n. (Note: u_i* ∈ X*, i = 1, ..., n.)
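The whole reduction can be traced in a small finite-dimensional model. The sketch below is a hypothetical example, not from the text: X = R², T(x) = Lx + x³ componentwise with L = diag(2,1), x₀ = θ, λ₀ = 2, so that 𝔑₁ = span{e₁}, R₁ = span{e₂}, and the projections and pseudo-inverse can be read off. It solves the auxiliary equation (3.2) by the iteration of Theorem 3.2 and then evaluates the bifurcation function (3.6):

```python
import numpy as np

L = np.diag([2.0, 1.0])
def T(x):  return L @ x + x**3              # hypothetical operator with T(0) = 0
lam0, x0 = 2.0, np.zeros(2)                 # lam0 is an eigenvalue of T'(x0) = L

# lam0*I - T'(x0) = diag(0, 1): nullspace span{e1}, range span{e2}.
Mt = np.diag([0.0, 1.0])                    # extended pseudo-inverse M~(x0)

def F(delta, h):                            # F_delta(h) = -delta x0 - delta h + R1(x0, h)
    R1 = T(x0 + h) - T(x0) - L @ h          # here R1(0, h) = h**3 componentwise
    return -delta*x0 - delta*h + R1

def V(delta, u):                            # solve eq. (3.2) by iteration (Theorem 3.2)
    h = u.copy()
    for _ in range(100):
        h = Mt @ F(delta, h) + u
    return h

def bif(delta, xi):                         # bifurcation function (3.6), u = xi*e1, u1* = e1
    u = np.array([xi, 0.0])
    return F(delta, V(delta, u))[0]

# In this model the bifurcation equation reduces to -delta*xi + xi**3 = 0,
# so it vanishes on the branch xi**2 = delta and not off it:
xi = 0.2
vals = (bif(xi**2, xi), bif(xi**2, 0.5*xi))
```

The root ξ² = δ recovers the branch x = ξe₁ bifurcating from the trivial solution at λ₀ = 2, i.e., λ = λ₀ + ξ².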

Before proceeding further with solution of eq. (3.4), it is necessary to make it more amenable to calculation. First we must expand F_δ(h) to more terms:

F_δ(h) = −δx₀ − δh + (1/2!) d²T(x₀;h,h) + (1/3!) d³T(x₀;h,h,h) + R₃(x₀,h),   (3.7)

where d²T(x₀;h₁,h₂) and d³T(x₀;h₁,h₂,h₃) are respectively the second and third Fréchet differentials of the nonlinear operator T(x), linear in each argument h_i separately, and symmetric in the arguments h_i, [ref. 15, p. 184]. The remainder R₃(x₀,h) is of course such that

‖R₃(x₀,h)‖/‖h‖³ → 0 as ‖h‖ → 0.

Following R. G. Bartle [ref. 1, p. 370, 373], we now substitute the nonlinear expansion, eq. (3.5), into the expression, eq. (3.7), for F_δ(h) to compute F_δ(V_δ(u)). As a first step, writing h = V_δ(u) = u + M̃F_δ(h) and using the linearity and symmetry of the differentials in their arguments, we obtain

F_δ(V_δ(u)) = −δx₀ − δu − δM̃F_δ(h)
+ (1/2!) d²T(x₀;u,u) + d²T(x₀;u,M̃F_δ(h)) + (1/2!) d²T(x₀;M̃F_δ(h),M̃F_δ(h))
+ (1/3!) d³T(x₀;u,u,u) + (1/2!) d³T(x₀;u,u,M̃F_δ(h)) + (1/2!) d³T(x₀;u,M̃F_δ(h),M̃F_δ(h))
+ (1/3!) d³T(x₀;M̃F_δ(h),M̃F_δ(h),M̃F_δ(h)) + R₃(x₀, u + M̃F_δ(h)).

Continuing to substitute, we replace F_δ(h) in each of the arguments M̃F_δ(h) by the expansion (3.7),

F_δ(h) = −δx₀ − δh + (1/2!) d²T(x₀;h,h) + (1/3!) d³T(x₀;h,h,h) + R₃(x₀,h),

and expand again by multilinearity. Since eq. (3.7) contains no terms in which h appears alone raised to the first power, this process results in terms containing h and δ to successively higher degrees.
So as to estimate the higher order terms, consider eq. (3.2):

h = M̃(x₀)F_δ(h) + u
= M̃(x₀){−δx₀ − δh + (1/2!) d²T(x₀;h,h) + (1/3!) d³T(x₀;h,h,h) + R₃(x₀,h)} + u
= M̃(x₀)F_δ⁰(h) − δM̃(x₀)x₀ + u,

where F_δ⁰(h) = −δh + (1/2!) d²T(x₀;h,h) + (1/3!) d³T(x₀;h,h,h) + R₃(x₀,h). Note that F_δ⁰(θ) = θ and

‖M̃(x₀)‖·‖F_δ⁰(h₁) − F_δ⁰(h₂)‖ ≤ (1/2)‖h₁−h₂‖,   ‖h₁‖ ≤ d₂, ‖h₂‖ ≤ d₂, |δ| < c₂,

where d₂, c₂ are numbers arising in the proof of Theorem 3.2. Thus ‖M̃(x₀)‖·‖F_δ⁰(h)‖ ≤ (1/2)‖h‖, and

‖h‖ ≤ ‖M̃(x₀)‖·‖F_δ⁰(h)‖ + |δ|·‖M̃(x₀)‖·‖x₀‖ + ‖u‖ ≤ (1/2)‖h‖ + c|δ| + ‖u‖, where c = ‖M̃(x₀)‖·‖x₀‖,

or

‖h‖ ≤ 2(c|δ| + ‖u‖),   ‖h‖ ≤ d₂, |δ| ≤ c₂.

Thus to estimate the terms in u and δ which eventually arise, we note that

‖d³T(x₀;h,h,h)‖ ≤ const·‖h‖³ ≤ const·(c|δ| + ‖u‖)³,

and so d³T(x₀;h,h,h) = O((c|δ| + ‖u‖)³), and similarly for other expressions. We propose to keep explicit for the time being only those terms of order up to, but not including, o(|δ|^i ‖u‖^j), i + j = 3. Also

R₃(x₀,h) = (1/4!) d⁴T(x₀+t₀h;h,h,h,h),   0 ≤ t₀ ≤ 1,

so that we do not keep the remainders explicit.

In the important case of bifurcation at the origin, x₀ = θ, so that c = 0.

Lumping the terms that we do not keep explicit, we have
$$\begin{aligned}
F_\delta(V_\delta(u)) = {}& -\delta x_0 - \delta u + \frac{1}{2!}\,d^2T(x_0;u,u) + \frac{1}{3!}\,d^3T(x_0;u,u,u)\\
&- \delta M(x_0)\Bigl\{-\delta x_0 - \delta h + \frac{1}{2!}\,d^2T(x_0;h,h)\Bigr\}\\
&+ \frac{2}{2!}\,d^2T\Bigl(x_0;\,u,\;M\Bigl[-\delta x_0 - \delta h + \frac{1}{2!}\,d^2T(x_0;h,h)\Bigr]\Bigr)\\
&+ \frac{1}{2!}\,d^2T\bigl(x_0;\,M[\,\cdots],\;M[\,\cdots]\bigr) + \frac{3}{3!}\,d^3T\bigl(x_0;\,u,\,u,\;M[\,\cdots]\bigr)\\
&+ \frac{3}{3!}\,d^3T\bigl(x_0;\,u,\;M[\,\cdots],\;M[\,\cdots]\bigr) + \frac{1}{3!}\,d^3T\bigl(x_0;\,M\{-\delta x_0\},\,M\{-\delta x_0\},\,M\{-\delta x_0\}\bigr)\\
&+ \sum_{i+j=3}\omega_{ij}(\delta,u),\qquad\text{where } \omega_{ij}(\delta,u) = o\bigl(|\delta|^i\|u\|^j\bigr),
\end{aligned}$$
and $[\,\cdots]$ abbreviates the bracket $\bigl[-\delta x_0 - \delta h + \frac{1}{2!}d^2T(x_0;h,h)\bigr]$ displayed above.

$$\begin{aligned}
= {}& -\delta x_0 - \delta u + \frac{1}{2!}\,d^2T(x_0;u,u) + \frac{1}{3!}\,d^3T(x_0;u,u,u)\\
&+ \delta^2Mx_0 + \delta^2Mh - \frac{\delta}{2!}\,M\,d^2T(x_0;h,h) - \delta\,d^2T(x_0;u,Mx_0)\\
&- \delta\,d^2T(x_0;u,Mh) + \frac{1}{2!}\,d^2T\bigl(x_0;u,\,M\,d^2T(x_0;h,h)\bigr)\\
&+ \frac{\delta^2}{2!}\,d^2T(x_0;Mx_0,Mx_0) + \delta^2\,d^2T(x_0;Mh,Mx_0) - \frac{\delta}{2}\,d^2T\bigl(x_0;Mx_0,\,M\,d^2T(x_0;h,h)\bigr)\\
&- \frac{\delta}{2}\,d^3T(x_0;u,u,Mx_0) + \frac{\delta^2}{2}\,d^3T(x_0;u,Mx_0,Mx_0) - \frac{\delta^3}{3!}\,d^3T(x_0;Mx_0,Mx_0,Mx_0) + \sum_{i+j=3}\omega_{ij}(\delta,u).
\end{aligned}$$

Again, substituting $h = u + M(x_0)F_\delta(h)$ and absorbing into remainders $\omega_{ij}$ every term that is $o(|\delta|^i\|u\|^j)$ for some pair $i,j$ with $1 \le i+j \le 3$, we finally get

$$F_\delta(V_\delta(u)) = -\delta x_0 - \delta\bigl[u + d^2T(x_0;u,Mx_0)\bigr] + \frac{1}{2!}\,d^2T(x_0;u,u) + \frac{1}{3!}\,d^3T(x_0;u,u,u) + \sum_{i+j=1}^{3}\omega_{ij}(\delta,u),$$

where of course $\omega_{ij}(\delta,u) = o(|\delta|^i\|u\|^j)$.

Again, if $u_1,\dots,u_n$ and $u_1^*,\dots,u_n^*$ are bases respectively for the null spaces $\mathfrak{N}_1(x_0)$ and $\mathfrak{N}_1^*(x_0)$, then the bifurcation equation, eq. (3.4) or eq. (3.6), becomes, with $u = \sum_{\ell=1}^{n}\xi_\ell u_\ell$:

$$\begin{gathered}
-\delta u_k^*x_0 - \delta\sum_{\ell=1}^{n}\xi_\ell\,u_k^*\bigl[u_\ell + d^2T(x_0;u_\ell,Mx_0)\bigr]
+ \frac{1}{2!}\,u_k^*\sum_{\ell_1=1}^{n}\sum_{\ell_2=1}^{n}\xi_{\ell_1}\xi_{\ell_2}\,d^2T(x_0;u_{\ell_1},u_{\ell_2})\\
+ \frac{1}{3!}\,u_k^*\sum_{\ell_1=1}^{n}\sum_{\ell_2=1}^{n}\sum_{\ell_3=1}^{n}\xi_{\ell_1}\xi_{\ell_2}\xi_{\ell_3}\,d^3T(x_0;u_{\ell_1},u_{\ell_2},u_{\ell_3})
+ \sum_{i+j=1}^{3}u_k^*\,\omega_{ij}(\delta,u) = 0,\qquad k = 1,2,\dots,n,\\
\|u_k\| = \|u_k^*\| = 1.
\end{gathered}\tag{3.8}$$

This representation of the bifurcation equation has validity, of course, for those values of $\delta,\xi_\ell$ such that our iterative solution $V_\delta$ of eq. (3.2) has validity, i.e., for $|\delta| \le c$, $|\xi_\ell| \le e$, where $c,e$ are constants occurring in Theorem 3.2.


4. Solution of the Bifurcation Equation in the Case n = 1; Bifurcation at the Origin.

Eq. (3.8) represents $n$ equations to be solved for the $n$ scalar unknowns $\xi_1,\dots,\xi_n$. If solution can be accomplished, then by Theorem 3.3, letting $u = \sum_{k=1}^{n}\xi_ku_k$ in eq. (3.2), we produce solutions of eq. (2.1) in a neighborhood of the exceptional point $(x_0,\lambda_0)$. Here, of course, the elements $u_1,\dots,u_n$ are assumed to be a basis for the null space $\mathfrak{N}_1(x_0)$ of the operator $\lambda_0I - T'(x_0)$.

The solution of eqs. (3.8) presents much difficulty in the cases $n > 1$, although attempts have been and are being made to study these cases, [refs. 3, 12 and 48]. For the present, we here confine the discourse to the case $n = 1$.

If we assume the null spaces $\mathfrak{N}_1(x_0)$ and $\mathfrak{N}_1^*(x_0)$ to be one dimensional, and to be spanned by the elements $u_1$ and $u_1^*$ respectively, i.e., $T'(x_0)u_1 = \lambda_0u_1$, $T'(x_0)^*u_1^* = \lambda_0u_1^*$, then eq. (3.8) simplifies to the following single equation in the scalar unknown $\xi_1$ (note: $u_1 \in X$, $u_1^* \in X^*$):

$$-\delta u_1^*x_0 - \delta\xi_1u_1^*\bigl[u_1 + d^2T(x_0;u_1,Mx_0)\bigr] + \frac{\xi_1^2}{2!}\,u_1^*d^2T(x_0;u_1,u_1) + \frac{\xi_1^3}{6}\,u_1^*d^3T(x_0;u_1,u_1,u_1) + \sum_{i+j=1}^{3}u_1^*\,\omega_{ij}(\delta,\xi_1) = 0,\tag{4.1}$$

where $\omega_{ij}(\delta,\xi_1) = o(|\delta|^i|\xi_1|^j)$ and $\|u_1\| = \|u_1^*\| = 1$.
For convenience we write eq. (4.1) as follows:

$$\delta\bigl[a_1 + \psi_1(\delta,\xi_1)\bigr] + \xi_1\delta\bigl[a_2 + \psi_2(\delta,\xi_1)\bigr] + \xi_1^2\bigl[a_3 + \psi_3(\delta,\xi_1)\bigr] + \xi_1^3\bigl[a_4 + \psi_4(\delta,\xi_1)\bigr] = 0,\tag{4.2}$$

where
$$a_1 = -u_1^*x_0,\qquad a_2 = -u_1^*\bigl[u_1 + d^2T(x_0;u_1,Mx_0)\bigr],\qquad a_3 = \frac{1}{2!}\,u_1^*d^2T(x_0;u_1,u_1),\qquad a_4 = \frac{1}{6}\,u_1^*d^3T(x_0;u_1,u_1,u_1),$$
and we have defined
$$\psi_1(\delta,\xi_1) = \frac{u_1^*(\omega_{10}+\omega_{20}+\omega_{30})}{\delta},\qquad \psi_2(\delta,\xi_1) = \frac{u_1^*(\omega_{11}+\omega_{21}+\omega_{12})}{\delta\xi_1},\qquad \psi_3(\delta,\xi_1) = \frac{u_1^*\omega_{02}}{\xi_1^2},\qquad \psi_4(\delta,\xi_1) = \frac{u_1^*\omega_{03}}{\xi_1^3},$$
and where $\psi_i(\delta,\xi_1) \to 0$ as $\delta,\xi_1 \to 0$, $i = 1,2,3,4$.

Eq. (4.2) is of the form

$$\sum_{i=1}^{m}\delta^{\alpha_i}\xi_1^{\beta_i}\bigl[a_i + \psi_i(\delta,\xi_1)\bigr] = 0\tag{4.3}$$

with $m = 4$; $\alpha_1 = 1,\ \beta_1 = 0$; $\alpha_2 = 1,\ \beta_2 = 1$; $\alpha_3 = 0,\ \beta_3 = 2$; $\alpha_4 = 0,\ \beta_4 = 3$.

Equations such as eq. (4.3) were treated by the method of the Newton Polygon by J. Dieudonné [ref. 8] and R. G. Bartle [ref. 1, p. 376]. In each of these studies it was necessary to assume that, among those terms in eq. (4.3) with $a_i \neq 0$, $\min_{1\le i\le m}\alpha_i = \min_{1\le i\le m}\beta_i = 0$. Thus with the exponents listed for eq. (4.2), we should want $a_1 \neq 0$, and $a_3 \neq 0$ or $a_4 \neq 0$, in order to apply the Newton Polygon method as developed by these two authors.

We now take up the study of bifurcation at the origin. In eq. (2.1) we take $f = \theta$, so that we consider now the eigenfunctions of the nonlinear operator $T(x)$. In other words, from now on we interest ourselves in the eigenvalue problem $\lambda x = T(x)$, where $T(\theta) = \theta$, the null element. There exists the trivial solution $x = \theta$, and we have the problem of determining those real values of $\lambda$ such that nontrivial solutions exist. The pair $(\theta,\lambda_0)$ is a solution pair of eq. (2.1); if it happens to be an exceptional point as defined in connection with Theorem 2.4, then we have the problem of bifurcation at the origin.

It is convenient at this time also to assume that the nonlinear operator $T(x)$ is odd: $T(-x) = -T(x)$. Thus

H-2: $T(x)$ is an odd, thrice differentiable operator.

With odd operators we have the following result:

Theorem 4.1: Let the nonlinear operator $T(x)$ satisfy H-2. Then $T'(x)$ is an even operator: $T'(-x) = T'(x)$, and $T''(x)$ is an odd operator: $T''(-x) = -T''(x)$.


Proof: By a known result [ref. 15, p. 185] the weak derivative exists, and we have
$$T'(-x)h = \lim_{t\to0}\frac{1}{t}\bigl[T(-x+th) - T(-x)\bigr] = \lim_{t\to0}\frac{1}{t}\bigl[T(x) - T(x-th)\bigr] = T'(x)h.$$
This shows the evenness of $T'(x)$. Again,
$$T''(-x)h_1h_2 = d^2T(-x;h_1,h_2) = \lim_{t\to0}\frac{1}{t}\bigl[T'(-x+th_1) - T'(-x)\bigr]h_2 = \lim_{t\to0}\frac{1}{t}\bigl[T'(x-th_1) - T'(x)\bigr]h_2 = -T''(x)h_1h_2,$$
or $T''(-x) = -T''(x)$, which shows the oddness of $T''(x)$. Here, of course, $x,h,h_1,h_2 \in X$. This ends the proof.
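These parities are easy to check numerically for a concrete odd map. The scalar example below, $T(x) = x + x^3$, is our own illustration (not from the text) and approximates $T'$ and $T''$ by central differences:

```python
# Finite-difference check that for the odd map T(x) = x + x^3,
# T' is even and T'' is odd (Theorem 4.1 in the scalar case).
def T(x):
    return x + x**3

def Tp(x, eps=1e-5):          # first derivative via central difference
    return (T(x + eps) - T(x - eps)) / (2 * eps)

def Tpp(x, eps=1e-4):         # second derivative via central difference
    return (T(x + eps) - 2 * T(x) + T(x - eps)) / eps**2

for x in (0.3, 1.1, 2.0):
    assert abs(Tp(-x) - Tp(x)) < 1e-6      # T'(-x) = T'(x): even
    assert abs(Tpp(-x) + Tpp(x)) < 1e-3    # T''(-x) = -T''(x): odd
print("parity checks passed")
```

The same finite-difference probe works for any odd operator one suspects of satisfying H-2.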

Just as the oddness of $T(x)$ implies that $T(\theta) = \theta$, so does the oddness of $T''(x)$ imply that $T''(\theta)h_1h_2 = d^2T(\theta;h_1,h_2) = \theta$, $h_1,h_2 \in X$. By Theorem 4.1, eq. (4.1) appears as follows:

$$-\delta\xi_1u_1^*u_1 + \frac{\xi_1^3}{6}\,u_1^*d^3T(\theta;u_1,u_1,u_1) + \sum_{i+j=1}^{3}u_1^*\,\omega_{ij}(\delta,\xi_1) = 0.$$

With $x_0 = \theta$, we also discern that $R_3(\theta,h) = O(\|u\|^4)$; this, and other terms in the expansion of $F_\delta(V_\delta(u))$ which vanish, imply that $\omega_{10} = \omega_{20} = \omega_{30} = \omega_{02} = 0$. Hence the bifurcation equation in the case $x_0 = \theta$ can be written as follows:

$$\xi_1\delta\bigl[a_2 + \psi_2(\delta,\xi_1)\bigr] + \xi_1^3\bigl[a_4 + \psi_4(\delta,\xi_1)\bigr] = 0.\tag{4.4}$$

The coefficients $a_2,a_4$ and the functions $\psi_2(\delta,\xi_1)$, $\psi_4(\delta,\xi_1)$ are as defined in connection with eq. (4.2). It is seen that $\xi_1$ can be cancelled in eq. (4.4).
At this point we explain the manner in which J. Dieudonné treated equations such as (4.2), (4.3) and (4.4). Let $f(\delta,\xi_1)$ be the left-hand side of either eqs. (4.2), (4.3) or (4.4). If the function $\varphi(\delta)$ solves the equation $f(\delta,\xi_1) = 0$, i.e., $f(\delta,\varphi(\delta)) \equiv 0$, in the neighborhood of $(0,0)$, $(f(0,0) = 0)$, then $\varphi(\delta) \sim t\delta^{\mu}$, where $-1/\mu$ is the slope of one of the sides of the Newton Polygon, and $t$ is a real root of the equation $\sum_k a_kt^{\beta_k} = 0$. Here $k$ runs over the indices of the points on the side of the polygon of which $-1/\mu$ is the slope, [ref. 8, p. 90]. Conversely, to a given side and slope of the Newton Polygon, there may correspond a solution in the small, $\varphi(\delta) \sim t\delta^{\mu}$, of $f(\delta,\xi_1) = 0$.

FIG. 4.1. The Newton Polygon ($\beta$ exponents plotted against $\alpha$ exponents).

For simplicity let us take the case where there is only one side, of slope $-1/\mu_1$, of the Newton Polygon. Put $\xi_1 = \eta\delta^{\mu_1}$ in $f(\delta,\xi_1) = 0$. After division by $\delta^{\alpha_1+\mu_1\beta_1}$, we have, (cf. eq. (4.3)):

$$\sum_{i=1}^{m}\delta^{\alpha_i+\mu_1\beta_i-\alpha_1-\mu_1\beta_1}\,\eta^{\beta_i}\bigl[a_i + \psi_i(\delta,\eta\delta^{\mu_1})\bigr] = 0.$$

Noting now that $\alpha_k + \mu_1\beta_k - \alpha_1 - \mu_1\beta_1 = 0$ for all points on the single side of the Newton Polygon, we have

$$\sum_k \eta^{\beta_k}\bigl[a_k + \psi_k(\delta,\eta\delta^{\mu_1})\bigr] + \sum_i \delta^{\alpha_i+\mu_1\beta_i-\alpha_1-\mu_1\beta_1}\,\eta^{\beta_i}\bigl[a_i + \psi_i(\delta,\eta\delta^{\mu_1})\bigr] = 0,\tag{4.5}$$

where the second sum is over all the remaining points.

Now let $t_0$ be a real root of the equation $\sum_k a_kt^{\beta_k} = 0$, of multiplicity $q$. Then eq. (4.5) can be written as follows:

$$(\eta - t_0)^q = \delta^{k}F(\delta,\eta),\qquad k > 0,$$

where $F(\delta,\eta)$ is continuous and tends to $b \neq 0$ as $(\delta,\eta) \to (0,t_0)$. If the derivatives $\partial\psi_i/\partial\xi_1$ exist and are continuous near $(0,0)$, then $\partial F/\partial\eta$ exists and is continuous near $(0,t_0)$. If $q$ is even and $b > 0$, we may write

$$\eta - t_0 = \pm\,\delta^{k/q}\bigl[F(\delta,\eta)\bigr]^{1/q}.$$

Either branch may be solved for $\eta$ in terms of $\delta$ by using the ordinary Implicit Function Theorem [ref. 11, p. 138], since the Jacobian is nonvanishing for small $\delta$. If $b < 0$ there is no real solution. On the other hand, if the multiplicity of $t_0$ is odd, we may write

$$\eta - t_0 = \delta^{k/q}\bigl[F(\delta,\eta)\bigr]^{1/q}.$$

This one real branch can then be uniquely solved for any real $b$, again using the Implicit Function Theorem.
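The polygon bookkeeping itself is mechanical enough to automate: plot the exponent pairs $(\alpha_i,\beta_i)$ of the terms with $a_i \neq 0$ and take the lower convex boundary; a side of slope $-1/\mu$ suggests the trial substitution $\xi_1 = \eta\delta^{\mu}$. The helper below is our own sketch, not part of the text:

```python
# Sketch: lower-convex-boundary sides of a Newton polygon from exponent pairs.
def newton_polygon_slopes(points):
    """points: list of (alpha, beta) exponent pairs with nonzero coefficient.
    Returns the slopes of the sides of the lower boundary."""
    best = {}
    for a, b in points:                      # keep the smallest beta per alpha
        if a not in best or b < best[a]:
            best[a] = b
    pts = sorted(best.items())
    hull = []                                # Graham-scan style lower hull
    for p in pts:
        while len(hull) >= 2:
            (a1, b1), (a2, b2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (b2 - b1) * (p[0] - a1) >= (p[1] - b1) * (a2 - a1):
                hull.pop()
            else:
                break
        hull.append(p)
    return [(h2[1] - h1[1]) / (h2[0] - h1[0]) for h1, h2 in zip(hull, hull[1:])]

# Exponents of eq. (4.4): delta*xi has (alpha, beta) = (1, 1), xi^3 has (0, 3):
print(newton_polygon_slopes([(1, 1), (0, 3)]))   # one side of slope -2, so mu = 1/2
```

Applied to eq. (4.7) below, `newton_polygon_slopes([(1, 1), (0, 2)])` likewise returns the single slope `-1`, matching the substitution $\xi_1 = \eta\delta$ used there.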

We now use the method of Dieudonné to prove the following result:

Theorem 4.2: Under H-1, H-2 and the supposition that $(\theta,\lambda_0)$ is an exceptional point of the nonlinear operator $T(x)$, (i.e., $\lambda_0I - T'(\theta)$ has no bounded inverse, or $\lambda_0 \in P\sigma(T'(\theta))$), there exist two nontrivial solution branches $x^{\pm}(\lambda)$ of the equation $T(x) = \lambda x$, consisting of eigenfunctions of $T(x)$, which bifurcate from the trivial solution $x = \theta$ at the bifurcation point $\lambda = \lambda_0$. The two branches differ only in sign. If $a_2a_4 < 0$, the two branches exist only for $\lambda > \lambda_0$ and bifurcation is said to be to the right; if $a_2a_4 > 0$, the two branches exist only for $\lambda < \lambda_0$ and the bifurcation is said to be to the left. These branches exist at least in a small neighborhood of $\lambda_0$, and $\|x^{\pm}(\lambda)\| \to 0$ as $\lambda \to \lambda_0$.

Proof: We start with eq. (4.4). Clearly $\xi_1 = 0$ is a solution of eq. (4.4) for $\delta > 0$ or $\delta < 0$. Thus $u = \xi_1u_1 = \theta$. Insertion of $u = \theta$ and $\pm\delta \neq 0$ in eq. (3.2) leads to the trivial solution.

Next, if we suppose $\xi_1 \neq 0$, it may be cancelled in eq. (4.4). There remains an equation in $\delta$ and $\xi_1^2$ which possesses a Newton Polygon with one side and slope $-2$. Assuming at first that $\delta > 0$, we put $\xi_1 = \eta\delta^{1/2}$. After canceling $\delta$, we get

$$\bigl[a_2 + \psi_2(\delta,\eta\delta^{1/2})\bigr] + \eta^2\bigl[a_4 + \psi_4(\delta,\eta\delta^{1/2})\bigr] = 0.\tag{4.6}$$

FIG. 4.2. The Newton Polygon of eq. (4.4).
Solution of the leading part, $a_2 + a_4\eta^2 = 0$, leads to $\eta_{1,2} = \pm\sqrt{-a_2/a_4}$. This represents two real solutions of unit multiplicity if and only if $a_2a_4 < 0$. Then with these roots we can rewrite eq. (4.6) as follows:

$$(\eta - \eta_1)(\eta - \eta_2) = -\frac{1}{a_4}\bigl[\psi_2(\delta,\eta\delta^{1/2}) + \eta^2\psi_4(\delta,\eta\delta^{1/2})\bigr] = \Lambda(\delta,\eta\delta^{1/2}) \to 0\ \text{ as }\ \delta \to 0,\ \delta > 0.$$

Since $\Lambda(\delta,\eta\delta^{1/2})$ is differentiable with respect to $\eta$, we can solve the two equations

$$\eta = \eta_1 + \frac{\Lambda(\delta,\eta\delta^{1/2})}{\eta - \eta_2},\qquad \eta = \eta_2 + \frac{\Lambda(\delta,\eta\delta^{1/2})}{\eta - \eta_1}$$

uniquely for $\eta$ as a function of $\delta > 0$, employing the Implicit Function Theorem for real functions [ref. 11, p. 138], in a sufficiently small neighborhood. We get two real functions $\eta^{\pm}(\delta)$ for small $\delta > 0$, one tending to $\eta_1$ as $\delta \to 0$, the other to $\eta_2$. Through the relation $\xi_1 = \eta\delta^{1/2}$ there result two real curves $\xi_1^{\pm}(\delta)$ which, when substituted as $\xi_1,\delta$ pairs in eq. (3.2) with $u = \xi_1u_1$, provide two real solutions $x^{\pm}(\lambda)$ of $T(x) = \lambda x$ for $\lambda$ near $\lambda_0$.

Clearly since $\xi_1 = \eta\delta^{1/2}$, $\delta > 0$, we see that $\xi_1^{\pm} \to 0$ as $\delta \to 0$, and thus $\|x^{\pm}(\lambda)\| \to 0$ as $\lambda \to \lambda_0$. Moreover, because the use of the Implicit Function Theorem above implies a uniqueness property, and because the Newton Polygon has only one side, there are no other solutions of $T(x) = \lambda x$ such that $\|x\| \to 0$ as $\lambda \to \lambda_0$, $\lambda > \lambda_0$, for $a_2a_4 < 0$. By the oddness of $T(x)$, if $x(\lambda)$ is a solution of $T(x) = \lambda x$, so also is $-x(\lambda)$. Thus the two solution branches differ only in sign.

FIG. 4.3. The two branches for $a_2a_4 < 0$, $\delta > 0$: bifurcation to the right.

For $\delta < 0$, we substitute $\delta = -|\delta|$ into eq. (4.4). Then we put $\xi_1 = \eta|\delta|^{1/2}$ and cancel $\xi_1 \neq 0$ for nontrivial solutions. Solution of the leading part, $-a_2 + a_4\eta^2 = 0$, now leads to $\eta_{1,2} = \pm\sqrt{a_2/a_4}$. There exist two real roots of unit multiplicity if and only if $a_2a_4 > 0$. The remainder of the analysis proceeds in exactly the same way as with the case $\delta > 0$.

FIG. 4.4. The two branches for $a_2a_4 > 0$: bifurcation to the left.

We have two mutually exhaustive situations represented here. If $a_2a_4 < 0$ we have produced exactly two real nontrivial solutions for $\delta > 0$, while for $\delta = -|\delta| < 0$ we have seen there are no real solutions. Likewise if $a_2a_4 > 0$ we have seen that there are no real solutions for $\delta > 0$ (the leading part in eq. (4.6), namely $a_2 + a_4\eta^2 = 0$, has no real roots), while for $\delta = -|\delta| < 0$ we have produced exactly two real nontrivial solutions. This ends the proof of Theorem 4.2.

Of course the hypotheses of Theorem 4.2 are unnecessarily stringent. Since the theorem furnishes branch solutions of the equation $T(x) = \lambda x$ only near $\lambda = \lambda_0$, and since the bifurcation equation, eq. (4.1), is valid only with a restriction on $\xi_1$: $|\xi_1| \le e$, where $e$ is a constant in Theorem 3.2, we see that Theorem 4.2 is really only a local theorem. In its statement we need only assume oddness of $T(x)$ in a neighborhood of the origin $x = \theta$.

In writing eq. (4.4), part of the assumption of oddness was that $d^2T(\theta;h_1,h_2) = \theta$, $h_1,h_2 \in X$. This leads us to our next theorem, which we include for completeness:

Theorem 4.3: Under H-1 but not H-2, and with the supposition that $d^2T(\theta;h_1,h_2) \neq \theta$, the two branches of eigenfunctions of the operator $T(x)$ $(T(\theta) = \theta)$ which bifurcate from the trivial solution $x_0 = \theta$ at the bifurcation point $\lambda = \lambda_0$ exist, one on each side of $\lambda_0$. These branches exist at least in a small neighborhood of $\lambda = \lambda_0$, and $\|x(\lambda)\| \to 0$ as $\lambda \to \lambda_0$, $\lambda \gtrless \lambda_0$.
Proof: In this case, eq. (4.1) is written as follows:

$$-\delta\xi_1u_1^*u_1 + \frac{\xi_1^2}{2!}\,u_1^*d^2T(\theta;u_1,u_1) + \frac{\xi_1^3}{6}\,u_1^*d^3T(\theta;u_1,u_1,u_1) + \sum_{i+j=1}^{3}u_1^*\,\omega_{ij}(\delta,\xi_1) = 0.$$

Since we can show again that $\omega_{10} = \omega_{20} = \omega_{30} = 0$, the bifurcation equation may be put in the form (compare with eq. (4.2)):

$$\xi_1\delta\bigl[a_2 + \psi_2(\delta,\xi_1)\bigr] + \xi_1^2\bigl[a_3 + \psi_3(\delta,\xi_1)\bigr] + \xi_1^3\bigl[a_4 + \psi_4(\delta,\xi_1)\bigr] = 0,$$

with $a_2, a_3, a_4$ and $\psi_2(\delta,\xi_1)$, $\psi_3(\delta,\xi_1)$, $\psi_4(\delta,\xi_1)$ as defined in connection with eq. (4.2). By putting $\bar\psi_3(\delta,\xi_1) = \psi_3(\delta,\xi_1) + \xi_1\bigl[a_4 + \psi_4(\delta,\xi_1)\bigr]$, we may also write this bifurcation equation in the form, [ref. 8, p. 90]:

$$\xi_1\delta\bigl[a_2 + \psi_2(\delta,\xi_1)\bigr] + \xi_1^2\bigl[a_3 + \bar\psi_3(\delta,\xi_1)\bigr] = 0.\tag{4.7}$$

After cancellation of $\xi_1 \neq 0$ ($\xi_1 = 0$ leads to the trivial solution), eq. (4.7) is an equation in $\delta,\xi_1$ with a one-sided Newton Polygon with slope $-1$. Putting $\xi_1 = \eta\delta$ we have, for $\delta \gtrless 0$,

$$\bigl[a_2 + \psi_2(\delta,\eta\delta)\bigr] + \eta\bigl[a_3 + \bar\psi_3(\delta,\eta\delta)\bigr] = 0.\tag{4.8}$$

FIG. 4.5. The Newton Polygon of eq. (4.7).

The leading part, $a_2 + a_3\eta = 0$, has the single root $\eta_0 = -a_2/a_3$, regardless of the sign of $a_2a_3$. Then eq. (4.8) is put into the form

$$\eta - \eta_0 = -\frac{1}{a_3}\bigl[\psi_2(\delta,\eta\delta) + \eta\,\bar\psi_3(\delta,\eta\delta)\bigr] = \Lambda(\delta,\eta\delta) \to 0\ \text{ as }\ \delta \to 0.\tag{4.9}$$

Since $\Lambda(\delta,\eta\delta)$ is differentiable with respect to $\eta$ in a neighborhood of $\eta = \eta_0$, we can employ the Implicit Function Theorem for real functions [ref. 11, p. 138] to produce a solution $\eta(\delta)$ of eq. (4.9) whether $\delta > 0$ or $\delta < 0$.

FIG. 4.6. The single branch crossing $\lambda_0$: one side for $\delta > 0$, one for $\delta < 0$.

Through the relationship $\xi_1 = \eta\delta$, we have a unique real function $\xi_1(\delta)$ for $\delta \gtrless 0$ which, when substituted as $\xi_1,\delta$ pairs in eq. (3.2), provides unique small solutions $x(\lambda)$ of $T(x) = \lambda x$ for $\lambda$ near $\lambda_0$. Moreover, since $\xi_1(\delta) \to 0$ as $\delta \to 0$, we see by means of eq. (3.2) that $\|x(\lambda)\| \to 0$ as $\lambda \to \lambda_0$. This ends the proof.
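A minimal illustration of this picture (our own toy example, not from the text) is the scalar equation $\lambda x = x + x^2$, i.e., $K \equiv 1$ and $f(x) = x + x^2$, for which the second differential at the origin does not vanish: the nontrivial solution $x(\lambda) = \lambda - 1$ crosses the bifurcation point $\lambda_0 = 1$, one sign of $x$ on each side, with $x(\lambda) \to 0$ from either side.

```python
# Toy scalar equation lambda*x = x + x^2 with f not odd: the nontrivial
# branch x(lambda) = lambda - 1 exists on BOTH sides of lambda0 = 1,
# as in Theorem 4.3.
def residual(lam, x):
    return x + x**2 - lam * x

for lam in (0.9, 0.99, 1.01, 1.1):
    x = lam - 1.0                      # the nontrivial branch
    assert abs(residual(lam, x)) < 1e-12
    assert abs(x) > 0                  # nontrivial on both sides of lambda0
print("branch exists on both sides of lambda0 = 1")
```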

To end the present section of these notes we present two rather simple examples which might be somewhat illustrative of the foregoing methods.

Example 1: Let us solve the integral equation

$$\lambda\varphi(s) = \int_0^{2\pi}\sin s\,\sin t\,\bigl[\varphi(t) + \varphi^2(t)\bigr]\,dt$$

of quadratic type with first rank kernel.
Eq. (2.3) is represented in this case as follows:

$$\lambda_0h - \int_0^{2\pi}\sin s\,\sin t\,\bigl[1 + 2\varphi_0(t)\bigr]h(t)\,dt = -\delta\varphi_0 - \delta h + \int_0^{2\pi}\sin s\,\sin t\;h^2(t)\,dt = F_\delta(h),$$

where we assume that $\lambda_0,\varphi_0$ is a solution of the problem. There is no remainder term.

The linearized problem

$$\lambda h - \int_0^{2\pi}\sin s\,\sin t\,\bigl[1 + 2\varphi_0(t)\bigr]h(t)\,dt = 0$$

has an eigenvalue $\lambda_0 = \int_0^{2\pi}\sin^2t\,\bigl[1 + 2\varphi_0(t)\bigr]\,dt$ and the normalized eigenfunction $\sin s$. Since the kernel is symmetric, $\lambda_0$ has Riesz index unity [ref. 21, p. 342, Th. 18], where we assume that our Banach space is $L^2(0,2\pi)$. The null space $\mathfrak{N}$ of the operator $\lambda_0I - \int_0^{2\pi}\sin s\,\sin t\,[1 + 2\varphi_0(t)]\,\cdot\,dt$ is one dimensional. Let $E$ project on $\mathfrak{N}$; then since the Riesz index is unity, $I - E$ projects on the range $R$, [ref. 19, p. 183, p. 217].

Let us first treat bifurcation at the origin, i.e., let $\varphi_0 \equiv 0$. Eq. (3.2) is represented as follows:

$$h = MF_\delta(h) + \xi\sin s = M(I-E)\Bigl[-\delta h + \int_0^{2\pi}\sin s\,\sin t\;h^2(t)\,dt\Bigr] + \xi\sin s = M\bigl\{-\delta(I-E)h + 0\bigr\} + \xi\sin s,$$

where $\xi$ is a scalar. The iterations indicated in Theorem 3.1 are trivial, so that the solution is $h = V_\delta(u) = \xi\sin s$. When this is substituted in eq. (3.4), we have

$$EF_\delta(V_\delta(u)) = EF_\delta(\xi\sin s) = E\Bigl\{-\delta\xi\sin s + \xi^2\sin s\int_0^{2\pi}\sin^3t\,dt\Bigr\} = -\delta\xi\sin s = 0.$$

As solutions of the bifurcation equation, either $\delta \neq 0$, $\xi = 0$ or $\delta = 0$, $\xi \neq 0$ are possible. The former is the trivial solution, which leads to $h \equiv 0$. The latter is nontrivial and leads to a branch of eigensolutions of the nonlinear problem in a neighborhood of $\varphi_0 \equiv 0$, namely $\varphi = \xi\sin s$, where $\xi$ is a completely arbitrary scalar. This branch of eigenfunctions exists only for the single value $\lambda = \lambda_0$.

Thus in this case, a nonlinear operator possesses an eigenvalue $\lambda_0$, and an associated eigenspace: that space which is spanned by $\sin s$. In this respect it is like a linear problem.

In this example the bifurcation equation at the origin yields solutions valid in the large, because the iteration process for eq. (3.2) is finite in duration and the local requirements of Theorem 3.2 are not necessary.
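The eigenspace claim is easy to confirm by quadrature: for any scalar $\xi$, $\varphi = \xi\sin s$ is reproduced by the quadratic operator with the single factor $\lambda_0 = \pi$, since $\int_0^{2\pi}\sin^3t\,dt = 0$. A numerical sketch (the grid size is our own choice):

```python
import numpy as np

# T(phi)(s) = sin(s) * integral_0^{2pi} sin(t)*(phi(t) + phi(t)^2) dt.
# For phi = xi*sin, the quadratic part integrates to zero, so T(phi) = pi*phi.
n = 200000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dt = 2.0 * np.pi / n                              # rectangle rule; exact-to-machine
                                                  # precision for periodic integrands
for xi in (0.5, 2.0, -3.0):
    phi = xi * np.sin(t)
    integral = np.sum(np.sin(t) * (phi + phi**2)) * dt
    assert abs(integral / xi - np.pi) < 1e-6      # T(phi) = pi * phi for every xi
print("phi = xi*sin(s) is an eigenfunction with lambda0 = pi for every xi")
```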

Example 2: Now let us deal similarly with the cubic equation:

$$\lambda\varphi(s) = \int_0^{2\pi}\sin s\,\sin t\,\bigl[\varphi(t) + \varphi^3(t)\bigr]\,dt,$$

for which the linearization at the origin again has the single eigenvalue $\lambda_0 = \int_0^{2\pi}\sin^2t\,dt = \pi$ of unit Riesz index, and the eigenfunction $\sin s$. Again let $E$ project on the one dimensional null space $\mathfrak{N}$ spanned by $\sin s$. Then with $\varphi_0 \equiv 0$, we have eq. (3.2) represented as follows:

$$h = MF_\delta(h) + \xi\sin s = M(I-E)\Bigl[-\delta h + \int_0^{2\pi}\sin s\,\sin t\;h^3(t)\,dt\Bigr] + \xi\sin s = M\bigl\{-\delta(I-E)h + 0\bigr\} + \xi\sin s.$$

Again the iteration process is trivial, terminating with the solution $h = V_\delta(u) = \xi\sin s$. The bifurcation equation (eq. (3.4)) becomes

$$EF_\delta(\xi\sin s) = E\Bigl\{-\delta\xi\sin s + \xi^3\sin s\int_0^{2\pi}\sin^4t\,dt\Bigr\} = \Bigl(-\delta\xi + \frac{3\pi}{4}\xi^3\Bigr)\sin s = 0,$$

whence either $\xi = 0$, $\delta$ arbitrary, or $-\delta + \frac{3\pi}{4}\xi^2 = 0$. The former possibility leads to the trivial solution $\varphi \equiv 0$; the latter case is that of a right-facing parabolic curve. We are led to a branch of nontrivial eigenfunctions parametrized by $\lambda$: $\varphi = \pm\sqrt{\tfrac{4(\lambda-\lambda_0)}{3\pi}}\,\sin s$. Thus the eigenfunctions in this example do not form a linear space, as in linear problems and the previous nonlinear example. Rather they form a nonlinear manifold, which has been given the name: "continuous branch."

Again, since the iterations are trivial, the sort of local restrictions imposed in Theorem 3.2 do not apply, and we have produced here solutions in the large. Are they the only solutions?

Further solutions could be produced if there were bifurcation on the continuous branch at some point other than the origin. At $\lambda_1 > \lambda_0 = \pi$, the continuous branch yields the eigenfunction $\varphi_1 = \sqrt{\tfrac{4(\lambda_1-\lambda_0)}{3\pi}}\,\sin s$; if this is a bifurcation point, the operator $\lambda_1I - \int_0^{2\pi}\sin s\,\sin t\,\bigl[1 + \tfrac{4(\lambda_1-\lambda_0)}{\pi}\sin^2t\bigr]\,\cdot\,dt$ is singular. It can be seen however that the linearized problem at $\lambda_1$, $\varphi_1$, namely

$$\lambda h - \int_0^{2\pi}\sin s\,\sin t\,\Bigl[1 + \frac{4(\lambda_1-\lambda_0)}{\pi}\sin^2t\Bigr]h(t)\,dt = 0,$$

has $\lambda = \lambda_1 + 2(\lambda_1 - \lambda_0) > \lambda_1$ as its only eigenvalue, corresponding to the eigenfunction $\sin s$. Since $\lambda_1,\varphi_1$ could be any point on the continuous branch, we see that a "secondary bifurcation" does not exist in this problem. There is no point of the branch where the considerations of Theorem 3.3 are applicable.
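The eigenvalue claim along the branch can be checked by quadrature: with $\varphi_1 = c\sin s$, $c^2 = \tfrac{4(\lambda_1-\lambda_0)}{3\pi}$, the weight is $1 + 3\varphi_1^2(t) = 1 + \tfrac{4(\lambda_1-\lambda_0)}{\pi}\sin^2t$, and the rank-one kernel has the single eigenvalue $\int_0^{2\pi}\sin^2t\,[1 + 3c^2\sin^2t]\,dt$. A sketch:

```python
import numpy as np

# Linearization of the cubic example along the branch phi1 = c*sin,
# with c^2 = 4*(lam1 - pi)/(3*pi).  Its only eigenvalue should be
# lam1 + 2*(lam1 - pi) > lam1, so no point of the branch is exceptional.
n = 200000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dt = 2.0 * np.pi / n
lam0 = np.pi

for lam1 in (lam0 + 0.1, lam0 + 1.0, lam0 + 5.0):
    c2 = 4.0 * (lam1 - lam0) / (3.0 * np.pi)
    weight = 1.0 + 3.0 * c2 * np.sin(t) ** 2          # 1 + 3*phi1(t)^2
    mu = np.sum(np.sin(t) ** 2 * weight) * dt         # the kernel's one eigenvalue
    assert abs(mu - (lam1 + 2.0 * (lam1 - lam0))) < 1e-8
    assert mu > lam1                                   # never equals lam1
print("no secondary bifurcation on the continuous branch")
```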

This example is illustrative of the situation of Theorem 4.2. A similar problem, illustrative of Theorem 4.3, would be

$$\lambda\varphi(s) = \int_0^{2\pi}\sin s\,\sin t\,\bigl[\varphi(t) + \varphi^2(t) + \varphi^3(t)\bigr]\,dt.$$

5. The Eigenvalue Problem; Hammerstein Operators; Sublinear and Superlinear Operators; Oscillation Kernels.

In the preceding development we have discussed the extension of branches of eigenelements of quite general nonlinear operators $T(x)$ of a Banach space $X$ into itself. Then we treated bifurcation of branches of eigenelements of these general operators under the assumption that $T'(x_0)$ is compact at a bifurcation point $(x_0,\lambda_0)$ (condition H-1); this can be accomplished if $T(x)$ itself is assumed to be completely continuous, in which case $T'(x)$ is compact everywhere. There resulted a set of simultaneous bifurcation equations of quite a general character, namely eq. (3.8). Because of the difficulties in handling the general bifurcation equations, attention was then confined to the case where the null space $\mathfrak{N}_1(x_0)$ of the operator $\lambda_0I - T'(x_0)$ is one dimensional, where $(x_0,\lambda_0)$ is the bifurcation point. In this case eq. (3.8) becomes just one scalar equation in one scalar unknown. The Newton Polygon method was used to treat this case of "bifurcation at an eigenvalue of $T'(x_0)$ of unit multiplicity." The treatment became very explicit in the case of an odd operator: $T(-x) = -T(x)$, in which case $T(\theta) = \theta$, where $\theta \in X$ is the null element. We handled "bifurcation at the origin" in Theorem 4.2; i.e., we set $x_0 = \theta$.

With odd operators $T(x)$ and bifurcation at the origin $x_0 = \theta$, we have a situation which may roughly be compared with the situation pertaining to compact linear operators on a real Banach space $X$, and the real eigenvalue problem for such operators.

The eigenvalue problem for an odd completely continuous operator $T(x)$ may be described as follows: Find those values of the real parameter $\lambda$ such that the equation

$$\lambda x = T(x)\tag{5.1}$$

has nontrivial solutions $x \in X$, and then explore the properties of these nontrivial solutions. Eq. (5.1) always has of course the trivial solution, by the oddness of $T(x)$. Now if $T(x) = Ax$, $x \in X$, where $A$ is a completely continuous linear operator, the study of eq. (5.1) leads along well known lines: the nonzero real spectrum of $A$ consists of isolated eigenvalues $\lambda_n$; these eigenvalues are of finite algebraic multiplicity, and thus of finite geometric multiplicity (ref. 21, p. 336). Associated with each eigenvalue therefore is a finite dimensional linear space, sometimes called an eigenspace; this space is spanned by the eigenelements associated with the eigenvalue.


Since the eigenspace is a linear space, it contains elements of arbitrarily large norm. If we were to make a two-dimensional plot of the norm $\|x\|$ of the eigenelement vs. the eigenvalue $\lambda$, we should have an array of vertical lines emanating from the $\lambda$ axis and extending to infinity.

FIG. 5.1. Norm vs. eigenvalue plot for a linear operator: vertical lines at the eigenvalues.

For linear operators, such a portrayal would not seem to have any conceptual advantages. If we think of a linear operator, however, as a type of nonlinear operator, we can regard these vertical lines as representing branches of eigenelements and the eigenvalues $\lambda_n$ as being bifurcation points at the origin. If the eigenvalues $\lambda_n$ are of unit multiplicity, this description is an apt one. Indeed, there are nonlinear operators with linear eigenspaces bifurcating from the trivial solution $x = \theta$, as exemplified by the first example at the end of Section 4.

In general with nonlinear odd operators, the eigenelements $x(\lambda)$ are not such trivial functions of the eigenvalues $\lambda$, and the branches of eigenelements which bifurcate from an eigenvalue $\lambda_n(0)$ of the linearized operator $T'(\theta)$ are nonlinear manifolds. On a norm vs. $\lambda$ plot we do not in general have vertical straight lines.

FIG. 5.2. Norm vs. $\lambda$ plot of nonlinear branches.

The questions asked though are the same, i.e., those asked in connection with eq. (5.1).

The problem therefore in dealing with eq. (5.1), for an odd completely continuous operator $T(x)$, is to take the information developed in Theorem 4.2 about the "primary bifurcation points" $\lambda_n(0)$, (i.e., $x_0 = \theta$ in that theorem), and about the corresponding solution branches in a small neighborhood of $x = \theta$, and then to extend these branches into the large. In other words, it usually is not enough to know the behavior of the branches of eigenelements merely in a small neighborhood of the origin in the real Banach space $X$. We want to study eigenelements with large norms also.

When it comes to this question of extending branches of eigensolutions from the small to the large, we run out of general theory. Branches can be extended by stepwise application of the process of Theorem 2.3 provided there is a pair $(x,\lambda)$ on a branch which is recognizable as an "ordinary point," i.e., where $\lambda I - T'(x)$ has a bounded inverse. The considerations of Theorem 2.4 are applicable, however, which means we cannot get past the "exceptional points" which may occur, i.e., pairs $(x,\lambda)$ where $\lambda I - T'(x)$ has no inverse. In order to investigate this latter problem we must now considerably restrict the class of operators $T(x)$. Namely, we consider the operators of Hammerstein, (ref. 14, p. 46).

Generally, a Hammerstein operator consists of an operator of Nemytskii [ref. 14, p. 20], namely $fx = f(s,x(s))$, defined on some function space, premultiplied by a linear operator $K$; i.e., we let $T(x) = Kfx$.

In the sequel, Hammerstein operators will play the leading role. We admit at this point that we are not partial to Hammerstein operators on account of any great applicability in physical problems, though there are a few such applications, of course. One might mention the rotating chain problem [ref. 38] and the rotating rod problem [refs. 2, 20]. Rather, we like Hammerstein operators because they are amenable to the study of branches of eigenfunctions in the large. Assumptions can be made about the linear operator $K$ which make the study presently possible. An extension of these results to other classes of nonlinear operators is to be desired, but seems difficult now.

In the present study, we let the Banach space $X$ be the space $C(0,1)$ of real continuous functions $x(s)$ defined on the interval $0 \le s \le 1$ with the sup norm. The Nemytskii operator $f$ is defined by a function $f(s,x)$ which is continuous in the strip $0 \le s \le 1$, $-\infty < x < +\infty$, uniformly in $x$ with respect to $s$. The linear operator $K$ is generated by a bounded continuous kernel $K(s,t)$, and thus is compact on $C(0,1)$. Thus we let

$$T(x) = Kfx = \int_0^1K(s,t)\,f\bigl(t,x(t)\bigr)\,dt.\tag{5.2}$$

The discrete Hammerstein operator is defined on a finite dimensional vector space by letting $K$ be a square matrix and $f$ a nonlinear vector function. It is interesting for examples, but is not fundamentally different than the operator of eq. (5.2).

We further assume that $f(s,x)$ is differentiable with respect to $x$, uniformly in $s$, up to the third order, and we define the following Fréchet derivatives:

$$\begin{aligned}
T'(x)h &= Kf_x'h = \int_0^1K(s,t)\,f_x'\bigl(t,x(t)\bigr)h(t)\,dt,\\
T''(x)h_1h_2 &= Kf_x''h_1h_2 = \int_0^1K(s,t)\,f_x''\bigl(t,x(t)\bigr)h_1(t)h_2(t)\,dt,\\
T'''(x)h_1h_2h_3 &= Kf_x'''h_1h_2h_3 = \int_0^1K(s,t)\,f_x'''\bigl(t,x(t)\bigr)h_1(t)h_2(t)h_3(t)\,dt,
\end{aligned}$$

which are defined everywhere on $C(0,1)$ since $C(0,1)$ is an algebra.

All of the theory developed in sections 2-4 holds for these operators.
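On a quadrature grid, the Hammerstein operator and its first Fréchet derivative become matrix-vector operations, which makes (5.2) and the derivative formulas easy to sanity-check. The kernel and nonlinearity below are our own choices for illustration:

```python
import numpy as np

# Discretized Hammerstein operator T(x)(s) = int_0^1 K(s,t) f(t, x(t)) dt
# with K(s,t) = min(s,t)*(1 - max(s,t)) and f(t,x) = x + x^3 (illustrative).
n = 200
s = (np.arange(n) + 0.5) / n                 # midpoint grid on (0,1)
w = 1.0 / n                                  # quadrature weight
S, Tt = np.meshgrid(s, s, indexing="ij")
K = np.minimum(S, Tt) * (1.0 - np.maximum(S, Tt))

f  = lambda x: x + x**3                      # Nemytskii operator
fp = lambda x: 1.0 + 3.0 * x**2              # f_x'

def T(x):
    return w * K @ f(x)

def T_prime(x, h):                           # T'(x)h = int K(s,t) f_x'(t,x(t)) h(t) dt
    return w * K @ (fp(x) * h)

x = np.sin(np.pi * s)
h = np.cos(2.0 * np.pi * s)
eps = 1e-6
fd = (T(x + eps * h) - T(x - eps * h)) / (2.0 * eps)   # finite-difference derivative
assert np.max(np.abs(fd - T_prime(x, h))) < 1e-8
print("Frechet derivative formula agrees with finite differences")
```

The second and third derivatives check out the same way, with `fp` replaced by the corresponding higher derivatives of `f`.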

It is very useful in the study of branches of eigenfunctions in the large to distinguish two types of nonlinearity. We assume we are speaking of odd operators: $T(-x) = -T(x)$, which for Hammerstein operators means that $f(s,-x) = -f(s,x)$. Thus $f(s,0) \equiv 0$. We further assume that $f_x'(s,x) > 0$, $0 \le s \le 1$, $-\infty < x < +\infty$. Then for Hammerstein operators we readily distinguish the following pure categories of nonlinearity:

(1) Sublinearity: $xf_x''(s,x) < 0$, $0 \le s \le 1$, $-\infty < x < +\infty$, $x \neq 0$, from which it follows that $f_x'(s,x) < f(s,x)/x$;

FIG. 5.3a. A sublinear nonlinearity $f(s,x)$.

(2) Superlinearity: $xf_x''(s,x) > 0$, $0 \le s \le 1$, $-\infty < x < +\infty$, $x \neq 0$, from which it follows that $f_x'(s,x) > f(s,x)/x$.

FIG. 5.3b. A superlinear nonlinearity $f(s,x)$.

A Hammerstein operator $Kf$, with such conditions on $f$, is said to be respectively a sublinear or a superlinear Hammerstein operator.

Obviously these two types of nonlinearity are not exhaustive; generally a nonlinear operator could be of neither type. In physical applications, however, problems seem to be preponderantly of pure sublinear or superlinear type, and these qualities certainly transcend the Hammerstein class of operators.
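These inequalities are easy to test pointwise for concrete nonlinearities. For instance (our choices, not the text's), $f(x) = \arctan x$ is sublinear and $f(x) = x + x^3$ is superlinear, and in each case $f_x'$ stays on the corresponding side of $f(x)/x$:

```python
import math

# Sublinear:   x*f''(x) < 0, hence f'(x) < f(x)/x for x != 0  (f = arctan).
# Superlinear: x*f''(x) > 0, hence f'(x) > f(x)/x for x != 0  (f = x + x^3).
for v in (0.1, 0.5, 2.0, 10.0):
    for x in (v, -v):
        # sublinear example: f = arctan, f' = 1/(1+x^2), f'' = -2x/(1+x^2)^2
        assert x * (-2.0 * x / (1.0 + x**2) ** 2) < 0
        assert 1.0 / (1.0 + x**2) < math.atan(x) / x
        # superlinear example: f = x + x^3, f' = 1 + 3x^2, f'' = 6x
        assert x * (6.0 * x) > 0
        assert 1.0 + 3.0 * x**2 > (x + x**3) / x
print("pointwise sublinear/superlinear inequalities hold")
```

Note also that $\arctan$ has $f(s,x)/x \to 0$ as $|x| \to \infty$, illustrating the asymptotic behavior of sublinear operators discussed next.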

There is another important type of operator, where our classification goes in a somewhat different direction: the asymptotically linear operator. A Hammerstein operator is asymptotically linear if

$$\lim_{|x|\to\infty}\frac{f(s,x)}{x} = A(s) \not\equiv 0.$$

FIG. 5.4. An asymptotically linear nonlinearity.

An asymptotically linear Hammerstein operator can obviously be sublinear, superlinear, or neither. If it is either sublinear or superlinear, however, then

$$\lim_{|x|\to\infty}\frac{f(s,x)}{x} = A(s) \ge 0.$$

The example of section 1 is a superlinear problem, but not asymptotically linear.

Let us see how our bifurcation theory appears with Hammerstein operators, and one or the other assumption of sublinearity or superlinearity. Suppose $\lambda_0$ is an eigenvalue of $Kf_x'$, of multiplicity unity, for $x = x_0(s)$, where $x_0(s)$ satisfies Hammerstein's equation $Kfx = \lambda x$ for some $\lambda$. Suppose that $u_1$ is the associated normalized eigenfunction, i.e., $Kf_x'u_1 = \lambda_0u_1$, and $u_1^*$ the eigenfunction of the adjoint problem: $f_x'K^*u_1^* = f_x'\bigl(s,x_0(s)\bigr)\int_0^1K(t,s)u_1^*(t)\,dt = \lambda_0u_1^*(s)$. With reference to the explicit bifurcation equation, namely eq. (4.2), we put down the coefficients of that equation in the Hammerstein case (here we use the setwise imbedding $C(0,1) \subset L^2(0,1)$, and employ the inner product):

$$\begin{aligned}
a_1 &= -(u_1^*,x_0),\\
a_2 &= -\Bigl(u_1^*,\;u_1 + \int_0^1K(s,t)\,f_x''\bigl(t,x_0(t)\bigr)u_1(t)\,Mx_0(t)\,dt\Bigr),\\
a_3 &= \frac{1}{2!}\Bigl(u_1^*,\;\int_0^1K(s,t)\,f_x''\bigl(t,x_0(t)\bigr)u_1^2(t)\,dt\Bigr),\\
a_4 &= \frac{1}{3!}\Bigl(u_1^*,\;\int_0^1K(s,t)\,f_x'''\bigl(t,x_0(t)\bigr)u_1^3(t)\,dt\Bigr).
\end{aligned}$$

It is very convenient to assume now that $K(s,t) = K(t,s)$; i.e., $K(s,t)$ is symmetric. Then $u_1^*(s) = f_x'\bigl(s,x_0(s)\bigr)u_1(s)$, and

$$\begin{aligned}
a_1 &= -\bigl(f_x'u_1,\,x_0\bigr),\\
a_2 &= -\bigl(f_x'u_1,\,u_1\bigr) - \lambda_0\bigl(u_1,\,f_x''\,u_1\,Mx_0\bigr),\\
a_3 &= \frac{\lambda_0}{2!}\bigl(u_1,\,f_x''\,u_1^2\bigr) = \frac{\lambda_0}{2!}\bigl(f_x'',\,u_1^3\bigr),\\
a_4 &= \frac{\lambda_0}{3!}\bigl(u_1,\,f_x'''\,u_1^3\bigr) = \frac{\lambda_0}{3!}\bigl(f_x''',\,u_1^4\bigr).
\end{aligned}\tag{5.3}$$

Taking $x_0 \equiv 0$ so as to study bifurcation at the origin, we have for these coefficients:

$$a_1 = 0,\qquad a_2 = -\bigl(f_x'(s,0),\,u_1^2\bigr),\qquad a_3 = \frac{\lambda_0}{2}\bigl(f_x''(s,0),\,u_1^3\bigr),\qquad a_4 = \frac{\lambda_0}{6}\bigl(f_x'''(s,0),\,u_1^4\bigr),$$

where of course $u_1(s)$ is the normalized solution of the problem $\lambda_0h(s) = \int_0^1K(s,t)\,f_x'(t,0)\,h(t)\,dt$.
Let us examine the sublinear and superlinear cases and the signs which the above coefficients assume. In the sublinear case, we have for $\lambda_0 > 0$ that $a_1 = a_3 = 0$, $a_2 < 0$ and $a_4 < 0$, so that $a_2a_4 > 0$. With reference to eq. (4.4) and Theorem 4.2, there are small real solutions for $\lambda < \lambda_0$, none if $\lambda > \lambda_0$. Thus bifurcation is to the left at $\lambda = \lambda_0$.

FIG. 5.5a. The sublinear case: bifurcation to the left.

Again, if $\lambda_0 < 0$, we have $a_2a_4 < 0$ and bifurcation is to the right.

In the superlinear case, if $\lambda_0 > 0$, we have $a_1 = a_3 = 0$, $a_2 < 0$ and $a_4 > 0$, so that $a_2a_4 < 0$. With reference to Theorem 4.2, there exist small real solutions for $\lambda > \lambda_0$, none for $\lambda < \lambda_0$. Hence bifurcation is to the right at $\lambda = \lambda_0$.

FIG. 5.5b. The superlinear case: bifurcation to the right.

If $\lambda_0 < 0$, we have $a_2a_4 > 0$, so that bifurcation is to the left.
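For the cubic example of Section 4 ($K = \sin s\,\sin t$ on $(0,2\pi)$, $f(x) = x + x^3$, $\lambda_0 = \pi$, normalized $u_1 = \sin s/\sqrt{\pi}$), these formulas give $a_1 = a_3 = 0$, $a_2 = -1$ and $a_4 = 3/4 > 0$, so $a_2a_4 < 0$ and bifurcation is to the right, matching the branch found there. A numerical check:

```python
import numpy as np

# Bifurcation coefficients at the origin for K = sin(s)sin(t) on (0, 2*pi),
# f(x) = x + x^3:  u1 = sin/sqrt(pi) (normalized in L^2), lam0 = pi.
n = 200000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dt = 2.0 * np.pi / n
u1 = np.sin(t) / np.sqrt(np.pi)
lam0 = np.pi

fp0, fppp0 = 1.0, 6.0                        # f_x'(s,0) and f_x'''(s,0)
a2 = -np.sum(fp0 * u1**2) * dt               # a2 = -(f_x'(s,0), u1^2)
a4 = (lam0 / 6.0) * np.sum(fppp0 * u1**4) * dt

assert abs(a2 + 1.0) < 1e-9
assert abs(a4 - 0.75) < 1e-9
assert a2 * a4 < 0                           # superlinear: bifurcation to the right
print("a2 =", round(a2, 6), " a4 =", round(a4, 6))
```

Since $f$ is odd, $f_x''(s,0) = 0$ and $a_3$ vanishes automatically, as claimed above.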

The above remarks on sublinearity and superlinearity have an analog with abstract operators. Indeed, let $X = H$, a real Hilbert space, and let $T(x)$ be an odd operator: $T(-x) = -T(x)$, with $T(\theta) = \theta$. Further, we suppose that $T(x)$ is completely continuous and of variational type [ref. 14, p. 300]; in this case $T'(x)$ is compact and symmetric for given $x \in H$. Suppose moreover that $T'(x)$ is positive definite for given $x \in H$.

Such an operator $T(x)$ is said to be sublinear if $(dT'(x;x)h,h) < 0$ for all $h, x \in H$. In other words, $dT'(x;x) = T''(x)x$ is, for all $x \in H$, a negative definite linear transformation of $H$ into itself. Similarly, $T(x)$ is said to be superlinear if $(dT'(x;x)h,h) > 0$ for all $h, x \in H$.

With $a > 0$ any number, and $x \in H$, we have by definition of the Fréchet differential [ref. 15, p. 183] and the fact that $T''(\theta)x = \theta$,

$$dT'(ax;x) = T''(ax)x = T''(ax)x - T''(\theta)x = dT''(\theta;ax)x + R(\theta,ax)x,\tag{5.4}$$

where $R(\theta,ax) = o(a\|x\|)$. Since in the sublinear case $(dT'(ax;x)h,h) < 0$ for all $h, x \in H$, it can be seen from eq. (5.4) that for $a$ small enough, $(dT''(\theta;ax)xh,h) = (d^2T'(\theta;ax,x)h,h) < 0$; this implies however that $(d^2T'(\theta;x,x)h,h) < 0$ for all $h, x \in H$. Similarly, for the superlinear case $(d^2T'(\theta;x,x)h,h) > 0$ for all $h, x \in H$.

Then for $x_0 = \theta$, we have the following coefficients in the bifurcation equation, eq. (4.2), for the sublinear case:

$$a_1 = 0,\qquad a_2 = -(u_1,u_1) = -1,\qquad a_3 = 0,\qquad a_4 = \frac{1}{6}\bigl(u_1,\,d^3T(\theta;u_1,u_1,u_1)\bigr) = \frac{1}{6}\bigl(u_1,\,d^2T'(\theta;u_1,u_1)u_1\bigr) < 0.$$

Here of course, $(\lambda_0I - T'(\theta))u_1 = \theta$. Thus in the sublinear case, $a_2a_4 > 0$ and we have bifurcation to the left. On the other hand, if it were the superlinear case, we should have $a_2a_4 < 0$ and bifurcation to the right, (see Theorem 4.2).
(see Theorem 4.2).

Perhaps the chief reason for our selection of Hammerstein operators as an object of study is the fact that this type of concrete nonlinear operator possesses a separated kernel K(s,t) about which we can make further assumptions. Specifically, from an investigative standpoint, it is useful to assume that K(s,t) is an oscillation kernel, [ref. 9, p. 236].
Definition: An n×n matrix A = (a_ik) is a completely non-negative matrix (respectively completely positive) if all its minors of any order are non-negative (respectively positive).

Definition: An n×n matrix A = (a_ik) is an oscillation matrix if it is a completely non-negative matrix, and there exists a positive integer κ such that A^κ is a completely positive matrix.

Definition: A continuous kernel K(s,t), 0 ≤ s, t ≤ 1, is an oscillation kernel if for any set of n points x₁, x₂, ..., x_n, where 0 ≤ x_i ≤ 1, one of which is internal, the matrix (K(x_i,x_k))₁ⁿ is an oscillation matrix, n = 1, 2, 3, ....

With K(s,t) a symmetric oscillation kernel, we have the following properties for eigenvalues and eigenfunctions of the equation

    λφ(s) = ∫₀¹ K(s,t)φ(t)dσ(t),          (5.5)

where σ(t) is a non-decreasing function with at least one point of growth in the open interval 0 < t < 1, [ref. 9, p. 262]:

(a) There is an infinite set of eigenvalues if σ(t) has an infinite number of growth points.

(b) All the eigenvalues are positive and simple: 0 < ... < λ_n < λ_{n−1} < ... < λ₀.

(c) The eigenfunction φ₀(s) corresponding to λ₀ has no zeros on the open interval 0 < s < 1.

(d) For each j = 1, 2, ..., the eigenfunction φ_j(s) corresponding to λ_j has exactly j nodes (odd order zeros) in the interval 0 < s < 1, and no other zeros.

(e) φ(s) = Σ_{i=k}^{m} c_i φ_i(s) has at most m zeros and at least k nodes in the interval 0 < s < 1, for given c_i with Σ_{i=k}^{m} c_i² > 0. If the number of zeros is equal to m, these zeros are nodes.

(f) The nodes of the functions φ_j(s) and φ_{j+1}(s) alternate, j = 1, 2, ....

Our interest in the oscillation kernel in dealing with Hammerstein operators stems from the fact that with f_x'(s,x) > 0, 0 ≤ s ≤ 1, −∞ < x < +∞, as we have supposed, the Fréchet derivative

    Kf_x'h = ∫₀¹ K(s,t)f_x'(t,x(t))h(t)dt,          (5.6)

with K(s,t) an oscillation kernel, is a case of a linear operator such as that appearing in eq. (5.5), so that the properties (a)-(f) listed above are true for its eigenvalues and eigenfunctions. We wish to stress as very important for Hammerstein operators that properties (a)-(f) hold for operator (5.6) whatever the continuous function x(t) used in the definition of the operator, if K(s,t) is an oscillation kernel.

Properties (e), (f) are actually in excess of requirement as far as we know, as is also the statement in property (b) about the positivity of the eigenvalues.
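The node-counting properties above are easy to observe numerically. The following sketch (our illustration, not from the text) discretizes the Green's function G(s,t) = min(s,t)(1 − max(s,t)) of −u″ = λu, u(0) = u(1) = 0, a classical symmetric oscillation kernel, weights it by an arbitrary positive function playing the role of f_x'(t,x(t)), and checks properties (b)-(d):

```python
import numpy as np

# Assumed test kernel (not from the text): G(s,t) = min(s,t)*(1 - max(s,t)),
# the Green's function of -u'' = lam*u with u(0) = u(1) = 0.
n = 200
t = (np.arange(n) + 0.5) / n
dt = 1.0 / n
K = np.minimum.outer(t, t) * (1.0 - np.maximum.outer(t, t))

w = 1.0 / (1.0 + np.cos(3.0 * t) ** 2)        # any positive weight f_x'(t, x(t))
A = np.sqrt(np.outer(w, w)) * K * dt          # sqrt(w) K sqrt(w): same spectrum as K*w
mu, V = np.linalg.eigh(A)
mu, V = mu[::-1], V[:, ::-1]                  # order mu_0 > mu_1 > ...

assert mu[0] > mu[1] > mu[2] > mu[3] > 0.0    # (b): eigenvalues positive and simple

def nodes(v):
    # count sign changes of the discretized eigenfunction
    s = np.sign(v[np.abs(v) > 1e-10 * np.abs(v).max()])
    return int(np.sum(s[1:] != s[:-1]))

for p in range(4):                            # (c),(d): p'th eigenfunction has p nodes
    assert nodes(V[:, p]) == p
```

The weighting by √w on both sides leaves node counts unchanged, since √w > 0; this mirrors the symmetrization used repeatedly in section 6.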

With K(s,t) an oscillation kernel, every eigenvalue μ_p^(0), p = 0, 1, 2, ..., of the Fréchet derivative

    Kf₀'h = ∫₀¹ K(s,t)f_x'(t,0)h(t)dt

at the origin is of multiplicity unity, so that Theorem 4.2 or 4.3 is directly applicable to study primary bifurcation from the trivial solution. Each such eigenvalue μ_p^(0) is a bifurcation point. Moreover if x₀(s,λ₀) is an exceptional point on a branch of eigensolutions, i.e., the Fréchet derivative

    Kf'_{x₀}h = ∫₀¹ K(s,t)f_x'(t,x₀(t))h(t)dt

has the eigenvalue λ₀, or λ₀I − Kf'_{x₀} has no bounded inverse, then we know a priori that λ₀ is a simple eigenvalue, i.e., the null space 𝔑₁(x₀) is one-dimensional. Hence our bifurcation theory with the Newton polygon method is applicable, in particular eq. (4.2).

Another benefit in assuming an oscillation kernel is illustrated in the following example for a discretized Hammerstein operator:

Example: Consider the discrete superlinear problem:

    (a 0) (u + u³)       (u)
    (0 b) (v + v³)  = λ  (v),    a > b,          (5.7)

for which we have the following linearization:

    (a 0) ((1 + 3u²)h)       (h)
    (0 b) ((1 + 3v²)k)  = μ  (k).          (5.8)

At the origin, u = v = 0, and we have primary bifurcation points λ = a and λ = b, a > b. A continuous branch of eigenvectors, namely (±√(λ/a − 1), 0), bifurcates to the right at λ = a from the trivial solution (0,0), while another branch, (0, ±√(λ/b − 1)), bifurcates to the right at λ = b. Of interest is the behavior of the eigenvalues of the linearized problem eq. (5.8) as the branches evolve.

[Fig. 5.6: the two branches, ‖x‖ versus λ, with the crossover on the second branch.]

Taking the second branch and letting u = 0 and v = ±√(λ/b − 1) in eq. (5.8), we see that the linearization has two eigenvalues, μ₁ = a and μ₂ = 3λ − 2b. The parameter λ increases as the second branch evolves however, and whereas initially we have μ₂ < μ₁, for λ > (a + 2b)/3 we have μ₂ > μ₁. Moreover a situation is attained where λ = μ₁ = a. At this point on the second branch, eq. (5.8) has a nontrivial solution with μ = λ, so that secondary bifurcation can take place.

In this example, the kernel or matrix (a 0; 0 b) is not an oscillation matrix; two of its minors vanish. If we were to have used an oscillation matrix, say (a ε; ε b) with 0 < ε < √(ab), we could have been assured a priori that the eigenvalues μ₁, μ₂ would always be simple, and the crossover point at λ = (a + 2b)/3 would have been impossible. We should have had μ₂ < μ₁ always, and no secondary bifurcation.
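The eigenvalue formulas in this example can be checked directly. The sketch below (an illustration with assumed sample values a = 3, b = 1) verifies that on the second branch the linearization (5.8) has eigenvalues μ₁ = a and μ₂ = 3λ − 2b, that these cross at λ = (a + 2b)/3, and that μ₁ = λ occurs at λ = a:

```python
import numpy as np

# Assumed sample values for the discrete example (5.7)-(5.8).
a, b = 3.0, 1.0

def linearization_eigs(lam):
    # Second branch of (5.7): u = 0, v**2 = lam/b - 1.  The linearization
    # (5.8) is diag(a, b) @ diag(1 + 3*u**2, 1 + 3*v**2), diagonal here.
    v2 = lam / b - 1.0
    return np.linalg.eigvals(np.diag([a, b * (1.0 + 3.0 * v2)])).real

for lam in [1.2, 1.8, 2.5, 3.0]:
    m = np.sort(linearization_eigs(lam))
    expected = np.sort([a, 3.0 * lam - 2.0 * b])
    assert np.allclose(m, expected)           # mu_1 = a, mu_2 = 3*lam - 2*b

# The eigenvalues cross exactly at lam = (a + 2*b)/3 ...
assert np.isclose(3.0 * (a + 2.0 * b) / 3.0 - 2.0 * b, a)
# ... and mu_1 = lam (the secondary-bifurcation situation) at lam = a.
assert np.isclose(linearization_eigs(a), a).any()
```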


Green's functions for differential operators are often oscillation kernels [ref. 9, p. 312]. Therefore when Hammerstein operators result from boundary value problems for differential equations in applied work, a Hammerstein equation involving an oscillation kernel must often be solved.

6. On the Extension of Branches of Eigenfunctions; Conditions Preventing Secondary Bifurcation of Branches.

In this section we assume that T(x) = ∫₀¹ K(s,t)f(t,x(t))dt, i.e., T(x) is a Hammerstein operator. The kernel K(s,t) is bounded and continuous, 0 ≤ s ≤ 1, 0 ≤ t ≤ 1. The function f(s,x) is continuous in the strip 0 ≤ s ≤ 1, −∞ < x < +∞, uniformly in x with respect to s.

Thus if f_x'(s,x) exists and is integrable, T(x) satisfies H-1 as it stands; i.e., T'(x)h = ∫₀¹ K(s,t)f_x'(t,x(t))h(t)dt is completely continuous for any x(s) ∈ C(0,1). T(x) also satisfies H-2, i.e., T(−x) = −T(x), if we assume that f(s,−x) = −f(s,x).

Let us make the following additional assumptions:

H-3: f(s,x) is four times differentiable in x, with |f_x^(iv)| bounded uniformly over 0 ≤ s ≤ 1. lim_{x→0} f(s,x)/x = f_x'(s,0) uniformly on 0 ≤ s ≤ 1. It can be shown that f_x'(s,x) is then continuous in x, uniformly on 0 ≤ s ≤ 1.

We assume one or the other of the following pair of hypotheses:

H-4a: Sublinearity; i.e., f_x'(s,x) > 0, 0 ≤ s ≤ 1, and x f_x''(s,x) < 0, 0 ≤ s ≤ 1, −∞ < x < +∞.

H-4b: Superlinearity; i.e., f_x'(s,x) > 0, 0 ≤ s ≤ 1, and x f_x''(s,x) > 0, 0 ≤ s ≤ 1, −∞ < x < +∞.

We again note that f_x''(s,x) < 0 for x > 0 in H-4a, while f_x''(s,x) > 0 for x > 0 in H-4b, 0 ≤ s ≤ 1. Also for most of our considerations it is well to assume the following:

H-5: Asymptotic Linearity: lim_{|x|→∞} f(s,x)/x = A(s) ≥ 0 uniformly, 0 ≤ s ≤ 1.

Finally all subsequent considerations are based on the following requirement:

H-6: K(s,t) is a symmetric oscillation kernel, (see section 5).

Hypothesis H-6 together with the condition f_x'(s,x) > 0 stated in H-4 imply that the linearized problem

    λh(s) = ∫₀¹ K(s,t)f_x'(t,y(t))h(t)dt          (6.1)

possesses a sequence {μ_n} of eigenvalues, each of unit multiplicity and unit Riesz index, and a corresponding sequence {h_n} of continuous eigenfunctions such that h_n(s) has precisely n nodes (odd order zeros) and no other zeros on the open interval 0 < s < 1, n = 0, 1, 2, ..., [ref. 9, p. 262]. We stress that this property holds whatever the function y(t) ∈ C(0,1) we choose to substitute in the definition of the operator in eq. (6.1).

Having made all these assumptions, it is clear that every eigenvalue μ_p^(0), p = 0, 1, 2, ..., is a primary bifurcation point for the nonlinear eigenvalue problem

    λx(s) = ∫₀¹ K(s,t)f(t,x(t))dt.          (6.2)

Here {μ_n^(0)} is the sequence of simple eigenvalues of linearized eq. (6.1) with y(t) ≡ 0. Indeed Theorem 4.2 is applicable, and there exist exactly two solutions of small norm which branch away from the trivial solution x(s) ≡ θ of eq. (6.2) at λ = μ_p^(0), p = 0, 1, 2, .... Also, as discussed in section 5, in the sublinear case (H-4a) these solutions branch to the left since μ_p^(0) > 0; i.e., there exist two small solutions if λ < μ_p^(0), but none if λ > μ_p^(0). In the superlinear case on the other hand (H-4b) the branching is to the right, i.e., there exist two small solutions if λ > μ_p^(0), but none if λ < μ_p^(0). The two solutions of small norm which bifurcate at λ = μ_p^(0), p = 0, 1, 2, ..., differ only in sign. We denote the two solutions bifurcating at λ = μ_p^(0) by x_p^±(s,λ), and note that lim_{λ→μ_p^(0)} ‖x_p^±(s,λ)‖ = 0 in the norm of C(0,1). This is readily seen in inspecting the proof of Theorem 4.2.

The following result on the zeros of x_p^±(s,λ) will be useful:

Theorem 6.1: x_p^±(s,λ), where defined for λ ≠ μ_p^(0), has exactly p nodes and no other zeros on 0 < s < 1, p = 0, 1, 2, ....

Proof: Consider the problem

    μu(s) = ∫₀¹ K(s,t) [f(t,x_p^±(t,λ))/x_p^±(t,λ)] u(t)dt,          (6.3)

which has the eigenvalue sequence {μ_n} and eigenfunction sequence {u_n(s)} where, as indicated for oscillation kernels, u_p(s) has exactly p nodes on 0 < s < 1. To convert eq. (6.3) to a problem with a symmetric kernel with the same eigenvalues, we put v(s) = √(f(s,x_p^±(s,λ))/x_p^±(s,λ)) u(s), whence

    μv(s) = ∫₀¹ √(f(s,x_p^±(s,λ))/x_p^±(s,λ)) K(s,t) √(f(t,x_p^±(t,λ))/x_p^±(t,λ)) v(t)dt.          (6.4)

(We note that f(s,x)/x > 0.) By H-3, as λ → μ_p^(0), the symmetric kernel tends uniformly to the symmetric kernel √(f_x'(s,0)) K(s,t) √(f_x'(t,0)). Therefore by a known result [ref. 7, p. 151], the eigenvalue μ_p of eq. (6.3) tends to μ_p^(0), p = 0, 1, 2, ..., and the normalized eigenfunction v_p(s) of eq. (6.4) tends uniformly to w_p(s), where w_p(s) is the p'th normalized eigenfunction of the problem

    μw(s) = ∫₀¹ √(f_x'(s,0)) K(s,t) √(f_x'(t,0)) w(t)dt,

which is associated with eigenvalue μ_p^(0). Equivalently we may write

    μ_p^(0) [w_p(s)/√(f_x'(s,0))] = ∫₀¹ K(s,t)f_x'(t,0) [w_p(t)/√(f_x'(t,0))] dt,   p = 0, 1, 2, ....

But obviously we then have w_p(s)/√(f_x'(s,0)) = h_p^(0)(s), where h_p^(0)(s) is the p'th eigenfunction of eq. (6.1) with y(s) ≡ 0. This is because the kernel K(s,t)f_x'(t,0) has eigenvalues of unit multiplicity only.

We happen to know a solution pair (u(s),μ) for eq. (6.3) however, namely u(s) = x_p^±(s,λ), μ = λ. We readily see this by inspection of eq. (6.2). Indeed λ is one of the eigenvalues {μ_n}, and x_p^±(s,λ)/‖x_p^±(s,λ)‖ is among the normalized eigenfunctions {u_n} of eq. (6.3). As λ → μ_p^(0) however, only one of the eigenvalues of eq. (6.3) tends to μ_p^(0), and this must be λ itself. Hence λ = μ_p. The corresponding eigenfunction u_p(s) is then a member of the one-dimensional eigenspace spanned by x_p^±(s,λ)/‖x_p^±(s,λ)‖. Since u_p(s) has p nodes and no other zeros on 0 < s < 1, the same is true for x_p^±(s,λ). This concludes the proof.

We prove the following result for the sublinear case. The superlinear case is shown in the same way.

Theorem 6.2: Suppose hypothesis H-4a holds, i.e., we have the sublinear case. Let (x_p*, λ*) be a solution pair for eq. (6.2) on the p'th branch x_p^±(s,λ). Then μ_p* < λ*, where μ_p* is the p'th eigenvalue of eq. (6.1) where we have put y(s) = x_p*.

Proof: Let K be the positive definite operator on L₂(0,1) generated by K(s,t). (K(s,t) is symmetric, and has positive eigenvalues since it is an oscillation kernel.) There exists a unique positive definite square root H, where K = H·H. Let us consider the following eigenvalue problem for a symmetric operator:

    λζ = H f'_{x_p*} H ζ,          (6.5)

with eigenvalue parameter λ and eigenfunctions ζ to be determined. Eq. (6.5) has the same eigenvalues {μ_n*} as equation (6.1) with y(s) = x_p*.

(To elucidate our notation in eq. (6.5), we state that if the operator H has a kernel H(s,t), then

    H f'_{x_p*} H ζ = ∫₀¹ H(s,r) f_x'(r,x_p*(r,λ*)) ∫₀¹ H(r,t)ζ(t)dt dr;

the operator H (f_{x_p*}/x_p*) H below would have a corresponding expression.)

Likewise the problem

    δz = H (f_{x_p*}/x_p*) H z,          (6.6)

with a symmetric operator, has the same eigenvalues {δ_n} as problem (6.3) with x_p^±(s,λ) = x_p*; the eigenfunctions of equation (6.6) are used later in the proof, and are denoted by {z_n}.

We note now that for all u ∈ L₂(0,1) we have

    (H f'_{x_p*} Hu, u) = (f'_{x_p*} Hu, Hu) = ∫₀¹ [Hu]² f_x'(s,x_p*(s,λ*)) ds
        < ∫₀¹ [Hu]² f(s,x_p*(s,λ*))/x_p*(s,λ*) ds = (H (f_{x_p*}/x_p*) Hu, u)          (6.7)

(in the sublinear case, H-4a, f(s,x) has the property f_x'(s,x) < f(s,x)/x, 0 < x < +∞ and −∞ < x < 0). Then, using Courant's minimax principle [ref. 19, p. 238],

    μ_p* = min_{v₁,...,v_{p-1}} max_{u⊥v₁,...,v_{p-1}} (H f'_{x_p*} Hu, u)/(u,u)
         ≤ max_{u⊥z₁,...,z_{p-1}} (H f'_{x_p*} Hu, u)/(u,u)
         < max_{u⊥z₁,...,z_{p-1}} (H (f_{x_p*}/x_p*) Hu, u)/(u,u) = δ_p,

where δ_p is of course the p'th eigenvalue of eq. (6.3) with x_p^± = x_p*, and we have used inequality (6.7).

Since K(s,t) is an oscillation kernel, δ_p corresponds to that eigenfunction u_p(s) of eq. (6.3) (with x_p^±(s,λ) = x_p*) which has exactly p nodes on 0 < s < 1. Therefore u_p, ‖u_p‖ = 1, is identical with x_p*(s,λ*)/‖x_p*(s,λ*)‖, and δ_p = λ*. Hence μ_p* < λ*, and the theorem is proven.

In the superlinear case, it can be seen from H-4b that f_x'(s,x) > f(s,x)/x, 0 < x < +∞ and −∞ < x < 0; thus for the superlinear case, the inequality of Theorem 6.2 is reversed: μ_p* > λ*.
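The pointwise inequality between f_x' and f/x that drives inequality (6.7) is easy to check for model nonlinearities. The sketch below (our assumed examples, not from the text) verifies it for the sublinear f(x) = arctan x and the superlinear f(x) = x + x³:

```python
import numpy as np

x = np.linspace(1e-3, 10.0, 1000)

# Sublinear model f(x) = arctan(x):  f'(x) < f(x)/x for x != 0.
assert np.all(1.0 / (1.0 + x**2) < np.arctan(x) / x)

# Superlinear model f(x) = x + x**3:  f'(x) > f(x)/x for x != 0.
assert np.all(1.0 + 3.0 * x**2 > (x + x**3) / x)

# Both ratios f'(x) and f(x)/x are even in x, so x < 0 gives the same result.
```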

As discussed in sections 2 and 3, the problem of extending the branches x_p^±(s,λ) into the large is that of finding an ordinary point (i.e., (x_p,λ) such that λI − Kf'_{x_p} has a bounded inverse, where Kf'_{x_p} is the operator of eq. (6.1)), employing the process of Theorem 2.3 in a stepwise manner, which process can terminate only in an exceptional point by Theorem 2.4, and finally handling the bifurcation equation (3.8) at any exceptional points on the branch.

Theorem 6.2 above represents about half of the task of showing that x_p* = x_p^±(s,λ*) is an ordinary point. We have shown that μ_p* < λ* (respectively μ_p* > λ*) there, and that this is a consequence of sublinearity (superlinearity) and thus is true wherever the branch is developed. Note that because K(s,t) is an oscillation kernel, μ_p* is of unit multiplicity, and so μ_n* < μ_p*, n = p+1, p+2, ..., for arbitrary points x_p* on the p'th branch x_p^±(s,λ).

The other half of the job of showing that x_p* = x_p^±(s,λ*) is an ordinary point consists in showing that λ* < μ_{p-1}*, where μ_{p-1}* is of course the (p−1)'th eigenvalue of eq. (6.1) with y(t) = x_p*. Indeed, if we can show that λ* < μ_{p-1}*, then

    ... < μ_{p+1}* < μ_p* < λ* < μ_{p-1}* < μ_{p-2}* < ...,

so that λ*I − Kf'_{x_p*} cannot be singular.

Showing that λ* < μ_{p-1}* presents major difficulties; further assumptions are needed. So as to introduce the necessary assumptions, we prove the following intermediate results:

Lemma 6.3: Consider the linear integral operator

    Kφh = ∫₀¹ K(s,t)φ(t)h(t)dt,          (6.8)

where φ ∈ C(0,1), φ ≥ 0, and K(s,t) is an oscillation kernel. The eigenvalues {μ_n(K,φ)} are continuous functions of φ in terms of the L₂(0,1) norm.

Proof: By putting ψ = √φ h, we ascertain that the problem λψ = √φ K √φ ψ has the same eigenvalue parameter λ as the problem λh = Kφh for the operator defined in eq. (6.8). Here of course we have the symmetric operator √φ K √φ with φ(s) ≥ 0. As φ → φ*, φ, φ* ∈ C(0,1), in the L₂(0,1) norm, we have

    ‖√φ K √φ − √φ* K √φ*‖ ≤ ‖√φ K (√φ − √φ*)‖ + ‖(√φ − √φ*) K √φ*‖,

and each term on the right, estimated through the Hilbert–Schmidt norm of its kernel with K̄ = max_{0≤s,t≤1} K(s,t), is bounded by a constant multiple of ‖√φ − √φ*‖ in L₂(0,1). Thus √φ K √φ → √φ* K √φ* in the uniform operator norm of L₂(0,1). Since √φ K √φ is a symmetric operator, we use a known result [ref. 19, p. 239] to see that μ_n(K,φ) → μ_n(K,φ*) as φ → φ* in the L₂(0,1) norm, n = 0, 1, 2, .... Here of course μ_n(K,φ) is the n'th eigenvalue of operator (6.8). This ends the proof.

Next we define the following number which varies from branch to branch:

    A_p = sup_{φ ∈ S₁⁺} μ_p(K,φ)/μ_{p-1}(K,φ),

where S₁⁺ = {φ | φ ∈ C(0,1), ‖φ‖ = 1, φ ≥ 0}. Of course μ_p(K,φ) is the p'th eigenvalue of operator (6.8). We have the following result:

Lemma 6.4: A_p > 0 is less than unity, and the maximum is actually assumed for a function φ* ∈ S₁⁺.

Proof: The positive symmetric continuous nondegenerate kernel K(s,t) generates a nonsingular completely continuous operator K on L₂(0,1). Thus λ = 0 is in the continuous spectrum of K on L₂(0,1), and the range R_K of K is dense in L₂(0,1), [ref. 21, p. 305, Th. 8; p. 292]. We have R_K ⊂ C(0,1) since K(s,t) is continuous. Also of course S₁⁺ ⊂ C(0,1) ⊂ L₂(0,1) in the sense of set-inclusion.

Now F_p(φ) = μ_p(K,φ)/μ_{p-1}(K,φ) is continuous in φ ∈ S₁⁺ in terms of the L₂(0,1) norm, as was shown in Lemma 6.3. Then, since R_K is dense,

    A_p = sup_{φ∈S₁⁺} F_p(φ) = sup_{φ∈S₁⁺∩R_K} F_p(φ) = sup_{g∈E_K} F_p(Kg) = A_p',

where E_K = {g | g ∈ L₂(0,1), ‖g‖ = 1, Kg ≥ 0}. The last equality follows since the functional F_p(Kg) is constant along rays rg, 0 < r < ∞.

Instead of continuing to deal directly with the functional F_p(Kg), it is advantageous at this point to consider the functional Φ_p^R(g) = 1 + F_p(Kg) on E_K, and its convenient extension Φ_p(g) = (g,g) + F_p(Kg) to all g such that Kg ≥ 0, Kg ≠ 0, g ∈ L₂(0,1). If Φ_p^R assumes its maximum value 1 + A_p' on E_K, then A_p' will be assumed by F_p(Kg) at the same element of E_K.

There exists a maximizing sequence {g_n} ⊂ E_K such that Φ_p^R(g_n) > 1 + A_p' − 1/n, n = 1, 2, 3, ..., and indeed

    lim_{n→∞} Φ_p^R(g_n) = sup_{g∈E_K} Φ_p^R(g) = 1 + A_p'.

Since the unit sphere is weakly compact in L₂(0,1), there exists a subsequence {g_{n₁}}, ‖g_{n₁}‖ = 1, weakly convergent to some g* ∈ L₂(0,1). By passing to the weak limit in the inequality |(g_{n₁},g*)| ≤ ‖g_{n₁}‖·‖g*‖ as n₁ → ∞, we see that ‖g*‖ ≤ 1.

Since K is compact, {Kg_{n₁}} converges strongly in L₂(0,1) to an element φ̄ = Kg* ≥ 0. Because F_p(φ) is a continuous function of φ ∈ C(0,1) in the L₂(0,1) norm by Lemma 6.3, we have F_p(Kg_{n₁}) → A_p' as n₁ → ∞. In the case g* ≠ 0 therefore, the maximum 1 + A_p' of Φ_p^R(g) on E_K is assumed by Φ_p(g) at g*. If ‖g*‖ < 1, we should then have Φ_p(g*) = (g*,g*) + F_p(Kg*) < 1 + A_p'. This is a contradiction, since Φ_p(g*) < Φ_p^R(g*/‖g*‖), which in turn holds because F_p(Kg) is constant on radial lines, and g*/‖g*‖ ∈ E_K. Hence ‖g*‖ = 1, and both Φ_p^R(g) and F_p(Kg) assume their maximum values, 1 + A_p' and A_p' respectively, on E_K at the element g* ∈ E_K. The maximum value of F_p(φ) on S₁⁺ is therefore assumed at φ* = Kg*/‖Kg*‖, where Kg* ≥ 0, Kg* ≠ 0. Also since K(s,t) is a continuous kernel, we have φ*(s) ∈ C(0,1), so that A_p' = A_p.

In the case g* = 0, F_p(Kg*) is not defined, but the limiting value A_p' of Φ_p(g_{n₁}) as g_{n₁} → g* weakly in L₂(0,1) is less than certain discernible values assumed by Φ_p(g) on E_K, which is a contradiction.

The linear operator (6.8), with φ ≡ φ*, has eigenvalues μ_n(K,φ*), n = 0, 1, 2, ..., such that μ_n(K,φ*) < μ_{n-1}(K,φ*) (strict inequality); this is because K(s,t) is an oscillation kernel, [ref. 9, pp. 254-273]. Hence A_p < 1, and the lemma is proven.
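Lemma 6.4 asserts 0 < A_p < 1 but gives no value. A crude numerical estimate (our sketch, with an assumed discretized Green's-function kernel standing in for K) samples nonnegative weights φ and records the largest observed ratio μ_p/μ_{p-1}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = (np.arange(n) + 0.5) / n
# Assumed test kernel: Green's function of -u'' with u(0) = u(1) = 0.
K = np.minimum.outer(t, t) * (1.0 - np.maximum.outer(t, t))

def ratio(phi, p):
    # Eigenvalues of K*phi via the symmetrization sqrt(phi) K sqrt(phi).
    A = np.sqrt(np.outer(phi, phi)) * K / n
    mu = np.sort(np.linalg.eigvalsh(A))[::-1]   # mu_0 > mu_1 > ...
    return mu[p] / mu[p - 1]

p = 2
estimate = max(ratio(rng.random(n) + 0.01, p) for _ in range(200))
assert 0.0 < estimate < 1.0    # consistent with Lemma 6.4: A_p < 1
```

Random sampling only gives a lower bound on the sup, of course; the point is that every sampled ratio stays strictly below unity, as the lemma requires.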

With these two preliminary results proven, we are now in a position to state our main results having to do with whether or not a given point x_p* = x_p^±(s,λ*) on the p'th branch is an ordinary point.

We put forth a couple of conditions which, together with the statement of Theorem 6.2, will be seen to guarantee that x_p* is an ordinary point. These may be considered as a priori conditions on either the kernel K(s,t) or on the function f(s,x). In the sublinear case, hypothesis H-4a, the condition is that

    x f_x'(s,x)/f(s,x) > A_p,   0 ≤ s ≤ 1, −∞ < x < +∞,          (6.9a)

while in the superlinear case, hypothesis H-4b, the condition is that

    x f_x'(s,x)/f(s,x) < 1/A_{p+1},   0 ≤ s ≤ 1, −∞ < x < +∞,          (6.9b)

where A_n, n = 1, 2, 3, ..., was shown in Lemma 6.4 to be less than unity. The fact that 0 < A_n < 1, n = 1, 2, 3, ..., means that there exists a class of sublinear, respectively superlinear, functions f(s,x) for which condition (6.9a) or (6.9b) is realizable.
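Condition (6.9a) is indeed realizable. For instance (an assumed example, not from the text) the odd sublinear function f(x) = x(1 + x²)^(−1/4) has x f'(x)/f(x) = 1 − x²/(2(1 + x²)) ∈ (1/2, 1], so (6.9a) holds for every branch whose A_p lies below 1/2:

```python
import numpy as np

# Assumed model: f(x) = x * (1 + x**2)**(-1/4), odd and sublinear.
x = np.linspace(-50.0, 50.0, 2001)
f = x * (1.0 + x**2) ** -0.25
fp = (1.0 + x**2) ** -0.25 - 0.5 * x**2 * (1.0 + x**2) ** -1.25

# Ratio x f'(x)/f(x); its limiting value at x = 0 is 1.
r = np.where(x != 0.0, x * fp / np.where(x != 0.0, f, 1.0), 1.0)

assert np.all(fp > 0.0)                       # f_x' > 0, part of H-4a
assert np.all(r > 0.5)                        # ratio stays above 1/2
assert np.all(r <= 1.0 + 1e-12)               # and never exceeds 1
```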

Theorem 6.5: Under hypothesis H-4a (sublinearity) let f(s,x) be such that condition (6.9a) is satisfied. Then λ* < μ_{p-1}*, where μ_{p-1}* is the (p−1)'th eigenvalue of eq. (6.1) where we have put y(s) = x_p* = x_p^±(s,λ*). Under hypothesis H-4b (superlinearity) let f(s,x) be such that condition (6.9b) is satisfied. Then μ_{p+1}* < λ*.

Proof: As in the proof of Theorem 6.2 we again split the kernel: K = H·H, and consider the symmetrized problem

    δζ = H (f_{x_p*}/x_p*) H ζ,          (6.10)

where, if the square root operator H has a kernel H(s,t), we may write

    H (f_{x_p*}/x_p*) H ζ = ∫₀¹ H(s,r) [f(r,x_p*(r,λ*))/x_p*(r,λ*)] ∫₀¹ H(r,t)ζ(t)dt dr.

For the p'th eigenvalue of eq. (6.10) we have δ_p = λ*. Now in the sublinear case (Hypothesis H-4a),

    μ_p(K, f_{x_p*}/x_p*) ≤ A_p μ_{p-1}(K, f_{x_p*}/x_p*),          (6.11)

as follows from the definition of A_p. But

    μ_{p-1}(K, f_{x_p*}/x_p*) = min_{v₁,...,v_{p-2}} max_{u⊥v₁,...,v_{p-2}} (H (f_{x_p*}/x_p*) Hu, u)/(u,u)
        ≤ max_{u⊥y₁,...,y_{p-2}} (H (f_{x_p*}/x_p*) Hu, u)/(u,u),

where y₁, ..., y_{p-2} can be any set of p−2 elements in L₂(0,1). In particular, let y₁, ..., y_{p-2} be the eigenelements of H f'_{x_p*} H, which operator was discussed in the proof of Theorem 6.2. Then since by condition (6.9a),

    ∫₀¹ A_p [f(s,x_p*(s,λ*))/x_p*(s,λ*)] [Hu]² ds < ∫₀¹ f_x'(s,x_p*(s,λ*)) [Hu]² ds = (H f'_{x_p*} Hu, u),          (6.12)

we can write, using inequality (6.11),

    λ* = δ_p = μ_p(K, f_{x_p*}/x_p*) ≤ max_{u⊥y₁,...,y_{p-2}} A_p (H (f_{x_p*}/x_p*) Hu, u)/(u,u)
       < max_{u⊥y₁,...,y_{p-2}} (H f'_{x_p*} Hu, u)/(u,u) = μ_{p-1}*,

which proves the result in the sublinear case.

On the other hand, in the superlinear case (Hypothesis H-4b),

    μ_{p+1}(K, f'_{x_p*}) ≤ A_{p+1} μ_p(K, f'_{x_p*}),          (6.13)

where again we use the definition of the number A_n. Analogously with inequality (6.12), we now have the inequality

    (H f'_{x_p*} Hu, u) = (f'_{x_p*} Hu, Hu) = ∫₀¹ f_x'(s,x_p*(s,λ*)) [Hu]² ds
        < (1/A_{p+1}) ∫₀¹ [f(s,x_p*(s,λ*))/x_p*(s,λ*)] [Hu]² ds = (1/A_{p+1}) (H (f_{x_p*}/x_p*) Hu, u),          (6.14)

where we have made use of condition (6.9b). Then using inequality (6.13) we have, with z₁, ..., z_{p-1} representing eigenelements of H (f_{x_p*}/x_p*) H,

    μ_{p+1}* = μ_{p+1}(K, f'_{x_p*}) ≤ A_{p+1} μ_p(K, f'_{x_p*})
        = A_{p+1} min_{v₁,...,v_{p-1}} max_{u⊥v₁,...,v_{p-1}} (H f'_{x_p*} Hu, u)/(u,u)
        ≤ A_{p+1} max_{u⊥z₁,...,z_{p-1}} (H f'_{x_p*} Hu, u)/(u,u)
        < max_{u⊥z₁,...,z_{p-1}} (H (f_{x_p*}/x_p*) Hu, u)/(u,u) = μ_p(K, f_{x_p*}/x_p*) = δ_p = λ*,

where inequality (6.14) has been used. This proves the theorem.


The basic content of Theorem 6.2 and Theorem 6.5 taken together can be stated quite vividly:

Corollary 6.6: In the sublinear case (Hypothesis H-4a), if

    A_p < x f_x'(s,x)/f(s,x) ≤ 1,   0 ≤ s ≤ 1, −∞ < x < +∞,

then μ_p < λ < μ_{p-1}. Here μ_p is the p'th eigenvalue of the operator Kf_x'h = ∫₀¹ K(s,t)f_x'(t,x_p^±(t,λ))h(t)dt, where x_p^±(t,λ) represents the p'th branch of the eigenfunctions of the nonlinear problem. On the other hand, in the superlinear case (Hypothesis H-4b), if

    1 ≤ x f_x'(s,x)/f(s,x) < 1/A_{p+1},   0 ≤ s ≤ 1, −∞ < x < +∞,

then μ_{p+1} < λ < μ_p.

Thus, since (λI − Kf_x')⁻¹ can fail to exist as a bounded inverse only when λ = μ_n for some n = 0, 1, 2, ..., and since under the assumption that we have an oscillation kernel (Hypothesis H-6) all the μ_n's are a priori of unit multiplicity, we have stated in Corollary 6.6 certain a priori conditions under which we never need consider the bifurcation equation, eq. (4.1), in developing the p'th branch of eigenfunctions x_p^±(s,λ) of the Hammerstein operator from the small into the large. In essence we have opened up a corridor between the eigenvalues {μ_n} through which to extend the p'th branch uniquely.
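The corridor can be watched numerically. A minimal sketch (assumed setup: a discretized Green's-function kernel, the sublinear nonlinearity f(x) = arctan x, and the branch p = 0, for which only the lower bound μ₀ < λ of Theorem 6.2 applies) solves the discretized eq. (6.2) by fixed-point iteration and then locates the eigenvalue of the linearization:

```python
import numpy as np

n = 100
t = (np.arange(n) + 0.5) / n
# Assumed kernel: Green's function of -u'', u(0) = u(1) = 0, times dt.
K = np.minimum.outer(t, t) * (1.0 - np.maximum.outer(t, t)) / n

mu0_origin = np.linalg.eigvalsh(K)[-1]      # mu_0^(0), since f_x'(s,0) = 1
lam = 0.8 * mu0_origin                      # sublinear: branch lives at lam < mu_0^(0)

x = np.ones(n)                              # iterate x <- K arctan(x) / lam
for _ in range(2000):
    x = K @ np.arctan(x) / lam
assert np.linalg.norm(lam * x - K @ np.arctan(x)) < 1e-10
assert np.all(x > 0.0)                      # p = 0: no nodes on the branch

w = 1.0 / (1.0 + x**2)                      # f_x'(s, x_0(s, lam)) on the branch
mu0 = np.linalg.eigvalsh(np.sqrt(np.outer(w, w)) * K)[-1]
assert mu0 < lam                            # Theorem 6.2: mu_0* < lam*
```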

The definition of the number A_p can certainly be refined. This quantity was used in the proof of Theorem 6.5 (inequalities (6.11) and (6.13)) to represent an upper bound for the ratios μ_p(K, f_{x_p*}/x_p*)/μ_{p-1}(K, f_{x_p*}/x_p*) and μ_{p+1}(K, f'_{x_p*})/μ_p(K, f'_{x_p*}). In the definition of A_p (Lemma 6.4) we took the sup of the ratio μ_p(K,φ)/μ_{p-1}(K,φ) as φ assumed values in the set S₁⁺ = {φ | φ ∈ C(0,1), ‖φ‖ = 1, φ ≥ 0}. Clearly however it would have been better to have taken the sup over a more restricted set of functions which would have reflected somehow the property that functions on the p'th branch x_p* = x_p^±(s,λ*) have exactly p nodes on the open interval 0 < s < 1, (see Theorem 6.1).

To begin with, we see that in the sublinear case (H-4a) we have the inequalities A(s) ≤ f(s,x_p*)/x_p* ≤ f_x'(s,0) and A(s) ≤ f_x'(s,x_p*) ≤ f_x'(s,0), while in the superlinear case f_x'(s,0) ≤ f(s,x_p*)/x_p* ≤ A(s) and f_x'(s,0) ≤ f_x'(s,x_p*) ≤ A(s). Here A(s) is the function appearing in the statement of Hypothesis H-5. Hence the set over which we compute the sup in the definition of A_p should reflect this fact.

We confine ourselves to the sublinear case (H-4a), the superlinear case being handled in a similar way. Let

    S_{A,f₀'} = {φ | φ ∈ C(0,1), A(s) ≤ φ(s) ≤ f_x'(s,0)},

and consider the cone in S⁺ all of whose rays intersect this convex set:

    C_{A,f₀'} = {φ | κφ(s) ∈ S_{A,f₀'} for some κ > 0}.

Then we may use

    A_p' = sup_{φ ∈ S₁⁺ ∩ C_{A,f₀'}} μ_p(K,φ)/μ_{p-1}(K,φ),   A_p' ≤ A_p < 1,

as a refined definition of the number A_p to be used in inequalities (6.11) and (6.13) (the latter in the superlinear case).

We can refine further the definition of A_p. Let

    S_{A,f₀',p} = {φ | φ ∈ C(0,1), A(s) ≤ φ(s) ≤ f_x'(s,0); f_x'(s,0) − φ(s) has exactly p zeros on 0 < s < 1},

and then consider the cone in S⁺ all of whose rays intersect this set:

    C_{A,f₀',p} = {φ | κφ(s) ∈ S_{A,f₀',p} for some κ > 0}.

Then we may employ

    A_p'' = sup_{φ ∈ S₁⁺ ∩ C_{A,f₀',p}} μ_p(K,φ)/μ_{p-1}(K,φ)

as a yet more refined definition which takes into account that in inequalities (6.11) and (6.13) we are really only interested in the p'th branch.

It is not known at present to what extent, if any, such refined definitions of A_p improve the sufficient conditions against secondary bifurcation of the p'th branch of eigenfunctions, given in Theorem 6.5 and in Corollary 6.6. It is clear that such refined definitions do not enhance the computability of A_p.

Finally an examination of the proof of Theorem 6.5 will suggest another condition against secondary bifurcation of the p'th branch of eigenfunctions of eq. (6.2), which is much less concrete than that given by inequalities (6.9). Namely, it is a condition which compares two functionals on S⁺ = {φ | φ ∈ C(0,1), φ ≥ 0}. In the sublinear case, hypothesis H-4a, the condition is that

    min_{0≤s≤1} [φ(s) f_x'(s,φ(s))/f(s,φ(s))] > μ_p(K, f(·,φ)/φ)/μ_{p-1}(K, f(·,φ)/φ),   φ ∈ S⁺,          (6.15a)

while in the superlinear case, hypothesis H-4b, the condition is that

    max_{0≤s≤1} [φ(s) f_x'(s,φ(s))/f(s,φ(s))] < μ_p(K, f(·,φ)/φ)/μ_{p+1}(K, f(·,φ)/φ),   φ ∈ S⁺.          (6.15b)

(We remember that f(s,φ)/φ and f_x'(s,φ) are even functions of φ, so that an oscillatory φ(s) would satisfy the inequality in (6.15) along with |φ(s)|.)

In view of Lemma 6.4, the right-hand functional in inequality (6.15a) is less than unity, while the right-hand functional in inequality (6.15b) exceeds unity. Hence the conditions are realizable for sublinear, respectively superlinear, functions f(s,x).

Theorem 6.7: Under hypothesis H-4a (sublinearity) let f(s,x) be such that condition (6.15a) is satisfied. Then λ* < μ_{p-1}*, where μ_{p-1}* is the (p−1)'th eigenvalue of eq. (6.1) with y(s) = x_p* = x_p^±(s,λ*). Again under hypothesis H-4b (superlinearity), let f(s,x) be such that condition (6.15b) is satisfied. Then μ_{p+1}* < λ*.

Proof: We show the proposition in the sublinear case, the superlinear case being shown similarly. We split the kernel: K = H·H, and have eq. (6.10). Then as in inequality (6.11) we have

    λ* = δ_p = μ_p(K, f_{x_p*}/x_p*)
        ≤ [μ_p(K, f_{x_p*}/x_p*)/μ_{p-1}(K, f_{x_p*}/x_p*)] max_{u⊥y₁,...,y_{p-2}} (H (f_{x_p*}/x_p*) Hu, u)/(u,u),          (6.16)

where the arbitrary set of p−2 elements y₁, ..., y_{p-2} in L₂(0,1) can be the eigenelements of H f'_{x_p*} H. In inequality (6.16) we have used the fact that μ_p(K, f_{x_p*}/x_p*)/μ_{p-1}(K, f_{x_p*}/x_p*) is a pure number. Then in view of condition (6.15a), we can write, as in inequality (6.12):

    [μ_p(K, f_{x_p*}/x_p*)/μ_{p-1}(K, f_{x_p*}/x_p*)] ∫₀¹ [f(s,x_p*(s,λ*))/x_p*(s,λ*)] [Hu]² ds
        < ∫₀¹ [x_p*(s,λ*) f_x'(s,x_p*(s,λ*))/f(s,x_p*(s,λ*))] [f(s,x_p*(s,λ*))/x_p*(s,λ*)] [Hu]² ds
        = ∫₀¹ f_x'(s,x_p*(s,λ*)) [Hu]² ds = (H f'_{x_p*} Hu, u).

Then using inequality (6.16) we have

    λ* < max_{u⊥y₁,...,y_{p-2}} (H f'_{x_p*} Hu, u)/(u,u) = μ_{p-1}*,

which proves the sublinear case. As mentioned, the superlinear case, utilizing condition (6.15b), is similar, and the proof can be patterned after that of Theorem 6.5.

There follows immediately the result:

Corollary 6.8: In the sublinear case (Hypothesis H-4a), if

    μ_p(K, f(·,φ)/φ)/μ_{p-1}(K, f(·,φ)/φ) < min_{0≤s≤1} [φ(s) f_x'(s,φ(s))/f(s,φ(s))] ≤ 1,   φ ∈ S⁺,

then μ_p < λ < μ_{p-1}, and there is no secondary bifurcation of the p'th branch of eigenfunctions of eq. (6.2). In the superlinear case (Hypothesis H-4b), if

    1 ≤ max_{0≤s≤1} [φ(s) f_x'(s,φ(s))/f(s,φ(s))] < μ_p(K, f(·,φ)/φ)/μ_{p+1}(K, f(·,φ)/φ),   φ ∈ S⁺,

then μ_{p+1} < λ < μ_p, and again there is no secondary bifurcation.

Of course, in this corollary, μ_p is the p'th eigenvalue of the operator Kf_x'h = ∫₀¹ K(s,t)f_x'(t,x_p^±(t,λ))h(t)dt, where x_p^±(t,λ) represents the p'th branch of eigenfunctions of nonlinear problem (6.2).

It would be interesting to find out to what extent condition (6.15) might be a necessary condition for no secondary bifurcation of the p'th branch. Certainly for Hammerstein operators under hypothesis H-4a (sublinearity), it is necessary for no secondary bifurcation that λ* < μ_{p-1}*, i.e., that

    min_{z₁,...,z_{p-1}} max_{h⊥z₁,...,z_{p-1}} (H (f_{x_p*}/x_p*) Hh, h)/(h,h) < min_{v₁,...,v_{p-1}} max_{h⊥v₁,...,v_{p-1}} (H f'_{x_p*} Hh, h)/(h,h),

using obvious notation from the proofs of Theorems 6.5 and 6.7. Here z₁, ..., z_{p-1} are of course the first p−1 eigenelements of the operator H (f_{x_p*}/x_p*) H. Then for some h̄ ⊥ z₁, ..., z_{p-1}, we have

    (H (f_{x_p*}/x_p*) Hh̄, h̄)/(h̄,h̄) ≤ λ* < μ_{p-1}(K, f'_{x_p*})
        = [μ_{p-1}(K, f'_{x_p*})/μ_p(K, f'_{x_p*})] μ_p(K, f'_{x_p*})
        ≤ [μ_{p-1}(K, f'_{x_p*})/μ_p(K, f'_{x_p*})] (H f'_{x_p*} Hh̄, h̄)/(h̄,h̄),

which leads immediately to

    (H (f_{x_p*}/x_p*) Hh̄, h̄)/(h̄,h̄) < [μ_{p-1}(K, f'_{x_p*})/μ_p(K, f'_{x_p*})] (H f'_{x_p*} Hh̄, h̄)/(h̄,h̄).

Thus, cancelling denominators, we have

    (H (f_{x_p*}/x_p*) Hh̄, h̄) − [μ_{p-1}(K, f'_{x_p*})/μ_p(K, f'_{x_p*})] (H f'_{x_p*} Hh̄, h̄) < 0,

which has the concrete representation

    ∫₀¹ { f(s,x_p(s,λ*))/x_p(s,λ*) − [μ_{p-1}(K, f'_{x_p*})/μ_p(K, f'_{x_p*})] f_x'(s,x_p(s,λ*)) } [Hh̄(s)]² ds < 0.

For this it is necessary that

    f(s,x_p(s,λ*))/x_p(s,λ*) < [μ_{p-1}(K, f'_{x_p*})/μ_p(K, f'_{x_p*})] f_x'(s,x_p(s,λ*)),

or

    x_p(s,λ*) f_x'(s,x_p(s,λ*))/f(s,x_p(s,λ*)) > μ_p(K, f'_{x_p*})/μ_{p-1}(K, f'_{x_p*})          (6.17)

on a subset of 0 < s < 1 of positive measure.

Necessary condition (6.17) certainly bears some resemblance to sufficient condition (6.15a) for the sublinear case, but these two conditions also show a conjugate feature: f_{x_p*}/x_p* occurs in the sufficient condition while f'_{x_p*} occurs in the necessary condition.

A necessary condition for the superlinear case is handled in the same way.

7. Extension of Branches of Eigenfunctions of Hammerstein Operators.

As remarked in the text of section 6 prior to introducing Theorem 6.2, Theorem 4.2 is applicable in defining the branches of eigenfunctions of the nonlinear Hammerstein equation, eq. (6.2), in a small neighborhood of the origin and in a neighborhood of a primary bifurcation point. Indeed under Hypotheses H-2 through H-6 there exist exactly two branches x_p^±(s,λ) emanating from the trivial solution at each primary bifurcation point λ = μ_p^(0). In the sublinear case (H-4a) these exist for λ < μ_p^(0), while in the superlinear case (H-4b) they exist for λ > μ_p^(0). By the supposition of oddness (H-2), the two branches x_p^±(s,λ) differ only in sign.

In order to employ considerations of Theorems 2.3 and 2.4 to extend the p'th branch x_p^±(s,λ) from the small into the large, we needed assurance that there existed some ordinary point (x_p*,λ*) on that branch, i.e., a point such that λ*I − T'(x_p*) has a bounded inverse. This assurance is given by Corollary 6.6 or Corollary 6.8 under the assumption that either condition (6.9) or condition (6.15) holds, whether we have sublinearity (H-4a) or superlinearity (H-4b). Moreover, we shall see that either of these corollaries gives assurance that, a priori, all points (x_p,λ) on the p'th branch x_p^±(s,λ) are indeed ordinary points. Of course the latter can be inferred also from Theorem 2.4 once a single ordinary point is found, but on the basis of Theorem 2.4 alone there is no assurance that the branch cannot terminate at a singular point.

Accordingly we invoke Theorem 2.4 and state that there does exist a branch x_p^±(s,λ), or a "unique maximal sheet," of eigenfunctions of problem (6.2) emanating from the trivial solution at the primary bifurcation point λ = μ_p^(0). The only finite boundary point such a sheet may have is a point (x_p*,λ*) such that λ*I − Kf'_{x_p*} has no bounded inverse.
Theorem 7-!: The branch xp~(s,k) has no finite boundary point apart
from x = @, and may therefore be continued indefinitely.

Proof: If there were such a boundary point (x_p*,λ*), then x_p(s,λ) → x_p* as λ → λ* in the C(0,1) norm. By Theorem 6.1, x_p*(s,λ*) has exactly p nodes on 0 < s < 1. Accordingly, in view of Theorem 6.2 and either Theorem 6.7 or Theorem 6.9, we have μ_p* < λ* < μ_{p+1}* in the sublinear case and μ_{p+1}* < λ* < μ_p* in the superlinear case. Hence λ*I − Kf'_{x_p*} must have a bounded inverse, i.e. λ* is not an eigenvalue of Kf'_{x_p*}. This however is a contradiction, since by Theorem 2.4 a boundary point is an exceptional point.

Theorem 7.2: There exists a number λ̄ ≥ 0 such that lim sup_{λ→λ̄} ||x_p(s,λ)|| = ∞. (Note: In the superlinear case it is possible that λ̄ = ∞.)

Proof: We assume the sublinear case (H-4a), the superlinear case being similar. Let

 Π_p = {λ | 0 < λ < μ_p^(0); x_p^±(s,λ) exists and is continuous in λ, uniformly with respect to 0 ≤ s ≤ 1}.

By sublinearity (H-4a) and Theorem 4.2, Π_p includes some small interval (μ_p^(0) − ε, μ_p^(0)), a left neighborhood of μ_p^(0). Let us suppose, contrary to the statement of the theorem, that there exists a number M > 0 such that ||x_p(s,λ)|| ≤ M, λ ∈ Π_p. Then we show that Π_p is closed relative to (0, μ_p^(0)). Indeed let {λ_k}, λ_k ∈ Π_p, k = 0,1,2,..., be a convergent sequence. Each function x_p^k = x_p(s,λ_k) solves eq. (6.2) with λ = λ_k; since the operator Kf is completely continuous and ||x_p^k|| ≤ M, there exists a subsequence {λ_{k_l}} such that {Kf x_p^{k_l}} converges in norm. We have Kf x_p^{k_l} = λ_{k_l} x_p^{k_l} however, whence {x_p^{k_l}} converges in norm. (We may consider here that {λ_{k_l}} is bounded away from 0; otherwise we should already be done.) Then x̄_p = lim_{l→∞} x_p^{k_l} is a solution of eq. (6.2) with λ = λ̄ = lim_{l→∞} λ_{k_l}, i.e. λ̄x̄_p = Kf x̄_p, and by Theorem 6.1 x̄_p has exactly p nodes in 0 < s < 1. Hence λ̄ ∈ Π_p, which shows that Π_p is closed. On the other hand Π_p must be open relative to (0, μ_p^(0)) since, given λ̄ ∈ Π_p, x_p(s,λ̄) = x̄_p exists such that λ̄I − Kf'_{x̄_p} has a bounded inverse by Theorem 7.1, and Theorem 2.3 indicates that there is a neighborhood N_λ̄ of λ̄ such that x_p(s,λ) exists, λ ∈ N_λ̄; i.e. N_λ̄ ⊂ Π_p. A set such as Π_p which is both open and closed relative to (0, μ_p^(0)) must either be empty or be equal to (0, μ_p^(0)). We have seen however that Π_p is not empty, since it contains a left neighborhood of μ_p^(0). Hence Π_p = (0, μ_p^(0)) under the assumption that ||x_p(s,λ)|| ≤ M, λ ∈ Π_p. But by Theorem 6.2 we must have μ_p < λ for λ ∈ Π_p, where μ_p is the p'th eigenvalue of linearized eq. (6.1) with y(s) = x_p(s,λ).
Now for functions x(s) ∈ C(0,1) with ||x|| ≤ M, we have f'_x(s,x) ≥ f'_x(s,M) by the sublinearity assumption (H-4a). Let μ_p^M be the p'th eigenvalue of the operator Kf'_M = ∫₀¹ K(s,t)f'_x(t,M)·dt. Using the symmetrized operators of Theorem 6.2, which are such that Hf'_xH has the same spectrum as Kf'_x, we can write

 μ_p^M = min_{v₁,...,v_{p−1}} max_{u⊥v₁,...,v_{p−1}} (Hf'_M Hu,u)/(u,u) ≤ max_{u⊥w₁,...,w_{p−1}} (Hf'_M Hu,u)/(u,u) ≤ max_{u⊥w₁,...,w_{p−1}} (Hf'_{x_p}Hu,u)/(u,u) = μ_p,

where we have indicated here by w₁,...,w_{p−1} the first p−1 eigenelements of the operator Hf'_{x_p}H. Hence for λ ∈ Π_p we necessarily have 0 < μ_p^M ≤ μ_p < λ under the assumption that ||x_p(s,λ)|| ≤ M. This is a contradiction, since we also proved that Π_p = (0, μ_p^(0)). Thus x_p(s,λ) cannot remain bounded; there exists a number λ̄ ≥ 0 such that, as λ → λ̄ with λ > λ̄, lim sup ||x_p(s,λ)|| = ∞.

In the superlinear case (H-4b) the argument is the same, except that the set Π_p, where x_p^±(s,λ) exists and is continuous, lies to the right of the primary bifurcation point μ_p^(0), and is a subset of the interval (μ_p^(0), +∞). This proves the theorem.

Now let us consider the linear eigenvalue problem

 νh(s) = ∫₀¹ K(s,t)A(t)h(t)dt  (7.1)

formed with the function A(s) ≥ 0 of Hypothesis H-5 of Section 6. Problem (7.1) possesses the positive sequence of simple eigenvalues {ν_n}. Finally, to prove the following result, we must strengthen H-5:

H-7: |f(s,ξ) − A(s)ξ| ≤ M₁, 0 ≤ s ≤ 1, −∞ < ξ < +∞, where M₁ is a constant.

Theorem 7.3: λ̄ = ν_p, where λ̄ ≥ 0 is the number appearing in the last result; i.e. lim sup_{λ→ν_p} ||x_p^±(s,λ)|| = ∞.

Proof: Let K_A be the operator of eq. (7.1). By Theorem 7.2 there exists a sequence λ_k → λ̄ such that lim ||x_p(s,λ_k)|| = ∞, and x_p(s,λ_k) satisfies eq. (6.2) with λ = λ_k. We subtract the element K_A x_p from each side of eq. (6.2) to obtain

 (λ_k I − K_A)x_p(s,λ_k) = ∫₀¹ K(s,t)[f(t,x_p(t,λ_k)) − A(t)x_p(t,λ_k)]dt,

whence, using the sup norm of C(0,1),

 ||x_p(s,λ_k)|| ≤ ||(λ_k I − K_A)^{-1}||·||K||·||f(s,x_p(s,λ_k)) − A(s)x_p(s,λ_k)||.

Then by H-7, ||x_p(s,λ_k)|| ≤ ||(λ_k I − K_A)^{-1}||·||K||·M₁. Thus lim_{λ_k→λ̄} ||x_p(s,λ_k)|| = ∞ implies that λ̄ ∈ {ν_n}, where {ν_n} are the eigenvalues of eq. (7.1).

Suppose now that λ̄ = ν_m > 0. We compare the functions h_m and x_p(s,λ_k), where h_m(s) is the normalized eigenfunction associated with ν_m:

 h_m(s) − x_p(s,λ_k)/||x_p(s,λ_k)|| = (1/ν_m)∫₀¹ K(s,t)A(t)h_m(t)dt − (1/(λ_k||x_p(s,λ_k)||))∫₀¹ K(s,t)f(t,x_p(t,λ_k))dt

  = (1/ν_m − 1/λ_k)∫₀¹ K(s,t)A(t)h_m(t)dt + (1/λ_k)∫₀¹ K(s,t)A(t)[h_m(t) − x_p(t,λ_k)/||x_p(s,λ_k)||]dt + (1/(λ_k||x_p(s,λ_k)||))∫₀¹ K(s,t)[A(t)x_p(t,λ_k) − f(t,x_p(t,λ_k))]dt.

Now ||(1/ν_m − 1/λ_k)∫₀¹ K(s,t)A(t)h_m(t)dt|| → 0 as λ_k → ν_m, and likewise the last term tends to zero in view of H-7, since ||x_p(s,λ_k)|| → ∞. It follows that x_p(s,λ_k)/||x_p(s,λ_k)|| → h_m(s); since x_p(s,λ_k) has exactly p interior nodes, so does h_m, whence m = p and λ̄ = ν_p.
8. The Example of Section 1, Reconsidered.

Having developed some methods for treating more general cases, let us now reconsider the example of Section 1, namely eq. (1.1). This equation now appears as an eigenvalue problem for a superlinear Hammerstein operator of odd type. In fact we find that Hypotheses H-1 through H-3, H-4b, and H-5 with A(s) ≡ A = ∞ are satisfied. The rank-two kernel does not impress us as being an oscillation kernel, in that it is possible for it to assume negative values; but in a simple example we can live with whatever deficient properties a specific kernel does have, if it has any, and find out where we are led.

Accordingly let us begin by treating eq. (1.1) in a fashion reminiscent of Section 2. Namely, let h, δ be increments added to φ_o, λ_o respectively, where we assume that (φ_o,λ_o) represents a pair which satisfies eq. (1.1). We have then

 λ_oφ_o + λ_oh + δφ_o + δh = (2/π)∫₀^π [a sin s sin t + b sin 2s sin 2t][(φ_o(t)+h(t)) + (φ_o(t)+h(t))³]dt.  (8.1)

The fact that (φ_o,λ_o) solves eq. (1.1) allows some cancellations in eq. (8.1); after rearrangement we get

 {λ_oI − (2/π)∫₀^π [a sin s sin t + b sin 2s sin 2t][1+3φ_o²(t)]·dt}h
  = −δφ_o − δh + (2/π)∫₀^π [a sin s sin t + b sin 2s sin 2t][3φ_o(t)h²(t)+h³(t)]dt ≡ F_δ(h).  (8.2)

At the trivial solution φ_o(s) ≡ 0 we have the linearization eq. (1.5), with two eigenvalues a, b, with a > b as assumed, to which are associated the eigenspaces spanned respectively by sin s, sin 2s. We thus have two primary bifurcation points, a, b, where the operator on the left in eq. (8.2) has no inverse. Corresponding to eq. (3.2) we have the following equation to be solved for h at the bifurcation point λ = a (with δ = λ−a):

 h = MF_δ(h) + ξ sin s = M(I−E){−δh + (2/π)∫₀^π [a sin s sin t + b sin 2s sin 2t]h³(t)dt} + ξ sin s,

where here E is the orthogonal projection on the null space spanned by sin s, and M is the pseudo-inverse. This gives

 h = M{−δ(I−E)h + (2/π)∫₀^π b sin 2s sin 2t·h³(t)dt} + ξ sin s.  (8.3)

Putting h_o = ξ sin s in an iteration process, we find that the integral in eq. (8.3) vanishes, so that h₁ = ξ sin s. Likewise every succeeding iterate is equal to ξ sin s, and therefore h = V_δ(ξ sin s) = ξ sin s. Then the bifurcation equation, eq. (3.4), becomes

 EF_δ(ξ sin s) = E{−δξ sin s + (2/π)∫₀^π [a sin s sin t + b sin 2s sin 2t]ξ³ sin³t dt} = {−δξ + (3/4)aξ³} sin s = 0.  (8.4)

Eq. (8.4) has the trivial solution ξ = 0 and the nontrivial solutions ξ = ±(2/√3)√(δ/a) = ±(2/√3)√(λ/a − 1); the first branch of eigensolutions, obtained by substituting this ξ into eq. (8.3), is h(s) = ±(2/√3)√(λ/a − 1) sin s.



Again, we write eq. (3.2) for the bifurcation point at λ = b, with δ = λ−b:

 h = M(I−E){−δh + (2/π)∫₀^π [a sin s sin t + b sin 2s sin 2t]h³(t)dt} + ξ sin 2s,

where E is now the orthogonal projection onto the null space spanned by sin 2s. This gives

 h = M{−δ(I−E)h + (2/π)∫₀^π a sin s sin t·h³(t)dt} + ξ sin 2s.  (8.5)

Starting with the first iterate h_o = ξ sin 2s and substituting this on the right in eq. (8.5), we again have the integral vanishing, whence h₁ = ξ sin 2s. We can then see that h_n = ξ sin 2s also, so that h = V_δ(ξ sin 2s) = ξ sin 2s.

The bifurcation equation is now written

 EF_δ(ξ sin 2s) = E{−δξ sin 2s + (2/π)∫₀^π [a sin s sin t + b sin 2s sin 2t]ξ³ sin³2t dt} = {−δξ + (3/4)bξ³} sin 2s = 0.  (8.6)

Eq. (8.6) has the trivial solution ξ = 0 and the nontrivial solutions ξ = ±(2/√3)√(δ/b) = ±(2/√3)√(λ/b − 1). We solve eq. (8.5), with this solution of eq. (8.6), to get h(s) = ±(2/√3)√(λ/b − 1) sin 2s for the second branch of eigensolutions.

These branches of eigensolutions are exactly the same as those obtained in Section 1 by more elementary means. Also, since the expansion (3.5) is trivial in this case, the expressions φ₁(s,λ) = φ_o+h = ±(2/√3)√(λ/a − 1) sin s and φ₂(s,λ) = φ_o+h = ±(2/√3)√(λ/b − 1) sin 2s are valid in the large. There is no need for the process of Theorems 2.3 and 2.4. Of course one could follow the steps; the uniqueness property of Theorem 2.3 would yield no other result.
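That these expressions solve eq. (1.1) exactly, in the large, is easy to check by quadrature. The following sketch is plain Python and not from the text; the Simpson rule and the values a = 2, b = 1, λ = 3 are illustrative choices. It substitutes φ₁(s,λ) = (2/√3)√(λ/a − 1) sin s into the right-hand side of eq. (1.1) and compares the result with λφ₁(s):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule; n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3.0

a, b, lam = 2.0, 1.0, 3.0              # illustrative values with a > b
c = (2.0 / math.sqrt(3.0)) * math.sqrt(lam / a - 1.0)
phi = lambda t: c * math.sin(t)        # first branch phi_1(s, lambda)

def rhs(s):
    # (2/pi) * integral of [a sin s sin t + b sin 2s sin 2t](phi + phi^3) dt
    f = lambda t: (a * math.sin(s) * math.sin(t)
                   + b * math.sin(2 * s) * math.sin(2 * t)) \
                  * (phi(t) + phi(t) ** 3)
    return (2.0 / math.pi) * simpson(f, 0.0, math.pi)

# lambda * phi_1(s) should equal the Hammerstein operator applied to phi_1
for s in (0.3, 1.1, 2.5):
    assert abs(lam * phi(s) - rhs(s)) < 1e-8
```

The same check with sin 2s in place of sin s, and b in place of a, confirms the second branch.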

With this example in Section 1, however, we had secondary bifurcation on the 1st branch if b/a > 1/2. Here we study this possibility by learning how the eigenvalue μ₂ of the linearization

 μh(s) = (2/π)∫₀^π [a sin s sin t + b sin 2s sin 2t][1+3φ₁²(t,λ)]h(t)dt  (8.7)

behaves as the 1st branch φ₁(s,λ) evolves. The eigenvalue μ₁ does not bother us, since μ₁ = a+3(λ−a) = λ+2(λ−a) > λ. Of course this is what Theorem 6.2 would tell us. For μ₂ we have the expression μ₂ = −b + (2b/a)λ. Secondary bifurcation of the branch φ₁(s,λ) occurs if ever μ₂ = λ; this does happen if 2b/a > 1 but cannot happen if 2b/a ≤ 1. Hence we get the same condition as in Section 1. The secondary bifurcation in this example occurs then at λ_sb = ab/(2b−a), with the solution φ_sb(s) = ±(2/√3)√((a−b)/(2b−a)) sin s.
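Since the kernel has rank two, the operator in eq. (8.7) leaves span{sin s, sin 2s} invariant, and its two eigenvalues can be computed by quadrature. The sketch below is plain Python, not from the text; the values a = 2, b = 1.6 are illustrative, chosen so that 2b/a > 1. It checks the formulas μ₁ = 3λ − 2a and μ₂ = −b + (2b/a)λ against direct integration, and verifies that μ₂ = λ exactly at λ_sb = ab/(2b − a):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule; n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3.0

a, b = 2.0, 1.6                        # illustrative, with 2b/a > 1

def mu(lam):
    # weight 1 + 3*phi_1^2 along the first branch, phi_1^2 = c2 sin^2 t
    c2 = (4.0 / 3.0) * (lam / a - 1.0)
    w = lambda t: 1.0 + 3.0 * c2 * math.sin(t) ** 2
    # the cross terms vanish by symmetry, so the eigenvalues are diagonal
    m1 = (2.0 / math.pi) * a * simpson(lambda t: math.sin(t)**2 * w(t), 0, math.pi)
    m2 = (2.0 / math.pi) * b * simpson(lambda t: math.sin(2*t)**2 * w(t), 0, math.pi)
    return m1, m2

for lam in (2.0, 3.0, 5.0):
    m1, m2 = mu(lam)
    assert abs(m1 - (3*lam - 2*a)) < 1e-9
    assert abs(m2 - (-b + 2*b*lam/a)) < 1e-9

lam_sb = a*b/(2*b - a)                 # secondary bifurcation point
assert abs(mu(lam_sb)[1] - lam_sb) < 1e-9
```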

There is of course the question of the bifurcation analysis at the secondary bifurcation point (φ_sb,λ_sb). In Section 1 it was found, using direct elementary methods, that two sub-branches or twigs split away from the main branch φ₁(s,λ) at this point and evolve to the right. When it comes to repeating this bifurcation analysis by use of the bifurcation equation, eq. (4.2), we find that difficulties arise. When we compute the coefficients using (5.3), we find that a₁ vanishes; the nonvanishing of a₁ is essential in the application of the Newton Polygon method as discussed by J. Dieudonné [ref. 8, p. 4]. In treating bifurcation at the origin as in Section 4, we were able to handle a case where a₁ vanished, since there it was clear that l₁(δ,ξ₁) in eq. (4.2) vanished also. In this example, where we have secondary bifurcation at (φ_sb,λ_sb), we have yet to learn how to work out the sub-branches using the bifurcation equation.*

The vanishing of a₁ in eq. (4.2) at a secondary bifurcation point as above is a peculiarity of Hammerstein operators ∫K(s,t)f(t,x(t))dt = Kfx for which K(1−s,1−t) = K(s,t) and f(1−s,x) = f(s,x). The example of Section 1 is of this type. More general examples lead to the nonvanishing of a₁ in eq. (4.2) at a secondary bifurcation point, whence the Newton Polygon method is applicable to eq. (4.2) as it stands. It should be noted however that in a case of nonvanishing a₁ in eq. (4.2) one usually has a change in the direction of evolution of the branch of eigensolutions at the secondary bifurcation point, rather than a formation of sub-branches as in the problem of Section 1. Some writers prefer to call such a point a limit point of the branch of eigensolutions, thus preserving the term "secondary bifurcation" for the more intuitive idea of the splitting of a branch. In any case, however, the bifurcation analysis must be used in general.

We can compare and assess the two conditions against secondary bifurcation given in Section 6, namely (6.9b) and (6.15b) respectively. We found in Section 1 that a necessary and sufficient condition against secondary bifurcation in the example was that b/a ≤ 1/2. How do conditions (6.9b) and (6.15b) compare with this?

*See Appendix, however.


With respect to condition (6.9b), the quantity on the left, namely x f'_x(s,x)/f(s,x) = (1+3x²)/(1+x²), varies between 1 and 3. For the condition to be satisfied, Λ₂ must be no higher than 1/3. Now in the present example Λ₂ can be given a most refined definition. In connection with eq. (8.7) we saw that the two eigenvalues μ₁(K,1+3φ₁²) = 3λ−2a and μ₂(K,1+3φ₁²) = −b+(2b/a)λ evolved as the first branch φ₁(s,λ) evolved. We know these expressions for μ₁ and μ₂ only because we know a priori, independently of μ₁ and μ₂, the expression for φ₁(s,λ) in this example. Hence we can compute the maximum of μ₂(K,1+3φ₁²)/μ₁(K,1+3φ₁²) over this known branch only, rather than over the positive cone. The maximum of the ratio is b/a and is assumed at λ = a, i.e. at the origin φ ≡ 0. This allows interpretation of condition (6.9b) in terms of eigenvalues of the linearization at the origin, namely a and b. Condition (6.9b) therefore requires b/a ≤ 1/3 as a condition for no secondary bifurcation in this example. Hence condition (6.9b), while being a sufficient condition, is far from being a necessary condition for no secondary bifurcation.

Condition (6.15b) on the other hand requires that

 max_{0≤s≤1} [1+3φ₁²(s,λ)]/[1+φ₁²(s,λ)] ≤ λ/μ₂(K,1+φ₁²)

as a condition against secondary bifurcation. With φ₁ = φ₁(s,λ) = ±(2/√3)√(λ/a − 1) sin s we have μ₂(K,1+φ₁²) = (b/3)(1+2λ/a), and the condition is satisfied if

 3(4λ−3a)/(4λ−a) ≤ 3aλ/(b(a+2λ)).

Indeed the latter is satisfied for a ≤ λ < ∞ provided b/a ≤ 1/2. Hence in the example, condition (6.15b) stacks up quite well as a condition against secondary bifurcation.
9. A Two-Point Boundary Value Problem.


In eq. (6.2), if we let K(s,t) be the Green's function

 G(s,t) = { (π−t)s/π, s ≤ t; (π−s)t/π, s > t } = Σ_{n=1}^∞ (2/π)(1/n²) sin ns sin nt, 0 ≤ s, t ≤ π,  (9.1)

we have an integral equation which is equivalent to the following two-point boundary value problem (we have changed the upper limit of integration in eq. (6.2) to π):

 x_ss + λ^{-1}f(s,x(s)) = 0,  x(0) = 0, x(π) = 0.  (9.2)

The question arises of what alternative methods there may be for studying branches of eigenfunctions and their bifurcations for the equivalent two-point boundary value problem. In this connection please see papers by the author [ref. 17] and by C. V. Coffman [ref. 6]. Coffman gives certain interesting conditions against secondary bifurcation in problems such as eq. (9.2). In particular, there is no secondary bifurcation of the branches of eigenfunctions in the autonomous case, f(s,x(s)) ≡ f(x(s)). We now study the autonomous case to see how the absence of secondary bifurcation on any of the branches of eigenfunctions can be related to our considerations involving Hammerstein's equation, eq. (6.2).

We take a definite case; namely, let us deal with the Hammerstein equation

 λφ(s) = ∫₀^π G(s,t)[φ(t)+φ³(t)]dt  (9.3)

and the equivalent boundary value problem

 φ_ss + λ^{-1}[φ+φ³] = 0,  φ(0) = 0, φ(π) = 0.  (9.4)

The first step in our solution of problem (9.4) is to let σ = λ^{-1/2}s. Then φ_s = λ^{-1/2}φ_σ and φ_ss = λ^{-1}φ_σσ, whence the problem takes the form

 φ_σσ + [φ+φ³] = 0,  φ(0) = 0, φ(λ^{-1/2}π) = 0.  (9.5)

This sets the stage for using the initial value problem

 φ_σσ + [φ+φ³] = 0,  φ(0) = 0, φ_σ(0) = c > 0  (9.6)

as a tool in the solution of boundary value problems (9.5) and (9.4).

Problem (9.6) has the first integral

 φ_σ² + φ² + φ⁴/2 = c²,

which defines a closed trajectory in the (φ,φ_σ) phase plane (see Fig. 9.1). Then φ_σ = ±√(c² − φ² − φ⁴/2), and in the first quadrant we have

 σ = ∫₀^φ dx/√(c² − x² − x⁴/2).  (9.7)
We factor the denominator in the integral of eq. (9.7):

"2
~4~ + Cp - C =
- 94 -

Defining p2=-l+ ~ and q2 = i + i + ~ 2 c2 , we write (9.7) as follows:

~0e~OE ~"~3 t ~ ,
Om$~p,

= ,~ i sn'l(sin $,k)

where sn is the Jacobi Elliptic function of the ist kind [ref. 5, P. 50

#214.00] and we have the following definitions:

~ _L -~÷~ /2,% 2)
and $ -- sin"I "P ~]~ -q .....
= 2 2 = ~
P +q 2 ~
= .o2(2.q2) .

Inverting, we get sin ψ = sn(ωσ, k), where ω = √((p²+q²)/2) = (1+2c²)^{1/4}. Solving for φ, we have

 φ(σ) = p q sn(ωσ,k)/√(q² + p² cn²(ωσ,k)),

which is the solution of problem (9.6) for 0 ≤ σ ≤ ∫₀^p √2 dx/√((p²−x²)(x²+q²)), and indeed for all subsequent values of σ.


The elliptic function sn(x,k) has the real period 4K (and of course also a complex period), where

 K(k) = ∫₀¹ dt/√((1−t²)(1−k²t²))

is monotone increasing in k [ref. 5, p. 19]. Hence if we seek to solve the boundary value problem (9.5), we are interested in matching the zeros of sn(ωσ,k), which occur at ωσ = 2nK(k), n = 0,1,2,..., with the value σ = λ^{-1/2}π. Let the first such zero be ξ₁; then we have

 ξ₁(c) = 2K(k)/ω = 2K(k)/(1+2c²)^{1/4},

where we remember that k = k(c), k² = (1/2)(1 − 1/√(1+2c²)). We have π/2 ≤ K(k) < K(1/√2), so that ξ₁(c) → 0 as c → ∞ and ξ₁(c) → π as c → 0. It can also be verified that ξ₁(c) is monotone decreasing, c > 0. Of interest is the solution c₁ of the equation ξ₁(c) = λ^{-1/2}π (see Fig. 9.2). With c₁ thus defined, we can write

 λ = π²/ξ₁²(c₁) = π²√(1+2c₁²)/(4K²(k(c₁))),  (9.8)

which goes to 1 as c₁ → 0, and to ∞ as c₁ → ∞.

Thus c₁ is the initial slope of an eigenfunction of problem (9.5), and eq. (9.8) tells how the eigenvalue λ varies as we vary the initial slope. Thus eq. (9.8), together with problem (9.6), defines a branch of eigenfunctions of boundary value problem (9.5) which is parameterized by the initial slope c₁ > 0. It is the zero'th branch of eigenfunctions, which bifurcates from the trivial solution φ ≡ 0 at the zero'th bifurcation point λ = 1, which is the zero'th eigenvalue of the linearized problem at the origin:

 φ_σσ + φ = 0,  φ(0) = 0, φ(λ^{-1/2}π) = 0.  (9.9)

Linear problem (9.9) has simple eigenvalues at λ = 1/n², n = 1,2,....
In a similar fashion we match the n'th zero ξ_n of the solution of problem (9.6) with λ^{-1/2}π. We have

 ξ_n(c) = 2nK(k(c))/(1+2c²)^{1/4} = n ξ₁(c),

and solution of the equation ξ_n(c) = λ^{-1/2}π yields a value c_n. Then we have

 λ = π²/ξ_n²(c_n) = π²√(1+2c_n²)/(4n²K²(k(c_n))) → 1/n² as c_n → 0,  (9.10)

and λ → ∞ as c_n → ∞.

Thus is yielded the (n−1)'th branch of eigenfunctions of problem (9.5), that branch with n−1 interior zeros. It is parameterized by the initial slope c_n; eq. (9.10) gives the eigenvalue, while problem (9.6) with c = c_n gives the eigenfunctions. The primary bifurcation from φ ≡ 0 is at λ = 1/n². For future reference we note that the n'th maximum of the solution of problem (9.6) (or the n'th zero of the function ψ below) occurs at

 η_n(c) = (2n−1)K(k(c))/(1+2c²)^{1/4}.  (9.11)
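The formulas above can be checked numerically. A minimal sketch in plain Python (not from the text; the step size, the test value c = 1, and the tolerances are illustrative): K(k) is computed by the arithmetic-geometric mean, eq. (9.6) is integrated by fixed-step RK4, and the first zero of the integrated solution is compared with ξ₁(c) = 2K(k(c))/(1+2c²)^{1/4}; the branch eigenvalue (9.10) is checked to tend to 1/n² as c → 0:

```python
import math

def K(m):
    # complete elliptic integral of the first kind, parameter m = k^2,
    # via the arithmetic-geometric mean: K = pi / (2 * agm(1, sqrt(1-m)))
    a, g = 1.0, math.sqrt(1.0 - m)
    for _ in range(60):
        a, g = (a + g) / 2.0, math.sqrt(a * g)
    return math.pi / (2.0 * a)

def xi1(c):
    # first zero of the solution of (9.6): xi_1 = 2 K(k(c)) / (1+2c^2)^(1/4)
    m = 0.5 * (1.0 - 1.0 / math.sqrt(1.0 + 2.0 * c * c))
    return 2.0 * K(m) / (1.0 + 2.0 * c * c) ** 0.25

def lam(c, n):
    # branch eigenvalue, eq. (9.10): lambda = pi^2 / xi_n^2, with xi_n = n xi_1
    return math.pi ** 2 / (n * xi1(c)) ** 2

def first_zero_rk4(c, h=1e-4):
    # integrate phi'' = -(phi + phi^3), phi(0)=0, phi'(0)=c, to its first zero
    f = lambda y: (y[1], -(y[0] + y[0] ** 3))
    y, s = (0.0, c), 0.0
    while True:
        k1 = f(y)
        k2 = f((y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
        k3 = f((y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
        k4 = f((y[0] + h*k3[0], y[1] + h*k3[1]))
        ynew = (y[0] + h/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
                y[1] + h/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))
        s += h
        if s > h and ynew[0] <= 0.0:       # phi crossed back through zero
            return s
        y = ynew

assert abs(first_zero_rk4(1.0) - xi1(1.0)) < 1e-3
assert abs(lam(1e-6, 1) - 1.0) < 1e-5      # zero'th branch starts at lambda = 1
assert abs(lam(1e-6, 2) - 0.25) < 1e-5     # next branch starts at lambda = 1/4
```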

Now let us consider again the solution of problem (9.6), which forms a trajectory ψ² + [φ² + φ⁴/2] = c² in the (φ,ψ) phase plane (see Fig. 9.1). Here we have set ψ = φ_σ. Let ξ_ν, ν = 0,1,2,..., be the successive zeros of φ, and η_ν, ν = 1,2,3,..., the successive zeros of ψ, where we label ξ₀ = 0 arbitrarily. By inspection of Fig. 9.1 we note that

 sgn φ(σ,c) = (−1)^ν, ξ_ν < σ < ξ_{ν+1},
 sgn ψ(σ,c) = (−1)^ν, η_ν < σ < η_{ν+1} (with η₀ = 0).

We consider also the linearized initial value problem

 h_σσ + [1+3φ²]h = 0,  h(0) = 0, h_σ(0) = 1.  (9.12)

Problem (9.12) has a trajectory which also revolves around the origin of an (h,κ) phase plane, where κ = h_σ. In Fig. 9.3 we superimpose the two phase planes. Define σ̃_ν, ν = 0,1,2,..., to be the successive zeros of h (σ̃₀ = 0). By inspection we have

 sgn h(σ,c) = (−1)^ν, σ̃_ν < σ < σ̃_{ν+1} (Fig. 9.3).

If we multiply the differential equation h_σσ + [1+3φ²]h = 0 through by φ, multiply the differential equation φ_σσ + [φ+φ³] = 0 through by h, and subtract the latter from the former, we get (h_σφ − φ_σh)_σ = −2φ³h. Integration from ξ_ν to ξ_{ν+1} gives

 ψ(ξ_{ν+1},c)h(ξ_{ν+1},c) = ψ(ξ_ν,c)h(ξ_ν,c) + 2∫_{ξ_ν}^{ξ_{ν+1}} φ³(σ,c)h(σ,c)dσ.  (9.13)

Lemma 9.1: σ̃_ν < ξ_ν, ν = 1,2,3,...; in other words, the (h,κ) trajectory leads the (φ,ψ) trajectory in Fig. 9.3.

Proof: We employ induction. Assume the lemma is true for ν = 1,2,...,m, but not for m+1. Then we have σ̃_m < ξ_m < ξ_{m+1} ≤ σ̃_{m+1}. The integrand in eq. (9.13) (in which we put ν = m) is positive, since h and φ have the same sign between ξ_m and ξ_{m+1} under our assumption; and since sgn h(ξ_m,c) = (−1)^m and sgn ψ(ξ_m,c) = (−1)^m, we also have h(ξ_m,c)ψ(ξ_m,c) > 0. Thus by eq. (9.13) we should have h(ξ_{m+1},c)ψ(ξ_{m+1},c) > 0. The latter must be false however, since either sgn h(ξ_{m+1},c) = (−1)^m or h(ξ_{m+1},c) = 0 by our assumption, but sgn ψ(ξ_{m+1},c) = (−1)^{m+1}. This contradiction shows that σ̃_{m+1} < ξ_{m+1}, and proves the lemma.

A proof that φ_c = (∂/∂c)φ(σ,c) = h and ψ_c = (∂/∂c)ψ(σ,c) = κ can be patterned after a very similar proof in a published paper of the author [ref. 17, p. 132, Lemma 1].
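The identity φ_c = h can also be observed numerically. A minimal sketch in plain Python (not from the text; c = 1, the endpoint σ = 2, the step size, and the difference increment are illustrative choices) integrates (9.6) and (9.12) together and compares a central difference in c with h:

```python
import math

def integrate(c, s_end, h=1e-4):
    # RK4 for the coupled system (9.6) + (9.12):
    # y = (phi, phi', h, h'), phi'' = -(phi+phi^3), h'' = -(1+3 phi^2) h
    def f(y):
        return (y[1], -(y[0] + y[0]**3), y[3], -(1.0 + 3.0*y[0]**2)*y[2])
    y, s = (0.0, c, 0.0, 1.0), 0.0
    while s < s_end:
        k1 = f(y)
        k2 = f(tuple(y[i] + h/2*k1[i] for i in range(4)))
        k3 = f(tuple(y[i] + h/2*k2[i] for i in range(4)))
        k4 = f(tuple(y[i] + h*k3[i] for i in range(4)))
        y = tuple(y[i] + h/6*(k1[i]+2*k2[i]+2*k3[i]+k4[i]) for i in range(4))
        s += h
    return y

c, s_end, eps = 1.0, 2.0, 1e-5
phi_plus = integrate(c + eps, s_end)[0]
phi_minus = integrate(c - eps, s_end)[0]
h_val = integrate(c, s_end)[2]
# the central difference d(phi)/dc should match h = phi_c
assert abs((phi_plus - phi_minus) / (2*eps) - h_val) < 1e-4
```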

Lemma 9.2: ξ_ν < σ̃_{ν+1}, and η_{ν+1} < σ̃_{ν+1}, ν = 0,1,2,...; in other words, the lead of the (h,κ) trajectory over the (φ,ψ) trajectory in Fig. 9.3 is less than 90°.

Proof: The first statement follows from the second since, by inspection of Fig. 9.3, it is clear that ξ_ν < η_{ν+1}, ν = 0,1,2,.... The second statement can be proved by showing that sgn h(η_ν,c) = sgn φ(η_ν,c), ν = 1,2,3,... (with reference to Fig. 9.1). From the expression of the solution of problem (9.6) in terms of elliptic functions, we have φ(η_ν(c),c) = (−1)^{ν−1}p = (−1)^{ν−1}(√(1+2c²) − 1)^{1/2}. But ψ(η_ν(c),c) = 0. Therefore (d/dc)φ(η_ν(c),c) = h(η_ν,c) = (−1)^{ν−1}(d/dc)(√(1+2c²) − 1)^{1/2} = (−1)^{ν−1}·c/(p√(1+2c²)). Also we have sgn φ(η_ν,c) = (−1)^{ν−1}. Therefore sgn h(η_ν,c) = sgn φ(η_ν,c), ν = 1,2,3,..., which proves the lemma.

Theorem 9.3: There is no secondary bifurcation on any branch of eigenfunctions of problem (9.4).

Proof: Solutions of problem (9.5), equivalent to problem (9.4), are given by solutions of problem (9.6) with c = c_n > 0, where c_n solves the equation ξ_n(c) = λ^{-1/2}π. By eq. (9.10) we had λ = π²/ξ_n²(c_n) as the n'th branch eigenvalue of problem (9.5), expressed as a function of the initial slope c_n of the associated eigenfunction. In exactly the same way, the discrete eigenvalues {μ_ν} of the linearized boundary value problem

 h_σσ + (1+3φ_n²)h = 0,  h(0) = 0, h(μ^{-1/2}π) = 0  (9.14)

are obtained by matching the zeros σ̃_ν of the solution of the initial value problem (9.12) with the value μ^{-1/2}π; thus μ_ν = π²/σ̃_ν²(c_n). By Lemmas 9.1 and 9.2 we can then make the following comparison:

 μ_{n+1}(c_n) = π²/σ̃²_{n+1}(c_n) < π²/ξ_n²(c_n) = λ < π²/σ̃_n²(c_n) = μ_n(c_n), 0 < c_n < ∞.

Hence nowhere on the n'th branch of eigenfunctions, for any n, do we have λ = μ_{n+1} or λ = μ_n. Since the eigenvalues μ_n, μ_{n+1} of problem (9.14) are necessarily simple, we can conclude that there is no secondary bifurcation on any branch of eigenfunctions of problem (9.4). This concludes the proof.

In Section 6 we were concerned about the possibility μ_{p+1} = λ, and devised two conditions against it. It is interesting to note here that in the case of problem (9.4), which of course is completely equivalent to the Hammerstein equation (9.3), we can say that μ_{n+1} < λ with a margin. Indeed, by Lemma 9.2 we have

 μ_{n+1}(c_n) = π²/σ̃²_{n+1}(c_n) < π²/η²_{n+1}(c_n) < π²/ξ_n²(c_n) = λ, 0 < c_n < ∞.

Thus μ_{n+1} stays bounded away from λ on the n'th branch.
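The interleaving of zeros that drives Theorem 9.3 can be observed directly. The sketch below is plain Python (not from the text; c = 1 and the step size are illustrative). It integrates (9.6) and (9.12) together by RK4, records the first zero ξ₁ of φ and the first two positive zeros σ̃₁, σ̃₂ of h, and checks σ̃₁ < ξ₁ < σ̃₂, together with the resulting comparison μ₂ < λ < μ₁ on the zero'th branch (n = 1 in the notation above):

```python
import math

def zeros_phi_h(c, h=1e-4):
    # integrate (9.6) and (9.12) together by RK4:
    #   phi'' = -(phi + phi^3),  h'' = -(1 + 3 phi^2) h,
    # state y = (phi, phi', h, h'); return the first zero xi_1 of phi
    # and the first two positive zeros (sigma_1, sigma_2) of h
    def f(y):
        return (y[1], -(y[0] + y[0]**3), y[3], -(1.0 + 3.0*y[0]**2)*y[2])
    y, s = (0.0, c, 0.0, 1.0), 0.0
    xi1, hz = None, []
    while xi1 is None or len(hz) < 2:
        k1 = f(y)
        k2 = f(tuple(y[i] + h/2*k1[i] for i in range(4)))
        k3 = f(tuple(y[i] + h/2*k2[i] for i in range(4)))
        k4 = f(tuple(y[i] + h*k3[i] for i in range(4)))
        yn = tuple(y[i] + h/6*(k1[i]+2*k2[i]+2*k3[i]+k4[i]) for i in range(4))
        s += h
        if y[0] * yn[0] < 0.0 and xi1 is None:
            xi1 = s                  # phi changed sign on this step
        if y[2] * yn[2] < 0.0 and len(hz) < 2:
            hz.append(s)             # h changed sign on this step
        y = yn
    return xi1, hz[0], hz[1]

xi1, s1, s2 = zeros_phi_h(1.0)
assert s1 < xi1 < s2                 # Lemmas 9.1 and 9.2: zeros interleave
lam = math.pi**2 / xi1**2
mu1 = math.pi**2 / s1**2
mu2 = math.pi**2 / s2**2
assert mu2 < lam < mu1               # no secondary bifurcation on this branch
```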

The following eigenvalue problem for a Hammerstein operator:

 λφ(s) = ∫₀^π K(s,t)[φ(t)+φ³(t)]dt,  (9.15)

where we define the kernel as follows (with positive constants {μ_n^(0)} so chosen that the resulting operator is compact):

 K(s,t) = (2/π) Σ_{n=1}^∞ μ_n^(0) sin ns sin nt,

is a generalization of the eigenvalue problem of Section 1. In Section 1 we merely chose the first two terms of this kernel. In the present section we have discovered that there indeed do exist sets of constants {μ_n^(0)} such that the eigenfunction branches of eq. (9.15) undergo no secondary bifurcation, namely μ_n^(0) = 1/n². Hence if some condition on the primary bifurcation points {μ_n^(0)} of problem (9.15) could be devised assuring against secondary bifurcation (the condition of Section 1 was that μ₁^(0)/μ₀^(0) ≤ 1/2), the constants μ_n^(0) = 1/n² would presumably have to satisfy it. Such a condition would not be vacuous therefore, and would be of a form much more convenient to apply than those of Section 6.


10. Summary; Collection of Hypotheses; Unsettled Questions.

To summarize, we state the assumptions made about the operator T(x), and the conclusions, in a progressive fashion.

In Section 2 it is assumed that T(x) is a generally nonlinear continuous, Fréchet differentiable transformation, with continuous Fréchet derivative, defined on a Banach space X with real scalar field, and mapping X into itself, with T(θ) = θ, where θ is the null element of X. The Fréchet derivative of T(x) is a linear operator T'(x) defined for each x ∈ X, and mapping X into itself. Let (x_o,λ_o) be an ordinary point for T(x), i.e. the transformation λ_oI − T'(x_o) has a bounded inverse, and suppose also that (x_o,λ_o) solves the nonlinear equation (2.1), i.e., λ_ox_o = T(x_o)+f, where f ∈ X is a fixed element. Then there exists in X × R a unique maximal sheet of solutions of eq. (2.1), namely (x,λ), which forms a locus of points in X parameterized by λ and denoted by x(λ) (Theorem 2.4). This sheet is the only locus of solutions passing through the ordinary point (x_o,λ_o), and the only finite boundary points it can have in X × R are exceptional points, i.e., points (x_B,λ_B) such that λ_BI − T'(x_B) does not have a bounded inverse.


Section 3 takes up the study of solutions of eq. (2.1) in a neighborhood of an exceptional point (x_o,λ_o). It is assumed here that T'(x_o) is compact (statement H-1); such is the case if the nonlinear operator T(x) is completely continuous on X. Let M be the extended pseudo-inverse of λ_oI − T'(x_o) (see Lemma 3.1). Let h = V_δ(u) be the unique solution of the equation h = MF_δ(h)+u for |δ| < c, where c is a constant of Theorem 3.2, and where F_δ(h) = −δx_o − δh + R₁(x_o,h), R₁(x_o,h) being the first order remainder in the Taylor expansion of T(x) around x_o. Then the condition that h = V_δ(u) be such that (x_o+h, λ_o+δ) solves eq. (2.1) is that (I−E)F_δ(V_δ(u)) = θ, where E is the projection on the range of λ_oI − T'(x_o) along the null space of λ_oI − T'(x_o). Conversely, if (x_o+h, λ_o+δ) satisfies eq. (2.1), then (h,δ) also satisfies h = MF_δ(h)+u, where u conforms to the condition (I−E)F_δ(V_δ(u)) = θ. This bifurcation equation, namely (I−E)F_δ(V_δ(u)) = θ, is solved in the finite dimensional null space N₁(x_o), and hence can be represented as a finite number of scalar equations in an equal finite number of unknowns. It was necessary to assume T(x) to be three times Fréchet differentiable for this. The 1:1 correspondence between the solutions of the bifurcation equation and solutions of eq. (2.1) in a neighborhood of an exceptional point (x_o,λ_o) enables the study of solution multiplicity of eq. (2.1) by means of the actual solution multiplicity of an algebraic system, viz., the bifurcation equation. A concrete representation of this algebraic system, in terms of assumed bases for the null spaces N₁(x_o) ⊂ X and N₁*(x_o) ⊂ X*, where X* is the conjugate space, is given by eq. (3.8).

In Section 4, the important case of bifurcation of solutions of eq. (2.1) at an exceptional point (x_o,λ_o) such that the null space N₁(x_o) of λ_oI − T'(x_o) has dimension one is taken up. Bifurcation eq. (3.8) is then one equation in one unknown, viz. eq. (4.1). The Newton Polygon method of solution is introduced and described, but for more details the reader is referred to the work of Dieudonné [ref. 8] and of Bartle [ref. 1, p. 376]. Next the eigenvalue problem λx = T(x) is specifically considered (i.e. we let f = θ), as it is throughout the remainder of these notes. The bifurcation equation (4.1) is specialized to the origin, i.e., we take x_o = θ, λ_o ≠ 0. At this point it is very convenient to assume that T(x) is an odd operator, i.e., T(−x) = −T(x) (statement H-2). With this added supposition, (θ,λ_o) being an exceptional point with respect to T(x), there exist two nontrivial solution branches x^±(λ) of the equation T(x) = λx, consisting of eigenfunctions of T(x), which bifurcate from the trivial solution x = θ at the point λ = λ_o. The two branches differ only in sign. The branching is to the left or right according to whether a₂a₄ > 0 or a₂a₄ < 0, where a₂, a₄ are coefficients in eq. (4.2) (see Theorem 4.2).

One of the purposes of Section 5 is to explain in a general way the reasons for the subsequent preoccupation with Hammerstein operators (see eq. (5.2)). Also, two pure categories of nonlinearity are introduced for both Hammerstein operators and general completely continuous operators with symmetric Fréchet derivative, namely sublinearity and superlinearity; the idea of asymptotic linearity is introduced as well. Bifurcation equation (4.2) and Theorem 4.2 are interpreted for Hammerstein operators. Lastly, the notion of an oscillation kernel is introduced, together with the reason for its use: operator (5.6) has only simple eigenvalues whatever the continuous function x(s) used in defining this operator.

In Section 6 we take up the extension of branches of eigenfunctions of the Hammerstein operator (5.2) from the small into the large. Assumptions about the operator, mentioned in Section 5, are given precise definition. Then it is shown that the branches of nontrivial solutions of the eigenvalue problem, given in the small by Theorem 4.2, which arise at the "primary" bifurcation points μ_n^(0), n = 0,1,2,..., are such that the p'th branch functions have exactly p odd order interior zeros (and no other interior zeros). Actually this result holds in the large as well. Next, in the sublinear case it is shown that if (a priori) Λ_p ≤ x f'_x(s,x)/f(s,x) ≤ 1, 0 ≤ s ≤ 1, −∞ < x < +∞, then λI − T'(x_p) cannot become singular, so that there is no secondary bifurcation of the p'th branch. Here Λ_p is the number defined in connection with Lemma 6.4, and T'(x_p) is the Fréchet derivative of the Hammerstein operator defined along the p'th branch x_p = x_p^±(s,λ) of eigenfunctions. The analogous sufficient condition against secondary bifurcation in the superlinear case is 1 ≤ x f'_x(s,x)/f(s,x) ≤ Λ_p^{-1}, 0 ≤ s ≤ 1, −∞ < x < +∞. A couple of attempts are made to refine these conditions preventing secondary bifurcation of a branch of eigenfunctions by refining the definition of Λ_p. In the sublinear case, yet another condition against secondary bifurcation is given by inequality (6.15a), while a corresponding necessary condition is given by inequality (6.17); the two conditions bear some resemblance to each other but also show a conjugate feature, namely the occurrence of f(s,x)/x in the sufficient condition, and f'_x in the necessary condition.
Under Hypotheses H-1 through H-6, and under one or another of the conditions against secondary bifurcation, Section 7 deals with branches of eigenfunctions of Hammerstein operators (5.2) in the large. Indeed, the p'th branch x_p^±(s,λ) (the two parts differ only in sign) is a continuous locus of points in C(0,1) which arises at the primary bifurcation point μ_p^(0), exists in the large, and becomes infinite as λ → ν_p, where ν_p is the p'th eigenvalue of eq. (7.1). This again is under a condition against secondary bifurcation (such as (6.9) or (6.15)) pertaining to the p'th branch. The number ν_p is sometimes called an "asymptotic bifurcation point."

Section 8 treats the example of Section 1, previously solved by elementary methods, by the methods of functional analysis introduced in subsequent sections. For this example we know a precise condition preventing secondary bifurcation, namely b/a ≤ 1/2, where a, b are the primary bifurcation points. It is shown that condition (6.9b) applied to this problem, and interpreted in terms of the primary bifurcation points, would give b/a ≤ 1/3 as a condition at best; on the other hand condition (6.15b) actually yields b/a ≤ 1/2. Hence, though (6.15b) is more complicated than condition (6.9b) and thus harder to apply, it is a more refined condition, at least for this example. Unfortunately we have no success in applying bifurcation equation (4.2) to study the secondary bifurcation of this example when it occurs, namely when b/a > 1/2. Indeed, the leading coefficient of eq. (4.2) vanishes in this example at a secondary bifurcation point, but not enough of the lumped terms seem to cancel as was the case in producing eq. (4.4). Hence use of the Newton Polygon method, as discussed by J. Dieudonné [ref. 8] and R. G. Bartle [ref. 1, p. 376], seems to founder on the requirement min_{1≤i≤n} α_i = min_{1≤i≤n} β_i = 0 imposed upon the exponents of the bifurcation equation. Actually this failure is not to be expected in secondary bifurcation for Hammerstein operators which are such that K(1−s,1−t) ≢ K(s,t), or f(1−s,x) ≢ f(s,x).*

In section 9 we first note the equivalence between the eigenvalue problem T(x) = λx for the Hammerstein operator (5.2), where we assume

*Please see Appendix, however.

that K(s,t) has the form given in eq. (9.1), and a certain familiar two-point boundary value problem, (9.2). It is known that in the autonomous case, f(s,x(s)) ≡ f(x(s)), there is no secondary bifurcation of a branch of solutions of problem (9.2). This is shown for a particular two-point boundary value problem, namely (9.4), in a way which clearly relates this absence of secondary bifurcation to some of our considerations of section 6. Eigenvalue problem (9.14) with kernel (9.15) generalizes the problem of section 1 in that the kernel is complete (the kernel of section 1 had only the first two terms). But kernel (9.1) is that particular choice of kernel (9.15) in which we set μ_n^(0) = 1/n²; also we note that the μ_n^(0) are primary bifurcation points for problem (9.14). Hence by Theorem 9.3 there actually do exist sets of constants {μ_n^(0)} such that the eigenfunction branches arising at these primary bifurcation points undergo no secondary bifurcations at all. Indeed {μ_n^(0) = 1/n²} is one such set. Hence if we seek a condition on the primary bifurcation points {μ_n^(0)} of problem (9.14), with the complete kernel (9.15), such that there is no secondary bifurcation (in contrast with section 1, where we use only the first two terms), the particular constants μ_n^(0) = 1/n² would presumably have to satisfy it.

Thus, assumptions H-1 through H-6 have been made on the nonlinear operator T(x) mapping a real Banach space X into itself, with T(θ) = θ, and three times Fréchet differentiable. For the convenience of the reader we set down these cumulative hypotheses:


H-1: T'(x₀) is a compact linear operator, where x₀ is a solution of the problem T(x) = λx. This statement is fulfilled conveniently if the nonlinear operator T(x) is completely continuous. Then T'(x) is compact for any x ∈ X.

Statement H-1 is used in section 3.

H-2: T(x) is an odd operator, i.e., T(-x) = -T(x).

This is used in section 4. Further statements have to do with T(x) as a Hammerstein operator in C(0,1) (see eq. (5.2)). They are used in sections 6 and 7.

H-3: f(s,x) is four times differentiable in x, with |f_x^(iv)(s,x)| bounded uniformly over 0 ≤ s ≤ 1, and lim_{x→0} f(s,x)/x = f_x'(s,0) uniformly on 0 ≤ s ≤ 1. (Statement H-2 already implies that f(s,0) = 0.)

H-4a: Sublinearity; i.e., f_x'(s,x) > 0, 0 ≤ s ≤ 1, and x f_xx''(s,x) < 0, 0 ≤ s ≤ 1, -∞ < x < +∞.

H-4b: Superlinearity; i.e., f_x'(s,x) > 0, 0 ≤ s ≤ 1, and x f_xx''(s,x) > 0, 0 ≤ s ≤ 1, -∞ < x < +∞.

H-5: Asymptotic Linearity; i.e., lim_{|x|→∞} f(s,x)/x = A(s) ≥ 0 uniformly, 0 ≤ s ≤ 1.

H-6: K(s,t) is a symmetric oscillation kernel.
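For orientation, one concrete pair satisfying H-2 through H-6 is the following (our illustration; it is not a pair used in the text):

```latex
K(s,t) = \begin{cases} s(1-t), & 0 \le s \le t \le 1,\\ t(1-s), & 0 \le t \le s \le 1,\end{cases}
\qquad f(s,x) = x + \arctan x.
```

Here K is the Green's function of -u'' = 0 with u(0) = u(1) = 0, the classical symmetric oscillation kernel of Gantmacher and Krein [ref. 9]; f is odd with bounded fourth derivative, f_x' = 1 + 1/(1+x²) > 0, x f_xx'' = -2x²/(1+x²)² < 0 for x ≠ 0 (sublinearity, H-4a), and f(s,x)/x → A(s) ≡ 1 (H-5).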

While these notes may answer some questions about nonlinear eigen-

value problems, it is probable that many more questions are raised.

Section 2 deals with a process for extending or continuing solutions of nonlinear equations in X × R under the assumption that we know a solution pair (x₀, λ₀) such that λ₀I - T'(x₀) has a bounded inverse. A more convenient continuation process might be based on solution of the vector differential equation dx/dλ = -[λI - T'(x)]⁻¹x relative to the initial condition x = x₀, λ = λ₀, provided one could carefully show the differential equation to be valid.
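The continuation idea can at least be sketched numerically in a finite-dimensional caricature (the matrix, the sublinear nonlinearity arctan, and the step sizes below are all illustrative assumptions, not data from the text); an Euler predictor on the differential equation dx/dλ = -[λI - T'(x)]⁻¹x is followed by a Newton corrector at each fixed λ:

```python
import numpy as np

# Finite-dimensional caricature of the proposed continuation process.  The
# matrix A, the sublinear map arctan, and all step sizes are illustrative
# assumptions, not data from the text.
A = np.array([[2.0, 1.0], [1.0, 2.0]])      # T'(0) = A; largest eigenvalue 3

def T(x):                                   # a sublinear "Hammerstein-like" map
    return A @ np.arctan(x)

def Tprime(x):                              # Frechet derivative of T at x
    return A @ np.diag(1.0 / (1.0 + x**2))

def newton_correct(x, lam, tol=1e-12):
    # Solve T(x) - lam*x = 0 at fixed lam (valid away from bifurcation points).
    for _ in range(50):
        r = T(x) - lam * x
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(Tprime(x) - lam * np.eye(2), r)
    return x

# Start just below the primary bifurcation point lam = 3, along the eigenvector.
lam, dlam = 2.9, -0.1
x = newton_correct(0.3 * np.array([1.0, 1.0]), lam)
norms = [np.linalg.norm(x)]

for _ in range(9):                          # continue the branch down to lam = 2.0
    dx = np.linalg.solve(Tprime(x) - lam * np.eye(2), x)  # dx/dlam = [T'(x)-lam I]^{-1} x
    lam += dlam
    x = newton_correct(x + dlam * dx, lam)  # Euler predictor, Newton corrector
    norms.append(np.linalg.norm(x))

residual = np.linalg.norm(T(x) - lam * x)
```

As expected of a sublinear branch, the norm of the eigenfunction grows as λ moves away from the primary bifurcation point.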

Almost immediately in section 3 we assumed that T'(x₀) is compact; thus a discussion of other spectral alternatives--the continuous spectrum, the residual spectrum--was avoided. Without this assumption, points of CσT' and RσT' can be exceptional points with respect to the nonlinear transformation T. Thus one wonders, if (x₀,λ₀) is an exceptional point such that λ₀ ∈ CσT'(x₀) or λ₀ ∈ RσT'(x₀), whether or not eq. (2.3) ever leads to a bifurcation problem.

Again in section 3, though the derivation of the bifurcation equation (3.4) using the extended pseudo-inverse has some aesthetic appeal, the arduous substitutions used to reduce this equation to manageable form, namely eq. (3.8), do not. The method used was that of Bartle [ref. 1, p. 370, p. 373]. M. S. Berger [ref. 4, pp. 127-136], N. W. Bazley and B. Zwahlen [ref. 23], and D. Sather [ref. 48] of the University of Wisconsin seem to have methods which avoid these laborious substitutions. These authors make use of Krasnoselskii's bifurcation equation [ref. 14, pp. 229-231].*

In section 4 we considered only the case of bifurcation at an eigenvalue of multiplicity one. Bifurcation with higher multiplicity is a very difficult problem, though progress has been made by M. S. Berger [ref. 4, p. 134], Bazley and Zwahlen [ref. 3], Graves [ref. 12] and Sather [ref. 48]. This topic is important, however, since in studying secondary

*Please read the Appendix.


bifurcation, one cannot realistically assume that the multiplicity is one (which is why we have used the oscillation kernel in sections 5, 6 and 7). Again, even if the secondary bifurcation multiplicity is one, there was trouble in applying the Newton Polygon method in the case where the leading coefficient a₁ in eq. (4.2) vanishes (see section 8). This problem needs attention in that this situation does arise with Hammerstein operators in which K(1-s,1-t) = K(s,t) and f(1-s,x) = f(s,x).
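That the leading coefficient must vanish under such a symmetry can be seen from the Hammerstein form of the coefficient a₁ = -(f_x'(s,x₀)u₀, x₀) listed in the Appendix: whenever the branch eigenfunction x₀ is symmetric about s = 1/2 while the critical null element u₀ is antisymmetric,

```latex
a_1 = -\int_0^1 f_x'\big(s,x_0(s)\big)\,u_0(s)\,x_0(s)\,ds = 0,
```

because with K(1-s,1-t) = K(s,t) and f(1-s,x) = f(s,x) the factor f_x'(s,x₀(s))x₀(s) is symmetric about the midpoint while u₀ is antisymmetric, so the integrand is odd about s = 1/2. This is exactly the configuration of the example of sections 1 and 8 (x₀ ∝ sin s, u₀ = sin 2s on (0,π)).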

There are several unsettled questions arising from section 7. If there is no sublinearity or superlinearity condition, then we cannot prove Theorem 6.2. Accordingly we can get secondary bifurcation on the p'th branch in Hammerstein's equation through λ = μ_p. These cases need study.* Likewise if we do not assume that K(s,t) is an oscillation kernel, secondary bifurcation can arise through the possibility that on the p'th branch, x_p^±(s,λ), we have λ = μ_n for n ≠ p. Furthermore the secondary bifurcation could be with multiplicity greater than unity, as mentioned above. This provides a rich field for study. Again, is there a weaker assumption on the kernel K(s,t) (other than assuming it to be an oscillation kernel) which would guarantee that the operator (5.6) has only simple eigenvalues, together with some identifiable way of enumerating the eigenfunctions (such as by the zeros), regardless of what continuous function x(s) we use in defining the operator (5.6)? It was observed in section 5 that the oscillation kernel is actually in excess of our requirements as we know them now. Could such a weaker assumption about K(s,t) enable us to study branches of eigenfunctions emanating from negative primary bifurcation points? (Oscillation kernels have only positive eigenvalues.) If we

*Also there is the case of nonsymmetric K(s,t).


must live with the oscillation kernel, for the time being, in our desire to study branches of eigenfunctions in the large, how can this notion be generalized? Is there an abstract analogue, such as an "oscillation operator" in Hilbert space, which corresponds to the concrete idea of a linear integral operator in L²(0,1) with oscillation kernel? It is readily appreciated in these notes how our need for the oscillation kernel forces us into the study of Hammerstein operators rather than of a more abstract formulation.

With sections 6 and 7, the need is clearly that of a yet more refined condition against secondary bifurcation, both for the individual branches of eigenfunctions and for all the branches of eigenfunctions, under the given hypotheses. We need a sufficient condition which is also a necessary condition. If possible such conditions should be much more easily interpretable than either conditions (6.9) or (6.15). For example a condition might be devised on the primary bifurcation points {λ_n^(0)} which would preclude secondary bifurcation of a branch, or of any branch. In the example of section 1, we saw that b/a < 1/2 was such a condition. Lastly, we seem to lack a good "figure of merit" for a sufficient condition against secondary bifurcation. How can one tell in a complicated problem which of several conditions is best without computation?

We have already mentioned the question which arises in section 8, that of the vanishing leading coefficient in the bifurcation equation (4.2) when we attempt to study the one secondary bifurcation of the problem; the case does not lend itself very well to the Newton Polygon analysis of Dieudonné or of Bartle. Can one modify the method, or find a new method?*

*Please see the Appendix.


How can one explain the lack of secondary bifurcation, on any branch of eigenfunctions in the autonomous case of boundary value problem (9.2), in terms of a condition against secondary bifurcation for the equivalent Hammerstein equation? What about the conditions of C. V. Coffman [ref. 6], and how do they relate to our conditions of section 6, if they relate at all? Again, the particular primary bifurcation points μ_n^(0) = 1/n² for eigenvalue problem (9.14), which is equivalent to boundary value problem (9.4), must satisfy some condition guaranteeing no secondary bifurcation of any eigenfunction branch. Theorem 9.3 proves there is no secondary bifurcation on the basis of the boundary value problem. What is this condition on the primary bifurcation points? Whatever the condition may be, we see that it is not vacuous.

There are many other questions in nonlinear analysis not arising specifically in these notes. Nonlinear eigenvalue problems occur in their own right in physics. We mention the problem of the buckled beam [ref. 14, pp. 181-183], the buckled circular plate [refs. 28, 35, 36, 53], the problem of waves on the surface of a heavy liquid [ref. 42], the problems of the heavy rotating chain and of the rotating rod [refs. 20 and 38], the problem of Taylor vortices in a rotating liquid [ref. 37], the Bénard problem [refs. 27, 29, 35], and the problem of ambipolar diffusion of ions and electrons in a plasma [refs. 22, 24]. There are certainly many more.

Eigenfunctions for linear operators have useful properties apart from their immediate physical interpretation. One can prove such properties as completeness, which enables one to make expansions useful in solving inhomogeneous equations, the linear analogue of eq. (2.1). Are eigenfunctions for nonlinear operators ever complete in any sense, and is there a method of expansion in terms of these eigenfunctions which might be useful in connection with eq. (2.1)? The reader might wish to read some work by A. Inselberg on nonlinear superposition principles in this connection [ref. 31].

It is well known that eigenfunctions for linear operators have much to do with the representation of solutions of initial value problems. Would eigenfunctions of nonlinear operators have anything to do with the representation of solutions of nonlinear initial value problems? In this connection, the reader might consult the recent work of Dorroh, Neuberger, Komura, and Kato on nonlinear semi-groups of nonexpansive transformations [refs. 25, 26, 32, 39, 43].

These notes have dealt largely with the point spectrum of the nonlinear operator T(x). These are the points λ of the spectrum to which eigenfunctions are associated through the eigenvalue problem T(x) = λx. Are there other kinds of points in the spectrum of T(x)? Under very broad assumptions J. Neuberger [ref. 44] has shown that a nonlinear operator does indeed have a spectrum. Under various assumptions on T(x) is there a continuous spectrum or a residual spectrum, as with linear operators? Are there any other parts of the spectrum of these more general operators?

Workers in nonlinear Functional Analysis should not want for possible research paper topics.


BIBLIOGRAPHY

1. Bartle, R. G., "Singular points of functional equations," A.M.S. Transactions, Vol. 75 (1953), pp. 366-384.

2. Bazley, N. and B. Zwahlen, "Remarks on the bifurcation of solutions of a nonlinear eigenvalue problem," Archive for Rat. Mech. and Anal., Vol. 28 (1968), pp. 51-58.

3. Bazley, N. and B. Zwahlen, unpublished work by private communication.

4. Berger, M. S., "Bifurcation Theory for Real Solutions of Nonlinear Elliptic Partial Differential Equations," Lecture Notes on Bifurcation Theory and Nonlinear Eigenvalue Problems, edited by J. B. Keller and S. Antman, N.Y.U. Courant Inst., pp. 91-184.

5. Byrd, P. F., and M. D. Friedman, Handbook of Elliptic Integrals for Engineers and Physicists, Springer-Verlag, Berlin (1954).

6. Coffman, C. V., "On the uniqueness of solutions of a nonlinear boundary value problem," J. of Math. and Mech., Vol. 13 (1964), pp. 751-763.

7. Courant, R. and D. Hilbert, Methods of Mathematical Physics, Vol. I, Interscience Publishers, New York (1953).

8. Dieudonné, J., "Sur le polygone de Newton," Archiv der Math., Vol. 2 (1949-50), pp. 49-55.

9. Gantmacher, F. R. and M. G. Krein, Oscillation matrices and kernels, and small vibrations of mechanical systems, A.E.C. Transl. 4481, Office of Technical Services, Dept. of Commerce, Washington, D. C. (1961). Available also in German translation, Akademie-Verlag, Berlin (1960).

10. Goldberg, S., Unbounded Linear Operators, McGraw-Hill, 1966.
11. Graves, L., Theory of Functions of Real Variables, McGraw-Hill, New York (1946).

12. Graves, L., "Remarks on singular points of functional equations," A.M.S. Transactions, Vol. 77 (1955), pp. 150-157.

13. Hildebrandt, T. H., and L. M. Graves, "Implicit functions and their differentials in general analysis," A.M.S. Transactions, Vol. 29 (1927), pp. 127-153.

14. Krasnoselskii, M. A., Topological methods in the theory of nonlinear integral equations, Pergamon Press, London (1964).

15. Lusternik, L. A., and V. I. Sobolev, Elements of functional analysis, Frederick Ungar Publishing Co., New York (1961).

16. Nirenberg, L., Functional Analysis Lecture Notes, N.Y.U. Courant Institute (1961).

17. Pimbley, G. H. Jr., "A sublinear Sturm-Liouville problem," J. of Math. and Mech., Vol. 11 (1962), pp. 121-138.

18. Pimbley, G. H. Jr., "The eigenvalue problem for sublinear Hammerstein operators with oscillation kernels," J. of Math. and Mech., Vol. 12 (1963), pp. 577-598.

19. Riesz, F. and B. Sz.-Nagy, Functional analysis, Frederick Ungar Publishing Co., New York (1955).

20. Tadjbakhsh, I., and F. Odeh, "A nonlinear eigenvalue problem for rotating rods," Archive for Rat. Mech. and Anal., Vol. 20 (1965), pp. 81-94.

21. Zaanen, A. C., Linear Analysis, Interscience Publishers Inc., New York (1953).
ADDITIONAL REFERENCES

22. Allis, W. P. and D. J. Rose, "The transition from free to ambipolar diffusion," The Physical Review, Vol. 93 (1954), No. 1, pp. 84-93.

23. Bazley, N. W. and B. Zwahlen, "Estimation of the bifurcation coefficient for nonlinear eigenvalue problems," Battelle Institute Advanced Studies Center (Geneva, Switz.), Math. Report No. 16, Oct. 1968.

24. Dreicer, H., "Ambipolar diffusion of ions and electrons in a magnetic field," interoffice communication at Los Alamos Scientific Laboratory (1959).

25. Dorroh, J. R., "Some classes of semi-groups of nonlinear transformations and their generators," J. of Math. Soc. of Japan, Vol. 20 (1968), No. 3, pp. 437-455.

26. Dorroh, J. R., "A nonlinear Hille-Yosida-Phillips theorem," to appear in Journal of Functional Analysis, Vol. 3 (1969).

27. Fife, P. C. and D. D. Joseph, "Existence of convective solutions of the generalized Bénard problem which are analytic in their norm," to appear in Archive for Rational Mechanics and Analysis.

28. Friedrichs, K. O. and J. J. Stoker, "The non-linear boundary value problem of the buckled plate," Amer. Jour. of Math., Vol. 63 (1941), pp. 839-888.

29. Görtler, H., K. Kirchgässner and P. Sorger, "Branching solutions of the Bénard problem," to appear in Problems of Hydrodynamics and Continuum Mechanics, dedicated to L. I. Sedov.

30. Hammerstein, A., "Nichtlineare Integralgleichungen nebst Anwendungen," Acta Mathematica, Vol. 54 (1930), pp. 117-176.


31. Inselberg, A., "On classification and superposition principles for nonlinear operators," A. F. Grant 7-64 Technical Report No. 4, May 1965, sponsored by the Air Force Office of Scientific Research, Washington 25, D. C.

32. Kato, T., "Nonlinear semigroups and evolution equations," J. Math. Soc. of Japan, Vol. 19 (1967), pp. 508-520.

33. Keller, H. B., "Nonlinear bifurcation," by private communication from H. B. Keller, Calif. Inst. of Tech., 1969.

34. Keller, H. B., "Positive solutions of some nonlinear eigenvalue problems," to appear in J. Math. and Mech., Oct. 1969.

35. Keller, J. B. and S. Antman, "Bifurcation theory and nonlinear eigenvalue problems," Lecture notes, Courant Institute of Mathematical Sciences, New York University, 1968.

36. Keller, J. B., Keller, H. B. and E. L. Reiss, "Buckled states of circular plates," Q. Appl. Math., Vol. 20 (1962), pp. 55-65.

37. Kirchgässner, K. and P. Sorger, "Stability analysis of branching solutions of the Navier-Stokes equations," by private communication from K. Kirchgässner, Freiburg University, W. Germany.

38. Kolodner, I. I., "Heavy rotating string--a nonlinear eigenvalue problem," Comm. on Pure and Applied Math., Vol. 8 (1955), pp. 395-408.

39. Komura, Y., "Nonlinear semi-groups in Hilbert space," J. Math. Soc. of Japan, Vol. 19 (1967), pp. 493-507.

40. Krasnoselskii, M. A., Positive Solutions of Operator Equations, Chap. 6, section 6.4, P. Noordhoff Ltd., Groningen, Netherlands, 1964.

41. Krasnoselskii, M. A., "Some problems of nonlinear analysis," A.M.S. Translations, series 2, Vol. 10 (1958), pp. 345-409.

42. Nekrasov, A. I., "The exact theory of steady waves on the surface of a heavy fluid," translated as a U. of Wisconsin Report: MRC-TSR 813.

43. Neuberger, J. W., "An exponential formula for one-parameter semi-groups of nonlinear transformations," J. Math. Soc. of Japan, Vol. 18 (1966), pp. 154-157.

44. Neuberger, J. W., "Existence of a spectrum for nonlinear transformations," to appear in Pacific Journal of Mathematics.

45. Pimbley, G. H. Jr., "A fixed-point method for eigenfunctions of sublinear Hammerstein operators," Archive for Rat. Mech. and Anal., Vol. 31 (1968).

46. Pimbley, G. H. Jr., "Positive solutions of a quadratic integral equation," Archive for Rat. Mech. and Anal., Vol. 24 (1967), No. 2, pp. 107-127.

47. Pimbley, G. H. Jr., "Two conditions preventing secondary bifurcation for eigenfunctions of sublinear Hammerstein operators," to appear in J. of Math. and Mech., 1969.

48. Sather, D., "Branching of solutions of an equation in Hilbert space," to appear as a Mathematics Research Center Report, U. S. Army, University of Wisconsin.
49. Tchebotarev, N. G., "Newton's polygon and its role in the present development of mathematics," Papers presented on the occasion of the tri-centenary of Isaac Newton's birth, Moscow, 1943, pp. 99-126. Copy available in U. S. Library of Congress. Translation at Los Alamos Scientific Laboratory library, 512C514m.

50. Vainberg, M. M. and P. G. Aizengendler, "The theory and methods of investigation of branch points of solutions," Progress in Mathematics, Vol. 2, edited by R. V. Gamkrelidze, Plenum Press, New York, 1968.

51. Vainberg, M. M. and V. A. Trenogin, "The methods of Lyapunov and Schmidt in the theory of nonlinear equations and their further development," Uspekhi Matematicheskikh Nauk (English transl.), Vol. 17 (1962), No. 2.

52. Wing, G. M., and L. F. Shampine, "Existence and uniqueness of solutions of a class of nonlinear elliptic boundary value problems," by private communication from G. M. Wing, University of New Mexico, Albuquerque, N. M.

53. Wolkowisky, J. H., "Existence of buckled states of circular plates," Comm. on Pure and Applied Math., Vol. XX (1967), pp. 549-560.


APPENDIX: ANOTHER BIFURCATION METHOD; THE EXAMPLE OF SECTION 1, RECONSIDERED AGAIN.

In Section 3 we presented quite a traditional method of converting the bifurcation equation, eq. (3.4), into a form suitable for application of the Newton Polygon method of solution. It involved substitution of the nonlinear expansion, eq. (3.5), into expression (3.7) so as to compute the quantity F_δ(V_δ(u)). As the patient reader discovers, this method is very arduous.

Several recent authors, notably L. Graves [ref. 12], M. S. Berger [ref. 4, p. 37], N. Bazley and B. Zwahlen [ref. 23], and D. Sather [ref. 48], succeed in discussing bifurcation without this arduous substitution process. Their treatments are rather special, however, in that they apply only in the case of bifurcation at the origin (x₀ = θ in Section 3). In this Appendix we present a treatment, patterned after those mentioned above, with which bifurcation can be treated anywhere (i.e., x₀ ≠ θ). For simplicity, however, we confine the treatment here to the case of unit multiplicity (i.e., n = 1); the above authors treat cases of higher multiplicity as well.

Let us digress at the outset to present a way of reducing eq. (2.3) (where λ₀I - T'(x₀) is singular) to eq. (3.2) and eq. (3.4), which is simpler than that appearing in Section 3. It is correct in the case where λ₀ is an eigenvalue of the compact linear operator T'(x₀) of Riesz index unity [ref. 21, p. 334, Th. 6]. In particular it works then in the case when T'(x₀) is a compact, symmetric or symmetrizable operator on a real Hilbert space H [ref. 21, p. 342, Theorem 18; p. 397, Theorem 3].
This reduction is usually attributed to M. A. Krasnoselskii [ref. 14, p. 229]. If λ₀ has Riesz index unity, then by definition X = N₁(x₀) ⊕ R₁(x₀), and there exist complementary projections E₁, E¹ which project respectively on N₁(x₀) and R₁(x₀), and which commute with T'(x₀). Moreover, λ₀I - T'(x₀) has a bounded inverse on the subspace R₁(x₀) = E¹X.

We premultiply eq. (2.3) by the projection E¹ to obtain

  [λ₀I - T'(x₀)]E¹h = E¹F_δ(h).

Letting h = k + u, where k ∈ R₁(x₀), u ∈ N₁(x₀), gives

  k = [λ₀I - T'(x₀)]⁻¹E¹F_δ(k+u).   (A.1)

Now [λ₀I - T'(x₀)]⁻¹E¹ is just the extended pseudo-inverse discussed in Section 3. Adding u to both sides of eq. (A.1) results in eq. (3.2), to which Theorem 3.2 can be applied to produce the local solution h = V_δ(u).

Again, if we premultiply eq. (2.3) by the projection E₁, we have

  E₁F_δ(h) = (I - E¹)F_δ(h) = [λ₀I - T'(x₀)]E₁h = θ,

which is just eq. (3.4).

Thus when the index of λ₀, as an eigenvalue of the compact operator T'(x₀), is unity, the approach to the bifurcation equations (3.2) and (3.4) is straightforward.
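In finite dimensions this reduction can be imitated line by line. The sketch below (the symmetric matrix A, the cubic map T(x) = A(x - x³), and the offset δ = -0.05 are all illustrative assumptions, not data from the text) solves the analogue of eq. (3.2) by fixed-point iteration and then finds a root of the analogue of the bifurcation equation (3.4):

```python
import numpy as np

# Finite-dimensional caricature of the index-one reduction.  The symmetric
# matrix A, the cubic map T(x) = A(x - x^3), and delta = -0.05 are
# illustrative assumptions, not data from the text.
A = np.array([[2.0, 1.0], [1.0, 0.0]])
lam0 = max(np.linalg.eigvalsh(A))           # simple eigenvalue of T'(0) = A
u0 = np.linalg.eigh(A)[1][:, -1]            # spans the null space of lam0*I - A
R = np.linalg.pinv(lam0 * np.eye(2) - A)    # extended pseudo-inverse: inverts on
                                            # the range, annihilates the null space

def solve_range_eq(xi, delta):
    # Analogue of eq. (3.2): k = R E^1 F_delta(xi*u0 + k), solved by fixed point.
    k = np.zeros(2)
    for _ in range(200):
        x = xi * u0 + k
        k_new = R @ (-delta * x - A @ x**3)  # R already contains the projection E^1
        if np.linalg.norm(k_new - k) < 1e-14:
            break
        k = k_new
    return k

def bifurcation_eq(xi, delta):
    # Analogue of eq. (3.4): null-space component of T(x) - (lam0+delta)x = 0.
    x = xi * u0 + solve_range_eq(xi, delta)
    return u0 @ (-delta * x - A @ x**3)

delta = -0.05                               # look for the branch just below lam0
lo, hi = 1e-6, 0.5                          # bisect for the positive root
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if bifurcation_eq(mid, delta) * bifurcation_eq(lo, delta) <= 0:
        hi = mid
    else:
        lo = mid
xi = 0.5 * (lo + hi)
x = xi * u0 + solve_range_eq(xi, delta)
residual = np.linalg.norm(A @ (x - x**3) - (lam0 + delta) * x)
```

The residual of the full equation vanishes to working precision, illustrating that the two projected equations together recover the original problem.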

Let us now return to the bifurcation analysis based on equations (3.2) and (3.4). When the multiplicity of λ₀ is also unity, we set u = ξu₀ with ξ a real scalar and u₀ the normalized element spanning N₁(x₀). In the sequel
it will be convenient to redesignate the local solution of eq. (3.2), given by Theorem 3.2, as h = V(δ,ξ). We also generalize the expansion (3.7) used in Section 3:

  F_δ(h) = -δx₀ - δh + C_{x₀}(h) + D_{x₀}(h),   (A.2)

where C_{x₀}(h) is the first nonvanishing symmetric Fréchet differential form [ref. 15, p. 188] at x₀, homogeneous of integer order α ≥ 2, i.e. C_{x₀}(h) = (1/α!)dᵅT(x₀;h,...,h), and D_{x₀}(h) = R_α(x₀;h) is the remainder, with lim_{‖h‖→0} ‖D_{x₀}(h)‖/‖h‖ᵅ = 0.
Recalling now the bifurcation equation in the form (3.6), where u₀* ∈ X* is the normalized eigenelement of the adjoint operator T'(x₀)*, we write eq. (3.4) as

  -δu₀*x₀ - δξu₀*u₀ + u₀*C_{x₀}(V(δ,ξ)) + u₀*D_{x₀}(V(δ,ξ)) = 0.   (A.3)

We now seek to pass directly from eq. (A.3) to a form which resembles eq. (4.3) and to which the Newton polygon type of analysis may be applied. Namely, from eq. (A.3) we write:

  δu₀*x₀ + δξu₀*u₀
    = u₀*C_{x₀}(ξu₀) + [u₀*C_{x₀}(V(δ,ξ)) - u₀*C_{x₀}(ξu₀)] + u₀*D_{x₀}(V(δ,ξ))   (A.4)
    = ξᵅu₀*C_{x₀}(u₀)·[1 + (u₀*{C_{x₀}(V(δ,ξ)) - C_{x₀}(ξu₀)} + u₀*D_{x₀}(V(δ,ξ))) / (ξᵅu₀*C_{x₀}(u₀))].
Following Bazley and Zwahlen [ref. 23, p. 4], we must show that the quantity in brackets in eq. (A.4) tends to unity as δ,ξ → 0. To this end we have the following lemmas:

Lemma A.1: V_ξ(δ,ξ) exists and is continuous in its arguments; i.e. the solution h = V(δ,ξ) of eq. (3.2) (with u = ξu₀) is continuously differentiable with respect to ξ.

Proof: The result follows from the generalized Implicit Function Theorem (which yields the same result about eq. (3.2) as our Theorem 3.2, if not the same nonlinear development (3.5) needed in Section 3) and its corollary about derivatives [ref. 15, p. 194, Th. 1; p. 196, Th. 2].

Lemma A.2: lim_{δ,ξ→0} V(δ,ξ)/ξ = u₀.

Proof: From eq. (3.2) we form the following difference equation:

  [V(δ,ξ+t) - V(δ,ξ)]/t = R(x₀)[F_δ(V(δ,ξ+t)) - F_δ(V(δ,ξ))]/t + u₀.

Using eq. (A.2) and passing to the limit as t → 0, we get the following equation for V_ξ(δ,ξ):

  V_ξ = R(x₀)[-δV_ξ + dC_{x₀}(V(δ,ξ);V_ξ) + dD_{x₀}(V(δ,ξ);V_ξ)] + u₀;   (A.5)

here dC_{x₀}(h;k) and dD_{x₀}(h;k) are the first Fréchet differentials of the nonlinear operators C_{x₀}(h), D_{x₀}(h) at h ∈ X.

It can be verified that C_{x₀}(h) = (1/α!)dᵅT(x₀;h,...,h) has a first Fréchet differential dC_{x₀}(h;k) which satisfies the inequality

  ‖dC_{x₀}(h;k)‖ ≤ (1/(α-1)!)K(x₀)‖h‖^{α-1}‖k‖,

where K(x₀) > 0 is a constant for given x₀ ∈ X.


Again, D_{x₀}(h) = T(x₀+h) - T(x₀) - T'(x₀)h - C_{x₀}(h) is continuously Fréchet differentiable, so that lim_{h→θ} dD_{x₀}(h;k) = dD_{x₀}(θ;k), uniformly for ‖k‖ = 1. Moreover, since D_{x₀}(θ) = θ, we have, using the definition of the Fréchet differential [ref. 15, p. 183],

  dD_{x₀}(θ;k) = θ,

since dD_{x₀}(θ;k) = o(‖k‖) and D_{x₀}(h) = o(‖h‖ᵅ). Thus lim_{‖h‖→0} dD_{x₀}(h;k) = θ uniformly on ‖k‖ = 1.

We now recall an estimate for V(δ,ξ) derived in Section 3 and appearing at the top of page 26; namely

  ‖V(δ,ξ)‖ ≤ 2(‖R‖·‖x₀‖·|δ| + ‖u₀‖·|ξ|),   |ξ| ≤ d₂, |δ| ≤ δ₂,   (A.6)

where d₂, δ₂ are numbers arising in the proof of Theorem 3.2. Thus ‖V(δ,ξ)‖ → 0 as δ,ξ → 0. Using eq. (A.5) we then have, for sufficiently small δ,ξ, the estimate:

  ‖V_ξ(δ,ξ)‖ ≤ ‖u₀‖ / (1 - 2‖R‖{|δ| + (1/(α-1)!)K(x₀)‖V(δ,ξ)‖^{α-1} + ‖D'_{x₀}(V(δ,ξ))‖}),

where ‖D'_{x₀}(h)‖ = sup_{‖k‖=1} ‖dD_{x₀}(h;k)‖, and where, as was shown above, lim_{‖h‖→0} ‖D'_{x₀}(h)‖ = 0. This shows that ‖V_ξ(δ,ξ)‖ remains bounded as δ,ξ → 0; say ‖V_ξ‖ ≤ M.

Using eq. (A.5) a second time, we have

  ‖V_ξ(δ,ξ) - u₀‖ ≤ ‖R‖{|δ| + (1/(α-1)!)K(x₀)‖V(δ,ξ)‖^{α-1} + ‖D'_{x₀}(V(δ,ξ))‖}·M,
so that again by virtue of the estimate (A.6) above, we see that lim_{δ,ξ→0} V(δ,ξ)/ξ = lim_{δ,ξ→0} V_ξ(δ,ξ) = u₀. This proves the lemma.

We return now to expression (A.4) for the bifurcation equation. We note the following estimates:

  u₀*[C_{x₀}(V(δ,ξ)) - C_{x₀}(ξu₀)] / (ξᵅu₀*C_{x₀}(u₀)) → 0 as |δ|,|ξ| → 0,

in view of the α-homogeneity of C_{x₀}(h), and Lemma A.2. Also

  u₀*D_{x₀}(V(δ,ξ)) / (ξᵅu₀*C_{x₀}(u₀)) → 0 as δ,ξ → 0

by virtue of the property D_{x₀}(h) = o(‖h‖ᵅ) and estimate (A.6).

We now see, with reference to eq. (A.4), that the bifurcation equation for the case n = 1 can be written

  δu₀*x₀ + δξu₀*u₀ = ξᵅu₀*C_{x₀}(u₀)[1 + η(δ,ξ)],   (A.7)

where

  η(δ,ξ) = [u₀*{C_{x₀}(V(δ,ξ)) - C_{x₀}(ξu₀)} + u₀*D_{x₀}(V(δ,ξ))] / (ξᵅu₀*C_{x₀}(u₀))

tends to zero as δ,ξ → 0.
Thus we have a bifurcation equation of the form (4.3) which can be treated by the Newton Polygon method. In a manner similar to that in Section 4, we can label the coefficients:

  a₁ = -u₀*x₀,   a₂ = -u₀*u₀,   a_α = u₀*C_{x₀}(u₀),   integer α ≥ 2.

Bifurcation at the origin, x₀ = θ, leads to a₁ = 0. We then have Theorems 4.2 and 4.3, proved in exactly the same way as in Section 4. η(δ,ξ) can be shown to have the needed differentiability. (Let us note here that a₂ ≠ 0 needs justification unless X is a Hilbert space. Likewise C_{x₀}(h) might more properly be taken as the first Fréchet differential form such that a_α ≠ 0.)
It is interesting to see how we can resolve a difficulty which arose

in Section 8 using the form of the bifurcation equation, namely (A.7),

derived in this Appendix.

For Hammerstein operators with symmetric kernel, we have derived specific coefficients for the bifurcation equation, namely those designated eq. (5.3). The corresponding coefficients for eq. (A.7) are

  a₁ = -(f_x'(s,x₀)u₀, x₀),   a₂ = -(f_x'(s,x₀)u₀, u₀),   a_α = (λ₀/α!)(f_x^(α)(s,x₀)u₀ᵅ, u₀),   integer α ≥ 2.
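That u₀* may be taken as f_x'(s,x₀)u₀ when the kernel is symmetric is a one-line computation: with T'(x₀)h = ∫K(s,t)f_x'(t,x₀(t))h(t)dt and T'(x₀)u₀ = λ₀u₀,

```latex
T'(x_0)^{*}\big[f_x'(\,\cdot\,,x_0)u_0\big](t)
= f_x'\big(t,x_0(t)\big)\int K(t,s)\,f_x'\big(s,x_0(s)\big)\,u_0(s)\,ds
= \lambda_0\, f_x'\big(t,x_0(t)\big)\,u_0(t),
```

so that the inner products appearing in the coefficients are ordinary L² pairings with this adjoint eigenelement; pairing C_{x₀}(u₀) against it brings out a factor λ₀ by the same computation.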

For the example being discussed in Sections 1 and 8, these coefficients become:

  a₁ = -([1 + 3φ_sb²] sin 2s, φ_sb) = 0,

  a₂ = -([1 + 3φ_sb²] sin 2s, sin 2s)
     = -∫₀^π sin²2s ds - 4((a-b)/(2b-a))·∫₀^π sin²s sin²2s ds
     = -(π/2)[1 + 2(a-b)/(2b-a)] < 0,

  a₃ = 3(ab/(2b-a))·(φ_sb, sin³2s) = 0,   since ∫₀^π sin s sin³2s ds = 0,

  a₄ = (ab/(2b-a))·(1, sin⁴2s) = (3π/8)·ab/(2b-a) ≠ 0,

where we recall from Section 8 that φ_sb = ±√((4/3)(a-b)/(2b-a)) sin s are the points on the zero'th branch at λ = λ_sb = ab/(2b-a) where secondary bifurcation takes place.

With these coefficients, eq. (A.7) becomes

  a₂δξ + a₄ξ³[1 + η(δ,ξ)] = 0.   (A.8)

Eq. (A.8) is handled in the same way as the similar equation (4.4) is treated in the proof of Theorem 4.2. Since a₂a₄ < 0, bifurcation is to the right, as is certainly indicated in Section 1. There is a trivial solution ξ = 0
of eq. (A.8) which, when substituted in eq. (3.2) along with x₀ = φ_sb, leads to the continuation of a solution proportional to sin s, again as expected from Section 1. Two other nontrivial solutions of eq. (A.8), namely ξ±(δ), lead, through eq. (3.2), to a pair of solutions which branch away from the secondary bifurcation point (φ_sb, λ_sb). Again these branching solutions can be observed by means of eq. (3.2), which we can now write as

  h = [λ_sb I - T'(φ_sb)]⁻¹E¹[-δφ_sb - δh + C_{φ_sb}(h) + D_{φ_sb}(h)] + ξ±(δ) sin 2s,

to be expressed as linear combinations of sin s and sin 2s. (E¹ is the projection on the range, which is spanned by sin s.)
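For completeness, the leading-order behavior of the nontrivial roots of eq. (A.8) follows by balancing its two terms:

```latex
a_2\,\delta\,\xi + a_4\,\xi^{3} \approx 0
\quad\Longrightarrow\quad
\xi_{\pm}(\delta) \approx \pm\sqrt{-\frac{a_2}{a_4}\,\delta}\,,
```

real precisely for δ > 0 since a₂a₄ < 0; the secondary branches thus open parabolically to the right of (φ_sb, λ_sb).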

In the study of secondary bifurcation, at least with the example of Section 1, the traditional bifurcation method as set forth in Section 3 seems to result in unwanted terms which are troublesome when a₁ vanishes. The method described in this Appendix seems to be free of such terms. We have yet to obtain a good understanding of this, and to reconcile the two methods in this respect.

Offsetdruck: Julius Beltz, Weinheim/Bergstr.
