
MATHEMATICAL STRATEGIES FOR

PROGRAMMING BIOLOGICAL CELLS
by
Jomar F. Rabajante
A master’s thesis submitted to the
Institute of Mathematics
College of Science
University of the Philippines
Diliman, Quezon City
as partial fulfillment of the
requirements for the degree of
Master of Science in Applied Mathematics
(Mathematics in Life and Physical Sciences)
April 2012
This is to certify that this Master’s Thesis entitled “Mathematical Strategies for
Programming Biological Cells”, prepared and submitted by Jomar F. Rabajante
to fulfill part of the requirements for the degree of Master of Science in Applied
Mathematics, was successfully defended and approved on March 23, 2012.
Cherryl O. Talaue, Ph.D.
Thesis Co-Adviser
Baltazar D. Aguda, Ph.D.
Thesis Co-Adviser
Carlene P. Arceo, Ph.D.
Thesis Reader
The Institute of Mathematics endorses the acceptance of this Master’s Thesis as partial
fulfillment of the requirements for the degree of Master of Science in Applied Mathematics
(Mathematics in Life and Physical Sciences).
Marian P. Roque, Ph.D.
Director
Institute of Mathematics
This Master’s Thesis is hereby officially accepted as partial fulfillment of the requirements
for the degree of Master of Science in Applied Mathematics (Mathematics in Life and
Physical Sciences).
Jose Maria P. Balmaceda, Ph.D.
Dean, College of Science
Brief Curriculum Vitae
09 October 1984 Born, Sta. Cruz, Laguna, Philippines
1997-2001 Don Bosco High School, Sta. Cruz, Laguna
2006 B.S. Applied Mathematics
(Operations Research Option)
University of the Philippines Los Baños
2006-2008 Corporate Planning Assistant
Insular Life Assurance Co. Ltd.
2008 Professional Service Staff
International Rice Research Institute
2008-present Instructor, Mathematics Division
Institute of Mathematical Sciences and Physics
University of the Philippines Los Baños
PUBLICATIONS
• Rabajante, J.F., Figueroa, R.B. Jr. and Jacildo, A.J. 2009. Modeling the
area restrict searching strategy of stingless bees, Trigona biroi, as a quasi-
random walk process. Journal of Nature Studies, 8(2): 15-21.
• Esteves, R.J.P., Villadelrey, M.C. and Rabajante, J.F. 2010. Determining
the optimal distribution of bee colony locations to avoid overpopulation us-
ing mixed integer programming. Journal of Nature Studies, 9(1): 79-82.
• Castilan, M.G.D., Naanod, G.R.K., Otsuka, Y.T. and Rabajante, J.F. 2011.
From Numbers to Nature. Journal of Nature Studies, 9(2)/10(1): 35-39.
• Tambaoan, R.S., Rabajante, J.F., Esteves, R.J.P. and Villadelrey, M.C. 2011.
Prediction of migration path of a colony of bounded-rational species foraging
on patchily distributed resources. Advanced Studies in Biology, 3(7): 333-345.
Table of Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 2. Preliminaries
Biology of Cellular Programming . . . . . . . . . . . . . . . . . . . . 4
2.1 Stem cells in animals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Transcription factors and gene expression . . . . . . . . . . . . . . . . . . 8
2.3 Biological noise and stochastic differentiation . . . . . . . . . . . . . . . . 10
Chapter 3. Preliminaries
Mathematical Models of Gene Networks . . . . . . . . . . . . . . . . 12
3.1 The MacArthur et al. GRN . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 ODE models representing GRN dynamics . . . . . . . . . . . . . . . . . . 15
3.2.1 Cinquin and Demongeot ODE formalism . . . . . . . . . . . . . . 16
3.2.2 ODE model by MacArthur et al. . . . . . . . . . . . . . . . . . . 19
3.3 Stochastic Differential Equations . . . . . . . . . . . . . . . . . . . . . . 20
Chapter 4. Preliminaries
Analysis of Nonlinear Systems . . . . . . . . . . . . . . . . . . . . . . 22
4.1 Stability analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2 Bifurcation analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.3 Fixed point iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.4 Sylvester resultant method . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.5 Numerical solution to SDEs . . . . . . . . . . . . . . . . . . . . . . . . . 33
Chapter 5. Results and Discussion
Simplified GRN and ODE Model . . . . . . . . . . . . . . . . . . . . 35
5.1 Simplified MacArthur et al. model . . . . . . . . . . . . . . . . . . . . . 35
5.2 The generalized Cinquin-Demongeot ODE model . . . . . . . . . . . . . 38
5.3 Geometry of the Hill function . . . . . . . . . . . . . . . . . . . . . . . . 41
5.4 Positive invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.5 Existence and uniqueness of solution . . . . . . . . . . . . . . . . . . . . 52
Chapter 6. Results and Discussion
Finding the Equilibrium Points . . . . . . . . . . . . . . . . . . . . . 57
6.1 Location of equilibrium points . . . . . . . . . . . . . . . . . . . . . . . . 57
6.2 Cardinality of equilibrium points . . . . . . . . . . . . . . . . . . . . . . 60
6.2.1 Illustration 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.2.2 Illustration 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Chapter 7. Results and Discussion
Stability of Equilibria and Bifurcation . . . . . . . . . . . . . . . . . . 73
7.1 Stability of equilibrium points . . . . . . . . . . . . . . . . . . . . . . . . 73
7.2 Bifurcation of parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Chapter 8. Results and Discussion
Introduction of Stochastic Noise . . . . . . . . . . . . . . . . . . . . . 85
Chapter 9. Summary and Recommendations . . . . . . . . . . . . . . . . . . . . 100
Appendix A. More on Equilibrium Points: Illustrations . . . . . . . . . . . . . . . 106
A.1 Assume n = 2, c_i = 1, c_ij = 1 . . . . . . . . . . . . . . . . . . . . . . . 107
A.1.1 Illustration 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
A.1.2 Illustration 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
A.1.3 Illustration 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
A.1.4 Illustration 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
A.2 Assume n = 2, c_i = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
A.2.1 Illustration 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
A.2.2 Illustration 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
A.2.3 Illustration 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
A.2.4 Illustration 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
A.2.5 Illustration 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
A.3 Assume n = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
A.3.1 Illustration 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
A.3.2 Illustration 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
A.3.3 Illustration 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
A.3.4 Illustration 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
A.3.5 Illustration 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
A.3.6 Illustration 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
A.3.7 Illustration 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
A.3.8 Illustration 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
A.4 Ad hoc geometric analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 124
A.5 Phase portrait with infinitely many equilibrium points . . . . . . . . . . 127
Appendix B. Multivariate Fixed Point Algorithm . . . . . . . . . . . . . . . . . . 128
Appendix C. More on Bifurcation of Parameters:
Illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
C.1 Adding g_i > 0, Illustration 1 . . . . . . . . . . . . . . . . . . . . . . . . 131
C.2 Adding g_i > 0, Illustration 2 . . . . . . . . . . . . . . . . . . . . . . . . 132
C.3 g_i as a function of time . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
C.3.1 As a linear function . . . . . . . . . . . . . . . . . . . . . . . . . . 134
C.3.2 As an exponential function . . . . . . . . . . . . . . . . . . . . . . 137
C.4 The effect of γ_ij . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
C.5 Bifurcation diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
C.5.1 Illustration 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
C.5.2 Illustration 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
C.5.3 Illustration 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Appendix D. Scilab Program for Euler-Maruyama . . . . . . . . . . . . . . . . . . 147
List of References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Acknowledgments
I owe my deepest gratitude to those who made this thesis possible:
To Dr. Baltazar D. Aguda from the National Cancer Institute, USA for providing
the thesis topic, for imparting knowledge about models of cellular regulation, for sim-
plifying the MacArthur et al. (2008) GRN, for giving his valuable time to answer my
questions despite long distance communication, and for his patience, unselfish guidance
and encouragement;
To Dr. Cherryl O. Talaue for her all-out support, for spending time checking my
proofs and editing my manuscript, for granting my requests to write recommendation
letters, for the guidance, for the encouragement, and for always being available;
To Dr. Carlene P. Arceo for doing the proofreading of my thesis manuscript despite
her being on sabbatical leave, and to the members of my thesis panel for the constructive
criticisms;
To Mr. Mark Jayson V. Cortez and Ms. Jenny Lynn B. Carigma for checking my
manuscript for grammatical and style errors as well as for the motivation;
To the University of the Philippines Los Baños (UPLB) and to the Math Division,
Institute of Mathematical Sciences and Physics (IMSP), UPLB for allowing me to go on
study leave with pay;
To Dr. Virgilio P. Sison, the Director of IMSP, for all the support and for being the
co-maker in my DOST scholarship contract;
To Prof. Ariel L. Babierra, the Head of the Math Division, IMSP and to Dr. Editha
C. Jose for the invaluable suggestions, help and encouragement;
To the Philippine Council for Industry, Energy and Emerging Technology Research
and Development (PCIEERD), Department of Science and Technology (DOST) for the
generous financial support; and
To my family for the inspiration, and to El Elyon for the unwavering strength.
Abstract
Mathematical Strategies for Programming Biological Cells
Jomar F. Rabajante
University of the Philippines, 2012
Co-Adviser: Cherryl O. Talaue, Ph.D.
Co-Adviser: Baltazar D. Aguda, Ph.D.
In this thesis, we study a phenomenological gene regulatory network (GRN) of a mes-
enchymal cell differentiation system. The GRN is composed of four nodes consisting of
pluripotency and differentiation modules. The differentiation module represents a circuit
of transcription factors (TFs) that activate osteogenesis, chondrogenesis, and adipogen-
esis.
We investigate the dynamics of the GRN using Ordinary Differential Equations (ODEs).
The ODE model is based on a non-binary simultaneous decision model with autocatal-
ysis and mutual inhibition. The simultaneous decision model can represent a cellular
differentiation process that involves more than two possible cell lineages. We prove some
mathematical properties of the ODE model such as positive invariance and existence-
uniqueness of solutions. We employ geometric techniques to analyze the qualitative
behavior of the ODE model.
We determine the location and the maximum number of equilibrium points given
a set of parameter values. The solutions to the ODE model always converge to a stable
equilibrium point. Under some conditions, the solution may converge to the zero state.
We are able to show that the system can induce multistability that may give rise to
co-expression or to domination by some TFs.
We illustrate cases showing how the behavior of the system changes when we vary
some of the parameter values. Varying the values of some parameters, such as the degra-
dation rate and the amount of exogenous stimulus, can decrease the size of the basin of
attraction of an undesirable equilibrium point as well as increase the size of the basin of
attraction of a desirable equilibrium point. A sufficient change in some parameter values
can make a trajectory of the ODE model escape an inactive or a dominated state.
Sufficient amounts of exogenous stimuli affect the potency of cells. The introduc-
tion of an exogenous stimulus is a possible strategy for controlling cell fate. A dominated
TF can be made to exceed a dominating TF by adding a corresponding exogenous
stimulus. Moreover, increasing the amount of exogenous stimulus can shut down the
multistability of the system such that only one stable equilibrium point remains.
We consider the case where random noise is present in our system. We add a
Gaussian white noise term to our ODE model, making the model a system of stochastic
differential equations (SDEs). Simulations reveal that it is possible for cells to switch lineages when the system is
multistable. We are able to show that a sole attractor can regulate the effect of moderate
stochastic noise in gene expression.
List of Figures
1.1 Analysis of mesenchymal cell differentiation system. . . . . . . . . . . . 3
2.1 Stem cell self-renewal, differentiation and programming. This diagram
illustrates the abilities of stem cells to proliferate through self-renewal,
differentiate into specialized cells and reprogram towards other cell types. 5
2.2 Priming and differentiation. Colored circles represent genes or TFs. The
sizes of the circles determine lineage bias. Priming is represented by col-
ored circles having equal sizes. The largest circle governs the possible
phenotype of the cell. [70] . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 The flow of information. The blue solid lines represent general flow and
the blue dashed lines represent special (possible) flow. The red dotted
lines represent the impossible flow as postulated in the Central Dogma of
Molecular Biology [41]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 C. Waddington’s epigenetic landscape — “creode” [168]. . . . . . . . . . 11
3.1 The coarse-graining of the differentiation module. The network in (a) is
simplified into (b), where arrows indicate up-regulation (activation) while
bars indicate down-regulation (repression). [113] . . . . . . . . . . . . . . 13
3.2 The MacArthur et al. [113] mesenchymal gene regulatory network. Arrows
indicate up-regulation (activation) while bars indicate down-regulation (re-
pression). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Gene expression or the concentration of the TFs can be represented by a
state vector, e.g. ([X_1], [X_2], [X_3], [X_4]) [70]. For example, TFs of equal
concentration can be represented by a vector with equal components, such
as (2.4, 2.4, 2.4, 2.4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.4 Hierarchic decision model and simultaneous decision model. Bars repre-
sent repression or inhibition, while arrows represent activation. [36]. . . . 17
4.1 The slope of F(X) at the equilibrium point determines the linear stability.
Positive gradient means instability, negative gradient means stability. If
the gradient is zero, we look at the left and right neighboring gradients.
Refer to the Insect Outbreak Model: Spruce Budworm in [122]. . . . . . 26
4.2 Sample bifurcation diagram showing saddle-node bifurcation. . . . . . . . 28
4.3 An illustration of a cobweb diagram. . . . . . . . . . . . . . . . . . . . . 29
5.1 The original MacArthur et al. [113] mesenchymal gene regulatory network. 35
5.2 Possible paths that result in positive feedback loops. Shaded boxes denote
that the path repeats. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.3 The simplified MacArthur et al. GRN . . . . . . . . . . . . . . . . . . . . 37
5.4 Graph of the univariate Hill function when c_i = 1. . . . . . . . . . . . . 42
5.5 Possible graphs of the univariate Hill function when c_i > 1. . . . . . . . 43
5.6 The graph of Y = H_i([X_i]) shrinks as the value of K_i + Σ^n_{j=1,j≠i} γ_ij[X_j]^{c_ij}
increases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.7 The Hill curve gets steeper as the value of autocatalytic cooperativity c_i
increases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.8 The graph of Y = H_i([X_i]) is translated upwards by g_i units. . . . . . . 45
5.9 The 3-dimensional curve induced by H_i([X_1], [X_2]) + g_i and the plane in-
duced by ρ_i[X_i], an example. . . . . . . . . . . . . . . . . . . . . . . . . 46
5.10 The intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i with varying values
of K_i + Σ^n_{j=1,j≠i} γ_ij[X_j]^{c_ij}, an example. . . . . . . . . . . . . . . . 47
5.11 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i
where c_i = 1 and g_i = 0. The value of K_i + Σ^n_{j=1,j≠i} γ_ij[X_j]^{c_ij} is fixed. . . 49
5.12 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i
where c_i = 1 and g_i > 0. The value of K_i + Σ^n_{j=1,j≠i} γ_ij[X_j]^{c_ij} is fixed. . . 49
5.13 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i
where c_i > 1 and g_i = 0. The value of K_i + Σ^n_{j=1,j≠i} γ_ij[X_j]^{c_ij} is fixed. . . 50
5.14 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i
where c_i > 1 and g_i > 0. The value of K_i + Σ^n_{j=1,j≠i} γ_ij[X_j]^{c_ij} is fixed. . . 50
5.15 Finding the univariate fixed points using a cobweb diagram, an example.
We define the fixed point as [X_i] satisfying H_i([X_i]) + g_i = ρ_i[X_i]. . . . . 51
5.16 The curves are rotated making the line Y = ρ_i[X_i] the horizontal axis.
Positive gradient means instability, negative gradient means stability. If
the gradient is zero, we look at the left and right neighboring gradients. . 51
5.17 When g_i = 0, [X_i] = 0 is a component of a stable equilibrium point. . . . 56
5.18 When g_j > 0, [X_j] = 0 will never be a component of an equilibrium point. 56
6.1 Sample numerical solution in time series with the upper bound and lower
bound. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
6.2 Y = [X_i]^{c_i}/(K + [X_i]^{c_i}) will never touch the point (1, 1) for 1 < c_i < ∞. . . 70
6.3 An example where ρ_i(K_i^{1/c_i}) > β_i; Y = H_i([X_i]) and Y = ρ_i[X_i] only
intersect at the origin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
7.1 When g_i = 0, c_i = 1 and the decay line is tangent to the univariate Hill
curve at the origin, then the origin is a saddle. . . . . . . . . . . . . . . . 76
7.2 Varying the values of parameters may vary the size of the basin of at-
traction of the lower-valued stable intersection of Y = H_i([X_i]) + g_i and
Y = ρ_i[X_i]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
7.3 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i
where c > 1 and g = 0. The value of K_i + Σ^n_{j=1,j≠i} γ_ij[X_j]^{c_ij} is taken as a
parameter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
7.4 The possible topologies when Y = H_i([X_i]) essentially lies below the decay
line Y = ρ_i[X_i], g_i = 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.5 The origin is unstable while the points where [X_i]* = β/ρ − K − Σ^n_{j=1,j≠i} [X_j]
are stable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.6 Increasing the value of g_i can result in an increased value of [X_i] where
Y = H_i([X_i]) + g_i and Y = ρ_i([X_i]) intersect. . . . . . . . . . . . . . . . 83
7.7 Increasing the value of g_i can result in an increased value of [X_i]*, and
consequently in a decreased value of [X_j] where Y = H_j([X_j]) + g_j and
Y = ρ_j([X_j]) intersect, j ≠ i. . . . . . . . . . . . . . . . . . . . . . . . . 84
8.1 For Illustration 1; ODE solution and SDE realization with G(X) = 1. . . 88
8.2 For Illustration 1; ODE solution and SDE realization with G(X) = X. . 88
8.3 For Illustration 1; ODE solution and SDE realization with G(X) = √X. . 89
8.4 For Illustration 1; ODE solution and SDE realization with G(X) = F(X). 89
8.5 For Illustration 1; ODE solution and SDE realization using the random
population growth model. . . . . . . . . . . . . . . . . . . . . . . . . . . 90
8.6 For Illustration 2; ODE solution and SDE realization with G(X) = 1. . . 92
8.7 For Illustration 2; ODE solution and SDE realization with G(X) = X. . 92
8.8 For Illustration 2; ODE solution and SDE realization with G(X) = √X. . 93
8.9 For Illustration 2; ODE solution and SDE realization with G(X) = F(X). 93
8.10 For Illustration 2; ODE solution and SDE realization using the random
population growth model. . . . . . . . . . . . . . . . . . . . . . . . . . . 94
8.11 For Illustration 3; ODE solution and SDE realization with G(X) = 1. . . 96
8.12 For Illustration 3; ODE solution and SDE realization with G(X) = X. . 96
8.13 For Illustration 3; ODE solution and SDE realization with G(X) = √X. . 97
8.14 For Illustration 3; ODE solution and SDE realization with G(X) = F(X). 97
8.15 For Illustration 3; ODE solution and SDE realization using the random
population growth model. . . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.16 Phase portrait of [X_1] and [X_2]. . . . . . . . . . . . . . . . . . . . . . . 98
8.17 Reactivating switched-off TFs by introducing random noise where G(X) = 1. 99
9.1 The simplified MacArthur et al. GRN . . . . . . . . . . . . . . . . . . . . 100
A.1 Intersections of F_1, F_2 and the zero-plane, an example. . . . . . . . . . 106
A.2 The intersection of Y = H_1([X_1]) + 1 and Y = 10[X_1] with [X_2] = 1.001
and [X_3] = 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
A.3 The intersection of Y = H_2([X_2]) and Y = 10[X_2] with [X_1] = 0.10103
and [X_3] = 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
A.4 The intersection of Y = H_3([X_3]) and Y = 10[X_3] with [X_1] = 0.10103
and [X_2] = 1.001. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
A.5 A sample phase portrait of the system with infinitely many non-isolated
equilibrium points. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
C.1 Determining the adequate g_1 > 0 that would give rise to a sole equilibrium
point where [X_1]* > [X_2]*. . . . . . . . . . . . . . . . . . . . . . . . . . 133
C.2 An example where, without g_1, [X_1]* = 0. . . . . . . . . . . . . . . . . . 135
C.3 [X_1]* escaped the zero state because of the introduction of g_1, which is a
decaying linear function. . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
C.4 An example of shifting from a lower stable component to a higher stable
component through adding g_i(t) = −υ_i t + g_i(0). . . . . . . . . . . . . . 136
C.5 [X_1]* escaped the zero state because of the introduction of g_1, which is a
decaying exponential function. . . . . . . . . . . . . . . . . . . . . . . . . 137
C.6 Parameter plot of γ, an example. . . . . . . . . . . . . . . . . . . . . . . 138
C.7 Intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c > 1 and g = 0;
and an event of bifurcation. . . . . . . . . . . . . . . . . . . . . . . . . . 139
C.8 Saddle-node bifurcation; β_1 is varied. . . . . . . . . . . . . . . . . . . . . 140
C.9 Saddle-node bifurcation; K_1 is varied. . . . . . . . . . . . . . . . . . . . 141
C.10 Saddle-node bifurcation; ρ_1 is varied. . . . . . . . . . . . . . . . . . . . . 141
C.11 Cusp bifurcation; β_1 and g_1 are varied. . . . . . . . . . . . . . . . . . . . 142
C.12 Cusp bifurcation; K_1 and c are varied. . . . . . . . . . . . . . . . . . . . 142
C.13 Cusp bifurcation; K_1 and g_1 are varied. . . . . . . . . . . . . . . . . . . 143
C.14 Cusp bifurcation; ρ_1 and g_1 are varied. . . . . . . . . . . . . . . . . . . . 143
C.15 Saddle-node bifurcation; ρ_2 is varied. . . . . . . . . . . . . . . . . . . . . 144
C.16 Saddle-node bifurcation; g_2 is varied. . . . . . . . . . . . . . . . . . . . . 145
C.17 Saddle-node bifurcation; ρ_2 is varied. . . . . . . . . . . . . . . . . . . . . 146
C.18 Saddle-node bifurcation; g_2 is varied. . . . . . . . . . . . . . . . . . . . . 146
Chapter 1
Introduction
The field of Biomathematics has proven to be useful and essential for understanding
the behavior and control of dynamic biological interactions. These interactions span a
wide spectrum of spatio-temporal scales — from interacting chemical species in a cell to
individual organisms in a community, and from fast interactions occurring within seconds
to those that slowly progress in years. Mathematical and in silico models enable scientists
to generate quantitative predictions that may serve as initial input for testing biological
hypotheses to minimize trial and error, as well as to investigate complex biological systems
that are impractical or infeasible to study through in situ and in vitro experiments.
One classic question that scientists want to answer is how simple cells generate com-
plex organisms. In this study, we are interested in the analysis of gene interaction net-
works that orchestrate the differentiation of stem cells to various cell lineages that make
up an organism. We are also motivated by the prospects of utilizing stem cells in regen-
erative medicine (such as through replenishment of damaged tissues as well as treatment
of Parkinson’s disease and diabetes) [1, 50, 107, 151, 171, 180], in revolutionizing drug
discovery [2, 48, 136, 141, 142], and in the control of so-called cancer stem cells, which
have been hypothesized to maintain the growth of tumors [57, 65, 110, 171, 172].
The current -omics (genomics, transcriptomics, proteomics, etc.) and systems biol-
ogy revolution [3, 33, 61, 62, 63, 93, 96, 99, 100, 108, 133] is continually providing
details about gene networks. The focus of this study is the mathematical analysis of
a gene network [113] involved in the differentiation of multipotent mesenchymal
stromal stem cells into three lineages, namely, cells that form bone (osteoblasts),
cartilage (chondrocytes), and fat (adipocytes). This gene network shows the coupled interaction
among stem-cell-specific transcription factors and lineage-specifying transcription factors
induced by exogenous stimuli.
MacArthur et al. [113] proposed a model of this gene network, and we hypothesize
that further and more substantial analytical and computational study of this model would
reveal important insights into the control of the mesenchymal cell differentiation system.
We refer to the process of controlling the fate of a stem cell towards a chosen lineage as
cellular programming.
We analyze the gene network of MacArthur et al. [113] by simplifying the network
model while preserving its essential qualitative dynamics. In Chapter (5) of this the-
sis, we carry out this simplification to highlight the essential components of the
mesenchymal cell differentiation system and to ease the analysis.
We translate the simplified network model into a system of Ordinary Differential
Equations (ODEs) using the Cinquin-Demongeot formalism [38]. The system of ODEs
formulated by Cinquin-Demongeot [38] is one of the mathematical models appropriate to
represent the dynamics depicted in the simplified MacArthur et al. [113] gene network.
The state variables of the ODE model represent the concentration of the transcription fac-
tors involved in gene expression. The Cinquin-Demongeot [38] ODE model can represent
various biological interactions, such as molecular interactions during gene transcription,
and it can represent cellular differentiation with more than two possible outcomes.
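Although the model itself is derived only in Chapter (5), the general shape of a Cinquin-Demongeot-type system can already be sketched. The following Python snippet is an illustrative sketch, not the thesis's exact model: the parameter names (β_i, c_i, K_i, γ_ij, g_i, ρ_i) mirror the notation used in the later chapters, and the pairwise exponents c_ij are simplified here to a single cooperativity per node.

```python
import numpy as np

def cd_rhs(x, beta, c, K, gamma, g, rho):
    """Sketch of a Cinquin-Demongeot-type right-hand side:

        d[X_i]/dt = beta_i * x_i^c_i / (K_i + sum_{j!=i} gamma_ij * x_j^c_j + x_i^c_i)
                    + g_i - rho_i * x_i

    i.e. an autocatalytic Hill term with mutual inhibition in the
    denominator, a basal/exogenous production term g_i, and linear decay.
    """
    n = len(x)
    dx = np.empty(n)
    for i in range(n):
        # mutual inhibition: competing TFs enlarge the Hill denominator
        inhib = sum(gamma[i][j] * x[j] ** c[j] for j in range(n) if j != i)
        hill = beta[i] * x[i] ** c[i] / (K[i] + inhib + x[i] ** c[i])
        dx[i] = hill + g[i] - rho[i] * x[i]
    return dx
```

Note that with g_i = 0 the zero state yields dx = 0, consistent with the later observation (Figure 5.17) that [X_i] = 0 can then be a component of a stable equilibrium point.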
Stability and bifurcation analyses of the ODE model are important in understanding
the dynamics of cellular differentiation. An asymptotically stable equilibrium point is
associated with a certain cell type. In Chapters (6) and (7), we determine the biologically
feasible (nonnegative real-valued) coexisting stable equilibrium points of the ODE model
for a given set of parameters. We also determine whether varying the values of some parameters,
such as those associated with the exogenous stimuli, can steer the system toward a desired
state.
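Numerically, these two tasks can be sketched generically: an equilibrium satisfies ρ_i[X_i] = H_i([X]) + g_i, so a multivariate fixed-point iteration (the thesis's own algorithm appears in Appendix B) can locate candidate equilibria, and the eigenvalues of a numerical Jacobian then classify their stability. The Python snippet below is a hedged sketch under those definitions, not the exact code used in the thesis.

```python
import numpy as np

def find_equilibrium(H, g, rho, x0, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration x <- (H(x) + g) / rho.

    A fixed point of this map satisfies rho_i*x_i = H_i(x) + g_i, i.e. it is
    an equilibrium of dx_i/dt = H_i(x) + g_i - rho_i*x_i.  Convergence is not
    guaranteed; which equilibrium is found depends on the start x0.
    """
    x = np.asarray(x0, dtype=float)
    g, rho = np.asarray(g, float), np.asarray(rho, float)
    for _ in range(max_iter):
        x_new = (H(x) + g) / rho
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

def is_stable(f, x_eq, h=1e-6):
    """Linear stability test: all eigenvalues of a central-difference
    Jacobian of f at x_eq must have negative real part."""
    n = len(x_eq)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x_eq + e) - f(x_eq - e)) / (2 * h)
    return bool(np.all(np.linalg.eigvals(J).real < 0))
```

Scanning many starting points x0 and collecting the distinct limits gives the coexisting stable equilibria, each associated with a cell type.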
Furthermore, in Chapter (8), we numerically investigate the robustness of the gene
network against stochastic noise by adding a noise term to the deterministic ODEs. The
objectives of the study are summarized in the following diagram:
Figure 1.1: Analysis of mesenchymal cell differentiation system.
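The stochastic simulations of Chapter (8) use the Euler-Maruyama scheme (the thesis's Scilab program is listed in Appendix D). As a rough Python sketch of that scheme, for an SDE dX = F(X) dt + G(X) dW driven by Gaussian white noise:

```python
import numpy as np

def euler_maruyama(F, G, x0, t_end, dt, rng=None):
    """Euler-Maruyama integration of dX = F(X) dt + G(X) dW.

    F is the deterministic drift (the ODE right-hand side); G scales the
    Gaussian increments dW ~ Normal(0, dt).  Setting G(x) = 0 recovers the
    plain deterministic Euler method.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(int(round(t_end / dt))):
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increments
        x = x + F(x) * dt + G(x) * dW
        path.append(x.copy())
    return np.array(path)
```

Chapter (8) pairs one drift F with several diffusion choices, e.g. G(X) = 1, G(X) = X, G(X) = √X, or G(X) = F(X); each corresponds to a different G argument here.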
Chapter 2
Preliminaries
Biology of Cellular Programming
2.1 Stem cells in animals
Stem cells are very important for the development, growth and repair of tissues. These
are cells that can undergo mitosis (cell division) and have two contrasting abilities —
the ability to self-renew and the ability to differentiate into specialized cell types.
Self-renewal is the ability of stem cells to proliferate, that is, one or both daughter cells
remain as stem cells after cell division. When a stem cell undergoes differentiation,
it develops into a more mature (specialized) cell, losing its abilities to self-renew and to
differentiate towards other cell types. In addition, scientists have shown that some cells
can dedifferentiate and some can be transdifferentiated. Dedifferentiation means that
a differentiated cell is transformed back to an earlier stage, while transdifferentiation
means that a cell is programmed to switch cell lineages.
The maturity of a stem cell is classified based on the cell’s potency (the cell’s capability
to differentiate into various types). The three major kinds of stem cell potency are
totipotency, pluripotency and multipotency. Figure (2.1) shows these three types of
potencies and the differentiation process. Totipotent stem cells have the potential to
generate all cells including extraembryonic tissues, such as the placenta, and they are the
ancestors of all cells of an organism. A zygote is an example of a totipotent stem cell.
Pluripotent stem cells are descendants of totipotent stem cells that have lost their
ability to generate extraembryonic tissues but not their ability to generate all cells of the
embryo. Examples of these stem cells are the cells of the epiblast from the inner cell mass
of the blastocyst embryo. These stem cells can differentiate into almost all types of cells;
specifically, they form the endoderm, mesoderm and ectoderm germ layers.

Figure 2.1: Stem cell self-renewal, differentiation and programming. This diagram
illustrates the abilities of stem cells to proliferate through self-renewal, differentiate into
specialized cells and reprogram towards other cell types.

Pluripotent
stem cells form all cell types found in an adult organism. The stomach, intestines, liver,
pancreas, urinary bladder, lungs and thyroid are formed from the endoderm layer; the
central nervous system, lens of the eye, epidermis, hair, sweat glands, nails, teeth and
mammary glands are formed from the ectoderm layer. The mesoderm layer connects the
endoderm and ectoderm layers, and forms the bones, muscles, connective tissues, heart,
blood cells, kidneys, spleen and middle layer of the skin.
Embryonic stem (ES) cells, epiblast stem cells, embryonic germ cells (derived from
primordial germ cells), spermatogonial male germ stem cells and induced pluripotent
stem cells (iPSCs) are examples of pluripotent stem cells that are cultured in vitro.
ES cells are derived from the inner cell mass of the blastocyst embryo upon explantation
(isolated from the normal embryo).
Some adult stem cells, which can be somatic (related to the body) or germline (related
to the gametes such as ovum and sperm), with embryonic stem cell-like pluripotency have
been found by researchers under certain environments [16, 97, 103, 125, 170, 181]. Um-
bilical cord blood, adipose tissue and bone marrow are found to be sources of pluripotent
stem cells.
The production of iPSCs in 2006 [109, 162] is a major breakthrough for stem cell
research. The iPSCs are cells that are artificially reprogrammed to dedifferentiate from
differentiated or partially differentiated cells and become pluripotent again. With fewer
ethical issues than embryo cloning, iPSCs can be used for possible therapeutic purposes
such as treating degenerative diseases, repairing damaged tissues and reprogramming
cancer stem cells. However, there are still many open issues on the use of iPSCs, such
as safety and efficiency. Currently, there is still no strong proof that generated iPSCs
and natural ES cells are totally identical [158].
Pluripotent stem cells that differentiate to specific cell lineages lose their pluripotency,
that is, they lose their ability to generate other kinds of cells. Multipotent stem
cells are descendants of pluripotent stem cells but are already partially differentiated
— they have the ability to self-renew yet can differentiate only to specific cell lineages.
Multipotent stem cells are adult stem cells that are commonly considered as progenitor
cells (cells that are in the stage between being pluripotent and fully differentiated).
When a multipotent stem cell further differentiates, it matures to a more specialized
cell lineage. Oligopotent and unipotent stem cells are progenitor cells that have very
limited ability for self-renewal and are less potent. Oligopotent stem cells are descendants
of multipotent stem cells and can only differentiate into very few cell types. Usually,
stem cells are given special names based on the degree of potency, such as tripotent and
bipotent depending on whether the cell can only differentiate into three and two cell fates,
respectively. Unipotent stem cells, which are commonly called precursor cells, can only
differentiate into one cell type but are not the same as fully differentiated cells. Fully
differentiated cells are at the determined terminal state, that is, they have completed
the differentiation process, have exited the cell cycle, and have already lost the ability to
self-renew [23, 123].
Figure 2.2: Priming and differentiation. Colored circles represent genes or TFs. The
sizes of the circles determine lineage bias. Priming is represented by colored circles having
equal sizes. The largest circle governs the possible phenotype of the cell. [70]
In vitro, ex vivo and in vivo programming have already been done [138, 139, 151,
177]. The idea of programming biological cells indicates that some cells are “plastic”
(i.e., some cells have the ability to change lineages). This plasticity of cells shows that
some cells do not permanently inactivate unexpressed genes but rather retain all genetic
information (see Figure (2.2)). Three in vitro approaches of cellular programming have
been discussed in a review by Yamanaka [177]. These approaches are nuclear transfer,
cell fusion and transcription-factor transduction [19, 44, 51, 58, 106, 177]. The process
of nuclear transfer has been used to successfully clone Dolly the sheep. Transcription-
factor transduction, commonly called direct programming, alters the expression of
transcription factors (TFs) by overexpression or by deletion. Overexpressing one TF
may down-regulate other TFs that would lead to a change in the phenotype of a cell. In
2006, Yamanaka and Takahashi [162] identified four factors — OCT3/4, SOX2, c-MYC,
and KLF4 — that are enough to reprogram cells from mouse fibroblasts to become
iPSCs (through the use of retrovirus). In 2007, Yamanaka, Takahashi and colleagues
[161] generated iPSCs from adult human fibroblasts by the same defined factors.
The three cellular programming approaches discussed by Yamanaka [177] have re-
vealed common features — demethylation of pluripotency gene promoters and activation
of ES-cell-specific TFs such as OCT4, SOX2 and NANOG [113, 124, 129]. In this study,
we only consider the TF transduction approach. To understand cellular differentiation
and TF transduction, we need to look at gene regulatory networks. Gene regulatory
networks (GRNs) establish the interactions of molecules and other signals for the ac-
tivation or inhibition of genes. We consider the key pluripotency transcription factors
OCT4, SOX2 and NANOG as the elements of the core pluripotency module in our GRN.
For a more detailed discussion about stem cells in animals, the following references
may be consulted [1, 12, 20, 22, 25, 34, 39, 42, 59, 74, 78, 80, 84, 103, 117, 148, 151, 159,
169, 177].
2.2 Transcription factors and gene expression
Genes contain hereditary information and are segments of the deoxyribonucleic acid
(DNA). Gene expression is the process in which information from a gene is used to
synthesize functional products such as proteins. Examples of these gene products are
proteins that give the cell its structure and function.
Genes in the DNA direct protein synthesis. Transcription and translation are the two
major processes that transform the information from nucleic acids to proteins (see Figure
(2.3)). In the transcription process, the DNA commands the synthesis of ribonucleic
Figure 2.3: The flow of information. The blue solid lines represent general flow and the
blue dashed lines represent special (possible) flow. The red dotted lines represent the
impossible flow as postulated in the Central Dogma of Molecular Biology [41].
acid (RNA) and the information is transcribed from the DNA template to the RNA. The
RNA, specifically messenger RNA or mRNA, then carries the information to the part
of the cell where protein synthesis will happen. In the translation process, the cell
translates the information from the mRNA to proteins.
During transcription, the promoter (a DNA sequence where RNA polymerase enzyme
attaches) initiates transcription, while the terminator (also a DNA sequence) marks the
end of transcription. However, the RNA polymerase binds to the promoter only after
some transcription factors (TFs), a collection of proteins, are attached to the pro-
moter.
Gene expression is usually regulated by DNA-binding proteins (such as by TFs) at the
transcription process, sometimes utilizing external signals. TFs play a main role in gene
regulatory networks. A TF that binds to an enhancer (a control element) and stimulates
transcription of a gene is called an activator; a TF that binds to a silencer (also a control
element) and inhibits transcription of a gene is called a repressor. Hundreds of TFs
have been discovered in eukaryotes. In highly specialized cells, only a small fraction of their
genes are activated.
Examples of TFs are OCT4, SOX2 and NANOG as well as RUNX2, SOX9 and PPAR-
γ. RUNX2, SOX9 and PPAR-γ stimulate formation of bone cells, cartilage cells and fat
cells, respectively [113].
For a more detailed discussion about the relationship between transcription factors
and gene expression, the following references may be consulted [24, 89, 126].
2.3 Biological noise and stochastic differentiation
It is believed that stochastic fluctuations in gene expression affect cell fate commitment
in normal development and in in vitro culture of cells. The path that the cell would take
is not absolutely deterministic but is rather affected by two kinds of noise — intrinsic
and extrinsic [128, 130, 160, 174]. Intrinsic noise is the inherent noise produced during
biochemical processes inside the cell, while extrinsic noise is the noise produced from
the external environment (such as from the other cells). In some cases, extrinsic noise
dominates the intrinsic noise and influences cell-to-cell variation [174] because the internal
environment of a cell is regulated by homeostasis.
Unregulated random fluctuations can cause negative effects to the organism. However,
in most cases, these stochastic fluctuations are naturally regulated enough to maintain
order [30, 111]. Stochastic fluctuations have positive effects to the system such as driving
oscillations and inducing switching in cell fates [71, 111, 174]. The papers [113] and [176]
discuss the importance of random noise in dedifferentiation, especially in the production
of iPSCs.
When a stem cell undergoes cell division, the two daughter cells may both still be
identical to the original, may both have already been differentiated, or may have one cell
identical to the original and the other already differentiated. Cells that would undergo
differentiation have plenty of cell lineages to choose from, but their cell fates are based
on some pattern formation [24]. The model “creode” by C. Waddington [168], as shown
in Figure (2.4), illustrates the paths that a cell might take. In Waddington’s model, cell
differentiation is depicted by a ball rolling down a landscape of hills and valleys. The
parts of the valleys where the ball can stay without rolling can be regarded as attractors
that represent cell types. GRNs determine the topography of the landscape.
Figure 2.4: C. Waddington’s epigenetic landscape — “creode” [168].
For a more detailed discussion about biological noise and stochastic differentiation,
the following references may be consulted [9, 15, 26, 28, 30, 53, 64, 80, 81, 83, 85, 94,
101, 105, 111, 112, 127, 131, 132, 152, 164, 176].
Chapter 3
Preliminaries
Mathematical Models of Gene Networks
This chapter gives a review of the existing literature on models of gene regulatory
networks (GRN).
Commonly, to start the mathematical analysis of GRNs, a directed graph is con-
structed to visualize the interaction of the molecules involved. Various network analysis
techniques are available to extract information from the constructed directed graph such
as clustering algorithms and motif analysis [4, 30, 45, 68, 90]. The study of the network
topology is important in understanding the biological system that the network represents.
Gene regulatory systems are commonly modeled as Bayesian networks, Boolean net-
works, generalized logical networks, Petri nets, ordinary differential equations, partial
differential equations, chemical master equations, stochastic differential equations and
rule-based simulations [29, 45]. The choice of mathematical model depends on the as-
sumptions made about the nature of the GRN and on the objectives of the study.
In this thesis, we study the directed graph constructed by MacArthur et al. [113],
its corresponding Ordinary Differential Equations (ODEs) formulated in the Cinquin-
Demongeot [38] formalism, and the associated Stochastic Differential Equations (SDEs). By using
an ODE model, we assume that the time-dependent macroscopic dynamics of the GRN
are continuous in both time and state space. We assume continuous dynamics because the
process of lineage determination involves a temporal extension, that is, cells pass through
intermediate stages [70]. We use ODEs to model the average dynamics of the GRN. ODEs
are primarily used to represent the deterministic dynamics of phenomenological (coarse-
grained) regulatory networks [70, 121]. In addition, we can add a random noise term to
the ODE model to study stochasticity in cellular differentiation.
3.1 The MacArthur et al. GRN
The MacArthur et al. [113] GRN is composed of a pluripotency module (the circuit
consisting of OCT4, SOX2, NANOG and their heterodimer and heterotrimer) and a
differentiation module (the circuit consisting of RUNX2, SOX9 and PPAR-γ) [113]. The
transcription factors RUNX2, SOX9 and PPAR-γ activate the formation of bone cells,
cartilage cells and fat cells, respectively.
Figure 3.1: The coarse-graining of the differentiation module. The network in (a) is
simplified into (b), where arrows indicate up-regulation (activation) while bars indicate
down-regulation (repression). [113]
The derivation of the core differentiation module is shown in Figure (3.1) where the
interactions through intermediaries are consolidated to create a simplified network. The
MacArthur et al. [113] GRN that we are going to study is shown in Figure (3.2).
Feedback loops (which are important for the existence of homeostasis) and autoregu-
lation (or autoactivation, which means that a molecule enhances its own expression) are
necessary to attain pluripotency [177]. These feedback loops and autoregulation are also
Figure 3.2: The MacArthur et al. [113] mesenchymal gene regulatory network. Arrows
indicate up-regulation (activation) while bars indicate down-regulation (repression).
present in the MacArthur et al. GRN [113]; however, they are not enough to generate iP-
SCs. Based on the deterministic computational analysis of MacArthur et al. [113], their
pluripotency module cannot be reactivated once silenced, that is, it becomes resistant
to reprogramming. However, they found that introducing stochastic noise to the system
can reactivate the pluripotency module [113].
3.2 ODE models representing GRN dynamics
A state X = ([X_1], [X_2], . . . , [X_n]) represents a temporal stage in the cellular differentiation
or programming process (see Figure (3.3)). We define [X_i] as a component (coordinate)
of a state. A stable state (stable equilibrium point) X* = ([X_1]*, [X_2]*, . . . , [X_n]*)
represents a certain cell type, e.g., pluripotent, tripotent, bipotent, unipotent or terminal
state.
Figure 3.3: Gene expression or the concentration of the TFs can be represented by a
state vector, e.g. ([X_1], [X_2], [X_3], [X_4]) [70]. For example, TFs of equal concentration
can be represented by a vector with equal components, such as (2.4, 2.4, 2.4, 2.4).
Modelers of GRNs often use a function H+ (or H−) which is bounded monotone
increasing (or decreasing) with values between zero and one. Examples of such functions
are the sigmoidal, hyperbolic and threshold piecewise-linear functions. If we use the sigmoidal
H+ and H− called the Hill functions, we define

    H+([X], K, c) := [X]^c / (K^c + [X]^c)   (3.1)

for activation of gene expression and

    H−([X], K, c) := 1 − H+([X], K, c) = K^c / (K^c + [X]^c)   (3.2)

for repression, where the variable [X] is the concentration of the molecule involved [69,
73, 96, 121, 144]. The parameter K is the threshold or dissociation constant and is equal
to the value of [X] at which the Hill function is equal to 1/2. The parameter c is called
the Hill constant or Hill coefficient and describes the steepness of the Hill curve. The Hill
constant often denotes multimerization-induced cooperativity (a multimer is an assembly
of multiple monomers or molecules) and may represent the number of cooperative binding
sites if c is restricted to a positive integer. However, in some cases, the Hill constant can
be a positive real number (usually 1 < c < n where n is the number of equivalent
cooperative binding sites) [73, 174]. If c = 1, then there is no cooperativity [38] and the
Hill function becomes the Michaelis-Menten function, which is hyperbolic. If data are
available, we can estimate the value of c by inference.
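As a concrete illustration (a minimal sketch, not part of the thesis), the Hill functions (3.1) and (3.2) translate directly into code; note that H+ and H− sum to one, and both equal 1/2 at [X] = K regardless of c:

```python
def hill_plus(x, K, c):
    """Hill activation function (3.1): [X]^c / (K^c + [X]^c)."""
    return x**c / (K**c + x**c)

def hill_minus(x, K, c):
    """Hill repression function (3.2): 1 - H+ = K^c / (K^c + [X]^c)."""
    return K**c / (K**c + x**c)

# At [X] = K both functions equal 1/2 regardless of c;
# larger c gives a steeper, more switch-like response.
print(hill_plus(2.0, 2.0, 4))   # 0.5
print(hill_minus(2.0, 2.0, 4))  # 0.5
```

With c = 1 the activation function reduces to the hyperbolic Michaelis-Menten form x / (K + x), as noted above.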
Various ODE models and formulations are presented in [13, 14, 27, 30, 31, 32, 43,
47, 69, 76, 96, 115, 135, 173]. Examples of these are the neural network [166] model,
the S-systems (power-law) [167] model, the Andrecut [7] model, the Cinquin-Demongeot
2002 [36] model, and the Cinquin-Demongeot 2005 [38] model. The Cinquin-Demongeot
2002 and 2005 models can represent various GRNs and are more amenable to analysis.
3.2.1 Cinquin and Demongeot ODE formalism
According to Waddington’s model [168], cell differentiation is similar to a ball rolling
down a landscape of hills and valleys. The ridges of the hills can be regarded as the
unstable equilibrium points while the parts of the valleys where the ball can stay without
rolling further (i.e., at relative minima of the landscape) can be regarded as stable equi-
librium points (attractors). Hence, the movement of the ball and its possible location
after some time can be represented by dynamical systems, specifically ODEs. However,
it should be noted that existing evidence showing the presence of attractors is limited to
some mammalian cells [112].
The theory that some cells can differentiate into many different cell types gives the
idea that the model representing the dynamics of such cells may exhibit multistability
(multiple stable equilibrium points). However, not all GRNs are reducible to binary or
boolean hierarchic decision network (see Figure (3.4)), that is why Cinquin and Demon-
geot formulated models that can represent cellular differentiation with more than two
Figure 3.4: Hierarchic decision model and simultaneous decision model. Bars represent
repression or inhibition, while arrows represent activation. [36].
possible outcomes (multistability) obtained through different developmental pathways
[3, 38, 35]. The simultaneous decision network (see Figure (3.4)) is a close approximation
of the Waddington illustration where there are possibly many cell lineages involved.
In 2002, Cinquin and Demongeot proposed an ODE model representing the simulta-
neous decision network [36]. In 2005, they proposed another ODE model representing
the simultaneous decision network but with autocatalysis (autoactivation) [38]. Both the
Cinquin-Demongeot models are based on the simultaneous decision graph where there is
mutual inhibition. All elements in the Cinquin-Demongeot models are symmetric, that
is, each node has the same relationship with all other nodes, and all equations in the
system of ODEs have equal parameter values.
Equations (3.3) and (3.4) are the Cinquin-Demongeot ODE models without autocatalysis
(2002 version, [36]) and with autocatalysis (2005 version, [38]), respectively. Let us
suppose we have n antagonistic transcription factors. The state variable [X_i] represents
the concentration of the corresponding TF protein such that the TF expression is subject
to a first-order degradation (exponential decay). The parameters β, c and g represent the
relative speed of transcription (or strength of the unrepressed TF expression relative to
the first-order degradation), the cooperativity and the “leak”, respectively. The parameter g is a
basal expression of the corresponding TF and a constant production term that enhances
the value of [X_i], which is possibly affected by an exogenous stimulus. For simplification,
only the transcription regulation process is considered in [38]. The models are assumed
to be intracellular and cell-autonomous (i.e., we only consider processes inside a single
cell without the influence of other cells).

Without autocatalysis:

    d[X_i]/dt = β / (1 + Σ_{j=1, j≠i}^{n} [X_j]^c) − [X_i],   i = 1, 2, . . . , n   (3.3)

With autocatalysis:

    d[X_i]/dt = β[X_i]^c / (1 + Σ_{j=1}^{n} [X_j]^c) − [X_i] + g,   i = 1, 2, . . . , n   (3.4)

The terms

    β / (1 + Σ_{j=1, j≠i}^{n} [X_j]^c)   and   β[X_i]^c / (1 + Σ_{j=1}^{n} [X_j]^c)   (3.5)
are Hill-like functions. In this study, we only consider Cinquin-Demongeot (2005 version)
model (3.4) because autocatalysis is a common property of cell fate-determining factors
known as “master” switches [38].
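To make the deterministic dynamics concrete, model (3.4) can be integrated numerically. The sketch below is illustrative only: the parameter values (β = 2, c = 2, g = 0.05, n = 3) and the forward-Euler scheme are our own choices, not taken from [38].

```python
import numpy as np

def cd2005_rhs(x, beta=2.0, c=2.0, g=0.05):
    """Right-hand side of the Cinquin-Demongeot (2005) model (3.4):
    autocatalytic Hill-like production with a shared denominator
    (mutual inhibition), first-order decay, and basal leak g."""
    denom = 1.0 + np.sum(x**c)
    return beta * x**c / denom - x + g

def euler_integrate(x0, dt=0.01, steps=20000):
    """Forward-Euler integration until the state settles near an equilibrium."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * cd2005_rhs(x)
    return x

# A slightly biased initial state: the first TF starts mildly up-regulated.
x_star = euler_integrate([1.1, 1.0, 1.0])
print(np.round(x_star, 3))
```

Running the same integration from differently biased initial states probes which of the model's stable equilibria (primed or committed) the trajectory settles into.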
In [38], Cinquin and Demongeot observed that their model (with autocatalysis) can
show the priming behavior of stem cells (i.e., genes are equally expressed) as well as
the up-regulation of one gene and down-regulation of the others. They also proved that
multistability of their model where g = 0 is manipulable by changing the value of c
(cooperativity); however, manipulating the level of cooperativity is of minimal biological
relevance. Also, their model is more sensitive to stochastic noise when the equilibrium
points are near each other.
3.2.2 ODE model by MacArthur et al.
MacArthur et al. [113] proposed an ODE model (Equations (3.6) and (3.7)) to represent
their GRN (refer to Figure (3.2)). Let [P_i] be the concentration of the TF protein
in the pluripotency module, specifically, [P_1] := [OCT4], [P_2] := [SOX2] and
[P_3] := [NANOG]. Also, let [L_i] be the concentration of the TF protein in the differentiation
module where [L_1] := [RUNX2], [L_2] := [SOX9] and [L_3] := [PPAR-γ]. The
parameter s_i represents the effect of the growth factors stimulating the differentiation
towards the i-th cell lineage, specifically, s_1 := [RA+BMP4], s_2 := [RA+TGF-β] and
s_3 := [RA+Insulin]. In mouse ES cells, RUNX2 is stimulated by retinoic acid (RA) and
BMP4; SOX9 by RA and TGF-β; and PPAR-γ by RA and Insulin. The derivation of the
ODE model and the interpretation of the parameters are discussed in the supplementary
materials of [113].
    d[P_i]/dt = k_1i [P_1][P_2](1 + [P_3]) / [ (1 + k_0 Σ_j s_j)(1 + [P_1][P_2](1 + [P_3]) + k_PL Σ_j [L_j]) ] − b[P_i]   (3.6)

    d[L_i]/dt = k_2 (s_i + k_3 Σ_{j≠i} s_j) [L_i]^(2m) / [ 1 + k_LC1 [P_1][P_2] + k_LC2 [P_1][P_2][P_3] + [L_i]^2 + k_LL (s_i + k_3 Σ_{j≠i} s_j) Σ_{j≠i} [L_j]^2 ] − b[L_i]   (3.7)
However, this system of coupled ODEs is difficult to study using analytic techniques.
MacArthur et al. [113] simply conducted numerical simulations to investigate the behavior
of the system. They tried to analytically analyze the system but only for a specific
case where [P_i] = 0, i = 1, 2, 3, that is, when the pluripotency module is switched off.
The ODE model (3.8) that they analyzed when the pluripotency module is switched off
follows the Cinquin-Demongeot [38] formalism with c = 2, that is,

    d[L_i]/dt = [L_i]^2 / (1 + [L_i]^2 + a Σ_{j≠i} [L_j]^2) − b[L_i],   i = 1, 2, 3   (3.8)
MacArthur et al. [113] analytically proved that the three cell types (tripotent, bipo-
tent and terminal states) are simultaneously stable for some parameter values in (3.8).
However, as the effect of an exogenous stimulus is increased above some threshold value,
the tripotent state becomes unstable leaving only two stable cell types (bipotent and
terminal state). If the effect of the exogenous stimulus is further increased, the bipotent
state also becomes unstable leaving the terminal state as the sole stable cell type. In
addition, MacArthur et al. [113] showed that dedifferentiation is not possible without
the aid of stochastic noise.
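A short worked example of how such stable states arise in (3.8) (a sketch along a single lineage axis; our own illustration, not reproducing the full analysis in [113]): set [L_j] = 0 for all j ≠ i. Equilibria then satisfy

    [L_i]^2 / (1 + [L_i]^2) − b[L_i] = 0,

so either [L_i] = 0 or b[L_i]^2 − [L_i] + b = 0, which gives

    [L_i] = (1 ± sqrt(1 − 4b^2)) / (2b).

The two positive roots are real only when b < 1/2. Along this axis, [L_i] = 0 and the larger root are stable, while the smaller root is an unstable threshold separating them; as the degradation rate b increases past 1/2, the two positive equilibria coalesce and disappear (a saddle-node bifurcation), leaving only the off state.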
3.3 Stochastic Differential Equations
A time-dependent Gaussian white noise term can be added to the ODE model to inves-
tigate the effect of random fluctuations in gene expression. This Gaussian white noise
term combines and averages multiple heterogeneous sources of temporal noise. Equations
(3.10) to (3.13) show some of the different SDE models [71, 72, 113, 174] of the form
    dX = F(X) dt + σ G(X) dW   (3.9)
that we use in this study. We employ different G(X) to observe the various effects of
the added Gaussian white noise term. We let F(X) be the right-hand side of our ODE
equations, σ be a diagonal matrix of parameters representing the amplitude of noise, and
W be a Brownian motion (Wiener process). If the genes in a cell are isogenic (essentially
identical) then we can suppose the diagonal entries of the matrix σ are all equal.
    dX = F(X) dt + σ dW   (3.10)

    dX = F(X) dt + σ X dW   (3.11)

    dX = F(X) dt + σ √X dW   (3.12)

    dX = F(X) dt + σ F(X) dW   (3.13)
Notice that in Equations (3.11) and (3.12), the noise term is affected by the value
of X. As the concentration X increases, the effect of the noise term also increases.
In Equation (3.13), the noise term is affected by the value of F(X), that is,
as the deterministic change in the concentration X with respect to time (dX/dt = F(X))
increases, the effect of the noise term also increases. In Equation (3.10), the noise term
is not dependent on any variable.
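A standard way to simulate these SDEs is the Euler-Maruyama scheme, which replaces dW by Gaussian increments of variance dt. The sketch below is illustrative: the drift F (a simple linear pull towards 1) and the noise amplitude σ = 0.1 are hypothetical choices of ours, and only the additive-noise form (3.10) is shown.

```python
import numpy as np

def euler_maruyama(F, x0, sigma, dt=0.01, steps=10000, seed=0):
    """Euler-Maruyama scheme for dX = F(X)dt + sigma dW (Equation (3.10));
    sigma may be a scalar or a vector of noise amplitudes."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Wiener increments
        x = x + F(x) * dt + sigma * dW
        path.append(x.copy())
    return np.array(path)

# Hypothetical linear drift pulling each component towards 1.0
F = lambda x: 1.0 - x
path = euler_maruyama(F, x0=[0.0, 0.0], sigma=0.1)
print(path[-1])  # fluctuates around (1, 1)
```

The state-dependent forms (3.11)-(3.13) follow by replacing `sigma * dW` with, e.g., `sigma * x * dW` or `sigma * F(x) * dW`.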
For a more detailed discussion about various modeling techniques, the following ref-
erences may be consulted [6, 11, 18, 21, 46, 52, 55, 60, 66, 67, 75, 77, 79, 87, 88, 92, 118,
137, 140, 143, 149, 153, 154, 163, 165, 175, 179, 182].
Chapter 4
Preliminaries
Analysis of Nonlinear Systems
This chapter gives a brief discussion of the theoretical background on the qualitative
analysis of coupled nonlinear dynamical systems.
Consider the autonomous system of ODEs

    d[X_i]/dt = F_i([X_1], [X_2], . . . , [X_n]),   i = 1, 2, . . . , n,   (4.1)

with initial condition [X_i](0) := [X_i]_0 for all i. We assume that t ≥ 0 and F_i : B → R,
i = 1, 2, . . . , n, where B ⊆ R^n. If we have a nonautonomous system of ODEs,
d[X_i]/dt = F_i([X_1], [X_2], . . . , [X_n], t), i = 1, 2, . . . , n, then we convert it to an autonomous system by
defining t := [X_{n+1}] and d[X_{n+1}]/dt = 1 [134].

For simplicity, let F := (F_i, i = 1, 2, . . . , n), X := ([X_i], i = 1, 2, . . . , n) and
X_0 := ([X_i]_0, i = 1, 2, . . . , n).
For an ODE model to be useful, it is necessary that it has a solution. Existence of a
unique solution for a given initial condition is important to effectively predict the behavior
of our system. Moreover, we are assured that the solution curves of an autonomous system
do not intersect with each other when existence and uniqueness conditions hold [56].
Suppose X(t) is a differentiable function. The solution to (4.1) satisfies the following
integral equation:
    X(t) = X_0 + ∫_0^t F(X(τ)) dτ.   (4.2)
The following are theorems that guarantee local existence and uniqueness of solutions
to ODEs:
Theorem 4.1 Existence theorem (Peano, Cauchy). Consider the autonomous system
(4.1). Suppose that F is continuous on B. Then the system has a solution (not necessarily
unique) on [0, δ] for sufficiently small δ > 0 given any X_0 ∈ B.

Theorem 4.2 Local existence-uniqueness theorem (Picard, Lindelöf, Lipschitz,
Cauchy). Consider the autonomous system (4.1). Suppose that F is locally Lipschitz
continuous on B, that is, F satisfies the following condition: for each point X_0 ∈ B
there is an ε-neighborhood of X_0 (denoted as B_ε(X_0), where B_ε(X_0) ⊆ B) and a positive
constant m_0 such that |F(X) − F(Y)| ≤ m_0 |X − Y| for all X, Y ∈ B_ε(X_0). Then the system
has exactly one solution on [0, δ] for sufficiently small δ > 0 given any X_0 ∈ B.

Theorem (4.2) can be extended to a global case stated as:

Theorem 4.3 Global existence-uniqueness theorem. If there is a positive constant
m such that |F(X) − F(Y)| ≤ m |X − Y| for all X, Y ∈ B (i.e., F is globally Lipschitz
continuous on B), then the system has exactly one solution defined for all t ≥ 0 for
any X_0 ∈ B.
If all the partial derivatives ∂F_i/∂[X_j], i, j = 1, 2, . . . , n, are continuous on B (i.e., F ∈
C¹(B)), then F is locally Lipschitz continuous on B. If the absolute values of these partial
derivatives are also bounded for all X ∈ B, then F is globally Lipschitz continuous on
B. The global condition says that if the growth of F with respect to X is at most linear
then we have a global solution. If F satisfies the local Lipschitz condition but not the
global Lipschitz condition, then it is possible that after some finite time t, the solution
will “blow-up”.
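A standard worked example of such finite-time blow-up (a textbook illustration, not from the thesis): the scalar equation dx/dt = x^2 is locally but not globally Lipschitz continuous on R. For x(0) = x_0 > 0, separation of variables gives

    x(t) = x_0 / (1 − x_0 t),

so the solution exists only on [0, 1/x_0) and “blows up” as t → 1/x_0.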
We define a point X = ([X_1], [X_2], . . . , [X_n]) as a state of the system, and the collection
of these states is called the state space. The solution curve of the system starting
from a fixed initial condition is called a trajectory or orbit. The collection of trajectories
given any initial condition is called the flow of the differential equation and is denoted by
φ(X_0). The concept of the flow of the differential equation indicates the dependence of
the system on initial conditions. The flow of the differential equation can be represented
geometrically in the phase space R^n using a phase portrait. There exists a corresponding
vector defined by the ODE that is tangent to each point in every trajectory; and the
collection of all tangent vectors of the system is a vector field. A vector field is often
helpful in visualizing the phase portrait of the system. Moreover, various methods are
also available to numerically solve the system (4.1), such as the Euler and fourth-order
Runge-Kutta methods.
4.1 Stability analysis
In nonlinear analysis of systems, it is important to find points where our system is at rest
and determine whether these points are stable or unstable. In modeling cellular differ-
entiation, an asymptotically stable equilibrium point, which is an attractor, is associated
with a certain cell type. For any initial condition in a neighborhood of the attractor, the
trajectories tend towards the attractor even if slightly perturbed.
Definition 4.1 Equilibrium point. The point X* := ([X_1]*, [X_2]*, . . . , [X_n]*) ∈ R^n is
said to be an equilibrium point (also called a critical point, stationary point or steady
state) of the system (4.1) if and only if F(X*) = 0.
Finding the equilibrium points corresponds to solving for the real-valued solutions to
the system of equations F(X) = 0. It is possible that this system of equations has a
unique solution, several solutions, a continuum of solutions, or no solution.
In order to describe the local behavior of the system (4.1) near a specific equilibrium
point X*, we linearize the system by getting the Jacobian matrix JF(X), defined as
    JF(X) =
    [ ∂F_1/∂[X_1]   ∂F_1/∂[X_2]   · · ·   ∂F_1/∂[X_n] ]
    [ ∂F_2/∂[X_1]   ∂F_2/∂[X_2]   · · ·   ∂F_2/∂[X_n] ]
    [      ...            ...      . . .       ...     ]
    [ ∂F_n/∂[X_1]   ∂F_n/∂[X_2]   · · ·   ∂F_n/∂[X_n] ]   (4.3)

and then evaluating JF(X*). If none of the eigenvalues of the matrix JF(X*) has zero
real part, then X* is called a hyperbolic equilibrium point. In this chapter, we focus
the discussion on hyperbolic equilibrium points; but for details about nonhyperbolic
equilibrium points, refer to [134].

We use the eigenvalues of JF(X*) to determine the stability of equilibrium points.
Definition 4.2 Asymptotically stable and unstable equilibrium points. The
equilibrium point X* is asymptotically stable when the solutions near X* converge to X*
as t → ∞. The equilibrium point X* is unstable when some or all solutions near X*
tend away from X* as t → ∞.

Theorem 4.4 Stability of equilibrium points. If all the eigenvalues of JF(X*) have
negative real parts, then X* is an asymptotically stable equilibrium point. If at least one
of the eigenvalues of JF(X*) has a positive real part, then X* is an unstable equilibrium
point.
For simplicity, we will call an asymptotically stable equilibrium point "stable". There are various tests for determining the stability of an equilibrium point, such as using Theorem (4.4) or the geometric analysis shown in Figure (4.1). In addition, we define $X^*$ as a saddle if it is an unstable equilibrium point but $J_F(X^*)$ has at least one eigenvalue with negative real part. For further details regarding the local behavior of nonlinear systems in the neighborhood of an equilibrium point, refer to the Stable Manifold Theorem and the Hartman-Grobman Theorem [134].
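As a concrete illustration of Theorem (4.4) and the saddle definition, the sketch below classifies an equilibrium point of a two-dimensional system from the eigenvalues of its 2×2 Jacobian, computed from the characteristic polynomial. The matrices are hypothetical examples, not Jacobians of any model in this thesis.

```python
import cmath

def eigenvalues_2x2(J):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial
    lambda^2 - tr(J) lambda + det(J) = 0."""
    (a, b), (c, d) = J
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def classify(J):
    """Classify a hyperbolic equilibrium point (Theorem 4.4)."""
    res = [ev.real for ev in eigenvalues_2x2(J)]
    if any(abs(r) < 1e-12 for r in res):
        return "nonhyperbolic"
    if all(r < 0 for r in res):
        return "asymptotically stable"
    if all(r > 0 for r in res):
        return "unstable"
    return "saddle"  # unstable, but one eigenvalue has negative real part

print(classify([[-2, 0], [0, -3]]))  # asymptotically stable
print(classify([[2, 0], [0, -3]]))   # saddle
```

For an $n$-dimensional model the same test applies to the $n \times n$ Jacobian; only the eigenvalue computation changes.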
Figure 4.1: The slope of F(X) at the equilibrium point determines the linear stability. A positive gradient means instability; a negative gradient means stability. If the gradient is zero, we look at the left and right neighboring gradients. Refer to the Insect Outbreak Model: Spruce Budworm in [122].
It is also useful to determine the set of initial conditions $X_0$ with trajectories converging to a specific stable equilibrium point $X^*$. We call this set of initial conditions the domain or basin of attraction of $X^*$, denoted by
\[
\mathcal{B}_{X^*} := \left\{ X_0 : \lim_{t \to \infty} \phi(X_0) = X^* \right\}. \tag{4.4}
\]
In addition, a set $\hat{B} \subseteq B$ is called positively invariant with respect to the flow $\phi(X_0)$ if for any $X_0 \in \hat{B}$, $\phi(X_0) \subseteq \hat{B}$ for all $t \geq 0$; that is, the flow of the ODE remains in $\hat{B}$.
There are other types of attractors, such as ω-limit cycles and strange attractors
[56]. A limit cycle is a periodic orbit (a closed trajectory which is not an equilibrium
point) that is isolated. An asymptotically stable limit cycle is called an ω-limit cycle.
Strange attractors usually occur when the dynamics of the system are chaotic. Moreover, under some conditions, a trajectory may be contained in a non-attracting but neutrally stable center (see [56] for a discussion of centers). However, the extensive numerical simulations by MacArthur et al. [113] suggest that their ODE model (Equations (3.6) and (3.7)) does not exhibit oscillations (periodic orbits) or strange trajectories. Cinquin and Demongeot [38] also claim that the solutions to their model (refer to Equations (3.4)) always tend towards an equilibrium and never oscillate.
The existence of a center, ω-limit cycle or strange attractor that would result in recurring changes in phenotype is abnormal for a natural, fully differentiated cell. Limit cycles are associated with the concept of continuous cell proliferation (self-renewal), where there are recurring biochemical states during cell division cycles [82]. However, cell division is beyond the scope of this thesis.
Various theorems are available for checking the possible existence or non-existence of limit cycles (although most apply only to two-dimensional planar systems). The Poincaré-Bendixson Theorem for planar systems [134] states that if $F \in C^1(B)$ and a trajectory remains in a compact region of $B$ whose ω-limit set (e.g., an attracting set) does not contain any equilibrium point, then the trajectory approaches a periodic orbit. Furthermore, if $F \in C^1(B)$, a trajectory remains in a compact region of $B$, and there are only a finite number of equilibrium points, then the ω-limit set of any trajectory of the planar system can be one of three types: an equilibrium point, a periodic orbit, or a compound separatrix cycle.
Several studies have shown the effects of the presence of positive or negative feedback loops in GRNs, such as possible multistability (the existence of multiple stable equilibrium points) and the existence of oscillations [8, 37, 45, 104, 119, 155]. It is also important to note that a strange (chaotic) attractor cannot exist for n < 3 [56].
4.2 Bifurcation analysis
The behavior of the solutions of system (4.1) depends not only on the initial conditions
but also on the values of the parameters. The parameters of the model may be associated
with real-world quantities that can be manipulated to control the solutions. Varying the
value of a parameter (or parameters) may result in dramatic changes in the qualitative
nature of the solutions, such as a change in the number of equilibrium points or a change
in the stability. Here, we now let $F$ be a function of the state variables $X$ and of the parameter matrix $\mu$ (i.e., $F(X, \mu)$). We define a value of the parameters at which such a dramatic change occurs as a bifurcation value, denoted by $\mu^*$. If we simultaneously vary the values of $p$ parameters, then we have a $p$-parameter bifurcation.
If a $p$-parameter bifurcation is sufficient for a bifurcation type to occur, then we classify that bifurcation type as codimension $p$. Examples of codimension-one bifurcations are the saddle-node (fold), the supercritical Poincaré-Andronov-Hopf, and the subcritical Poincaré-Andronov-Hopf bifurcations. Transcritical, supercritical pitchfork and subcritical pitchfork bifurcations are also often regarded as codimension one. The cusp bifurcation is of codimension two.
Figure 4.2: Sample bifurcation diagram showing saddle-node bifurcation.
In a local bifurcation, the equilibrium point $X^*$ is nonhyperbolic at the bifurcation value. For $n \geq 2$, if $J_F(X^*)$ has a pair of purely imaginary eigenvalues and no other eigenvalues with zero real part at the bifurcation value, then under some assumptions a Hopf bifurcation may occur and a limit cycle might arise from $X^*$. We can visualize the bifurcation of equilibria using a bifurcation diagram. For further details about bifurcation theory, refer to [86, 102, 134]. Software packages are available for numerical bifurcation analysis, such as Oscill8 [40], which uses AUTO (http://indy.cs.concordia.ca/auto/).
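A crude numerical companion to a bifurcation diagram is to count equilibria while sweeping a parameter. The sketch below uses the saddle-node normal form dX/dt = µ + X² (an illustrative textbook choice, not a model from this thesis) and counts sign changes of F over a grid: two equilibria exist for µ < 0 and none for µ > 0.

```python
def count_equilibria(F, mu, lo=-10.0, hi=10.0, steps=2000):
    """Count sign changes of F(x, mu) on a grid; each sign change
    brackets one equilibrium point of dx/dt = F(x, mu)."""
    h = (hi - lo) / steps
    xs = [lo + i * h for i in range(steps + 1)]
    vals = [F(x, mu) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

F = lambda x, mu: mu + x * x  # saddle-node normal form (illustrative)

print(count_equilibria(F, -2.0))  # 2 equilibria before the bifurcation
print(count_equilibria(F, 2.0))   # 0 equilibria after the bifurcation
```

Sweeping µ finely and plotting the bracketed roots against µ reproduces a one-parameter bifurcation diagram like Figure (4.2).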
4.3 Fixed point iteration
Definition 4.3 Fixed point. The point $X^*$ is a fixed point of the real-valued function $Q$ if $Q(X^*) = X^*$.
We use fixed point iteration (FPI) to find approximate stable equilibrium points of the Cinquin-Demongeot [38] model. If $X^*$ is a stable equilibrium point, then for initial conditions $X_0$ sufficiently close to $X^*$ (where $X_0 \neq X^*$), the sequence generated by FPI converges to $X^*$ (i.e., FPI is locally convergent). If $X_0 = X^*$, we can have either a stable or an unstable equilibrium point.
Algorithm 1 Fixed point iteration
Suppose $Q$ is continuous on the region $B$.
Input an initial guess $X^{(0)} := X_0 \in B$ and an acceptable tolerance $\epsilon \in \mathbb{R}_{>0}$.
While $\|X^{(i+1)} - X^{(i)}\| > \epsilon$ do $X^{(i+1)} := Q(X^{(i)})$.
If $\|X^{(i+1)} - X^{(i)}\| \leq \epsilon$ is satisfied, then $X^{(i+1)}$ is the approximate fixed point.
Figure 4.3: An illustration of a cobweb diagram.

The geometric illustration of FPI is called a cobweb diagram, as illustrated in Figure (4.3).
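Algorithm 1 takes only a few lines of code. The iteration map Q below is a hypothetical univariate Hill-type map with arbitrary parameter values; the tolerance and starting point are likewise arbitrary.

```python
def fixed_point_iteration(Q, x0, tol=1e-10, max_iter=10_000):
    """Iterate x_{i+1} = Q(x_i) until successive iterates differ by
    at most tol (Algorithm 1); returns the approximate fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = Q(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed point iteration did not converge")

# A Hill-type map with illustrative parameter values
Q = lambda x: x * x / (0.5 + x * x) + 0.1

x_star = fixed_point_iteration(Q, 2.0)
print(round(x_star, 6))
```

Tracing the successive pairs (x_i, Q(x_i)) against the diagonal Y = X is exactly the cobweb diagram of Figure (4.3).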
4.4 Sylvester resultant method
To find the equilibrium points, we can rewrite the Cinquin-Demongeot ODE model, when the exponents are positive integers, as a system of polynomial equations. Assume $F(X) = 0$ can be written as a polynomial system $P(X) = 0$. Solving multivariate nonlinear polynomial systems is still an active area of research; however, various algebraic and geometric methods are already available for solving $P(X) = 0$, such as Newton-like methods, homotopic solvers, subdivision methods, algebraic solvers using Gröbner bases, and geometric solvers using resultant construction [120]. In resultant construction, we treat the problem of solving $P(X) = 0$ as a problem of finding intersections of curves.
All $P_i(X)$ should have no common factor of degree greater than zero so that $P(X) = 0$ has a finite number of complex solutions. The following Bézout Theorem gives a bound on the number of complex solutions, counting multiplicities.
Theorem 4.5 Bézout theorem. Consider real-valued polynomials $P_1, P_2, \ldots, P_n$, where $P_i$ has degree $\deg_i$. Suppose all the polynomials have no common factor of degree greater than zero (i.e., they are collectively relatively prime). Then the number of isolated complex solutions to the system $P_1(X) = P_2(X) = \cdots = P_n(X) = 0$ is at most $(\deg_1)(\deg_2) \cdots (\deg_n)$.
The method of using the Sylvester resultant is a classical algorithm in Algebraic
Geometry used to find the complex solutions of a system of two polynomial equations in
two variables. It can also be used for solving a polynomial system of n equations with
n variables where n > 2, by repeated application of the algorithm. The idea of using
Sylvester resultants for solving multivariate polynomial systems is to eliminate all except
for one variable. There are other resultant construction methods for solving multivariate
polynomial systems with n > 2 such as the Dixon resultant, Macaulay resultant and
U-resultant methods, but we will only focus on the Sylvester resultant. The algorithm
for using Sylvester resultants is illustrated in the following paragraphs.
Consider two polynomials $P_1([X_1], [X_2])$ and $P_2([X_1], [X_2])$. We eliminate $[X_1]$ by constructing the Sylvester matrix associated to the two polynomials with $[X_1]$ as the variable (i.e., we take $[X_2]$ as a fixed parameter). The size of the Sylvester matrix is $(\deg_1 + \deg_2) \times (\deg_1 + \deg_2)$, where $\deg_1$ and $\deg_2$ are the degrees of the polynomials $P_1$ and $P_2$ in the variable $[X_1]$, respectively.
We give an example to show how to construct a Sylvester matrix. Let us suppose
\[
P_1([X_1], [X_2]) = 2[X_1]^3 + 4[X_1]^2 [X_2] + 7[X_1][X_2]^2 + 10[X_2]^3 + 8 \tag{4.5}
\]
\[
P_2([X_1], [X_2]) = 5[X_1]^2 + 2[X_1][X_2] + [X_2]^2 + 6. \tag{4.6}
\]
Since the degree of $P_1$ in terms of $[X_1]$ is 3 and the degree of $P_2$ in terms of $[X_1]$ is 2, the size of the Sylvester matrix (with $[X_1]$ as variable) is $5 \times 5$. The Sylvester matrix of $P_1$ and $P_2$ with $[X_1]$ as variable is
\[
\begin{pmatrix}
2 & 4[X_2] & 7[X_2]^2 & 10[X_2]^3 + 8 & 0 \\
0 & 2 & 4[X_2] & 7[X_2]^2 & 10[X_2]^3 + 8 \\
5 & 2[X_2] & [X_2]^2 + 6 & 0 & 0 \\
0 & 5 & 2[X_2] & [X_2]^2 + 6 & 0 \\
0 & 0 & 5 & 2[X_2] & [X_2]^2 + 6
\end{pmatrix}. \tag{4.7}
\]
The first row of the Sylvester matrix contains the coefficients of $[X_1]^3$, $[X_1]^2$, $[X_1]^1$ and $[X_1]^0$ in $P_1$. We shift each element of the first row one column to the right to form the second row. The third row contains the coefficients of $[X_1]^2$, $[X_1]^1$ and $[X_1]^0$ in $P_2$. We shift each element of the third row one column to the right to form the fourth row. We again shift each element of the fourth row one column to the right to form the fifth row. In general, we continue the process of shifting each element of the previous row to form the next row until the coefficient of $[X_1]^0$ reaches the last column. All cells of the matrix without entries coming from the coefficients of the polynomials are assigned the value zero.
We use the determinant of the Sylvester matrix to find the intersections of $P_1$ and $P_2$.
Definition 4.4 Sylvester resultant. We call the determinant of the Sylvester matrix of $P_1$ and $P_2$ in $[X_1]$ (where $[X_2]$ is a fixed parameter) the Sylvester resultant, denoted by $\operatorname{res}(P_1, P_2; [X_1])$.

Theorem 4.6 Zeroes of the Sylvester resultant. The values where $\operatorname{res}(P_1, P_2; [X_1]) = 0$ are the complex values of $[X_2]$ where $P_1([X_1], [X_2]) = P_2([X_1], [X_2]) = 0$.
We denote the complex values of $[X_2]$ where $P_1([X_1], [X_2]) = P_2([X_1], [X_2]) = 0$ by $[X_2]^*$. To find $[X_1]^*$, we solve the univariate system $P_1([X_1], [X_2]^*) = P_2([X_1], [X_2]^*) = 0$ for all possible values of $[X_2]^*$.
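The row-shifting construction above can be checked numerically. To keep the sketch runnable without a computer algebra system, the polynomial coefficients below are plain numbers (as if the second variable had already been fixed at a value), so the resultant reduces to a single number whose vanishing signals a common root. The example polynomials are illustrative, not Equations (4.5) and (4.6).

```python
def sylvester_matrix(p, q):
    """Build the Sylvester matrix of two polynomials given as
    coefficient lists in decreasing powers of the variable,
    following the row-shifting rule described in the text."""
    m, n = len(p) - 1, len(q) - 1   # degrees of p and q
    size = m + n
    rows = []
    for i in range(n):               # n shifted copies of p's coefficients
        rows.append([0] * i + list(p) + [0] * (size - m - 1 - i))
    for i in range(m):               # m shifted copies of q's coefficients
        rows.append([0] * i + list(q) + [0] * (size - n - 1 - i))
    return rows

def det(M):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        if a == 0:
            continue
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

def resultant(p, q):
    return det(sylvester_matrix(p, q))

# x^2 - 1 and x - 1 share the root x = 1, so the resultant vanishes
print(resultant([1, 0, -1], [1, -1]))   # 0
# x^2 - 1 and x - 2 share no root, so the resultant is nonzero
print(resultant([1, 0, -1], [1, -2]))   # 3
```

For the symbolic elimination used in the text (coefficients that are themselves polynomials in $[X_2]$), a computer algebra system such as SymPy's `resultant` routine performs the same computation exactly.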
The following theorem can be used to determine whether $P_1$ and $P_2$ either do not intersect or intersect at infinitely many points.

Theorem 4.7 None and infinitely many solutions. $\operatorname{res}(P_1, P_2; [X_1])$ is nonzero for any $[X_2]$ if and only if $P_1([X_1], [X_2]) = P_2([X_1], [X_2]) = 0$ has no complex solutions. Furthermore, the following statements are equivalent:

1. $\operatorname{res}(P_1, P_2; [X_1])$ is identically zero (i.e., zero for any value of $[X_2]$).
2. $P_1$ and $P_2$ have a common factor of degree greater than zero.
3. $P_1 = P_2 = 0$ has infinitely many complex solutions.
We can extend the Sylvester resultant method to a multivariate case, say with three polynomials $P_1([X_1], [X_2], [X_3])$, $P_2([X_1], [X_2], [X_3])$ and $P_3([X_1], [X_2], [X_3])$, by getting $R_1 = \operatorname{res}(P_1, P_2; [X_1])$ and $R_2 = \operatorname{res}(P_2, P_3; [X_1])$. Notice that $R_1$ and $R_2$ are both in terms of $[X_2]$ and $[X_3]$. We then get $R_3 = \operatorname{res}(R_1, R_2; [X_2])$, which is in terms of $[X_3]$ only. We solve the univariate polynomial equation $R_3 = 0$ using available solvers to obtain $[X_3]^*$. After this, we find $[X_2]^*$ by substituting $[X_3]^*$ into $R_1$ and $R_2$ and solving $R_1 = R_2 = 0$. We then find $[X_1]^*$ by solving $P_1([X_1], [X_2]^*, [X_3]^*) = P_2([X_1], [X_2]^*, [X_3]^*) = P_3([X_1], [X_2]^*, [X_3]^*) = 0$.
For a more detailed discussion on solving systems of multivariate polynomial equa-
tions, the following references may be consulted [17, 49, 98, 156, 157, 178].
4.5 Numerical solution to SDEs
The solutions to ODEs are functions, while the solutions to SDEs are stochastic processes. We define a continuous-time stochastic process $X$ as a set of random variables $X(t)$, where the index variable $t \geq 0$ takes a continuous set of values. The index variable $t$ may represent time.

Suppose we have an SDE model of the form $dX = F(X)\,dt + \sigma G(X)\,dW$, where $W$ is a stochastic process called Brownian motion (Wiener process). The differential $dW$ of $W$ is called white noise. Brownian motion is the continuous version of a "random walk" and has the following properties:
1. For each $t$, the random variable $W(t)$ is normally distributed with mean zero and variance $t$.

2. For each $t_i < t_{i+1}$, the normal random variable $\Delta W(t_i) = W(t_{i+1}) - W(t_i)$ is independent of the random variables $W(t_j)$, $0 \leq t_j \leq t_i$ (i.e., $W$ has independent increments).

3. Brownian motion $W$ can be represented by continuous paths (but is not differentiable).
Suppose $W(t_0) = 0$. We can simulate Brownian motion on a computer by discretizing time as $0 = t_0 < t_1 < \ldots$ and choosing a random number to represent $\Delta W(t_{i-1})$ from the normal distribution $N(0, t_i - t_{i-1}) = \sqrt{t_i - t_{i-1}}\, N(0, 1)$. This implies that we obtain $W(t_i)$ by multiplying $\sqrt{t_i - t_{i-1}}$ by a standard normal random number and then adding the product to $W(t_{i-1})$.
The solution to an SDE model has different realizations because it is based on
random numbers. We can approximate a realization of the solution by using numerical
solvers such as the Euler-Maruyama and Milstein methods. In this thesis, we use the
Euler-Maruyama method. The Euler-Maruyama method is similar to the Euler method
for ODEs.
Algorithm 2 Euler-Maruyama method
Discretize the time as $0 < t_1 < t_2 < \ldots < t_{\text{end}}$.
Suppose $Y_{t_i}$ is the approximate solution to $X(t_i)$.
Input initial condition $X_{t_0}$. Let $Y_{t_0} := X_{t_0}$.
For $i = 0, 1, 2, \ldots, \text{end} - 1$ do
$\Delta W(t_i) = \sqrt{t_{i+1} - t_i}\;\text{rand}_{N(0,1)}$, where $\text{rand}_{N(0,1)}$ is a standard normal random number.
$Y_{t_{i+1}} = Y_{t_i} + F(Y_{t_i})(t_{i+1} - t_i) + \sigma G(Y_{t_i})\,\Delta W(t_i)$.
end
The Euler-Maruyama method has strong order 1/2; that is, for any time $t$, the expected value of the error $E\{|X_t - Y_t|\}$ is an element of $O((\Delta t)^{1/2})$ as $\Delta t \to 0$. Note that for easy simulation, we can take equal step sizes $\Delta t_i = t_{i+1} - t_i$. For a more detailed discussion on Brownian motion and SDEs, the following references may be consulted: [95, 147].
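Algorithm 2 in code, with equal step sizes. The drift F(X) = −X and additive noise G(X) = 1 are illustrative choices, not a model from this thesis; with σ = 0 the scheme reduces to the ordinary Euler method, which allows an easy check against the exact solution X(t) = X₀e^(−t).

```python
import math
import random

def euler_maruyama(F, G, sigma, x0, t_end, n_steps, seed=0):
    """Approximate dX = F(X) dt + sigma G(X) dW on an equal-step grid
    (Algorithm 2). Returns the list of approximate values Y_{t_i}."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    Y = [x0]
    for _ in range(n_steps):
        dW = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        Y.append(Y[-1] + F(Y[-1]) * dt + sigma * G(Y[-1]) * dW)
    return Y

F = lambda x: -x    # linear drift (illustrative)
G = lambda x: 1.0   # additive noise (illustrative)

# With sigma = 0 the scheme is Euler's method for dX/dt = -X
Y = euler_maruyama(F, G, 0.0, 1.0, 1.0, 10_000)
print(abs(Y[-1] - math.exp(-1.0)) < 1e-3)  # True
```

Rerunning with σ > 0 and different seeds produces different realizations of the solution process, as described above.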
For a more detailed discussion on the analysis of nonlinear systems, the following
references may be consulted [3, 56, 134, 146, 147].
Chapter 5
Results and Discussion
Simplified GRN and ODE Model
In this thesis, we represent the dynamics of the simplified gene network of MacArthur
et al. [113] using a system of Ordinary Differential Equations (ODEs) based on the
Cinquin-Demongeot formalism [38]. We prove the existence and uniqueness of solutions
to the ODE model under some assumptions.
5.1 Simplified MacArthur et al. model
Figure 5.1: The original MacArthur et al. [113] mesenchymal gene regulatory network.
Let us recall the MacArthur et al. [113] GRN in Chapter (3) (see Figure (5.1)). This
GRN represents a multipotent cell that could differentiate into three cell types — bone,
cartilage and fat.
Figure 5.2: Possible paths that result in positive feedback loops. Shaded boxes denote
that the path repeats.
We refer to the group of OCT4, SOX2, NANOG and their multimers (protein complexes) as the pluripotency module, and to the group of SOX9, RUNX2 and PPAR-γ as the differentiation module. OCT4, SOX2, NANOG and their multimers in the original MacArthur et al. GRN [113] do not have autoactivation loops, but notice that the path NANOG → OCT4-SOX2-NANOG → OCT4 → OCT4-SOX2 → SOX2 → OCT4-SOX2-NANOG → NANOG is one of the positive feedback loops of the GRN (see Figure (5.2)). A positive feedback loop that contains OCT4, SOX2, NANOG and their multimers can be regarded as an autoactivation loop of the pluripotency module.
Both the OCT4-SOX2-NANOG and OCT4-SOX2 multimers inhibit SOX9, RUNX2 and PPAR-γ (as represented by the green bars in Figure (5.1)). On the other hand, SOX9, RUNX2 and PPAR-γ inhibit OCT4, SOX2 and NANOG (as represented by the blue bars in Figure (5.1)). These inhibitions imply that the pluripotency module inhibits the differentiation module and vice versa.
Figure 5.3: The simplified MacArthur et al. GRN
Since the pluripotency module can be represented as a node with autoactivation and mutual inhibition with the other nodes, we can simplify the GRN in Figure (5.1) by coarse-graining. We represent the pluripotency module as one node, which we call the sTF (stemness transcription factor) node. From eight nodes, we are down to four. The coarse-grained biological network of the MacArthur et al. GRN [113] is shown in Figure (5.3), and from now on we shall refer to this as our simplified network. This simplified network represents a phenomenological model of the mesenchymal cell differentiation system. Since each node undergoes autocatalysis (autoactivation) and inhibition by the other nodes (as shown by the arrows and bars), the simplified GRN is in the simultaneous-decision-model form that can be translated into a Cinquin-Demongeot [38] ODE model (refer to Figure (3.4)).
It is difficult to study the qualitative behavior of the ODE model by MacArthur et
al. [113] (see Equations (3.6) and (3.7)) using analytic methods. This is the reason for
the simplification of the MacArthur et al. [113] GRN where the essential qualitative
dynamics are still preserved. We translate the dynamics of the simplified network into a
Cinquin-Demongeot [38] ODE model for easier analysis.
One limitation of a phenomenological model is that it excludes time-delays that may
arise from the deleted molecular details. However, a phenomenological model is sufficient
to address the general principles of cellular differentiation and cellular programming such
as the temporal behavior of the dynamics of the GRN [70].
5.2 The generalized Cinquin-Demongeot ODE model
In [38], Cinquin and Demongeot suggested extending their model to include combinatorial interactions and non-symmetrical networks (i.e., each node does not have the same relationship with other nodes and each equation in the system of ODEs does not have equal parameter values). We include more adjustable parameters in their model to represent a wider range of situations. In this generalized model, some differentiation factors can be stronger than others. We generalize the Cinquin-Demongeot (2005) ODE model as follows (with $X = ([X_1], [X_2], \ldots, [X_n])$):
\[
\frac{d[X_i]}{dt} = F_i(X) = \frac{\beta_i [X_i]^{c_i}}{K_i^{c_i} + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} + \alpha_i s_i - \rho_i [X_i] \tag{5.1}
\]
where $i = 1, 2, \ldots, n$ and $n$ is the number of nodes.
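Equation (5.1) can be transcribed directly, with the parameters passed as arrays. The two-node demonstration below uses arbitrary placeholder values (not fitted parameters from this thesis) and a simple forward-Euler integration.

```python
def F(X, beta, K, c, cc, gamma, g, rho):
    """Right-hand side of the generalized Cinquin-Demongeot model (5.1).
    beta[i]: maximal expression rate; K[i]: threshold-related constant
    K_i^{c_i}; c[i]: autoactivation Hill exponent; cc[i][j]: mutual-
    inhibition exponent c_ij; gamma[i][j]: inhibition strength of X_j
    on X_i; g[i]: basal production alpha_i * s_i; rho[i]: degradation."""
    n = len(X)
    out = []
    for i in range(n):
        denom = K[i] + X[i] ** c[i] + sum(
            gamma[i][j] * X[j] ** cc[i][j] for j in range(n) if j != i)
        out.append(beta[i] * X[i] ** c[i] / denom + g[i] - rho[i] * X[i])
    return out

# Two-node demonstration with placeholder parameter values
n = 2
beta = [1.0, 1.0]; K = [0.5, 0.5]; c = [2, 2]
cc = [[2] * n for _ in range(n)]
gamma = [[0.0, 1.0], [1.0, 0.0]]
g = [0.0, 0.0]; rho = [0.5, 0.5]

# Forward-Euler integration from an asymmetric initial condition;
# the simulated trajectory stays nonnegative and settles at an equilibrium
X = [0.8, 0.2]
dt = 0.01
for _ in range(5000):
    X = [x + dt * dx for x, dx in zip(X, F(X, beta, K, c, cc, gamma, g, rho))]
print(all(x >= 0.0 for x in X))  # True
```

With this parameter choice the node that starts higher suppresses the other, a winner-take-all outcome of the mutual-inhibition structure.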
In our simplified network, we have four nodes; thus, $n = 4$. Some of our results are applicable not only to $n = 4$ but to any dimension. The state variable $[X_i]$ represents the concentration of the corresponding TF. Specifically, let $[X_1]$ := [RUNX2], $[X_2]$ := [SOX9], $[X_3]$ := [PPAR-γ] and $[X_4]$ := [sTF]. To have biological significance, we restrict $[X_i]$ and the parameters to be nonnegative real numbers.
The parameter $\beta_i$ is the relative speed of transcription, $\rho_i$ is the assumed first-order degradation rate associated with $X_i$, and $\gamma_{ij}$ is the differentiation stimulus that affects the inhibition of $X_i$ by $X_j$. If $\gamma_{ij} = 0$, then $X_j$ does not inhibit the growth of $[X_i]$. We denote the term $\alpha_i s_i$ by
\[
g_i := \alpha_i s_i,
\]
which represents the basal or constitutive expression of the corresponding TF that is affected by the exogenous stimulus with concentration $s_i$. In other words, $\alpha_i s_i$ is a constant production term that enhances the concentration of $X_i$. Specifically, let $s_1$ := [RA + BMP4], $s_2$ := [RA + TGF-β], $s_3$ := [RA + Insulin] and $s_4 := 0$.
We define the multivariate function $H_i$ by
\[
H_i([X_1], [X_2], \ldots, [X_n]) = \frac{\beta_i [X_i]^{c_i}}{K_i^{c_i} + [X_i]^{c_i} + \sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} \tag{5.2}
\]
which comes from the typical Hill equation. The terms $\sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}$ in the denominator reflect the inhibitory influence of the other TFs on the change in concentration of $X_i$. We denote the parameter $K_i^{c_i} > 0$ by
\[
K_i := K_i^{c_i},
\]
which is related to the threshold or dissociation constant.
The parameter $c_i \geq 1$ represents the Hill constant; it affects the steepness of the Hill curve associated with $[X_i]$ and denotes the homomultimerization-induced positive cooperativity (for autocatalysis). The parameter $c_{ij}$ denotes the heteromultimerization-induced negative cooperativity (for mutual inhibition). Cooperativity describes the interactions among binding sites, where the affinity or relationship of a binding site positively or negatively changes depending on itself or on the other binding sites. Note that cooperativity requires more than one binding site.
Notice that the lower bound of $H_i$ (5.2) is zero and its upper bound is $\beta_i$. Thus, the parameter $\beta_i$ can also be interpreted as the maximal expression rate of the corresponding TF.
Explicitly, two mathematically unequal amounts of concentration can be regarded as biologically equal if their difference is not significant; that is, $[X_i] \approx [X_j]$ if $[X_i] = [X_j] \pm \epsilon$ where $\epsilon$ is acceptably small. We say that $[X_i]$ sufficiently dominates $[X_j]$ if $[X_i] > [X_j]$ and $[X_i] \not\approx [X_j]$. In addition, scientists compare the concentration of $X_i$ to the concentration of $X_j$ by looking at the ratio of $[X_i]$ and $[X_j]$; for example, $[X_i] \not\approx [X_j]$ (with $[X_j] \neq 0$) if $[X_i]/[X_j] > \epsilon_1 \geq 1$ or if $[X_i]/[X_j] < \epsilon_2 \leq 1$, where $\epsilon_1$ and $\epsilon_2$ are some acceptable tolerance constants.
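The comparison rules above can be made concrete in a few lines; the tolerance value below is an arbitrary choice for illustration.

```python
def approx_equal(xi, xj, eps=0.05):
    """[X_i] is approximately [X_j]: the difference is acceptably small."""
    return abs(xi - xj) <= eps

def sufficiently_dominates(xi, xj, eps=0.05):
    """[X_i] sufficiently dominates [X_j]: strictly larger, and not
    approximately equal to it."""
    return xi > xj and not approx_equal(xi, xj, eps)

print(approx_equal(1.00, 1.02))           # True
print(sufficiently_dominates(1.5, 0.2))   # True
print(sufficiently_dominates(1.02, 1.00)) # False: within tolerance
```

Such predicates are what one applies to the components of a computed equilibrium point to decide which cell state it represents.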
We say that a TF is switched-off or inactive if [TF] = 0, and switched-on oth-
erwise. However, as an approximation, a TF with sufficiently low concentration can be
considered to be “switched-off”.
If no component representing a node from the differentiation module sufficiently dominates [sTF] (e.g., [sTF] ≥ [RUNX2], [sTF] ≥ [SOX9] and [sTF] ≥ [PPAR-γ]) and sTF is switched-on, then the state represents a pluripotent cell. If all the components of a state are (approximately) equal and all TFs are switched-on, then the state represents a primed stem cell.
If at least one component from the differentiation module sufficiently dominates
[sTF], then the state represents either a partially differentiated or a fully differ-
entiated cell. If exactly three components from the differentiation module are (approx-
imately) equal, then the state represents a tripotent cell. If exactly two components
from the differentiation module are (approximately) equal and sufficiently dominate all
other components (possibly including [sTF]), then the state represents a bipotent cell.
If sTF is switched-off, then the cell has lost its ability to self-renew.
If exactly one component from the differentiation module sufficiently dominates all
other components (possibly including [sTF]) but sTF is still switched-on, then the state
represents a unipotent cell. If exactly one TF from the differentiation module remains
switched-on and all other TFs including sTF are switched-off, then the state represents
a fully differentiated cell.
A trajectory converging to the equilibrium point (0, 0, . . . , 0) is a trivial case because
the zero state neither represents a pluripotent cell nor a cell differentiating into bone,
cartilage or fat. The trivial case may represent a cell differentiating towards other cell
types (e.g., towards becoming a neural cell) which are not in the domain of our GRN.
The zero state may also represent a cell that is in a quiescent stage.
Definition 5.1 Stable component and stable equilibrium point. If $[X_i]$ converges to $[X_i]^*$ for all initial conditions $[X_i]_0$ near $[X_i]^*$, then we say that the $i$-th component $[X_i]^*$ of an equilibrium point $X^*$ is stable; otherwise, $[X_i]^*$ is unstable. The equilibrium point $X^* = ([X_1]^*, [X_2]^*, \ldots, [X_n]^*)$ of the system (5.1) is stable if and only if all its components are stable.
5.3 Geometry of the Hill function
The Hill function defined by Equation (5.2) is a multivariate sigmoidal function when $c_i > 1$ and a multivariate hyperbolic-like function when $c_i = 1$. If $1 < c_i < n$, then $X_i$ has autocatalytic cooperativity, and if $1 < c_{ij} < n$, then the affinity of $X_j$ to $X_i$ has negative cooperativity. In addition, the state variable $X_i$ has no autocatalytic cooperativity if $c_i = 1$, while the affinity of $X_j$ to $X_i$ has no negative cooperativity if $c_{ij} = 1$.
We can investigate the multivariate Hill function by looking at the univariate function defined by
\[
H_i([X_i]) = \frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} \tag{5.3}
\]
where each $[X_j]$, $j \neq i$, is taken as a parameter. This means that we project the high-dimensional space onto a two-dimensional plane. If $c_i = 1$, the graph of the univariate Hill function in the first quadrant of the Cartesian plane is hyperbolic (for any value of $[X_j]$, $j \neq i$), similar to the topology shown in Figure (5.4). If $c_i > 1$, the graph of the univariate Hill function in the first quadrant is sigmoidal or "S"-shaped (for any value of $[X_j]$, $j \neq i$), similar to one of the topologies shown in Figure (5.5).
When the value of
\[
K_i + \sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} \tag{5.4}
\]
in the denominator of $H_i([X_i])$ increases, the graph of the Hill curve (for any $c_i \geq 1$) shrinks, as illustrated in Figure (5.6). When the value of $c_i$ increases, the graph of $Y = H_i([X_i])$ gets steeper, as illustrated in Figure (5.7). If we add a term $g_i$ to $H_i([X_i])$, then the graph of $Y = H_i([X_i])$ in the Cartesian plane is translated upwards by $g_i$ units, as illustrated in Figure (5.8).

We investigate the geometry of the Hill function as a prerequisite to our study of determining the behavior of the equilibrium points of our system (5.1).
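The geometric observations above (the bounds 0 ≤ H_i < β_i, and shrinking as the denominator constant grows) can be verified numerically for the univariate Hill function (5.3); the parameter values below are arbitrary.

```python
def hill(x, beta=1.0, denom_const=0.5, c=2):
    """Univariate Hill function H_i([X_i]) of Equation (5.3), with
    denom_const standing in for K_i + sum_j gamma_ij [X_j]^{c_ij}."""
    return beta * x ** c / (denom_const + x ** c)

# 0 <= H < beta on the nonnegative axis
vals = [hill(x * 0.1) for x in range(200)]
print(all(0.0 <= v < 1.0 for v in vals))  # True

# A larger denominator constant shrinks the curve pointwise (Figure 5.6)
print(hill(1.0, denom_const=2.0) < hill(1.0, denom_const=0.5))  # True
```

The same evaluations, plotted over a grid for several values of `c` and `denom_const`, reproduce the qualitative pictures in Figures (5.4) to (5.7).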
Figure 5.4: Graph of the univariate Hill function when $c_i = 1$.
Figure 5.5: Possible graphs of the univariate Hill function when $c_i > 1$.
Figure 5.6: The graph of $Y = H_i([X_i])$ shrinks as the value of $K_i + \sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}$ increases.
Figure 5.7: The Hill curve gets steeper as the value of the autocatalytic cooperativity $c_i$ increases.
Figure 5.8: The graph of $Y = H_i([X_i])$ is translated upwards by $g_i$ units.
5.4 Positive invariance
We solve the multivariate equation $F_i(X) = 0$ (for a specific $i$) by solving for the intersections of the $(n+1)$-dimensional curve induced by $H_i([X_1], [X_2], \ldots, [X_n]) + g_i$ and the $(n+1)$-dimensional hyperplane induced by $\rho_i [X_i]$, as illustrated in Figure (5.9). That is, we find the real solutions to
\[
\frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} + \alpha_i s_i = \rho_i [X_i]. \tag{5.5}
\]
For easier analysis, we observe the intersections of the univariate functions defined by $Y = H_i([X_i]) + g_i$ and $Y = \rho_i [X_i]$ while varying the value of $K_i + \sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}$ in the denominator of the univariate Hill function $H_i([X_i])$ (see Figure (5.10) for an illustration). In the univariate case, we can view $Y = \rho_i [X_i]$ as a line in the Cartesian plane passing through the origin with slope equal to $\rho_i$.
Figure 5.9: The 3-dimensional curve induced by $H_i([X_1], [X_2]) + g_i$ and the plane induced by $\rho_i [X_i]$, an example.
The following theorem guarantees that the state variables of our ODE model (5.1)
will never take negative values.
Figure 5.10: The intersections of $Y = \rho_i [X_i]$ and $Y = H_i([X_i]) + g_i$ with varying values of $K_i + \sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}$, an example.
Lemma 5.1 Positive invariance. The flow $\phi(X_0)$ of the generalized (multivariate) Cinquin-Demongeot ODE model (5.1) (where $X_0 = ([X_1]_0, [X_2]_0, \ldots, [X_n]_0) \in \mathbb{R}^n_{\geq 0}$ can be any initial condition) is always in $\mathbb{R}^n_{\geq 0}$.
Proof. Suppose $\rho_i > 0$ for all $i$ and we have a nonnegative initial value $X_0$. Figures (5.11) to (5.14) illustrate all possible cases showing the topologies of the intersections of $Y = \rho_i [X_i]$ and $Y = H_i([X_i]) + g_i$.

We employ the concept of fixed point iteration (where we define our fixed point as the value $[X_i]$ satisfying $H_i([X_i]) + g_i = \rho_i [X_i]$), or the geometric analysis shown in Figure (4.1) (where we rotate the graph of the curves, making $Y = \rho_i [X_i]$ the horizontal axis), applied to each topology of the intersections of $Y = \rho_i [X_i]$ and $Y = H_i([X_i]) + g_i$. Figures (5.15) and (5.16) illustrate how the fixed point method and the geometric analysis shown in Figure (4.1) are done.
Given specific values of $[X_j]$, $j \neq i$, the univariate Hill curve $Y = H_i([X_i]) + g_i$ and the line $Y = \rho_i [X_i]$ have the following possible numbers of intersections (see Figures (5.11) to (5.14)):

• two intersections (where one is stable);
• one intersection (which is stable); and
• three intersections (where two are stable).

We can see that there exists a stable intersection located in the first quadrant (including the axes) of the Cartesian plane that always attracts the trajectory of our ODE model, for any initial condition, without the trajectory escaping the first quadrant (including the axes). Hence, the flow of the ODE model (5.1) stays in $\mathbb{R}^n_{\geq 0}$ for $\rho_i > 0$ for all $i$.
Now, suppose $\rho_i = 0$ for at least one $i$. Then $\frac{d[X_i]}{dt} \geq 0$ for all $([X_1], [X_2], \ldots, [X_n])$ given a nonnegative initial condition $[X_i]_0$; that is, the change in $[X_i]$ with respect to time is always nonnegative, implying that the value of $[X_i]$ never decreases from its initial condition $[X_i]_0$. Since $[X_i]_0 \geq 0$, we have $[X_i] \geq 0$ for any time $t$.

Thus, $\phi(X_0) \subseteq \mathbb{R}^n_{\geq 0}$ for all $X_0 \in \mathbb{R}^n_{\geq 0}$.
Consequently, Lemma (5.1) implies that each $F_i$ is a function $F_i : \mathbb{R}^n_{\geq 0} \to \mathbb{R}$, for $i = 1, 2, \ldots, n$.
The following results are consequences of the proof of Lemma (5.1). We use Lemma (5.2) in proving theorems in the succeeding chapters.

Lemma 5.2 Suppose $\rho_i > 0$ for all $i$. Then the generalized Cinquin-Demongeot ODE model (5.1) with $X_0 \in \mathbb{R}^n_{\geq 0}$ always has a stable equilibrium point. Moreover, any trajectory of the model converges to a stable equilibrium point.

Proof. This follows from the proof of Lemma (5.1).
Proposition 5.3 Suppose $\rho_i > 0$ for all $i$. Then $F_i(X)$ will not "blow up", and no trajectory approaches infinity, given any initial condition $X_0 \in \mathbb{R}^n_{\geq 0}$.

Proof. This follows since all trajectories of our system converge to a stable equilibrium point, by Lemmas (5.1) and (5.2).
Figure 5.11: The possible numbers of intersections of $Y = \rho_i [X_i]$ and $Y = H_i([X_i]) + g_i$ where $c_i = 1$ and $g_i = 0$. The value of $K_i + \sum_{j=1, j\neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}$ is fixed.
Figure 5.12: The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c_i = 1 and g_i > 0. The value of K_i + Σ_{j=1, j≠i}^{n} γ_ij[X_j]^{c_ij} is fixed.
Figure 5.13: The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c_i > 1 and g_i = 0. The value of K_i + Σ_{j=1, j≠i}^{n} γ_ij[X_j]^{c_ij} is fixed.
Figure 5.14: The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c_i > 1 and g_i > 0. The value of K_i + Σ_{j=1, j≠i}^{n} γ_ij[X_j]^{c_ij} is fixed.
Figure 5.15: Finding the univariate fixed points using a cobweb diagram, an example. We define the fixed point as the [X_i] satisfying H([X_i]) + g_i = ρ_i[X_i].
Figure 5.16: The curves are rotated so that the line Y = ρ_i[X_i] becomes the horizontal axis. A positive gradient means instability; a negative gradient means stability. If the gradient is zero, we look at the gradients of the left and right neighborhoods.
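The graphical test of Figures (5.15) and (5.16) is easy to mimic numerically: scan for sign changes of H_i([X_i]) + g_i − ρ_i[X_i] and read stability off the sign of its gradient at each root. A minimal sketch for one univariate Hill equation, with assumed parameter values:

```python
# Sketch: locate univariate fixed points H(x) + g = rho*x by bracketing sign
# changes, then classify each root as in Figure 5.16 (negative gradient of
# H(x) + g - rho*x means stable). Parameter values are illustrative assumptions.

def phi(x, beta=1.0, K=0.1, c=2, rho=1.0, g=0.0):
    """phi(x) = H(x) + g - rho*x; its zeros are the univariate fixed points."""
    return beta * x ** c / (K + x ** c) + g - rho * x

def classify_fixed_points(f=phi, x_max=2.0, m=20000):
    h = x_max / m
    eps = 1e-7
    pts = []
    if abs(f(0.0)) < 1e-12:  # a root sitting exactly on the boundary x = 0
        slope = (f(eps) - f(0.0)) / eps
        pts.append((0.0, "stable" if slope < 0 else "unstable"))
    for k in range(1, m):
        a, b = k * h, (k + 1) * h
        if f(a) == 0.0 or f(a) * f(b) < 0:
            x = 0.5 * (a + b)                      # midpoint of the bracket
            slope = (f(x + eps) - f(x - eps)) / (2 * eps)
            pts.append((x, "stable" if slope < 0 else "unstable"))
    return pts
```

With β = 1, K = 0.1, c = 2, ρ = 1, g = 0 this reports three fixed points (the stable-unstable-stable pattern of Figure 5.13): 0, (1 − √0.6)/2 and (1 + √0.6)/2.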
5.5 Existence and uniqueness of solution
Recall Peano's Existence Theorem (4.1), which states that if each F_i is continuous on B then the system of ODEs has a local solution (not necessarily unique) given any initial condition X_0 ∈ B ⊆ ℝ≥0^n.
Also, recall the local and global existence-uniqueness theorems (4.2) and (4.3). If the partial derivatives ∂F_i/∂[X_j], i, j = 1, 2, ..., n, are continuous on B ⊆ ℝ≥0^n, then the system of ODEs has a unique local solution given any initial condition X_0 ∈ B. Moreover, if the absolute values of these partial derivatives are bounded for all X ∈ B, then the system of ODEs has exactly one solution defined for all t ∈ ℝ≥0 for any initial condition X_0 ∈ B.
Lipschitz continuity is important in proving the existence and uniqueness of solutions. Observing Figures (5.4) and (5.5), we can see that there are functions H_i that are not differentiable at [X_i] = 0. Consequently, if H_i is not differentiable at [X_i] = 0 then F_i is also not differentiable at [X_i] = 0. If we include [X_i] = 0 in the domain of such an F_i, then F_i is not Lipschitz continuous there. We classify F_i based on the nature of the parameter c_i.
We define two types of c_i:
Type 1: c_i ≥ 1 and either an integer or a rational of the form c_i = p_i/q_i where p_i, q_i are positive integers and q_i is odd.
Type 2: c_i > 1 and either an irrational or a non-integer rational of the form c_i = p_i/q_i where p_i, q_i are positive integers and q_i is even.
The function F_i with c_i of type 1 is differentiable at [X_i] = 0, while the function F_i with c_i of type 2 is not differentiable at [X_i] = 0.
Now, we prove several theorems that assure the existence and uniqueness of the solution to our ODE model (5.1), given an initial condition.
Theorem 5.4 Suppose F_i : ℝ≥0^n → ℝ^n, for i = 1, 2, ..., n. Suppose that for all i, c_i is of type 1. Then the generalized Cinquin-Demongeot ODE model (5.1) has exactly one solution defined for all t ∈ [0, ∞) for any initial condition X_0 ∈ ℝ≥0^n.
Proof. Since K_i > 0, the denominator of the Hill function H_i (5.2) is not identically zero for any X ∈ ℝ≥0^n. This implies that each F_i is defined and continuous on ℝ≥0^n. Since for all i, c_i is of type 1, it follows that F_i is differentiable on ℝ≥0^n.
The partial derivative ∂F_i/∂[X_i] is as follows:

\[
\frac{\partial F_i}{\partial [X_i]}
= \frac{\left( K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} \right) \beta_i c_i [X_i]^{c_i - 1} - \beta_i c_i [X_i]^{2c_i - 1}}
       {\left( K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} \right)^{2}}
- \rho_i. \tag{5.6}
\]

The partial derivative ∂F_i/∂[X_l], l ≠ i, is as follows:

\[
\frac{\partial F_i}{\partial [X_l]}
= \frac{-\beta_i [X_i]^{c_i} \left( c_{il} \gamma_{il} [X_l]^{c_{il} - 1} \right)}
       {\left( K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} \right)^{2}}. \tag{5.7}
\]

The denominator

\[
\left( K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} \right)^{2} \tag{5.8}
\]

in the partial derivatives ∂F_i/∂[X_l] (for l = i and l ≠ i) is not identically zero for any X ∈ ℝ≥0^n. Hence, all the partial derivatives ∂F_i/∂[X_l], i, l = 1, 2, ..., n, are continuous on ℝ≥0^n.
Notice that the degree of the denominator (5.8) is greater than the degree of its corresponding numerator in Equations (5.6) and (5.7). It follows that as the value of at least one state variable approaches infinity, the value of ∂F_i/∂[X_i] approaches the constant −ρ_i and the value of ∂F_i/∂[X_l] (l ≠ i) vanishes.
Since each ∂F_i/∂[X_l] (for l = i and l ≠ i) is continuous on ℝ≥0^n (i.e., there are no "asymptotes" on ℝ≥0^n that would make the partial derivatives "blow up") and approaches a constant as the value of at least one state variable approaches infinity, the absolute values |∂F_i/∂[X_l]|, i, l = 1, 2, ..., n, are bounded for all X ∈ ℝ≥0^n.
Therefore, the system has a unique solution defined for all t ∈ ℝ≥0 for any initial condition X_0 ∈ ℝ≥0^n.
Proposition 5.5 Suppose F_i : ℝ≥0^n → ℝ^n, for i = 1, 2, ..., n. Suppose that for at least one i, c_i is of type 2. Then the generalized Cinquin-Demongeot ODE model (5.1) has a local solution (not necessarily unique) given [X_i]_0 = 0 as an initial value. Moreover, the generalized Cinquin-Demongeot ODE model (5.1) has a unique local solution given [X_i]_0 ≠ 0 as an initial value.
Proof. Since K_i > 0, the denominator of the Hill function H_i (5.2) is not identically zero for any X ∈ ℝ≥0^n. This implies that each F_i is defined and continuous on ℝ≥0^n. By Peano's Existence Theorem, the system has a local solution (not necessarily unique) given [X_i]_0 = 0 as an initial condition.
Suppose that for at least one i, c_i is of type 2. Then for such an i, F_i is differentiable on ℝ≥0^n except when [X_i] = 0. Note that the partial derivatives ∂F_i/∂[X_l], i, l = 1, 2, ..., n (see Equations (5.6) and (5.7)), are continuous on ℝ+^n (i.e., F_i is locally Lipschitz continuous on ℝ+^n). Hence, the generalized Cinquin-Demongeot ODE model (5.1) has a unique local solution given [X_i]_0 ≠ 0 as an initial value.
Remark: From the preceding proposition, at [X_i] > 0 the trajectory of our ODE model (5.1) is unique, but when the trajectory passes through [X_i] = 0 the ODE model (5.1) may (i.e., we are not sure) have more than one solution. Nevertheless, this does not affect our analysis in predicting the behavior of our system when g_i = 0, since [X_i] = 0 is then a component of a stable equilibrium point (i.e., [X_i] stays zero as t → ∞). Thus, assuming g_i = 0, the flow of our ODE model does not change its qualitative behavior even if the trajectory passes through [X_i] = 0. See Figure (5.17) for an illustration.
If g_i > 0, we can show that even with the assumption that c_i is of type 2 for at least one i, our ODE model (5.1) can still have a unique solution defined for all t ∈ [0, ∞) by restricting the domain of F_i.
Proposition 5.6 Suppose there are F_j having g_j > 0 and c_j of type 2. Suppose there are no F_i, i ≠ j, having g_i = 0 and c_i of type 2. Then the generalized Cinquin-Demongeot ODE model (5.1) has exactly one solution defined for all t ∈ [0, ∞) for any initial values [X_j]_0 ∈ ℝ+ and [X_i]_0 ∈ ℝ≥0, i ≠ j.
Proof. Notice that for g_j > 0, [X_j] = 0 will never be a component of a stable equilibrium point (see Figure (5.18)). This implies that we can reduce the space of positive invariance associated to [X_j] (refer to Lemma (5.1)) from ℝ≥0 to ℝ+. Thus, for initial value [X_j] > 0, we can restrict the domain of F_j with respect to the variable [X_j] to ℝ+ (i.e., we eliminate the possibility of making [X_j] = 0 an initial value).
We follow the same flow of proof as in Theorem (5.4). Since we only consider [X_j] ∈ ℝ+, F_j is now differentiable everywhere. The absolute values of the partial derivatives of F_j are continuous and bounded for all X where [X_j] ∈ ℝ+. Moreover, since there are no F_i, i ≠ j, having g_i = 0 and c_i of type 2, each such F_i must have c_i of type 1. The absolute values of the partial derivatives of F_i are continuous and bounded for all X where [X_i] ∈ ℝ≥0.
Hence, we conclude that the generalized Cinquin-Demongeot ODE model (5.1) has exactly one solution defined for all t ∈ [0, ∞) for any initial values [X_j]_0 ∈ ℝ+ and [X_i]_0 ∈ ℝ≥0, i ≠ j.
Suppose g_j > 0 but [X_j]_0 = 0 is the j-th component of the initial condition. We can still do numerical simulations to solve the ODE model (5.1) even when F_j is not differentiable at [X_j] = 0. However, we need to do the numerical simulation with caution because we are not sure whether multiple solutions will arise. We can use a multivariate fixed-point algorithm to investigate the corresponding stable equilibrium point for this kind of system.
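One such multivariate fixed-point scheme, sketched below under assumed parameter values, rewrites F_i(X) = 0 as [X_i] = (H_i(X) + g_i)/ρ_i and iterates that map. When the iteration converges, its limit is an equilibrium point of (5.1); convergence is not guaranteed in general, so this is a heuristic probe rather than the thesis' method:

```python
# Sketch: naive multivariate fixed-point iteration [X_i] <- (H_i(X) + g_i)/rho_i.
# If the iteration converges, the limit solves F_i(X) = 0; convergence is not
# guaranteed. The symmetric two-gene parameters used in the test are assumptions.

def hill(i, X, beta, K, gamma, c, cij):
    denom = K[i] + X[i] ** c[i] + sum(
        gamma[i][j] * X[j] ** cij[i][j] for j in range(len(X)) if j != i
    )
    return beta[i] * X[i] ** c[i] / denom

def fixed_point_iterate(X0, beta, K, rho, g, gamma, c, cij, iters=500):
    X = list(X0)
    for _ in range(iters):
        X = [(hill(i, X, beta, K, gamma, c, cij) + g[i]) / rho[i]
             for i in range(len(X))]
    return X
```

Started at (1.5, 0.05) with β_i = 1, K_i = 0.1, ρ_i = 1, g_i = 0, γ_ij = 1, c_i = c_ij = 2, the iteration settles on the same stable equilibrium that the cobweb analysis predicts, ((1 + √0.6)/2, 0).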
Note that a c_ij of type 2 does not affect the existence and uniqueness of a solution because [X_j]^{c_ij} only affects the shrinkage of the graph of H_i([X_i]) (see Figure (5.6)).
Figure 5.17: When g_i = 0, [X_i] = 0 is a component of a stable equilibrium point.
Figure 5.18: When g_j > 0, [X_j] = 0 will never be a component of an equilibrium point.
Chapter 6
Results and Discussion
Finding the Equilibrium Points
In this chapter, we determine the location and number of equilibrium points of the generalized Cinquin-Demongeot ODE model (5.1). We only consider the biologically feasible equilibrium points, those that are real-valued and nonnegative. For the following discussions, recall that K_i > 0 for all i. Appendix A contains illustrations related to this chapter.
6.1 Location of equilibrium points
Lemma 6.1 Given nonnegative state variables and parameters in (5.1), if g_i > 0 then ρ_i > 0 is a necessary and sufficient condition for the existence of an equilibrium point.
Proof. Since [X_i] ≥ 0, g_i > 0 and all other parameters are nonnegative, the decay term −ρ_i[X_i] with ρ_i > 0 is necessary for

\[
F_i(X) = \frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} - \rho_i [X_i] + g_i
\]

to be zero. The Hill curve induced by H_i([X_1], [X_2], ..., [X_n]) (5.2), translated upwards by g_i > 0, and the hyperplane induced by ρ_i[X_i] will always intersect when ρ_i > 0 and will not intersect if ρ_i = 0 (see Figures (5.11) to (5.14)).
Remark: If g_i = 0 and ρ_i = 0 then we have an equilibrium point with zero i-th component (i.e., (..., 0, ...)), but this equilibrium point is obviously unstable.
Theorem 6.2 The generalized Cinquin-Demongeot ODE model (5.1) has an equilibrium point with i-th component equal to zero (i.e., [X_i]* = 0) if and only if g_i = 0.
Proof. If g_i = 0 then

\[
F_i(X) = \frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} - \rho_i [X_i] + 0 = 0,
\]

implying [X_i] = 0 is a root of F_i(X) = 0. Furthermore, if [X_i] = 0 is a root of F_i(X) = 0 then by substitution,

\[
\frac{\beta_i [0]^{c_i}}{K_i + [0]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} - \rho_i [0] + g_i = 0,
\]

so g_i must be zero.
The following corollary is very important because the case where the trajectory converges to the origin is trivial. This zero state neither represents a pluripotent cell nor a cell differentiating into bone, cartilage or fat.
Corollary 6.3 The zero state (0, 0, ..., 0) is an equilibrium point if and only if g_i = 0 for all i.
Proposition 6.4 Suppose ρ_i > 0. If both β_i > 0 and g_i > 0 then g_i/ρ_i cannot be an i-th component of an equilibrium point.
Proof. Suppose β_i > 0, g_i > 0 and g_i/ρ_i is an i-th component of an equilibrium point. Then

\[
F_i\!\left([X_1], \ldots, \frac{g_i}{\rho_i}, \ldots, [X_n]\right)
= \frac{\beta_i \left( \frac{g_i}{\rho_i} \right)^{c_i}}{K_i + \left( \frac{g_i}{\rho_i} \right)^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} - \rho_i \frac{g_i}{\rho_i} + g_i = 0,
\]

which reduces to

\[
\frac{\beta_i \left( \frac{g_i}{\rho_i} \right)^{c_i}}{K_i + \left( \frac{g_i}{\rho_i} \right)^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} = 0,
\]

implying that β_i (g_i/ρ_i)^{c_i} = 0. Thus β_i = 0 or g_i = 0, a contradiction.
Remark: If g_i, ρ_i > 0 then [X_i] = g_i/ρ_i can only be an i-th component of an equilibrium point if β_i = 0.
Theorem 6.5 Suppose ρ_i > 0. The value (g_i + β_i)/ρ_i is an upper bound of, but will never be equal to, [X_i]* (where [X_i]* is the i-th component of an equilibrium point). The equilibrium points of our system lie in the hyperspace

\[
\left[ \frac{g_1}{\rho_1}, \frac{g_1 + \beta_1}{\rho_1} \right) \times \left[ \frac{g_2}{\rho_2}, \frac{g_2 + \beta_2}{\rho_2} \right) \times \cdots \times \left[ \frac{g_n}{\rho_n}, \frac{g_n + \beta_n}{\rho_n} \right). \tag{6.1}
\]
Proof. From Lemma (5.2), our system (5.1) always has an equilibrium point. Note that [X_i]* < ∞ for all i because [X_i]* = ∞ cannot be a component of an equilibrium point.
The minimum value of H_i is zero, which happens when β_i = 0 or when [X_i] = 0. Hence, if H_i([X_1], [X_2], ..., [X_n]) = 0 then F_i(X) = g_i − ρ_i[X_i] = 0, implying [X_i] = g_i/ρ_i.
The upper bound of H_i is β_i, which will only happen when [X_i] = ∞. If H_i([X_1], [X_2], ..., [X_n]) = β_i then F_i(X) = β_i − ρ_i[X_i] + g_i = 0, implying [X_i] = (g_i + β_i)/ρ_i (but note that this is just an upper bound and cannot be a component of an equilibrium point). See Figure (6.1) for an illustration.
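The box (6.1) can be checked against a simulation: integrate the model and verify that the settled state satisfies g_i/ρ_i ≤ [X_i]* < (g_i + β_i)/ρ_i in every coordinate. A minimal sketch for a two-gene system with basal expression, under assumed parameter values:

```python
# Sketch: integrate a two-gene symmetric system with g_i > 0 and check that the
# settled state obeys g_i/rho_i <= [X_i]* < (g_i + beta_i)/rho_i (Theorem 6.5).
# Parameter values are illustrative assumptions, not values from the thesis.

BETA, K, RHO, G = 1.0, 0.1, 1.0, 0.2   # identical parameters for both genes

def rhs(X):
    n = len(X)
    out = []
    for i in range(n):
        denom = K + X[i] ** 2 + sum(X[j] ** 2 for j in range(n) if j != i)
        out.append(BETA * X[i] ** 2 / denom - RHO * X[i] + G)
    return out

def settle(X0, dt=0.01, steps=30000):
    X = list(X0)
    for _ in range(steps):
        X = [x + dt * d for x, d in zip(X, rhs(X))]
    return X
```

Here the predicted box per coordinate is [0.2, 1.2), and any equilibrium the trajectory settles on must land inside it.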
Remark: The Hill curve and ρ_i[X_i] intersect at infinity when g_i → ∞, β_i → ∞ or ρ_i → 0. Moreover, if we have multiple stable equilibrium points lying in the hyperspace (6.1), then one strategy for increasing the basin of attraction of a stable equilibrium point is to increase the value of β_i (however, the number of stable equilibrium points may change under this strategy).
In Chapter 5 Section 5.4, we were able to show the existence of an equilibrium point, but we do not know its value. Solving the system F_i(X) = 0, i = 1, 2, ..., n, can be interpreted as finding the intersections of the (n + 1)-dimensional curves induced by each F_i(X) with the (n + 1)-dimensional zero hyperplane.
Figure 6.1: Sample numerical solution in time series with the upper bound and lower
bound.
6.2 Cardinality of equilibrium points
In this section, we use the Bézout Theorem (4.5) and the Sylvester resultant method to determine the number and exact values of equilibrium points.
Suppose c_i and c_ij are integers for all i and j. The polynomial equation corresponding to (i = 1, 2, ..., n)

\[
F_i(X) = \frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} - \rho_i [X_i] + g_i = 0 \tag{6.2}
\]

is
\[
P_i(X) = \beta_i [X_i]^{c_i} + (g_i - \rho_i [X_i]) \left( K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} \right) = 0
\]
\[
= -\rho_i [X_i]^{c_i + 1} + (\beta_i + g_i) [X_i]^{c_i}
- \left( K_i + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} \right) (\rho_i [X_i])
+ g_i \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} + g_i K_i = 0. \tag{6.3}
\]
Theorem 6.6 Assume that all equations in the polynomial system (6.3) have no common factor of degree greater than zero given a certain set of parameter values. Then the number of equilibrium points of the generalized Cinquin-Demongeot ODE model (5.1) (where c_i and c_ij are integers) is at most

\[
\max\{c_1 + 1,\ c_{1j} + 1\ \forall j\} \times \max\{c_2 + 1,\ c_{2j} + 1\ \forall j\} \times \cdots \times \max\{c_n + 1,\ c_{nj} + 1\ \forall j\}.
\]

Proof. The degree of P_i is deg_i = max{c_i + 1, c_ij + 1 ∀j}. Since for such parameter values we assume that all equations in the polynomial system (6.3) have no common factor of degree greater than zero, by the Bézout Theorem (4.5) the number of complex solutions to the polynomial system is at most max{c_1 + 1, c_1j + 1 ∀j} × max{c_2 + 1, c_2j + 1 ∀j} × ... × max{c_n + 1, c_nj + 1 ∀j}. It follows that this is an upper bound on the number of equilibrium points.
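The bound of Theorem 6.6 is a one-line computation once the exponents are tabulated. A small sketch (the diagonal of the c_ij table is ignored, since c_ii does not occur in the model):

```python
# Sketch: the Bezout-type upper bound of Theorem 6.6 computed from the exponents.
# c is the list [c_1, ..., c_n]; cij[i][j] holds c_ij for j != i (diagonal ignored).

def bezout_bound(c, cij):
    bound = 1
    n = len(c)
    for i in range(n):
        deg_i = max([c[i] + 1] + [cij[i][j] + 1 for j in range(n) if j != i])
        bound *= deg_i
    return bound
```

For an n = 4 system with all c_i = c_ij = 1 the bound is 2^4 = 16, and with all exponents equal to 2 it is 3^4 = 81.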
The Bézout Theorem (4.5) does not give the exact number of equilibrium points but only an upper bound. Also, Theorem (6.6) depends on the values of c_i and c_ij as well as on n. According to Cinquin and Demongeot, manipulating the strength of cooperativity (c_i and c_ij) is of minimal biological relevance [38]. Nevertheless, the possible dependence of the number of equilibrium points on n (the dimension of our state space) has a biological implication. The dependence on n may be due to the potency of the cell.
It is necessary to check that all equations in the polynomial system have no common factor of degree greater than zero, because if they do then there will be infinitely many complex solutions. Recall from Theorem (4.7) that we can determine the existence of infinitely many complex solutions by checking whether res(P_1, P_2; X_i) (the determinant of the Sylvester matrix) is identically zero, or by checking whether P_1 and P_2 have a non-constant common factor.
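The resultant test is easy to mechanize for univariate polynomials: build the Sylvester matrix from the two coefficient lists and take its determinant. The sketch below uses generic example polynomials (not the thesis' system) to show the two outcomes, a zero resultant for a shared factor and a nonzero one otherwise:

```python
# Sketch: Sylvester matrix and resultant of two univariate polynomials given as
# coefficient lists (highest degree first). The resultant is zero exactly when
# the polynomials share a non-constant common factor.

def sylvester_matrix(p, q):
    m, n = len(p) - 1, len(q) - 1          # degrees of p and q
    size = m + n
    M = [[0.0] * size for _ in range(size)]
    for r in range(n):                      # n shifted copies of p
        for k, a in enumerate(p):
            M[r][r + k] = float(a)
    for r in range(m):                      # m shifted copies of q
        for k, a in enumerate(q):
            M[n + r][r + k] = float(a)
    return M

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, sign, d = len(A), 1.0, 1.0
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < 1e-14:
            return 0.0
        if piv != col:
            A[col], A[piv] = A[piv], A[col]
            sign = -sign
        d *= A[col][col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
    return sign * d

def resultant(p, q):
    return det(sylvester_matrix(p, q))
```

For instance, res(x² − 1, x − 1) = 0 because both share the factor (x − 1), while res(x² − 1, x − 2) = 3.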
However, infinitely many complex solutions arise only if [X_i] can take any complex value. There can be solutions with negative (and possibly complex-valued) components that have no biological importance. Consequently, we need to do an ad hoc investigation to remove the solutions with negative or non-real-valued components and to check whether infinitely many solutions still arise when every [X_i] is restricted to be a nonnegative real number. It is possible that our polynomial system (6.3) has a finite number of nonnegative real solutions even though the system has a non-constant common factor.
To determine the exceptions, we identify the sets of parameter values (where the strengths of cooperativity are integer-valued) that give rise to a system of equations having a non-constant common factor. We have found one case (and it is the only case) where such a common factor exists.
Theorem 6.7 Suppose c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0 and K_i = K_j = K > 0, for all i and j. Then the ODE model (5.1) has infinitely many non-isolated equilibrium points if and only if β > ρK. Moreover, if β ≤ ρK then there is exactly one equilibrium point, which is the origin.
Proof. Recall (6.3): from the nonlinear system F_i(X) = 0 (i = 1, 2, ..., n),

\[
\frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} - \rho_i [X_i] + g_i = 0,
\]

we have the corresponding polynomial system P_i(X) = 0 (i = 1, 2, ..., n),

\[
\beta_i [X_i]^{c_i} - \rho_i K_i [X_i] - \rho_i [X_i]^{c_i + 1}
- \rho_i [X_i] \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}
+ g_i K_i + g_i [X_i]^{c_i} + g_i \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} = 0.
\]

Suppose c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0 and K_i = K_j = K > 0 (notice that we have a Michaelis-Menten-like symmetric system).
Then the polynomial system can be written as (i = 1, 2, ..., n)

\[
\beta [X_i] - \rho K [X_i] - \rho [X_i]^2 - \rho [X_i] \sum_{j=1, j \neq i}^{n} [X_j] = 0
\]
\[
\Rightarrow [X_i] \left( \beta - \rho K - \rho [X_i] - \rho \sum_{j=1, j \neq i}^{n} [X_j] \right) = 0
\]
\[
\Rightarrow [X_i] = 0 \quad \text{or} \quad \beta - \rho K - \rho [X_i] - \rho \sum_{j=1, j \neq i}^{n} [X_j] = 0. \tag{6.4}
\]

Notice that the factor

\[
\beta - \rho K - \rho [X_i] - \rho \sum_{j=1, j \neq i}^{n} [X_j] = \beta - \rho K - \rho \sum_{j=1}^{n} [X_j] \tag{6.5}
\]

is common to all equations in the polynomial system given the assumed parameter values. Thus, by Theorem (4.7), there are infinitely many complex solutions where [X_j] can be any complex number. However, note that we have restricted [X_j] to be nonnegative, so we do further investigation to determine the conditions for the existence of an infinite number of solutions given strictly nonnegative [X_j]. We focus our investigation on real-valued solutions.
Suppose B = β − ρK.
Case 1: If β = ρK then B = 0, and thus B − ρ Σ_{j=1}^{n} [X_j] will never be zero except when [X_j] = 0 for all j (since [X_j] can take only nonnegative values). Hence, the only equilibrium point of the system is the origin.
Case 2: If β < ρK then B < 0, and thus B − ρ Σ_{j=1}^{n} [X_j] will always be negative and will have no zero for any nonnegative value of [X_j]. Hence, the only equilibrium point is the origin (which is derived from [X_i] = 0, i = 1, 2, ..., n, in Equation (6.4)).
Case 3: If β > ρK then B > 0, and thus there exist solutions to the equation B − ρ Σ_{j=1}^{n} [X_j] = 0. Notice that the set of nonnegative real-valued solutions to B − ρ Σ_{j=1}^{n} [X_j] = 0 is a hyperplane (e.g., it is a line for n = 2 and a plane for n = 3). Hence, there are infinitely many non-isolated equilibrium points when β > ρK.
Conversely, if the generalized Cinquin-Demongeot ODE model (5.1) with c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0 and K_i = K_j = K > 0 has infinitely many equilibrium points, then the model's corresponding polynomial system has a common factor of degree greater than zero. The only possible common factor is shown in (6.5). The only case where such a factor has infinitely many non-isolated nonnegative solutions is when β > ρK.
Corollary 6.8 Suppose c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0 and K_i = K_j = K > 0. If β > ρK then the equilibrium points of system (5.1) are the origin and the non-isolated points lying on the hyperplane with equation

\[
\sum_{j=1}^{n} [X_j] = \frac{\beta}{\rho} - K, \qquad [X_j] \geq 0\ \forall j. \tag{6.6}
\]

Proof. This is a consequence of the proof of Theorem (6.7).
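Corollary 6.8 is easy to verify numerically: in the symmetric Michaelis-Menten-like case the vector field vanishes at every nonnegative point whose coordinates sum to β/ρ − K. A sketch with assumed numbers (β = 2, ρ = 1, K = 0.5, so the hyperplane is Σ_j [X_j] = 1.5):

```python
# Sketch: verify Corollary 6.8 for the symmetric system with c_i = c_ij = 1,
# g_i = 0, gamma_ij = 1 and common beta, rho, K with beta > rho*K. Every
# nonnegative point with sum_j [X_j] = beta/rho - K should be an equilibrium.
# The numbers are illustrative assumptions.

BETA, RHO, K = 2.0, 1.0, 0.5            # beta > rho*K

def F(X):
    total = sum(X)                       # K + sum equals the common denominator
    return [BETA * x / (K + total) - RHO * x for x in X]
```

Any point on the hyperplane (such as (0.5, 0.7, 0.3)) and the origin are equilibria, while a point off the hyperplane is not.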
Theorem 6.9 The generalized Cinquin-Demongeot ODE model (5.1) (where c_i and c_ij are integers) has a finite number of equilibrium points except when all of the following conditions are satisfied: c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β > ρK, for all i and j.
Proof. When c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β > ρK for all i and j, then by Theorem (6.7) the generalized Cinquin-Demongeot ODE model (5.1) has an infinite number of equilibrium points.
Now, suppose at least one of the following conditions is not satisfied: c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β > ρK, for all i and j. Recall the polynomial system P_i(X) = 0 (6.3) corresponding to our generalized Cinquin-Demongeot ODE model (5.1) (where c_i and c_ij are integers), which is (for i = 1, 2, ..., n)
\[
\beta_i [X_i]^{c_i} - \rho_i K_i [X_i] - \rho_i [X_i]^{c_i + 1}
- \rho_i [X_i] \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}
+ g_i K_i + g_i [X_i]^{c_i} + g_i \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} = 0. \tag{6.7}
\]
Suppose g_i = 0. The factorization of the polynomials in the above polynomial system (6.7) is of the form

\[
[X_i] \left( \beta_i [X_i]^{c_i - 1} - \rho_i K_i - \rho_i [X_i]^{c_i}
- \rho_i \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}} \right). \tag{6.8}
\]
The factor [X_i] is definitely not a common factor of our system. Moreover, there will always be a [X_i]^{c_i − 1} term in the factor β_i[X_i]^{c_i−1} − ρ_iK_i − ρ_i[X_i]^{c_i} − ρ_i Σ_{j=1, j≠i}^{n} γ_ij[X_j]^{c_ij}, and this term prevents that factor from being a common factor of our system.
For example, suppose g_i = 0, γ_ij = 1, β_i = β, ρ_i = ρ, K_i = K and c_ij = c_j for all i and j. Then we have

\[
\text{Factor in equation } 1:\quad \beta [X_1]^{c_1 - 1} - \rho K - \rho \sum_{j=1}^{n} [X_j]^{c_j}
\]
\[
\text{Factor in equation } 2:\quad \beta [X_2]^{c_2 - 1} - \rho K - \rho \sum_{j=1}^{n} [X_j]^{c_j} \tag{6.9}
\]
\[
\vdots
\]
\[
\text{Factor in equation } n:\quad \beta [X_n]^{c_n - 1} - \rho K - \rho \sum_{j=1}^{n} [X_j]^{c_j}.
\]
Notice that the presence of [X_i]^{c_i − 1} in equation i makes at least one factor unique ("at least one" because at most n − 1 equations may satisfy the restriction c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β > ρK, and at least one equation does not).
Suppose g_i ≠ 0 for at least one i. By the above argument (for the g_j = 0, j ≠ i), as well as by the presence of [X_i]^{c_i} (in the first term of Equation (6.7)) and [X_i] (in the second, third and fourth terms of Equation (6.7)), the polynomials in the polynomial system (6.7) are collectively relatively prime.
For example, suppose γ_ij = 1, β_i = β, ρ_i = ρ, K_i = K, g_i = g and c_ij = c_j for all i and j. Then we have

\[
\beta [X_1]^{c_1} - \rho K [X_1] - \rho [X_1] \sum_{j=1}^{n} [X_j]^{c_j} + gK + g \sum_{j=1}^{n} [X_j]^{c_j}
\]
\[
\beta [X_2]^{c_2} - \rho K [X_2] - \rho [X_2] \sum_{j=1}^{n} [X_j]^{c_j} + gK + g \sum_{j=1}^{n} [X_j]^{c_j} \tag{6.10}
\]
\[
\vdots
\]
\[
\beta [X_n]^{c_n} - \rho K [X_n] - \rho [X_n] \sum_{j=1}^{n} [X_j]^{c_j} + gK + g \sum_{j=1}^{n} [X_j]^{c_j}
\]
Notice that the presence of [X_i] in equation i makes at least one equation relatively prime to the others ("at least one" because at most n − 1 equations may satisfy the restriction c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β > ρK, and at least one equation does not).
Therefore, by Theorem (4.7), there is a finite number of equilibrium points. ∎
Now, we prove a theorem showing that the equilibrium points ([X_1]*, [X_2]*, [X_3]*, 0) of a system with n = 4 and g_4 = 0 are exactly the equilibrium points of the corresponding system with n = 3. Generally, we state the following theorem:
Theorem 6.10 Suppose g_n = 0. Then the n-dimensional system is more general than the (n − 1)-dimensional system. That is, we can derive the equilibrium points of the (n − 1)-dimensional system by getting the equilibrium points of the n-dimensional system where [X_n]* = 0.
Proof. When [X_n]* = 0 and g_n = 0, the n-dimensional system reduces to an (n − 1)-dimensional system.
In the following illustrations, we show how to find equilibrium points using the Sylvester resultant. We assign specific values to some parameters. Let us consider our simplified MacArthur et al. GRN where n = 4 (5.3).
6.2.1 Illustration 1
Consider that all parameters are equal to 1 except for g_2 = g_3 = g_4 = 0. We have the following polynomial system:
\[
P_1([X_1], [X_2], [X_3], [X_4]) = [X_1] - [X_1](1 + [X_1] + [X_2] + [X_3] + [X_4]) + (1 + [X_1] + [X_2] + [X_3] + [X_4]) = 0
\]
\[
P_2([X_1], [X_2], [X_3], [X_4]) = [X_2] - [X_2](1 + [X_1] + [X_2] + [X_3] + [X_4]) = 0 \tag{6.11}
\]
\[
P_3([X_1], [X_2], [X_3], [X_4]) = [X_3] - [X_3](1 + [X_1] + [X_2] + [X_3] + [X_4]) = 0
\]
\[
P_4([X_1], [X_2], [X_3], [X_4]) = [X_4] - [X_4](1 + [X_1] + [X_2] + [X_3] + [X_4]) = 0.
\]
The Sylvester matrix associated with P_1 and P_2 with [X_1] as the variable is

\[
\begin{pmatrix}
-1 & 1 - [X_2] - [X_3] - [X_4] & 1 + [X_2] + [X_3] + [X_4] \\
-[X_2] & -[X_2]^2 - [X_2][X_3] - [X_2][X_4] & 0 \\
0 & -[X_2] & -[X_2]^2 - [X_2][X_3] - [X_2][X_4]
\end{pmatrix}. \tag{6.12}
\]
Then res(P_1, P_2; [X_1]) = [X_2]². Therefore, [X_2]* = 0.
By the same procedure, res(P_1, P_3; [X_1]) = [X_3]² and res(P_1, P_4; [X_1]) = [X_4]². Hence, [X_3]* = [X_4]* = 0. Note that we cannot use res(P_2, P_3; [X_1]), res(P_3, P_4; [X_1]) and res(P_2, P_4; [X_1]) because these resultants are identically zero, that is, P_2, P_3 and P_4 have common factors. So we need to be careful in choosing which combinations of polynomial equations are used in determining the equilibrium points. Note that P_1 does not share a common factor with the other three polynomial equations.
Substituting [X_2]* = [X_3]* = [X_4]* = 0 in P_1, we have −([X_1]*)² + 1 + [X_1]* = 0. This means that [X_1]* = (1 + √5)/2 (we disregard (1 − √5)/2 because it is negative).
Therefore, we have only one equilibrium point, ((1 + √5)/2, 0, 0, 0).
6.2.2 Illustration 2
Consider that all parameters are equal to 1 except for c_i = c_ij = 2 and g_i = 0, i, j = 1, 2, 3, 4. We have the following polynomial system:
\[
P_1([X_1], [X_2], [X_3], [X_4]) = [X_1]^2 - [X_1](1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2) = 0
\]
\[
P_2([X_1], [X_2], [X_3], [X_4]) = [X_2]^2 - [X_2](1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2) = 0 \tag{6.13}
\]
\[
P_3([X_1], [X_2], [X_3], [X_4]) = [X_3]^2 - [X_3](1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2) = 0
\]
\[
P_4([X_1], [X_2], [X_3], [X_4]) = [X_4]^2 - [X_4](1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2) = 0.
\]
The Sylvester matrix associated with P_1 and P_2 with [X_1] as the variable is

\[
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14} & 0 \\
0 & a_{11} & a_{12} & a_{13} & a_{14} \\
a_{31} & a_{32} & a_{33} & 0 & 0 \\
0 & a_{31} & a_{32} & a_{33} & 0 \\
0 & 0 & a_{31} & a_{32} & a_{33}
\end{pmatrix}, \tag{6.14}
\]

where a_11 = −1, a_12 = 1, a_13 = −1 − [X_2]² − [X_3]² − [X_4]², a_14 = 0, a_31 = −[X_2], a_32 = 0 and a_33 = [X_2]² − [X_2] − [X_2]³ − [X_2][X_3]² − [X_2][X_4]². Then
\[
\operatorname{res}(P_1, P_2; [X_1]) = -[X_2]^3 \left( -[X_2] + [X_4]^2 + 2[X_2]^2 + [X_3]^2 + 1 \right) \left( -[X_2] + [X_4]^2 + [X_2]^2 + [X_3]^2 + 1 \right). \tag{6.15}
\]
Notice that the factors −[X_2] + [X_4]² + 2[X_2]² + [X_3]² + 1 and −[X_2] + [X_4]² + [X_2]² + [X_3]² + 1 in (6.15) have no real zeros. Therefore, [X_2]* = 0.
Since the system is symmetric, it follows that [X_1]* = [X_3]* = [X_4]* = 0, too. Hence, we have only one equilibrium point, which is the origin.
Additional illustrations are presented in Appendix A (for n = 2 and n = 3).
When all parameters are equal to 1 except for c_i = c_ij = 2 and g_i = 0 for all i, j, the only equilibrium point is the origin. In fact, this kind of system is the original Cinquin-Demongeot ODE model [38] without "leak" where β = 1 and c = 2 (refer to system (3.4)). We state the following proposition:
Proposition 6.11 If c_i > 1, g_i = 0, K_i ≥ 1, β_i = 1 and ρ_i = 1 for all i, then our system has only one equilibrium point, which is the origin.
Proof. Let us first consider the case where [X_j] = 0 for all j ≠ i and K_i = 1. The graphs of Y = H_i([X_i]) (with increasing value of c_i) and Y = [X_i] are illustrated in Figure (6.2). If c_i → ∞ then [X_i]^{c_i} → ∞ for any [X_i] > 1. As [X_i]^{c_i} → ∞ (with [X_i] > 1), the univariate Hill function H_i([X_i]) → β = 1. Hence, the univariate Hill curve Y = H_i([X_i]) never touches the point (1, 1) lying on Y = [X_i] for finite c_i.
Now, as the values of γ_ij[X_j] for all j ≠ i and K_i increase, the univariate Hill curve Y = H_i([X_i]) only shrinks and definitely does not intersect the decay line Y = [X_i] except at the origin (see Chapter 5 Section 5.3 for the discussion of the geometry of the Hill curve).
Hence, for any value of [X_i] and [X_j] (for all j ≠ i), the univariate Hill curve Y = H_i([X_i]) intersects the decay line Y = [X_i] only at the origin. In other words,

\[
\frac{[X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1, j \neq i}^{n} \gamma_{ij} [X_j]^{c_{ij}}} < [X_i] \tag{6.16}
\]

except when [X_i] = 0.
Figure 6.2: Y = [X_i]^{c_i}/(K + [X_i]^{c_i}) will never touch the point (1, 1) for 1 < c_i < ∞.
Proposition (6.11) implies that the system with c_i > 1, g_i = 0, K_i ≥ 1, β_i = 1 and ρ_i = 1 for all i, j represents a trivial case (i.e., the fate of the cell is not in the domain of our GRN, or the cell is in a quiescent stage). This is not the only set of parameters that gives a trivial case. A generalization of Proposition (6.11) is stated as follows:
Proposition 6.12 If c_i > 1, g_i = 0 and

\[
\rho_i K_i^{1/c_i} \geq \beta_i \tag{6.17}
\]

for all i, then our system has only one equilibrium point, which is the origin.
Proof. Let us first consider the case where [X_j] = 0 for all j ≠ i. Recall that the upper bound of H_i([X_i]) is β_i. Also, recall that when [X_i] = K_i^{1/c_i} then H_i([X_i]) = β_i/2 (see Section 3.2 in Chapter 3). Note that (K_i^{1/c_i}, β_i/2) is the inflection point of our univariate Hill curve. We substitute [X_i] = K_i^{1/c_i} into the decay function Y = ρ_i[X_i]; if the value of ρ_i K_i^{1/c_i} is larger than or equal to the upper bound β_i, then Y = H_i([X_i]) and Y = ρ_i[X_i] intersect only at the origin. See Figure (6.3) for an illustration.
Figure 6.3: An example where ρ_i K_i^{1/c_i} > β_i; Y = H_i([X_i]) and Y = ρ_i[X_i] only intersect at the origin.
Now, as the values of γ_ij[X_j] for all j ≠ i increase, the univariate Hill curve Y = H_i([X_i]) only shrinks and definitely does not intersect the decay line Y = ρ_i[X_i] except at the origin.
However, note that Proposition (6.12) gives a sufficient but not a necessary condition. There are cases where ρ_i K_i^{1/c_i} < β_i yet Y = H_i([X_i]) and Y = ρ_i[X_i] intersect only at the origin.
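The one-dimensional case of Proposition (6.12) is easy to check numerically. The following sketch is illustrative only (Python rather than the Scilab used for the thesis' simulations, and the parameter values are ours, not from the text): it scans a grid of positive concentrations for an intersection of the Hill curve and the decay line at the boundary case ρK^{1/c} = β.

```python
# Sketch: when rho * K**(1/c) >= beta, the univariate Hill curve
# H(x) = beta * x**c / (K + x**c) should meet the decay line rho * x
# only at the origin (Proposition 6.12, one-dimensional case).
# Illustrative parameter values, not from the thesis.

def hill(x, beta, K, c):
    return beta * x**c / (K + x**c)

beta, K, c = 2.0, 4.0, 2.0
rho = beta / K**(1.0 / c)   # boundary case: rho * K**(1/c) == beta

# Scan a fine grid of positive concentrations for a crossing H(x) >= rho*x.
grid = [k * 0.01 for k in range(1, 2001)]
crossings = [x for x in grid if hill(x, beta, K, c) >= rho * x]
print(crossings)  # → [] : no positive intersection
```

Even at the boundary case the grid finds no positive crossing, which is consistent with the proposition's conclusion that the origin is the only equilibrium.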
Corollary 6.13 If c_i > 1, g_i = 0, K_i ≥ 1 and ρ_i ≥ β_i for all i, then our system has only one equilibrium point, which is the origin.
Proof. Since ρ_i ≥ β_i and K_i ≥ 1, then ρ_i K_i^{1/c_i} ≥ β_i. We then invoke Proposition (6.12).
For c_i = 1 and g_i = 0, we state the following proposition:
Proposition 6.14 Suppose c_i = 1, g_i = 0 and β_i/K_i ≤ ρ_i for all i. Then our system has only one equilibrium point, which is the origin.
Proof. Let us first consider the case where [X_j] = 0 for all j ≠ i. Recall that Y = H_i([X_i]) with c_i = 1 is a hyperbolic curve. The partial derivative

\[
\frac{\partial H_i}{\partial [X_i]} = \frac{\partial}{\partial [X_i]} \left( \frac{\beta_i [X_i]}{K_i + [X_i]} \right) = \frac{K_i \beta_i}{(K_i + [X_i])^2} \qquad (6.18)
\]

shows that the slope of the hyperbolic curve is monotonically decreasing as [X_i] increases. The partial derivative at [X_i] = 0 is

\[
\frac{\partial H_i}{\partial [X_i]} = \frac{\beta_i}{K_i} \leq \rho_i, \qquad (6.19)
\]

which means that the slope of Y = H_i([X_i]) at [X_i] = 0 is at most the slope of the decay line Y = ρ_i[X_i] at [X_i] = 0. Hence, the Hill curve Y = H_i([X_i]) lies below the decay line for all [X_i] > 0.
Suppose c_i ≥ 1 and g_i = 0 for all i. In general, the origin is the only equilibrium point of our ODE model (5.1) if and only if the univariate curve Y = H_i([X_i]) lies below the decay line Y = ρ_i[X_i] (i.e., H_i([X_i]) < ρ_i[X_i] for all [X_i] > 0) for all i. This statement is similar to Theorem (7.6) in the next chapter.
Remark: We have seen the importance of the univariate Hill function H_i([X_i]). For instance, when n = 1, c_1 = 2, ρ_1 > 0 and g_1 = 0, the Hill curve and the decay line intersect at

\[
[X_1]^* = 0, \quad \frac{\beta_1 \pm \sqrt{\beta_1^2 - 4\rho_1^2 K_1}}{2\rho_1}. \qquad (6.20)
\]

Notice that the equilibrium points depend on the parameters β_1, ρ_1 and K_1.
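The positive intersections in (6.20) come from equating the Hill curve with the decay line, which reduces to the quadratic ρ_1[X_1]^2 − β_1[X_1] + ρ_1 K_1 = 0. A short numerical check (illustrative Python, with parameter values of our own choosing) confirms that the quadratic-formula roots really are intersections:

```python
import math

# Equilibria of the one-dimensional model with c1 = 2, g1 = 0 (illustrative
# values): the Hill curve beta*x**2/(K + x**2) meets the decay line rho*x at
# x = 0 and at the roots of rho*x**2 - beta*x + rho*K = 0, i.e.
# (beta ± sqrt(beta**2 - 4*rho**2*K)) / (2*rho), as in equation (6.20).

beta, rho, K = 3.0, 1.0, 1.0
disc = beta**2 - 4.0 * rho**2 * K   # real positive roots need beta**2 >= 4*rho**2*K
roots = [(beta - math.sqrt(disc)) / (2.0 * rho),
         (beta + math.sqrt(disc)) / (2.0 * rho)]

# Each root is indeed an intersection of the Hill curve and the decay line.
for x in roots:
    assert abs(beta * x**2 / (K + x**2) - rho * x) < 1e-9
print([round(x, 6) for x in roots])  # → [0.381966, 2.618034]
```

When the discriminant is negative, only the origin remains, which matches the geometric picture of the Hill curve lying entirely below the decay line.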
According to Cinquin and Demongeot [38], a sufficiently large c coupled with a sufficiently large β is needed for the existence of an equilibrium point with one component dominating the other components. Moreover, decreasing the value of ρ_i or adding the term g_i may result in an increased value of [X_i]^*.
Chapter 7
Results and Discussion
Stability of Equilibria and Bifurcation
We determine the stability of the equilibrium points of the generalized Cinquin-Demongeot
(2005) ODE model (5.1) for a given set of parameters. We also identify if varying the
values of some parameters, such as those associated with the exogenous stimuli, can steer
the system towards a desired state.
7.1 Stability of equilibrium points
Recall Lemma (5.2): Suppose ρ_i > 0 for all i. Then the generalized Cinquin-Demongeot ODE model (5.1) with X_0 ∈ R^n_{≥0} always has a stable equilibrium point. Moreover, any trajectory of the model will converge to a stable equilibrium point.
Theorem 7.1 Given a set of parameters where ρ_i > 0 for all i, if the system (5.1) has only one equilibrium point, then this point is stable.

Proof. This is a consequence of Lemma (5.2).
The following theorem assures us that our system (for any dimension n) will never have an asymptotically stable limit cycle:

Theorem 7.2 Suppose ρ_i > 0 for all i. Then no trajectory of our system (5.1) converges to a neutrally stable center, to an ω-limit cycle, or to a strange attractor. This also implies that (5.1) will never have an asymptotically stable limit cycle.
Proof. Since for any nonnegative initial condition the trajectory of the ODE model converges to a stable equilibrium point (see Lemma (5.2)), no trajectory can keep orbiting a center, converge to an ω-limit cycle, or converge to a strange attractor.

Moreover, suppose an ω-limit cycle exists. Then for some initial condition the trajectory of the system converges to this ω-limit cycle. This contradicts Lemma (5.2), which states that any trajectory always converges to a stable equilibrium point.
Now, the following is the Jacobian of our system:

\[
JF(X) = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix} \qquad (7.1)
\]
where
\[
a_{ii} = \frac{\partial F_i}{\partial [X_i]}
= \frac{\left( K_i + [X_i]^{c_i} + \sum_{j=1,\,j\neq i}^{n} \gamma_{ij}[X_j]^{c_{ij}} \right)\beta_i c_i [X_i]^{c_i - 1} - \beta_i c_i [X_i]^{2c_i - 1}}{\left( K_i + [X_i]^{c_i} + \sum_{j=1,\,j\neq i}^{n} \gamma_{ij}[X_j]^{c_{ij}} \right)^2} - \rho_i
\]
\[
= \frac{\left( K_i + \sum_{j=1,\,j\neq i}^{n} \gamma_{ij}[X_j]^{c_{ij}} \right)\beta_i c_i [X_i]^{c_i - 1}}{\left( K_i + [X_i]^{c_i} + \sum_{j=1,\,j\neq i}^{n} \gamma_{ij}[X_j]^{c_{ij}} \right)^2} - \rho_i \qquad (7.2)
\]
\[
a_{il} = \frac{\partial F_i}{\partial [X_l]}
= \frac{-\beta_i [X_i]^{c_i}\, c_{il}\, \gamma_{il} [X_l]^{c_{il} - 1}}{\left( K_i + [X_i]^{c_i} + \sum_{j=1,\,j\neq i}^{n} \gamma_{ij}[X_j]^{c_{ij}} \right)^2}, \quad i \neq l. \qquad (7.3)
\]
Notice that ∂F_i/∂[X_i] > 0 if

\[
\rho_i < \frac{\left( K_i + \sum_{j=1,\,j\neq i}^{n} \gamma_{ij}[X_j]^{c_{ij}} \right)\beta_i c_i [X_i]^{c_i - 1}}{\left( K_i + [X_i]^{c_i} + \sum_{j=1,\,j\neq i}^{n} \gamma_{ij}[X_j]^{c_{ij}} \right)^2}. \qquad (7.4)
\]
Hence, if ρ_i = 0, then the value of F_i always increases as [X_i] increases. This is why, when ρ_i = 0 for at least one i, we do not have a stable equilibrium point. Moreover, ∂F_i/∂[X_l], i ≠ l, is always non-positive because F_i decreases as [X_l] increases.
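The closed-form entries (7.2) and (7.3) can be cross-checked against finite differences of F_i. The sketch below is illustrative only (Python, n = 2, c_i = c_ij = 2, γ_ij = 1, g_i = 0, with parameter values of our own choosing):

```python
# Cross-check of the Jacobian entries: central finite differences of F_i
# versus the closed forms (7.2) and (7.3). Illustrative parameters,
# n = 2, c_i = c_ij = 2, gamma_ij = 1, g_i = 0.

beta, K, rho, c = [2.0, 3.0], [1.0, 2.0], [0.4, 0.6], 2

def F(x):
    out = []
    for i in range(2):
        denom = K[i] + x[i]**c + x[1 - i]**c
        out.append(beta[i] * x[i]**c / denom - rho[i] * x[i])
    return out

def a_ii(i, x):      # closed form (7.2)
    denom = K[i] + x[i]**c + x[1 - i]**c
    return (K[i] + x[1 - i]**c) * beta[i] * c * x[i]**(c - 1) / denom**2 - rho[i]

def a_il(i, l, x):   # closed form (7.3), i != l
    denom = K[i] + x[i]**c + x[l]**c
    return -beta[i] * x[i]**c * c * x[l]**(c - 1) / denom**2

x, h = [0.7, 1.3], 1e-6
for i in range(2):
    for l in range(2):
        xp = list(x); xp[l] += h
        xm = list(x); xm[l] -= h
        fd = (F(xp)[i] - F(xm)[i]) / (2 * h)   # central difference
        exact = a_ii(i, x) if i == l else a_il(i, l, x)
        assert abs(fd - exact) < 1e-6
print("Jacobian entries match (7.2) and (7.3)")
```

The off-diagonal entries come out negative at this point, consistent with the observation that F_i decreases as [X_l] increases.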
Theorem 7.3 In our system (5.1), suppose g_i = 0 and c_i = 1 for all i. Then the origin is a stable equilibrium point when ρ_i > β_i/K_i for all i, and an unstable equilibrium point when ρ_i < β_i/K_i for at least one i. When ρ_i = β_i/K_i for at least one i, we have a nonhyperbolic equilibrium point, which is an attractor if [X_i] is restricted to be nonnegative and ρ_j ≥ β_j/K_j for all j ≠ i.
Proof. The characteristic polynomial associated with the Jacobian of our system when X = (0, 0, ..., 0) is

\[
|JF(0) - \lambda I| =
\begin{vmatrix}
\frac{\beta_1}{K_1} - \rho_1 - \lambda & 0 & \cdots & 0 \\
0 & \frac{\beta_2}{K_2} - \rho_2 - \lambda & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \frac{\beta_n}{K_n} - \rho_n - \lambda
\end{vmatrix}
= \left( \frac{\beta_1}{K_1} - \rho_1 - \lambda \right)\left( \frac{\beta_2}{K_2} - \rho_2 - \lambda \right)\cdots\left( \frac{\beta_n}{K_n} - \rho_n - \lambda \right). \qquad (7.5)
\]
The eigenvalues are λ_i = β_i/K_i − ρ_i, i = 1, 2, ..., n. Therefore, the zero vector is a stable equilibrium point when β_i/K_i − ρ_i < 0, i.e., ρ_i > β_i/K_i, for all i. The zero vector is an unstable equilibrium point when β_i/K_i − ρ_i > 0, i.e., ρ_i < β_i/K_i, for at least one i.
Figure 7.1: When g_i = 0, c_i = 1 and the decay line is tangent to the univariate Hill curve at the origin, the origin is a saddle.
If β_i/K_i − ρ_i = 0, i.e., ρ_i = β_i/K_i, for at least one i, then we have a nonhyperbolic equilibrium point. Geometrically, we can see that this is a saddle: stable to the right and unstable to the left of [X_i]^* = 0 (see Figure (7.1) for an illustration). Hence, if we restrict [X_i] ≥ 0 and if ρ_j ≥ β_j/K_j for all j ≠ i, then this nonhyperbolic equilibrium point is stable.
Theorem 7.4 Suppose ρ_i > 0, g_i = 0 and c_i > 1 for all i. Then the origin is a stable equilibrium point of the system (5.1).
Proof. By Corollary (6.3), if g_i = 0 for all i, then the origin is an equilibrium point. The characteristic polynomial associated with the Jacobian of our system when X = (0, 0, ..., 0) is

\[
|JF(0) - \lambda I| =
\begin{vmatrix}
-\rho_1 - \lambda & 0 & \cdots & 0 \\
0 & -\rho_2 - \lambda & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & -\rho_n - \lambda
\end{vmatrix}
= (-\rho_1 - \lambda)(-\rho_2 - \lambda)\cdots(-\rho_n - \lambda). \qquad (7.6)
\]
The eigenvalues are −ρ_1, −ρ_2, ..., −ρ_n, which are all negative. Therefore, the zero state is a stable equilibrium point.
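The diagonal structure of the Jacobian at the origin can also be seen numerically. The sketch below (illustrative Python, not the thesis' Scilab code; n = 3 with parameter values of our own choosing, c_i = c_ij = 2 and γ_ij = 1) builds the Jacobian at the origin by finite differences and recovers diag(−ρ_1, ..., −ρ_n):

```python
# Finite-difference check of Theorem 7.4 (illustrative parameters): with
# c_i = c_ij = 2 > 1 and g_i = 0, the Jacobian of the model at the origin is
# diag(-rho_1, ..., -rho_n), so every eigenvalue is negative.

n = 3
beta = [2.0, 3.0, 4.0]
K = [1.0, 1.0, 1.0]
rho = [0.5, 1.0, 1.5]
c = 2  # c_i = c_ij = 2 for simplicity

def F(x):
    out = []
    for i in range(n):
        denom = K[i] + sum(x[j]**c for j in range(n))  # gamma_ij = 1
        out.append(beta[i] * x[i]**c / denom - rho[i] * x[i])
    return out

h = 1e-7
origin = [0.0] * n
J = [[(F([h if k == l else 0.0 for k in range(n)])[i] - F(origin)[i]) / h
      for l in range(n)] for i in range(n)]

# Diagonal entries approximate -rho_i; off-diagonal entries vanish at the origin.
print([[round(v, 4) for v in row] for row in J])
# → [[-0.5, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.5]]
```

Since the numerator of H_i carries [X_i]^{c_i} with c_i > 1, both it and its first derivative vanish at the origin, leaving only the decay terms on the diagonal.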
We can vary the size of the basin of attraction of the stable zero i-th component (or of any lower-valued stable component) of an equilibrium point by varying the value of β_i or K_i, or sometimes by varying the value of ρ_i. Consider Figure (7.2) for an illustration. In Figure (7.2), the original basin of attraction of the origin is [0, +∞), but increasing the value of β_i shrinks the basin of attraction to [0, C). Decreasing the value of K_i shrinks the basin of attraction of the origin to [0, A), and decreasing the value of ρ_i shrinks it to [0, B).
Figure 7.2: Varying the values of parameters may vary the size of the basin of attraction of the lower-valued stable intersection of Y = H_i([X_i]) + g_i and Y = ρ_i[X_i].
In addition, the size of the basin of attraction of an equilibrium point depends on the number of existing equilibrium points and on the size of the hyperspace (6.1). Given specific parameter values, the hyperspace (6.1) is fixed, and the basin of attraction of each existing equilibrium point is distributed in this hyperspace. If there are multiple stable equilibrium points, then multiple basins of attraction share the size of the hyperspace.
Now, we propose two additional methods for determining the stability of equilibrium points, other than the usual numerical methods for solving ODEs and other than using the Jacobian: a multivariate fixed point algorithm and ad hoc geometric analysis. We discuss the multivariate fixed point algorithm in Appendix B. We prove the following theorems using ad hoc geometric analysis.
Theorem 7.5 Suppose c_i > 1. Then [X_i]^* = 0 (where [X_i]^* is the i-th component of an equilibrium point) is always a stable component.

Proof. Recall from Theorem (6.2) that our system has an equilibrium point with i-th component equal to zero if and only if g_i = 0. The only possible topologies of the intersections of Y = H_i([X_i]) and Y = ρ_i[X_i] are shown in Figure (7.3). Notice that a zero i-th component is always stable.
Figure 7.3: The possible numbers of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c_i > 1 and g_i = 0. The value of K_i + Σ_{j=1, j≠i}^n γ_ij[X_j]^{c_ij} is taken as a parameter.
Theorem (7.5) is very important because it proves that once the pluripotency module (where g_4 = 0; see the discussion in Chapter 5, Section 5.2) is switched off, it can never be switched on again, unless we make g_4 > 0 or introduce some random noise. This is consistent with the observation of MacArthur et al. in [113].
Theorem 7.6 Suppose c_i ≥ 1 and g_i = 0 for all i. The only stable equilibrium point of our ODE model (5.1) is the origin if and only if the univariate curve Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i] (i.e., H_i([X_i]) ≤ ρ_i[X_i] for all [X_i] > 0) for all i.
Proof. Suppose the curve Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i]. Then the intersections can take any of the forms shown in Figure (7.4). It is clear that zero is the only stable intersection.
Figure 7.4: The possible topologies when Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i], g_i = 0.
Conversely, suppose the only stable equilibrium point is the origin. Hence [X_j], j ≠ i, must converge to zero. We substitute [X_j]^* = 0, j ≠ i, into H_i([X_1], [X_2], ..., [X_n]) (5.2). The intersections of Y = H_i(0, ..., [X_i], ..., 0) = H_i([X_i]) and Y = ρ_i[X_i] must contain the origin (since we assumed that the origin is an equilibrium point). Looking at the possible topologies of the intersections of Y = H_i([X_i]) and Y = ρ_i[X_i] (see Figures (5.11) and (5.13)), zero can only be the sole stable intersection if the intersections take one of the forms shown in Figure (7.4). Therefore, the curve Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i].
Theorem 7.7 Suppose c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β > ρK for all i, j. Then the origin is an unstable equilibrium point of the system (5.1), while the points lying on the hyperplane

\[
\sum_{j=1}^{n} [X_j] = \frac{\beta}{\rho} - K \qquad (7.7)
\]

are stable equilibrium points.
Proof. From Corollary (6.8), the origin and the points lying on the hyperplane are equilibrium points of the system (5.1) given the assumed parameter values.

Suppose Σ_{j=1, j≠i}^n [X_j] = 0 in the denominator of H_i (5.2). At [X_i] = 0, the slope of the Hill curve Y = H_i([X_i]) is

\[
\frac{\partial H_i}{\partial [X_i]} = \frac{\beta}{K}. \qquad (7.8)
\]

Since β > ρK, we have β/K > ρ. This implies that the slope of Y = H_i([X_i]) at [X_i] = 0 is greater than the slope of the decay line Y = ρ[X_i]. Therefore, when Σ_{j=1, j≠i}^n [X_j] = 0 in the denominator of H_i (5.2), there are two intersections of Y = H_i([X_i]) and Y = ρ[X_i]: at the origin (which is unstable) and at [X_i] = β/ρ − K (which is stable).
Now, suppose Σ_{j=1, j≠i}^n [X_j] in the denominator of H_i varies. Then the intersections of Y = H_i([X_i]) and Y = ρ[X_i] are at the origin (which is unstable) and at [X_i] = β/ρ − K − Σ_{j=1, j≠i}^n [X_j] (which is stable). Hence, the hyperplane [X_i] = β/ρ − K − Σ_{j=1, j≠i}^n [X_j], where [X_i] and [X_j] are nonnegative, is a set of stable equilibrium points. See Figure (7.5) for an illustration.
Figure 7.5: The origin is unstable, while the points where [X_i]^* = β/ρ − K − Σ_{j=1, j≠i}^n [X_j]^* are stable.
In GRNs, the existence of infinitely many non-isolated equilibrium points is biologically volatile. A small perturbation in the initial value of the system may lead its trajectory to converge to a different equilibrium point, which may result in a change in the phenotype of the cell. The basin of attraction of each stable non-isolated equilibrium point may be small compared with the basin of attraction of a stable isolated equilibrium point. This special phenomenon represents competition, where the co-expression, extinction and domination of the TFs depend on the value of each TF, and the dependence among TFs is a continuum. The existence of an attracting hyperplane was also discovered by Cinquin and Demongeot in [38].
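The attracting hyperplane of Theorem (7.7) can be observed in a direct simulation. The following minimal sketch (illustrative Python with explicit Euler stepping, not the thesis' Scilab code) uses β = 5 and K = ρ = γ_ij = 1 with n = 4, so the hyperplane is Σ[X_j] = β/ρ − K = 4; an asymmetric trajectory is attracted to it:

```python
# Numerical illustration of Theorem 7.7: with beta = 5, K = rho = gamma = 1,
# c_i = c_ij = 1 and g_i = 0, trajectories converge to the hyperplane
# [X_1] + ... + [X_n] = beta/rho - K = 4.

n, beta, K, rho = 4, 5.0, 1.0, 1.0

def step(x, dt):
    denom = K + sum(x)                       # c_i = c_ij = 1, gamma_ij = 1
    return [xi + dt * (beta * xi / denom - rho * xi) for xi in x]

x = [4.0, 2.0, 1.0, 0.5]                     # asymmetric initial condition
for _ in range(20000):                       # explicit Euler, dt = 0.01
    x = step(x, 0.01)

print(round(sum(x), 4))  # → 4.0 : the trajectory lands on the hyperplane
```

Which point of the hyperplane is reached depends on the initial condition, which is exactly the sensitivity to perturbations described above.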
7.2 Bifurcation of parameters
We have seen in Chapter 6 and in Section 7.1 (also see Appendix C) the effect of the parameters β_i, K_i, ρ_i, c_i, c_ij and g_i on the number of equilibrium points, the size of their basins of attraction, and their behavior. Varying the values of some parameters can decrease the size of the basin of attraction of an undesirable equilibrium point as well as increase the size of the basin of attraction of a desirable equilibrium point. We can mathematically manipulate the parameter values to ensure that the initial condition is in the basin of attraction of our desired equilibrium point.
Intuitively, we can make the i-th component of an equilibrium point dominate the other components by increasing β_i or g_i or, in some instances, by decreasing ρ_i. Decreasing the value of K_i, or sometimes increasing the value of c_i, minimizes the size of the basin of attraction of the lower-valued stable intersection of Y = H_i([X_i]) + g_i and Y = ρ_i[X_i]; thus, the chance of converging to an equilibrium point with [X_i]^* > [X_j]^*, j ≠ i, may increase. However, the effect of K_i and c_i in increasing the value of [X_i]^* is not as drastic compared to β_i, g_i and ρ_i, since K_i and c_i do not affect the upper bound of the hyperspace (6.1). In addition, increasing the value of c_i or of c_ij may result in an increased number of equilibrium points, and possibly in multistability (by Theorem (6.6)).

We show in Appendix C some numerical bifurcation analysis to illustrate possible bifurcation types that may occur.
In this section, we determine how to obtain an equilibrium point whose i-th component sufficiently dominates the other components, especially by introducing an exogenous stimulus. We focus on the parameter g_i because introducing an exogenous stimulus is experimentally feasible.
Increasing the effect of exogenous stimuli
If we increase the value of g_i to a sufficient level, then we can increase the value of [X_i] at which Y = H_i([X_i]) + g_i and Y = ρ_i[X_i] intersect. We can also make this increased [X_i] the only intersection. See Figure (7.6) for an illustration.

Moreover, as we increase the value of g_i to a sufficient level, we increase the possible value of [X_i]^*. Since [X_i] inhibits [X_j], then as we increase the value of [X_i]^*, we can decrease the value of [X_j], j ≠ i, at which Y = H_j([X_j]) + g_j and Y = ρ_j[X_j] intersect. We can also make this decreased [X_j] the only intersection. If g_j = 0, we can make [X_j] = 0 the only intersection of Y = H_j([X_j]) and Y = ρ_j[X_j]. See Figure (7.7) for an illustration.
Figure 7.6: Increasing the value of g_i can result in an increased value of [X_i] where Y = H_i([X_i]) + g_i and Y = ρ_i[X_i] intersect.
Therefore, by changing the value of g_i we can obtain a sole stable equilibrium point where the i-th component dominates the others. For any initial condition, the trajectory of the ODE model (5.1) will converge to this sole equilibrium point. By varying the value of g_i, we can manipulate the cell fate of a stem cell, controlling the tripotent, bipotent, unipotent and terminal states of the cell. We present illustrations in Appendix C showing the effect of increasing the value of g_i.
Remark: Suppose, given a specific initial condition, the solution to our system tends to an equilibrium point with [X_i]^* = 0. If we want our solution to escape [X_i]^* = 0, then one strategy is to add g_i > 0. The idea of adding a sufficient amount of g_i > 0 is to make the solution of our system escape a certain equilibrium point. However, it is sometimes impractical or infeasible to continuously introduce a constant amount of exogenous stimulus to control cell fates, which is why we may instead consider introducing g_i that degrades through time.

We can make g_i a function of time (i.e., g_i varies through time). This strategy means that we are adding another equation and state variable to our system of ODEs.
Figure 7.7: Increasing the value of g_i can result in an increased value of [X_i]^*, and consequently in a decreased value of [X_j] where Y = H_j([X_j]) + g_j and Y = ρ_j[X_j] intersect, j ≠ i.
We can think of g_i as an additional node in our GRN, which we call the injection node. In our case, we consider functions that represent a degrading amount of g_i. Refer to Appendix C for an illustration.

Adding a degrading amount of g_i affects cell fate, but this strategy may not give rise to a sole equilibrium point. Moreover, this strategy is only applicable to systems with multiple stable equilibrium points where the convergence of trajectories is sensitive to initial conditions.
Chapter 8
Results and Discussion
Introduction of Stochastic Noise
We numerically investigate the effect of random noise on the cell differentiation system using Stochastic Differential Equations (SDEs). In [38], Cinquin and Demongeot suggested extending their model to include stochastic kinetics.

We have written a Scilab [150] program (see Algorithm (5)-(6) in Appendix D) to simulate the effect of stochastic noise on the dynamics of our GRN. We employ several functions G (see Section 3.3 in Chapter 3) to observe the various effects of the added Gaussian white noise term. The different functions G are:
\[
G(X) = 1, \qquad (8.1)
\]
\[
G(X) = X, \qquad (8.2)
\]
\[
G(X) = \sqrt{X}, \qquad (8.3)
\]
\[
G(X) = F(X), \text{ and} \qquad (8.4)
\]
\[
G(X) = \sqrt{H(X) + \check{g} + \check{\rho}X}. \qquad (8.5)
\]
In function (8.1), the noise term does not depend on any variable. This function is used by MacArthur et al. in [113].

The noise term with (8.2) or (8.3) is affected by the value of X: as the concentration X increases/decreases, the effect of the noise term also increases/decreases. Function (8.2) is used by Glauche et al. in [71]. However, using (8.2) or (8.3) has an undesirable biological implication: as [X_i] dominates the concentrations of the other TFs, the effect of random noise on [X_i] intensifies.
In (8.4), the noise term is scaled by F(X), the right-hand side of our corresponding ODE model; that is, as the deterministic change in the concentration X with respect to time (dX/dt = F(X)) increases/decreases, the effect of the noise term also increases/decreases. Using (8.4), we expect a decreasing amount of noise through time because our ODE system (5.1) always converges to an equilibrium point. In other words, as F(X) → 0 (since F(X^*) = 0), the effect of the noise also vanishes.

Function (8.5) is based on the random population growth model [5, 91]. Here, ǧ and ρ̌ are the matrices containing the parameters g_i and ρ_i (i = 1, 2, ..., n), respectively.
In Algorithm (5)-(6) (see Appendix D), we use the Euler method to numerically solve the system of ODEs, and the Euler-Maruyama method to numerically solve the corresponding system of SDEs. The output of the algorithm is a time series of the solutions. The solution of the ODE model is visualized as a thick solid line, while a realization of the SDE model is visualized as a thin solid line. In the Euler-Maruyama method, whenever [X_i] < 0 we set [X_i] = 0.
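The Euler-Maruyama scheme with the nonnegativity clipping described above can be sketched as follows. This is a minimal illustrative version in Python (the thesis' actual implementation is the Scilab code of Algorithm (5)-(6)); the one-dimensional drift and parameter values below are our own choices, with additive noise as in (8.1):

```python
import math, random

# Minimal Euler-Maruyama sketch for one component:
# d[X] = F([X]) dt + sigma * G([X]) dW, with negative values clipped to zero.
# Illustrative drift and parameters, not the thesis' Scilab program.

def euler_maruyama(F, G, x0, sigma, dt, steps, rng):
    xs = [x0]
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))          # Wiener increment ~ N(0, dt)
        x = xs[-1] + F(xs[-1]) * dt + sigma * G(xs[-1]) * dW
        xs.append(max(x, 0.0))                      # clip [X] < 0 to [X] = 0
    return xs

F = lambda x: 5.0 * x / (1.0 + x) - x               # n = 1 Hill-type drift, beta = 5
G = lambda x: 1.0                                   # additive noise, as in (8.1)

path = euler_maruyama(F, G, x0=4.0, sigma=0.5, dt=0.01, steps=1000,
                      rng=random.Random(0))
print(round(path[-1], 2))
```

With σ = 0 the scheme reduces to the plain Euler method used for the ODE model, so the deterministic trajectory is the σ → 0 limit of the realizations.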
The SDE models that we have used in this thesis are not exhaustive. We can consider other types of SDE models, such as

\[
dX = F(X)\,dt + \sigma_A \sqrt{H(X)}\,dW_A - \sigma_B \sqrt{\check{\rho}X}\,dW_B + \sigma_C \sqrt{\check{g}}\,dW_C. \qquad (8.6)
\]
In the following examples, we suppose n = 4 and σ_ii = 0.5. Let the simulation step size be 0.01.
Illustration 1
Suppose our generalized Cinquin-Demongeot ODE model (5.1) has all parameters equal to 1 except for β_i = 5 and g_i = 0 for all i. This system has infinitely many non-isolated stable equilibrium points (see Theorem (7.7)).
The corresponding system of SDEs is as follows:

\[
d[X_1] = \left( \frac{5[X_1]}{1 + [X_1] + [X_2] + [X_3] + [X_4]} - [X_1] \right) dt + \sigma_{11} G_1([X_1])\,dW_1
\]
\[
d[X_2] = \left( \frac{5[X_2]}{1 + [X_1] + [X_2] + [X_3] + [X_4]} - [X_2] \right) dt + \sigma_{22} G_2([X_2])\,dW_2 \qquad (8.7)
\]
\[
d[X_3] = \left( \frac{5[X_3]}{1 + [X_1] + [X_2] + [X_3] + [X_4]} - [X_3] \right) dt + \sigma_{33} G_3([X_3])\,dW_3
\]
\[
d[X_4] = \left( \frac{5[X_4]}{1 + [X_1] + [X_2] + [X_3] + [X_4]} - [X_4] \right) dt + \sigma_{44} G_4([X_4])\,dW_4.
\]
We assume G_1 = G_2 = G_3 = G_4 = G. Suppose the initial condition is [X_i]_0 = 4 for all i. Figures (8.1) to (8.5) show different realizations of the corresponding SDE model.
In the deterministic case, we expect the solutions for [X_1], [X_2], [X_3] and [X_4] to converge to an equilibrium point with equal components, because our system is symmetric and [X_1]_0 = [X_2]_0 = [X_3]_0 = [X_4]_0. However, because of the presence of noise, some TFs appear to dominate the others. It is possible that the solution to the SDE approaches a different equilibrium point. This biological volatility is due to the presence of infinitely many stable equilibrium points.
Figure 8.1: For Illustration 1; ODE solution and SDE realization with G(X) = 1.

Figure 8.2: For Illustration 1; ODE solution and SDE realization with G(X) = X.

Figure 8.3: For Illustration 1; ODE solution and SDE realization with G(X) = √X.

Figure 8.4: For Illustration 1; ODE solution and SDE realization with G(X) = F(X).

Figure 8.5: For Illustration 1; ODE solution and SDE realization using the random population growth model.
Illustration 2
Suppose our generalized Cinquin-Demongeot ODE model (5.1) has all parameters equal to 1 except for c_i = c_ij = 2 (for all i, j), g_1 = 5, g_2 = 3, g_3 = 1 and g_4 = 0. This system has a sole equilibrium point, which is ([X_1]^* ≈ 5.72411, [X_2]^* ≈ 3.23066, [X_3]^* ≈ 1.02313, [X_4]^* = 0).
The corresponding system of SDEs is as follows:

\[
d[X_1] = \left( \frac{[X_1]^2}{1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2} - [X_1] + 5 \right) dt + \sigma_{11} G_1([X_1])\,dW_1
\]
\[
d[X_2] = \left( \frac{[X_2]^2}{1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2} - [X_2] + 3 \right) dt + \sigma_{22} G_2([X_2])\,dW_2 \qquad (8.8)
\]
\[
d[X_3] = \left( \frac{[X_3]^2}{1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2} - [X_3] + 1 \right) dt + \sigma_{33} G_3([X_3])\,dW_3
\]
\[
d[X_4] = \left( \frac{[X_4]^2}{1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2} - [X_4] \right) dt + \sigma_{44} G_4([X_4])\,dW_4.
\]
We assume G_1 = G_2 = G_3 = G_4 = G. Figures (8.6) to (8.10) show different realizations of the corresponding SDE model with initial condition [X_i]_0 = 3 for all i.
In the deterministic case, we expect the solution to converge to the sole equilibrium point. From our simulation, it appears that our system is robust against the presence of moderate noise. The realization of the SDE model closely follows the deterministic trajectory. We expect this because, for any initial condition, the solution to our system tends toward only one attractor.

Recall that one possible strategy for controlling our system to have only one stable equilibrium point is to introduce an adequate amount of exogenous stimuli (see Chapter 7, Section 7.2). To ensure that cells will not change lineages, we need to make the desired i-th lineage have a corresponding [X_i]^* that sufficiently dominates the concentrations of the other TFs.
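The stated sole equilibrium point can be reproduced by integrating the deterministic part of (8.8), i.e., the σ = 0 case. A minimal Euler sketch (illustrative Python, not the thesis' Scilab code) starting from [X_i]_0 = 3:

```python
# Deterministic check for Illustration 2 (sigma = 0): Euler integration of the
# drift of (8.8) from [X_i]_0 = 3 should approach the stated sole equilibrium
# point, approximately (5.72411, 3.23066, 1.02313, 0).

g = [5.0, 3.0, 1.0, 0.0]

def step(x, dt):
    denom = 1.0 + sum(xi**2 for xi in x)   # K_i = gamma_ij = 1, c_i = c_ij = 2
    return [xi + dt * (xi**2 / denom - xi + gi) for xi, gi in zip(x, g)]

x = [3.0, 3.0, 3.0, 3.0]
for _ in range(10000):                     # dt = 0.01, as in the simulations
    x = step(x, 0.01)

print([round(v, 3) for v in x])  # → approximately [5.724, 3.231, 1.023, 0.0]
```

Since g_4 = 0 and c_4 = 2 > 1, the fourth component decays to zero, in line with Theorem (7.5).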
Figure 8.6: For Illustration 2; ODE solution and SDE realization with G(X) = 1.

Figure 8.7: For Illustration 2; ODE solution and SDE realization with G(X) = X.

Figure 8.8: For Illustration 2; ODE solution and SDE realization with G(X) = √X.

Figure 8.9: For Illustration 2; ODE solution and SDE realization with G(X) = F(X).

Figure 8.10: For Illustration 2; ODE solution and SDE realization using the random population growth model.
Illustration 3
Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters c_i = c_ij = 2, β_i = 1, K_i = 1, γ_ij = 1/8, ρ_i = 1/21 and g_i = 0 for all i, j. This system has multiple stable equilibrium points (see Figure (8.16)).
The corresponding system of SDEs is as follows:

\[
d[X_1] = \left( \frac{[X_1]^2}{1 + [X_1]^2 + \frac{1}{8}[X_2]^2 + \frac{1}{8}[X_3]^2 + \frac{1}{8}[X_4]^2} - \frac{1}{21}[X_1] \right) dt + \sigma_{11} G_1([X_1])\,dW_1
\]
\[
d[X_2] = \left( \frac{[X_2]^2}{1 + \frac{1}{8}[X_1]^2 + [X_2]^2 + \frac{1}{8}[X_3]^2 + \frac{1}{8}[X_4]^2} - \frac{1}{21}[X_2] \right) dt + \sigma_{22} G_2([X_2])\,dW_2 \qquad (8.9)
\]
\[
d[X_3] = \left( \frac{[X_3]^2}{1 + \frac{1}{8}[X_1]^2 + \frac{1}{8}[X_2]^2 + [X_3]^2 + \frac{1}{8}[X_4]^2} - \frac{1}{21}[X_3] \right) dt + \sigma_{33} G_3([X_3])\,dW_3
\]
\[
d[X_4] = \left( \frac{[X_4]^2}{1 + \frac{1}{8}[X_1]^2 + \frac{1}{8}[X_2]^2 + \frac{1}{8}[X_3]^2 + [X_4]^2} - \frac{1}{21}[X_4] \right) dt + \sigma_{44} G_4([X_4])\,dW_4.
\]
We assume G_1 = G_2 = G_3 = G_4 = G. Figures (8.11) to (8.15) show different realizations of the corresponding SDE model with initial condition [X_i]_0 = 2 for all i.
In the deterministic case, we expect the solutions for [X_1], [X_2], [X_3] and [X_4] to converge to an equilibrium point with equal components, because our system is symmetric and [X_1]_0 = [X_2]_0 = [X_3]_0 = [X_4]_0. For a system with regulated noise, the solution to the SDE may follow the behavior of the trajectory of the ODE. However, it is also possible that [X_1]^*, [X_2]^*, [X_3]^* and [X_4]^* tend toward different values, especially when the effect of the noise becomes significant. A sufficient amount of random noise may cause cells to shift cell types, especially when the initial condition is near the boundary of the basins of attraction of the equilibrium points. Figure (8.16) shows the possible values of [X_1]^* and [X_2]^* given a varying initial condition [X_1]_0. To regulate the effect of noise, we can change the values of some parameters, such as the degradation rate and the effect of the exogenous stimulus, to decrease the size of the basin of attraction of an undesirable equilibrium point. Multistability in the presence of random noise represents stochastic differentiation of cells.
Furthermore, the presence of noise may induce abnormal fluctuations in the concentrations of the TFs, specifically when using functions (8.2) and (8.3), as shown in Figure (8.12).
Figure 8.11: For Illustration 3; ODE solution and SDE realization with G(X) = 1.

Figure 8.12: For Illustration 3; ODE solution and SDE realization with G(X) = X.

Figure 8.13: For Illustration 3; ODE solution and SDE realization with G(X) = √X.

Figure 8.14: For Illustration 3; ODE solution and SDE realization with G(X) = F(X).

Figure 8.15: For Illustration 3; ODE solution and SDE realization using the random population growth model.

Figure 8.16: Phase portrait of [X_1] and [X_2].
Illustration 4
Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters c_i = c_ij = 2, β_i = 1, K_i = 1, γ_ij = 1, ρ_i = 1 and g_i = 0 for all i, j. Suppose the initial condition is [X_i]_0 = 0 for all i, which means that all TFs are switched off. Figure (8.17) shows that in the presence of noise, the TFs can be reactivated. However, inactive TFs cannot be activated using any of the functions (8.2), (8.3), (8.4) or (8.5).
Figure 8.17: Reactivating switched-off TFs by introducing random noise, where G(X) = 1.
Chapter 9
Summary and Recommendations
We simplify the gene regulatory network (GRN) model of MacArthur et al. [113] to
study the mesenchymal cell differentiation system. The simplified MacArthur GRN is
given in the following figure:
Figure 9.1: The simplified MacArthur et al. GRN
We translate the simplified network model into a system of Ordinary Differential Equations (ODEs) using a generalized Cinquin-Demongeot model [38]. We generalize the Cinquin-Demongeot ODE model as:

\[
\frac{d[X_i]}{dt} = \frac{\beta_i [X_i]^{c_i}}{K_i + [X_i]^{c_i} + \sum_{j=1,\,j\neq i}^{n} \gamma_{ij}[X_j]^{c_{ij}}} + g_i - \rho_i [X_i]
\]

where i = 1, 2, ..., n.
The state variables of the ODE model represent the concentrations of the transcription factors (TFs) involved in gene expression. For our simplified network, [X_1] := [RUNX2], [X_2] := [SOX9], [X_3] := [PPAR-γ] and [X_4] := [sTF]. Some of our results are applicable not only to n = 4 but to any dimension.

An asymptotically stable equilibrium point is associated with a certain cell type, e.g., a tripotent, bipotent, unipotent or terminal state. If [X_i] sufficiently dominates the concentrations of the other TFs, then the chosen lineage is toward the i-th cell type.
For an ODE model to be useful, it is necessary that it has a solution. It is difficult to predict the behavior of our system if the solution is not unique. We are able to prove that there exists a unique solution to our model for some values of c_i and c_ij. The exponents c_i and c_ij represent cooperativity among binding sites.
We propose two additional methods for determining the behavior of equilibrium points, other than the usual numerical methods for solving ODEs and other than using the Jacobian: (1) ad hoc geometric analysis; and (2) a multivariate fixed point algorithm.
The geometry of the Hill function H_i is essential in understanding the behavior of the equilibrium points of our ODE system. From the geometric analysis, we are able to prove that our state variables [X_i] will never be negative (i.e., R^n_{≥0} is positively invariant with respect to the flow of our ODE model) and that our ODE model always has a stable equilibrium point. Any trajectory of the model will converge to a stable equilibrium point.
A stable equilibrium point (0, 0, . . . , 0) is trivial because this state neither represents
a pluripotent cell nor a cell differentiating into bone, cartilage or fat. In our case, the cell
may differentiate to other cell types which are not in the domain of our GRN. The zero
state may also represent a cell that is in quiescent stage. We are able to prove theorems
associated to the existence of a stable zero state, such as:

• Our system has an equilibrium point with i-th component equal to zero if and only if g_i = 0. Moreover, if ρ_i > 0 and c_i > 1, then this zero i-th component is always stable.

• The zero state (0, 0, . . . , 0) is an equilibrium point if and only if g_i = 0 for all i. If ρ_i > 0 and c_i > 1 for all i, then the zero state is stable.

• Suppose g_i = 0 for all i. The only stable equilibrium point of our ODE model is the origin if and only if the univariate Hill curve Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i [X_i] for all i.
If converging to the zero state is undesirable, we can decrease the size of the basin of attraction of the zero state by sufficiently increasing the value of β_i, or by sufficiently decreasing the values of K_i and ρ_i. In addition, we can add g_i > 0 to escape a stable zero state.
In the case where c_i > 1 and g_i = 0, if the TF associated to [X_i] is switched off, then it can never be switched on again, because the zero i-th component of an equilibrium point is stable. Two possible strategies for escaping an inactive state are to increase the value of g_i or to introduce some random noise.
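The switch-off trap and the stimulus escape route can be seen in a one-species caricature, dx/dt = x^c/(1 + x^c) + g_1 − x (a hypothetical reduction with β = K = ρ = 1 and the cross-inhibition terms dropped; not the full network):

```python
def simulate(g1, c=2, dt=0.005, steps=40000, x0=0.0):
    """One-species reduction dx/dt = x^c/(1+x^c) + g1 - x (beta = K = rho = 1).
    With c > 1 and g1 = 0, the switched-off state x = 0 is a stable
    equilibrium; a constant stimulus g1 > 0 removes it."""
    x = x0
    for _ in range(steps):
        x += dt * (x**c / (1.0 + x**c) + g1 - x)
    return x
```

With g1 = 0 the off state absorbs nearby initial conditions; with g1 = 0.5 the node reactivates and settles at x = 1 (where x²/(1 + x²) + 0.5 − x vanishes).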
The following theorems give us ideas regarding the location and number of equilibrium points ([X_1]*, [X_2]*, . . . , [X_n]*):
• Suppose ρ_i > 0. The equilibrium points of our system lie in the hyperspace [g_1/ρ_1, (g_1 + β_1)/ρ_1] × [g_2/ρ_2, (g_2 + β_2)/ρ_2] × . . . × [g_n/ρ_n, (g_n + β_n)/ρ_n].
• The generalized Cinquin-Demongeot ODE model (where c_i and c_ij are integers) has a finite number of equilibrium points, except when all of the following conditions are satisfied: c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β > ρK, for all i and j.
• If the generalized Cinquin-Demongeot ODE model (where c_i and c_ij are integers) has a finite number of equilibrium points, then the possible number of equilibrium points is at most max{c_1 + 1, c_1j + 1 ∀j} × max{c_2 + 1, c_2j + 1 ∀j} × . . . × max{c_n + 1, c_nj + 1 ∀j}.
We are able to find one case where there are infinitely many stable non-isolated equilibrium points. This happens in a symmetric Michaelis-Menten-type system. In GRNs, the existence of infinitely many non-isolated equilibrium points is biologically volatile. A small perturbation in the initial value may lead the trajectory of the system to converge to a different equilibrium point, which may result in a change in the phenotype of the cell. This special phenomenon represents a competition where the co-expression, extinction and domination of the TFs continuously depend on the value of each TF.
If g_n = 0 then the n-dimensional system is more general than the (n−1)-dimensional system. That is, we can derive the equilibrium points of the (n−1)-dimensional system by getting the equilibrium points of the n-dimensional system where [X_n]* = 0. It is clear that when [X_n]* = 0 and g_n = 0, the n-dimensional system reduces to an (n−1)-dimensional system.
Furthermore, we are able to prove an additional theorem related to the behavior of the solution of our ODE model. The theorem states that if ρ_i > 0 for all i, then our system never converges to a center, to an ω-limit cycle or to a strange attractor. The existence of a center, ω-limit cycle or strange attractor that would result in recurring changes in phenotype is abnormal for a natural fully differentiated cell.
The parameters β_i, K_i, ρ_i, γ_ij, c_i, c_ij and g_i affect the number of equilibrium points, the size of their basins of attraction, and their behavior. We can make the i-th component of an equilibrium point dominate the other components by increasing β_i, by increasing g_i or sometimes by decreasing ρ_i. Decreasing the value of K_i or increasing the value of c_i may also increase the chance of having an [X_i]* that dominates the steady-state concentration of the other TFs. However, the effect of K_i and c_i in increasing the value of [X_i]* is not as drastic compared to that of β_i, g_i and ρ_i, since K_i and c_i do not affect the upper bound of [X_i]*. In some instances, varying γ_ij may also induce the system to have a steady state where some TFs dominate the other TFs. Furthermore, increasing the value of c_i or of c_ij may result in multistability.
We focus on manipulating the effect of the exogenous stimulus because this is experimentally feasible. It is possible that changing the value of g_i results in a sole stable equilibrium point where the i-th component dominates the others. That is, we can steer our system towards a desired state given any initial condition by just introducing an adequate amount of exogenous stimulus. However, this strategy is not applicable to the pluripotency module (sTF node) when g_4 = 0. If g_4 = 0, the only possible strategy for a switched-off sTF node to be reactivated is to introduce random noise.
We also consider the case where g_i changes through time. It is sometimes impractical or infeasible to continuously introduce a constant amount of exogenous stimulus to control cell fates, which is why we consider an amount of g_i that degrades through time. We consider two types of functions to represent g_i: a linear function with negative slope, and an exponential function with negative exponent. The idea of initially adding a sufficient amount of g_i > 0 is to make the solution of our system escape a certain equilibrium point. However, this strategy is only applicable to systems with multiple stable equilibrium points where the convergence of the trajectories is sensitive to initial conditions.
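A sketch of this strategy on a bistable one-species slice (β = 3, c = 2, K = ρ = 1; this Hill curve yields equilibria near 0, 0.38197 and 2.618, the same values that appear in Illustration A.2.4). A slowly decaying stimulus carries the state across the unstable threshold before it vanishes; a rapidly decaying one does not.

```python
import math

def transient_stimulus(g0, lam, dt=0.005, steps=60000, x0=0.0):
    """dx/dt = 3x^2/(1+x^2) + g(t) - x with exponentially decaying stimulus
    g(t) = g0 * exp(-lam * t). Stable states sit near 0 and 2.618, separated
    by an unstable threshold near 0.382 (illustrative one-species caricature,
    not the full four-dimensional model)."""
    x, t = x0, 0.0
    for _ in range(steps):
        x += dt * (3.0 * x**2 / (1.0 + x**2) + g0 * math.exp(-lam * t) - x)
        t += dt
    return x
```

With g0 = 1, a decay rate lam = 0.01 lets the trajectory escape to the high state, while lam = 20 delivers too little total stimulus and the trajectory falls back to the off state.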
Multistability in the presence of random noise represents stochastic differentiation of cells. Under random noise, it is possible that the solutions tend towards different attractors. Random noise may cause cells to shift cell types, especially when
different attractors. Random noise may cause cells to shift cell types, especially when
equilibrium points are near each other, or when the initial condition is near the boundary
of the basins of attraction of the equilibrium points. However, we can increase the
robustness of our system against the effect of moderate random noise by increasing the
size of the basin of attraction of the desired equilibrium point, or by having only one
stable equilibrium point.
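An Euler-Maruyama sketch of noise-driven switching on the same bistable one-species caricature (dx = (3x²/(1 + x²) − x) dt + σ dW; illustrative, not an SDE model fitted in the thesis): weak noise essentially never crosses the threshold near 0.382, while strong noise frequently pushes realizations into the basin of the high state near 2.618.

```python
import numpy as np

def switch_fraction(sigma, runs=200, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama on dx = (3x^2/(1+x^2) - x) dt + sigma dW, all runs
    started at the off state x = 0. Returns the fraction of realizations
    ending in the basin of the high state (final x > 1.5)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(runs)
    for _ in range(steps):
        drift = 3.0 * x**2 / (1.0 + x**2) - x
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(runs)
    return float(np.mean(x > 1.5))
```

Comparing a small and a large noise intensity shows the basin-escape effect directly: the switching fraction grows with σ.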
Suppose we only have one stable equilibrium point. Since, for any initial condition, the solution to our ODE system tends toward only one attractor, we can expect the realization of the corresponding SDE model to (approximately) follow the deterministic trajectory. One possible strategy to ensure that our system has only one stable equilibrium point is to introduce an adequate amount of exogenous stimulus.
For validation, we recommend comparing our results with other models of GRN dy-
namics and with existing information gathered in actual experiments. We can extend the
results of this thesis by considering other kinds of GRNs, possibly with more cell lineages
involved. We can also include cell division and intercellular interactions.
Appendix A
More on Equilibrium Points: Illustrations
In our numerical computations, "difficult and computationally expensive" means the problem is not efficiently solvable using Scientific Workplace [116] run on a laptop with an Intel Pentium P6200 2.13 GHz processor and 2 GB RAM. In determining the values of equilibrium points, we need to check whether the derived solutions are really solutions of the system, because we may have encountered approximation errors during our numerical computations.
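This residual check can be automated. The helper below accepts a numerically derived candidate only when every polynomial residual is small; the example polynomials are system (A.1) with all parameters equal to 1 and g_1 = g_2 = 1 (an illustrative instance), whose exact equilibrium component is (1 + √3)/2.

```python
def is_equilibrium(candidate, polys, tol=1e-6):
    """Guard against approximation error: accept a numerically derived
    candidate only if every polynomial residual vanishes to within tol."""
    return all(abs(P(*candidate)) < tol for P in polys)

# System (A.1) with beta = K = gamma = rho = 1 and g1 = g2 = 1:
P1 = lambda x1, x2: -x1**2 + 2*x1 - (1 + x2) * x1 + x2 + 1
P2 = lambda x1, x2: -x2**2 + 2*x2 - (1 + x1) * x2 + x1 + 1
```

A true equilibrium passes the check while a nearby non-solution fails it.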
Solving the system F_i(X) = 0, i = 1, 2, . . . , n can be interpreted as finding the intersections of the (n + 1)-dimensional curves induced by each F_i(X) and the (n + 1)-dimensional zero-hyperplane. Figure (A.1) shows an illustration for n = 2.
Figure A.1: Intersections of F_1, F_2 and the zero-plane, an example.
We give some cases where we use the Sylvester matrix in finding equilibrium points.
A.1 Assume n = 2, c_i = 1, c_ij = 1

We determine the equilibrium points when c_i = 1 and c_ij = 1 for all i and j in a two-dimensional system. The system of polynomial equations is as follows:
P_1([X_1], [X_2]) = −ρ_1[X_1]^2 + (β_1 + g_1)[X_1] − (K_1 + γ_12[X_2])(ρ_1[X_1]) + g_1 γ_12[X_2] + g_1 K_1 = 0        (A.1)

P_2([X_1], [X_2]) = −ρ_2[X_2]^2 + (β_2 + g_2)[X_2] − (K_2 + γ_21[X_1])(ρ_2[X_2]) + g_2 γ_21[X_1] + g_2 K_2 = 0
If P_1 and P_2 have no common factors then, by Theorem (6.6), the number of complex solutions to the polynomial system (A.1) is at most 4.
The corresponding Sylvester matrix of P_1 and P_2 with X_1 as variable is

⎡ a_11  a_12  a_13 ⎤
⎢ a_21  a_22   0   ⎥
⎣  0    a_21  a_22 ⎦        (A.2)
where a_11 = −ρ_1, a_12 = β_1 + g_1 − K_1 ρ_1 − γ_12 ρ_1[X_2], a_13 = g_1 γ_12[X_2] + g_1 K_1, a_21 = −γ_21 ρ_2[X_2] + g_2 γ_21 and a_22 = −ρ_2[X_2]^2 + (β_2 + g_2 − K_2 ρ_2)[X_2] + g_2 K_2. The Sylvester resultant res(P_1, P_2; X_1) is a polynomial in the variable [X_2] of degree at most 4. By the Fundamental Theorem of Algebra, res(P_1, P_2; X_1) = 0 has at most 4 complex solutions, which is consistent with Theorem (6.6).
It is difficult and computationally expensive to find the exact solutions to res(P_1, P_2; X_1) = 0 in terms of the arbitrary parameters. We investigate specific cases where we assign values to some parameters.
A.1.1 Illustration 1
Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary g_1 and arbitrary g_2. Assume g_1 > 0 or g_2 > 0. The Sylvester matrix with [X_1] as variable is as follows:
⎡ −1            g_1 − [X_2]                 g_1([X_2] + 1)              ⎤
⎢ g_2 − [X_2]   −[X_2]^2 + g_2[X_2] + g_2   0                           ⎥
⎣ 0             g_2 − [X_2]                 −[X_2]^2 + g_2[X_2] + g_2   ⎦        (A.3)
It follows that

res(P_1, P_2; X_1) = (g_1 + g_2)[X_2]^2 − (g_1 g_2 + g_2^2)[X_2] − g_2^2.        (A.4)
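Equation (A.4) can be cross-checked numerically: the determinant of matrix (A.3), evaluated at any [X_2], should agree with the quadratic (g_1 + g_2)[X_2]² − (g_1 g_2 + g_2²)[X_2] − g_2². A small sketch (the higher-degree terms of the determinant cancel, so this agreement is exact):

```python
import numpy as np

def resultant(x2, g1, g2):
    """Determinant of the Sylvester matrix (A.3); by construction this is
    res(P1, P2; X1) and should reproduce the quadratic (A.4)."""
    S = np.array([
        [-1.0,     g1 - x2,                g1 * (x2 + 1.0)],
        [g2 - x2,  -x2**2 + g2*x2 + g2,    0.0],
        [0.0,      g2 - x2,                -x2**2 + g2*x2 + g2],
    ])
    return float(np.linalg.det(S))
```

With g_1 = g_2 = 1, (A.4) reads 2[X_2]² − 2[X_2] − 1, which vanishes at the nonnegative root (1 + √3)/2.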
Then the root of res(P_1, P_2; X_1) = 0 is

[X_2] = (g_1 g_2 + g_2^2 ± √((g_1 g_2 + g_2^2)^2 + 4(g_1 + g_2)g_2^2)) / (2(g_1 + g_2)).        (A.5)
By the same procedure as above, the root of res(P_1, P_2; X_2) = 0 is

[X_1] = (g_1 g_2 + g_1^2 ± √((g_1 g_2 + g_1^2)^2 + 4(g_1 + g_2)g_1^2)) / (2(g_1 + g_2)).        (A.6)
Since g_1 > 0 or g_2 > 0, we have g_1 g_2 + g_2^2 < √((g_1 g_2 + g_2^2)^2 + 4(g_1 + g_2)g_2^2) and g_1 g_2 + g_1^2 < √((g_1 g_2 + g_1^2)^2 + 4(g_1 + g_2)g_1^2). Hence, we have the equilibrium point ([X_1]*, [X_2]*) equal to

( (g_1 g_2 + g_1^2 + √((g_1 g_2 + g_1^2)^2 + 4(g_1 + g_2)g_1^2)) / (2(g_1 + g_2)),  (g_1 g_2 + g_2^2 + √((g_1 g_2 + g_2^2)^2 + 4(g_1 + g_2)g_2^2)) / (2(g_1 + g_2)) ).
Therefore, for this example, we have exactly one equilibrium point.
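The closed-form point can be verified by substituting it back into system (A.1). A sketch (the parameter choices g_1 = 2, g_2 = 1 are arbitrary test values, not values from the thesis):

```python
import math

def equilibrium(g1, g2):
    """Closed-form equilibrium assembled from the resultant roots (A.5)-(A.6),
    valid when all other parameters are 1 and g1 > 0 or g2 > 0."""
    s = g1 + g2
    x1 = (g1*g2 + g1**2 + math.sqrt((g1*g2 + g1**2)**2 + 4*s*g1**2)) / (2*s)
    x2 = (g1*g2 + g2**2 + math.sqrt((g1*g2 + g2**2)**2 + 4*s*g2**2)) / (2*s)
    return x1, x2

def residuals(x1, x2, g1, g2):
    """P1 and P2 of system (A.1) with beta = K = gamma = rho = 1."""
    P1 = -x1**2 + (1 + g1)*x1 - (1 + x2)*x1 + g1*x2 + g1
    P2 = -x2**2 + (1 + g2)*x2 - (1 + x1)*x2 + g2*x1 + g2
    return P1, P2
```

The residuals vanish at the closed-form point, and g_1 > g_2 indeed yields [X_1]* > [X_2]*; for g_1 = g_2 the point is symmetric, consistent with (g + √(g² + 2g))/2.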
Now, observe that if g_1 > g_2 then [X_1]* > [X_2]*, and if g_2 > g_1 then [X_2]* > [X_1]*.
For example, assume g_1 = 2g_2 > 0, that is, the ratio of g_1 to g_2 is 2 : 1. Then the equilibrium point will be
( (2g_2^2 + 4g_2^2 + √((2g_2^2 + 4g_2^2)^2 + 4(2g_2 + g_2)·4g_2^2)) / (2(2g_2 + g_2)),  (2g_2^2 + g_2^2 + √((2g_2^2 + g_2^2)^2 + 4(2g_2 + g_2)g_2^2)) / (2(2g_2 + g_2)) )

= ( (6g_2^2 + √(36g_2^4 + 48g_2^3)) / (6g_2),  (3g_2^2 + √(9g_2^4 + 12g_2^3)) / (6g_2) )

= ( (6g_2 + 2√(9g_2^2 + 12g_2)) / 6,  (3g_2 + √(9g_2^2 + 12g_2)) / 6 ).
Clearly, [X_1]* = (6g_2 + 2√(9g_2^2 + 12g_2))/6 > [X_2]* = (3g_2 + √(9g_2^2 + 12g_2))/6.
In addition, if g_1 = g_2 = g > 0, then [X_1]* = [X_2]* = (g + √(g^2 + 2g))/2.
On the other hand, if g_1 and g_2 are both zero, then, by Theorem (6.7), the only equilibrium point is (0, 0).
A.1.2 Illustration 2
Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary β_1 = β_2 = β, arbitrary g_1, and g_2 = 0. The Sylvester matrix with [X_1] as variable is as follows:
⎡ −1        β + g_1 − 1 − [X_2]         g_1([X_2] + 1)              ⎤
⎢ −[X_2]    −[X_2]^2 + (β − 1)[X_2]     0                           ⎥
⎣ 0         −[X_2]                      −[X_2]^2 + (β − 1)[X_2]     ⎦        (A.7)
It follows that

res(P_1, P_2; X_1) = βg_1[X_2]^2.        (A.8)
Then the root of res(P_1, P_2; X_1) = 0 is

[X_2] = 0.        (A.9)
Substituting [X_2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

P_1([X_1], 0) = −[X_1]^2 + (β + g_1)[X_1] − [X_1] + g_1 = 0        (A.10)

P_2([X_1], 0) = 0.
Thus,

[X_1] = (−(β + g_1 − 1) ± √((β + g_1 − 1)^2 + 4g_1)) / (−2) = ((β + g_1 − 1) ∓ √((β + g_1 − 1)^2 + 4g_1)) / 2.        (A.11)
Suppose g_1 > 0. Since β + g_1 − 1 < √((β + g_1 − 1)^2 + 4g_1), we have the equilibrium point ([X_1]*, [X_2]*) equal to

( ((β + g_1 − 1) + √((β + g_1 − 1)^2 + 4g_1)) / 2, 0 ).
Therefore, we have exactly one equilibrium point, where [X_1]* > [X_2]*, when g_1 > 0. If g_1 = 0 and β > 1, then we have two equilibrium points: (0, 0) and (β − 1, 0). If g_1 = 0 and β ≤ 1, then the only equilibrium point is (0, 0).
A.1.3 Illustration 3
Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary K_1 = K_2 = K, arbitrary g_1, and g_2 = 0. The Sylvester matrix with [X_1] as variable is as follows:
⎡ −1        1 + g_1 − K − [X_2]         g_1([X_2] + K)              ⎤
⎢ −[X_2]    −[X_2]^2 + (1 − K)[X_2]     0                           ⎥
⎣ 0         −[X_2]                      −[X_2]^2 + (1 − K)[X_2]     ⎦        (A.12)
It follows that

res(P_1, P_2; X_1) = g_1[X_2]^2.        (A.13)
Then the root of res(P_1, P_2; X_1) = 0 is

[X_2] = 0.        (A.14)
Substituting [X_2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

P_1([X_1], 0) = −[X_1]^2 + (1 + g_1)[X_1] − K[X_1] + g_1 K = 0        (A.15)

P_2([X_1], 0) = 0.
Thus,

[X_1] = (−(1 + g_1 − K) ± √((1 + g_1 − K)^2 + 4g_1 K)) / (−2) = ((1 + g_1 − K) ∓ √((1 + g_1 − K)^2 + 4g_1 K)) / 2.        (A.16)
Suppose g_1 > 0. Since 1 + g_1 − K < √((1 + g_1 − K)^2 + 4g_1 K), we have the equilibrium point ([X_1]*, [X_2]*) equal to

( ((1 + g_1 − K) + √((1 + g_1 − K)^2 + 4g_1 K)) / 2, 0 ).
Therefore, we have exactly one equilibrium point, where [X_1]* > [X_2]*, when g_1 > 0. If g_1 = 0 and K < 1, then we have two equilibrium points: (0, 0) and (1 − K, 0). If g_1 = 0 and K ≥ 1, then the only equilibrium point is (0, 0).
A.1.4 Illustration 4
Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary ρ_1 = ρ_2 = ρ, arbitrary g_1, and g_2 = 0. Assume ρ > 0. The Sylvester matrix with [X_1] as variable is as follows:
⎡ −ρ         1 + g_1 − ρ − ρ[X_2]         g_1([X_2] + 1)               ⎤
⎢ −ρ[X_2]    −ρ[X_2]^2 + (1 − ρ)[X_2]     0                            ⎥
⎣ 0          −ρ[X_2]                      −ρ[X_2]^2 + (1 − ρ)[X_2]     ⎦        (A.17)
It follows that

res(P_1, P_2; X_1) = g_1 ρ[X_2]^2.        (A.18)
Then the root of res(P_1, P_2; X_1) = 0 is

[X_2] = 0.        (A.19)
Substituting [X_2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

P_1([X_1], 0) = −ρ[X_1]^2 + (1 + g_1)[X_1] − ρ[X_1] + g_1 = 0        (A.20)

P_2([X_1], 0) = 0.
Thus,

[X_1] = (−(1 + g_1 − ρ) ± √((1 + g_1 − ρ)^2 + 4ρg_1)) / (−2ρ) = ((1 + g_1 − ρ) ∓ √((1 + g_1 − ρ)^2 + 4ρg_1)) / (2ρ).        (A.21)
Suppose g_1 > 0. Since 1 + g_1 − ρ < √((1 + g_1 − ρ)^2 + 4ρg_1), we have the equilibrium point ([X_1]*, [X_2]*) equal to

( ((1 + g_1 − ρ) + √((1 + g_1 − ρ)^2 + 4ρg_1)) / (2ρ), 0 ).
Therefore, we have exactly one equilibrium point, where [X_1]* > [X_2]*, when g_1 > 0. If g_1 = 0 and ρ < 1, then we have two equilibrium points: (0, 0) and ((1 − ρ)/ρ, 0), the latter being the nonzero root of (A.21) with g_1 = 0. If g_1 = 0 and ρ ≥ 1, then the only equilibrium point is (0, 0).
A.2 Assume n = 2, c_i = 2

A.2.1 Illustration 1

Consider c_i = 2 and c_ij = 1 for all i and j. The system of polynomial equations is as follows:
P_1([X_1], [X_2]) = −ρ_1[X_1]^3 + (β_1 + g_1)[X_1]^2 − (K_1 + γ_12[X_2])(ρ_1[X_1]) + g_1 γ_12[X_2] + g_1 K_1 = 0        (A.22)

P_2([X_1], [X_2]) = −ρ_2[X_2]^3 + (β_2 + g_2)[X_2]^2 − (K_2 + γ_21[X_1])(ρ_2[X_2]) + g_2 γ_21[X_1] + g_2 K_2 = 0
By Theorem (6.6), the number of complex solutions to the polynomial system (A.22) is at most 9.
The corresponding Sylvester matrix of P_1 and P_2 with X_1 as variable is

⎡ a_11  a_12  a_13  a_14 ⎤
⎢ a_21  a_22   0     0   ⎥
⎢  0    a_21  a_22   0   ⎥
⎣  0     0    a_21  a_22 ⎦        (A.23)
where a_11 = −ρ_1, a_12 = β_1 + g_1, a_13 = −K_1 ρ_1 − γ_12 ρ_1[X_2], a_14 = g_1 γ_12[X_2] + g_1 K_1, a_21 = −γ_21 ρ_2[X_2] + g_2 γ_21 and a_22 = −ρ_2[X_2]^3 + (β_2 + g_2)[X_2]^2 − K_2 ρ_2[X_2] + g_2 K_2. The Sylvester resultant res(P_1, P_2; X_1) is a polynomial in the variable [X_2] of degree at most 9. By the Fundamental Theorem of Algebra, res(P_1, P_2; X_1) = 0 has at most 9 complex solutions, which is consistent with Theorem (6.6).
It is difficult and computationally expensive to find the exact solutions to res(P_1, P_2; X_1) = 0 in terms of the arbitrary parameters. We investigate specific cases where we assign values to some parameters.
Suppose all parameters in the system (A.22) are equal to 1, except for arbitrary g_1 and arbitrary g_2. The Sylvester matrix is as follows:
⎡ a_11  a_12  a_13  a_14 ⎤
⎢ a_21  a_22   0     0   ⎥
⎢  0    a_21  a_22   0   ⎥
⎣  0     0    a_21  a_22 ⎦        (A.24)
where a_11 = −1, a_12 = 1 + g_1, a_13 = −1 − [X_2], a_14 = g_1([X_2] + 1), a_21 = g_2 − [X_2] and a_22 = −[X_2]^3 + (1 + g_2)[X_2]^2 − [X_2] + g_2. The Sylvester resultant res(P_1, P_2; X_1) is a polynomial in the variable [X_2] of degree at most 9. By the Fundamental Theorem of Algebra, res(P_1, P_2; X_1) = 0 has at most 9 complex solutions. But it is still difficult and computationally expensive to find the exact solutions to res(P_1, P_2; X_1) = 0 in terms of the arbitrary g_1 and g_2.
If we add another assumption g_1 = 2g_2 (thus, we only have one arbitrary parameter), the Sylvester matrix is as follows:
⎡ a_11  a_12  a_13  a_14 ⎤
⎢ a_21  a_22   0     0   ⎥
⎢  0    a_21  a_22   0   ⎥
⎣  0     0    a_21  a_22 ⎦        (A.25)
where a_11 = −1, a_12 = 1 + 2g_2, a_13 = −1 − [X_2], a_14 = 2g_2([X_2] + 1), a_21 = g_2 − [X_2] and a_22 = −[X_2]^3 + (1 + g_2)[X_2]^2 − [X_2] + g_2. The Sylvester resultant res(P_1, P_2; X_1) of the above matrix is a polynomial in the variable [X_2] of degree at most 9. By the Fundamental Theorem of Algebra, res(P_1, P_2; X_1) = 0 has at most 9 complex solutions. Notice that we only know the upper bound on the number of equilibrium points and not their exact values. Even with only one arbitrary parameter, it is still difficult and computationally expensive to find the exact solutions to res(P_1, P_2; X_1) = 0.
Hence, we do not continue finding the exact values of all the equilibrium points using the Sylvester resultant method for systems more complicated than one with n = 2, c_i = 2, c_ij = 1 and at least one arbitrary parameter. Notice that the above Sylvester matrix is only of dimension 4 × 4, which suggests that finding the solutions to res(P_1, P_2; X_1) = 0 for a larger Sylvester matrix with at least one arbitrary parameter may be even more difficult.
Nevertheless, in some instances where we do not have any arbitrary parameter, solving res(P_1, P_2; X_1) = 0 is easy. For example, if we further assume that g_1 = 2g_2 where g_2 = 1, then res(P_1, P_2; X_1) = 0 has only one real nonnegative solution: [X_2]* ≈ 1.3143.
A.2.2 Illustration 2
If c_i = c_ij = 2, then according to Theorem (6.6), the upper bound on the number of equilibrium points is 9.

Consider that all parameters are equal to 1 except for c_i = c_ij = 2 and g_i = 0, i, j = 1, 2. The only equilibrium point is the origin.
A.2.3 Illustration 3

Consider that all parameters are equal to 1 except for c_i = c_ij = 2 (i, j = 1, 2) and g_2 = 0. The only equilibrium point is ([X_1]* ≈ 1.7549, [X_2]* = 0).
A.2.4 Illustration 4
Consider that all parameters are equal to 1 except for c_i = c_ij = 2, g_i = 0 and β_i = 3, i, j = 1, 2. There are seven equilibrium points (the following values are approximate):

• ([X_1]* = 2.618, [X_2]* = 0),
• ([X_1]* = 0, [X_2]* = 2.618),
• ([X_1]* = 0.38197, [X_2]* = 0),
• ([X_1]* = 0, [X_2]* = 0.38197),
• ([X_1]* = 0.5, [X_2]* = 0.5),
• ([X_1]* = 1, [X_2]* = 1), and
• ([X_1]* = 0, [X_2]* = 0).
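The seven listed points can be verified by direct substitution into the system of this illustration (3x²/(1 + x² + y²) − x = 0, and symmetrically in y):

```python
def F(x, y):
    """System of Illustration A.2.4: c_i = c_ij = 2, g_i = 0, beta_i = 3,
    all remaining parameters equal to 1."""
    f1 = 3.0 * x**2 / (1.0 + x**2 + y**2) - x
    f2 = 3.0 * y**2 / (1.0 + y**2 + x**2) - y
    return f1, f2

points = [(2.618, 0.0), (0.0, 2.618), (0.38197, 0.0), (0.0, 0.38197),
          (0.5, 0.5), (1.0, 1.0), (0.0, 0.0)]
```

Each residual vanishes to within the precision of the tabulated values (e.g., 2.618 ≈ (3 + √5)/2 solves 3x = 1 + x² along the axis).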
A.2.5 Illustration 5
Consider that all parameters are equal to 1 except for c_i = c_ij = 2, g_2 = 0, β_i = 20 and ρ_i = 10, i, j = 1, 2. There are three equilibrium points (the following values are approximate):

• ([X_1]* = 1.4633, [X_2]* = 0),
• ([X_1]* = 0.5, [X_2]* = 0), and
• ([X_1]* = 0.13668, [X_2]* = 0).
A.3 Assume n = 3
A.3.1 Illustration 1
If c_i = c_ij = 1, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound on the number of equilibrium points is 8.

Consider that all parameters are equal to 1 except for g_2 = 0 and g_3 = 0. The only equilibrium point is ([X_1]* ≈ 1.618, [X_2]* = 0, [X_3]* = 0).
A.3.2 Illustration 2
If c_i = c_ij = 2, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound on the number of equilibrium points is 27.

Consider that all parameters are equal to 1 except for c_i = c_ij = 2 (i, j = 1, 2, 3), g_2 = 0 and g_3 = 0. The only equilibrium point is ([X_1]* ≈ 1.7549, [X_2]* = 0, [X_3]* = 0).
A.3.3 Illustration 3
If c_i = c_ij = 3, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound on the number of equilibrium points is 64.

Consider that all parameters are equal to 1 except for c_i = c_ij = 3 and g_i = 0, i, j = 1, 2, 3. The only equilibrium point is the origin.
A.3.4 Illustration 4
Consider that all parameters are equal to 1 except for c_i = c_ij = 3 (i, j = 1, 2, 3), g_2 = 0 and g_3 = 0. The only equilibrium point is ([X_1]* ≈ 1.8668, [X_2]* = 0, [X_3]* = 0).
A.3.5 Illustration 5
Consider that all parameters are equal to 1 except for c_i = c_ij = 3, β_i = 3 and g_i = 0, i, j = 1, 2, 3. There are ten equilibrium points (the following values are approximate):

• ([X_1]* = 1.0097 × 10^−28 ≈ 0, [X_2]* = 0.6527, [X_3]* = 0),
• ([X_1]* = 1.5510 × 10^−25 ≈ 0, [X_2]* = 2.8794, [X_3]* = 0),
• ([X_1]* = 0.6527, [X_2]* = 0, [X_3]* = 0),
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 0.6527),
• ([X_1]* = 2.8794, [X_2]* = 0, [X_3]* = 0),
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 2.8794),
• ([X_1]* = 1, [X_2]* = 1, [X_3]* = 0),
• ([X_1]* = 1, [X_2]* = 0, [X_3]* = 1),
• ([X_1]* = 0, [X_2]* = 1, [X_3]* = 1), and
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 0).
A.3.6 Illustration 6
Consider that all parameters are equal to 1 except for c_i = c_ij = 3, β_i = 20, ρ_i = 10, g_2 = 0 and g_3 = 0, i, j = 1, 2, 3. There are seven equilibrium points (the following values are approximate):

• ([X_1]* = 0.10103, [X_2]* = 1.001, [X_3]* = 0),
• ([X_1]* = 0.10103, [X_2]* = 0, [X_3]* = 1.001),
• ([X_1]* = 0.10039, [X_2]* = 1.6173, [X_3]* = 0),
• ([X_1]* = 0.10039, [X_2]* = 0, [X_3]* = 1.6173),
• ([X_1]* = 0.10213, [X_2]* = 0, [X_3]* = 0),
• ([X_1]* = 0.83362, [X_2]* = 0, [X_3]* = 0), and
• ([X_1]* = 1.8123, [X_2]* = 0, [X_3]* = 0).
A.3.7 Illustration 7
Consider that all parameters are equal to 1 except for c_i = c_ij = 2, γ_ij = γ, ρ_i = ρ and g_i = 0, i, j = 1, 2, 3. Notice that this system is the system used by MacArthur et al. in [113] (refer to system (3.8)). The nonlinear system (3.8) is of the form:

[X_1]^2 / (1 + [X_1]^2 + γ[X_2]^2 + γ[X_3]^2) − ρ[X_1] = 0

[X_2]^2 / (1 + [X_2]^2 + γ[X_1]^2 + γ[X_3]^2) − ρ[X_2] = 0        (A.26)

[X_3]^2 / (1 + [X_3]^2 + γ[X_1]^2 + γ[X_2]^2) − ρ[X_3] = 0.
Appendix A. More on Equilibrium Points: Illustrations 119
The corresponding polynomial system is
P
1
([X
1
], [X
2
], [X
3
]) = [X
1
]
2
−ρ[X
1
] −ρ[X
1
]
3
−γρ[X
1
][X
2
]
2
−γρ[X
1
][X
3
]
2
= 0
P
2
([X
1
], [X
2
], [X
3
]) = [X
2
]
2
−ρ[X
2
] −ρ[X
2
]
3
−γρ[X
1
]
2
[X
2
] −γρ[X
2
][X
3
]
2
= 0 (A.27)
P
3
([X
1
], [X
2
], [X
3
]) = [X
3
]
2
−ρ[X
3
] −ρ[X
3
]
3
−γρ[X
1
]
2
[X
3
] −γρ[X
2
]
2
[X
3
] = 0.
The Sylvester matrix associated to P_1 and P_2 with [X_1] as variable is as follows:

⎡ a_11  a_12  a_13   0     0   ⎤
⎢  0    a_11  a_12  a_13   0   ⎥
⎢ a_31   0    a_33   0     0   ⎥
⎢  0    a_31   0    a_33   0   ⎥
⎣  0     0    a_31   0    a_33 ⎦        (A.28)
where a_11 = −ρ, a_12 = 1, a_13 = −ρ − γρ[X_2]^2 − γρ[X_3]^2, a_31 = −γρ[X_2] and a_33 = [X_2]^2 − ρ[X_2] − ρ[X_2]^3 − γρ[X_2][X_3]^2.
The Sylvester matrix associated to P_1 and P_3 with [X_1] as variable is as follows:

⎡ a_11  a_12  a_13   0     0   ⎤
⎢  0    a_11  a_12  a_13   0   ⎥
⎢ a_31   0    a_33   0     0   ⎥
⎢  0    a_31   0    a_33   0   ⎥
⎣  0     0    a_31   0    a_33 ⎦        (A.29)
where a_11 = −ρ, a_12 = 1, a_13 = −ρ − γρ[X_2]^2 − γρ[X_3]^2, a_31 = −γρ[X_3] and a_33 = [X_3]^2 − ρ[X_3] − ρ[X_3]^3 − γρ[X_2]^2[X_3].
The following are the Sylvester resultants (with [X_1] as variable) associated to the polynomial system (A.27):

• res(P_1, P_2; [X_1]) = −ρ[X_2]^3 (ρ − [X_2] + ρ[X_2]^2 + γρ[X_3]^2) (ρ − [X_2] + ρ[X_2]^2 + γρ[X_2]^2 + γρ[X_3]^2) (γ − 2γρ^2 + γ^2ρ^2 + ρ^2[X_2]^2 − ρ[X_2] + ρ^2 − γ^2ρ^2[X_2]^2 + γ^3ρ^2[X_2]^2 − 2γ^2ρ^2[X_3]^2 + γ^3ρ^2[X_3]^2 + γ^2ρ[X_2] − γρ^2[X_2]^2 + γρ^2[X_3]^2)

• res(P_1, P_3; [X_1]) = −ρ[X_3]^3 (ρ − [X_3] + ρ[X_3]^2 + γρ[X_2]^2) (ρ − [X_3] + ρ[X_3]^2 + γρ[X_3]^2 + γρ[X_2]^2) (γ − 2γρ^2 + γ^2ρ^2 + ρ^2[X_3]^2 − ρ[X_3] + ρ^2 − γ^2ρ^2[X_3]^2 + γ^3ρ^2[X_3]^2 − 2γ^2ρ^2[X_2]^2 + γ^3ρ^2[X_2]^2 + γ^2ρ[X_3] − γρ^2[X_3]^2 + γρ^2[X_2]^2).
We investigate all possible combinations of the factors of res(P_1, P_2; [X_1]) and res(P_1, P_3; [X_1]) and their simultaneous nonnegative real zeros. For example, the factor −ρ[X_2]^3 in res(P_1, P_2; [X_1]) and the factor −ρ[X_3]^3 in res(P_1, P_3; [X_1]) have a simultaneous nonnegative real zero, namely [X_2]* = [X_3]* = 0. However, it is also possible that a factor of res(P_1, P_2; [X_1]) and a factor of res(P_1, P_3; [X_1]) do not have a simultaneous zero.
Suppose the factors of res(P_1, P_2; [X_1]) and res(P_1, P_3; [X_1]) have a simultaneous nonnegative real zero. An interesting property of such a zero is that it satisfies one of the following characteristics:

1. [X_2]* = [X_3]* = 0;
2. [X_2]* = 0 and [X_3]* > [X_2]*;
3. [X_3]* = 0 and [X_2]* > [X_3]*;
4. [X_2]* = [X_3]* > 0;
5. [X_2]* > [X_3]* > 0; and
6. [X_3]* > [X_2]* > 0.
Since the structure of each equation in the nonlinear system (A.26) is the same, the above enumeration of characteristics of solutions also applies to the relationship between [X_1] and [X_2], as well as to the relationship between [X_1] and [X_3].
We can conclude that an equilibrium point satisfies one of the following characteristics (depending on the values of γ and ρ):

1. [X_1]* = [X_2]* = [X_3]*;
2. [X_1]* > [X_2]* = [X_3]*;
3. [X_2]* > [X_1]* = [X_3]*;
4. [X_3]* > [X_1]* = [X_2]*;
5. [X_1]* < [X_2]* = [X_3]*;
6. [X_2]* < [X_1]* = [X_3]*;
7. [X_3]* < [X_1]* = [X_2]*;
8. [X_1]* > [X_2]* > [X_3]*;
9. [X_2]* > [X_3]* > [X_1]*;
10. [X_3]* > [X_1]* > [X_2]*;
11. [X_1]* > [X_3]* > [X_2]*;
12. [X_2]* > [X_1]* > [X_3]*; and
13. [X_3]* > [X_2]* > [X_1]*.
Each characteristic may represent a cell that is tripotent (primed), bipotent, unipotent
or in terminal state. However, it is also possible to have the origin as the equilibrium
point which is a trivial case. Our observation regarding these possible characteristics of
equilibrium points is also consistent with the findings of MacArthur et al. [113].
Based on Theorem (6.6), our system may have at most 27 equilibrium points.
A.3.8 Illustration 8
Consider the system in Illustration 7 (A.3.7), where γ = 1/8 and ρ = 1/21. This system has equilibrium points satisfying all the characteristics enumerated in Illustration 7 (A.3.7). Moreover, this system has 27 equilibrium points, which equals the upper bound on the number of possible equilibrium points. The equilibrium points are (the following values are approximate):
• ([X_1]* = 1.3235 × 10^−23 ≈ 0, [X_2]* = 20.952, [X_3]* = 0),
• ([X_1]* = 18.619, [X_2]* = 0, [X_3]* = 18.619),
• ([X_1]* = 18.619, [X_2]* = 18.619, [X_3]* = 0),
• ([X_1]* = 0, [X_2]* = 18.619, [X_3]* = 18.619),
• ([X_1]* = 20.832, [X_2]* = 3.1685, [X_3]* = 3.1685),
• ([X_1]* = 3.1685, [X_2]* = 20.832, [X_3]* = 3.1685),
• ([X_1]* = 3.1685, [X_2]* = 3.1685, [X_3]* = 20.832),
• ([X_1]* = 4.7755 × 10^−2, [X_2]* = 4.7755 × 10^−2, [X_3]* = 4.7755 × 10^−2),
• ([X_1]* = 20.894, [X_2]* = 3.1056, [X_3]* = 0),
• ([X_1]* = 20.894, [X_2]* = 0, [X_3]* = 3.1056),
• ([X_1]* = 3.1056, [X_2]* = 20.894, [X_3]* = 0),
• ([X_1]* = 3.1056, [X_2]* = 0, [X_3]* = 20.894),
• ([X_1]* = 0, [X_2]* = 20.894, [X_3]* = 3.1056),
• ([X_1]* = 0, [X_2]* = 3.1056, [X_3]* = 20.894),
• ([X_1]* = 4.7741 × 10^−2, [X_2]* = 0, [X_3]* = 4.7741 × 10^−2),
• ([X_1]* = 4.7741 × 10^−2, [X_2]* = 4.7741 × 10^−2, [X_3]* = 0),
• ([X_1]* = 0, [X_2]* = 4.7741 × 10^−2, [X_3]* = 4.7741 × 10^−2),
• ([X_1]* = 16.752, [X_2]* = 16.752, [X_3]* = 16.752),
• ([X_1]* = 20.952, [X_2]* = 0, [X_3]* = 0),
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 20.952),
• ([X_1]* = 2.0033 × 10^−25 ≈ 0, [X_2]* = 4.7728 × 10^−2, [X_3]* = 0),
• ([X_1]* = 4.7728 × 10^−2, [X_2]* = 0, [X_3]* = 0),
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 4.7728 × 10^−2),
• ([X_1]* = 18.432, [X_2]* = 18.432, [X_3]* = 5.5685),
• ([X_1]* = 18.432, [X_2]* = 5.5685, [X_3]* = 18.432),
• ([X_1]* = 5.5685, [X_2]* = 18.432, [X_3]* = 18.432), and
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 0).
The equilibrium points where [X_3]* = 0 are the equilibrium points of the system when n = 2. Also, the equilibrium points where [X_2]* = [X_3]* = 0 are the equilibrium points of the system when n = 1.
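The listed values can be spot-checked by substituting them into system (A.26) with γ = 1/8 and ρ = 1/21; at the tabulated 5-digit precision, the residuals are on the order of 10⁻⁵ or smaller:

```python
def residuals_A26(x, gamma, rho):
    """Residuals of the symmetric system (A.26) at a candidate point x."""
    n = len(x)
    return [x[i]**2 / (1 + x[i]**2 + gamma * sum(x[j]**2
                                                 for j in range(n) if j != i))
            - rho * x[i]
            for i in range(n)]
```

Checking the tripotent, a bipotent, a terminal and the trivial point confirms them all as equilibria.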
Remark: Given fixed values of [X_j], j ≠ i, the univariate Hill curve Y = H_i([X_i]) and the decay line Y = ρ_i[X_i] have the following possible numbers of intersections (see Figures (5.11) to (5.14)):

• two intersections, where one is stable;
• one intersection, which is stable; and
• three intersections, where two are stable.

Notice that the number of stable intersections is greater than or equal to the number of unstable ones. However, when we collect all possible equilibrium points of the ODE model (5.1) (where the values of [X_j], j ≠ i, are not fixed), the number of stable equilibrium points is not necessarily greater than or equal to the number of unstable equilibrium points.
For example, in the previous illustration (A.3.8, Illustration 8), we have 27 equilibrium points, of which only 8 are stable. The following is the list of stable and unstable equilibrium points:
• ([X_1]* = 1.3235 × 10^−23 ≈ 0, [X_2]* = 20.952, [X_3]* = 0) – stable (terminal state),
• ([X_1]* = 18.619, [X_2]* = 0, [X_3]* = 18.619) – stable (bipotent),
• ([X_1]* = 18.619, [X_2]* = 18.619, [X_3]* = 0) – stable (bipotent),
• ([X_1]* = 0, [X_2]* = 18.619, [X_3]* = 18.619) – stable (bipotent),
• ([X_1]* = 20.832, [X_2]* = 3.1685, [X_3]* = 3.1685) – unstable,
• ([X_1]* = 3.1685, [X_2]* = 20.832, [X_3]* = 3.1685) – unstable,
• ([X_1]* = 3.1685, [X_2]* = 3.1685, [X_3]* = 20.832) – unstable,
• ([X_1]* = 4.7755 × 10^−2, [X_2]* = 4.7755 × 10^−2, [X_3]* = 4.7755 × 10^−2) – unstable,
• ([X_1]* = 20.894, [X_2]* = 3.1056, [X_3]* = 0) – unstable,
• ([X_1]* = 20.894, [X_2]* = 0, [X_3]* = 3.1056) – unstable,
• ([X_1]* = 3.1056, [X_2]* = 20.894, [X_3]* = 0) – unstable,
• ([X_1]* = 3.1056, [X_2]* = 0, [X_3]* = 20.894) – unstable,
• ([X_1]* = 0, [X_2]* = 20.894, [X_3]* = 3.1056) – unstable,
• ([X_1]* = 0, [X_2]* = 3.1056, [X_3]* = 20.894) – unstable,
• ([X_1]* = 4.7741 × 10^−2, [X_2]* = 0, [X_3]* = 4.7741 × 10^−2) – unstable,
• ([X_1]* = 4.7741 × 10^−2, [X_2]* = 4.7741 × 10^−2, [X_3]* = 0) – unstable,
• ([X_1]* = 0, [X_2]* = 4.7741 × 10^−2, [X_3]* = 4.7741 × 10^−2) – unstable,
• ([X_1]* = 16.752, [X_2]* = 16.752, [X_3]* = 16.752) – stable (tripotent),
• ([X_1]* = 20.952, [X_2]* = 0, [X_3]* = 0) – stable (terminal state),
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 20.952) – stable (terminal state),
• ([X_1]* = 2.0033 × 10^−25 ≈ 0, [X_2]* = 4.7728 × 10^−2, [X_3]* = 0) – unstable,
• ([X_1]* = 4.7728 × 10^−2, [X_2]* = 0, [X_3]* = 0) – unstable,
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 4.7728 × 10^−2) – unstable,
• ([X_1]* = 18.432, [X_2]* = 18.432, [X_3]* = 5.5685) – unstable,
• ([X_1]* = 18.432, [X_2]* = 5.5685, [X_3]* = 18.432) – unstable,
• ([X_1]* = 5.5685, [X_2]* = 18.432, [X_3]* = 18.432) – unstable, and
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 0) – stable (trivial case).
In the following section, we illustrate how to use ad hoc geometric analysis to determine the stability of equilibrium points.
A.4 Ad hoc geometric analysis
Consider that all parameters are equal to 1 except for c_i = c_ij = 3, β_i = 20, ρ_i = 10,
g_2 = 0 and g_3 = 0, for i, j = 1, 2, 3 (see Illustration 6 (A.3.6)).
One of the equilibrium points is ([X_1]* = 0.10103, [X_2]* = 1.001, [X_3]* = 0). To
determine the stability of this equilibrium point, we use ad hoc geometric analysis. First,
we look at the intersection of Y = H_1([X_1]) + 1 and Y = 10[X_1] with [X_2] = 1.001 and
[X_3] = 0. Then we determine if [X_1]* = 0.10103 is stable or not. As shown in Figure
(A.2), we conclude that [X_1]* = 0.10103 is stable.
Now, we test if [X_2]* = 1.001 is stable or not by looking at the intersection of Y = H_2([X_2])
and Y = 10[X_2] with [X_1] = 0.10103 and [X_3] = 0. Also, we test if [X_3]* = 0
is stable or not by looking at the intersection of Y = H_3([X_3]) and Y = 10[X_3] with
[X_1] = 0.10103 and [X_2] = 1.001. As shown in Figures (A.3) and (A.4), we conclude that
[X_2]* = 1.001 is unstable and [X_3]* = 0 is stable.
Because of the presence of an unstable component ([X_2]* = 1.001), the equilibrium
point ([X_1]* = 0.10103, [X_2]* = 1.001, [X_3]* = 0) is unstable.
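The component-wise test above can also be checked numerically. The sketch below (a Python check of our own; helper names are ours) uses the Illustration 6 parameters (β_i = 20, K_i = 1, c_i = c_ij = 3, γ_ij = 1, ρ_i = 10) and judges a component stable when the curve Y = H_i([X_i]) + g_i rises more slowly than the line Y = 10[X_i] at the intersection:

```python
# A component [X_i]* is judged stable when, at the intersection, the slope of
# Y = H_i([X_i]) + g_i is smaller than the slope (= 10) of Y = 10[X_i].
def hill(i, xi, state):
    # H_i with beta_i = 20, K_i = 1, c_i = c_ij = 3 and gamma_ij = 1
    others = sum(v**3 for j, v in enumerate(state) if j != i)
    return 20 * xi**3 / (1 + xi**3 + others)

equilibrium = [0.10103, 1.001, 0.0]
h = 1e-6
for i, xi in enumerate(equilibrium):
    slope = (hill(i, xi + h, equilibrium) - hill(i, xi - h, equilibrium)) / (2 * h)
    print("[X_%d]" % (i + 1), "stable" if slope < 10 else "unstable")
```

This reproduces the conclusion of Figures (A.2) to (A.4): the first and third components are stable while the second is not.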
Figure A.2: The intersection of Y = H_1([X_1]) + 1 and Y = 10[X_1] with [X_2] = 1.001 and [X_3] = 0.

Figure A.3: The intersection of Y = H_2([X_2]) and Y = 10[X_2] with [X_1] = 0.10103 and [X_3] = 0.

Figure A.4: The intersection of Y = H_3([X_3]) and Y = 10[X_3] with [X_1] = 0.10103 and [X_2] = 1.001.
A.5 Phase portrait with infinitely many equilibrium points
For example, the phase portrait of the system

d[X_1]/dt = 5[X_1]/(1 + [X_1] + [X_2]) − [X_1]   (A.30)
d[X_2]/dt = 5[X_2]/(1 + [X_1] + [X_2]) − [X_2]

is shown in Figure (A.5). Since 5[X_i]/(1 + [X_1] + [X_2]) − [X_i] = [X_i](5/(1 + [X_1] + [X_2]) − 1),
every point on the line [X_1] + [X_2] = 4 is a non-isolated equilibrium point. The phase
portrait was graphed using the Java applet at
http://www.scottsarra.org/applets/dirField2/dirField2.html [145].
Figure A.5: A sample phase portrait of the system with infinitely many non-isolated
equilibrium points.
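The continuum of equilibria can be verified directly with a short Python check (our own; any point on the line [X_1] + [X_2] = 4 makes both right-hand sides of (A.30) vanish):

```python
# Both right-hand sides of (A.30) vanish on the whole line x + y = 4,
# which produces the infinitely many non-isolated equilibria of Figure (A.5).
def rhs(x, y):
    return (5 * x / (1 + x + y) - x, 5 * y / (1 + x + y) - y)

for x in [0.0, 0.5, 1.0, 2.5, 4.0]:
    dx, dy = rhs(x, 4.0 - x)
    assert abs(dx) < 1e-12 and abs(dy) < 1e-12
```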
Appendix B
Multivariate Fixed Point Algorithm
We have written a Scilab [150] program for finding approximate values of stable equilibrium
points. The program employs the fixed point iteration method. However, when doing
numerical computations, we need to be cautious about possible round-off errors.
Algorithm 3 Multivariate fixed point algorithm (1st Part)
//input
n=input("Input n=")
for i=1:n
    disp(i, "FOR EQUATION")
    coeffbeta(i)=input("beta=")
    K(i)=input("K=")
    rho(i)=input("positive rho=")
    g(i)=input("g=")
    disp(i, "exponent of x")
    c(i)=input("ci=")
    for m=1:n
        if m~=i then
            disp(m, "coefficient of x")
            gam(i,m)=input("gamma=")
            disp(m, "exponent of x")
            z(i,m)=input("cij=")
        else
            gam(i,m)=1
        end
    end
end
for i=1:n
    disp(i, "initial value for x")
    x(i,1)=input("=")
end
Algorithm 4 Multivariate fixed point algorithm (2nd Part)
//fixed point iteration process
tol=input("tolerance error=")
j=1
y(1)=1000
while (y(j)>tol)&(j<500002) then //500,000 max number of steps
    for i=1:n
        summ(i)=0
        for k=1:n
            if k~=i then
                summ(i)=summ(i)+gam(i,k)*x(k,j)^z(i,k)
            end
        end
        x(i,j+1)=((coeffbeta(i)*x(i,j)^c(i))/(K(i)+x(i,j)^c(i)+summ(i))..
            +g(i))/rho(i)
    end
    j=j+1
    y(j)=max(abs(x(:,j)-x(:,j-1)))
end
q=input("q for test of Q-convergence=")
for i=1:j-2
    lambda(i)=(norm(x(:,i+2)-x(:,i+1)))/((norm(x(:,i+1)-x(:,i)))^q)
    lambdalim=lambda(i)
end
//output
disp(x(:,j), "The approx equilibrium point is ")
disp(j-1, "number of iterations is ")
disp(lambdalim, "when q=1, the approx asymptotic error constant is ")
We illustrate how to use the multivariate fixed point algorithm to determine the stability
of equilibrium points.

Consider the case where all parameters are equal to 1 except for c_i = c_ij = 3, β_i = 20,
ρ_i = 10, g_2 = 0 and g_3 = 0, for i, j = 1, 2, 3.
One of the equilibrium points is ([X_1]* ≈ 0.10039, [X_2]* ≈ 1.6173, [X_3]* = 0). We
perturb this equilibrium point by adding and subtracting a small positive number per
component (but note that the components should not be negative). Suppose we use
the small positive number 0.00001; then we have ([X_1] = 0.10039 + 0.00001, [X_2] =
1.6173 + 0.00001, [X_3] = 0 + 0.00001) and ([X_1] = 0.10039 − 0.00001, [X_2] = 1.6173 −
0.00001, [X_3] = 0). We use these two perturbed points as the initial conditions in the
multivariate fixed point algorithm.
If both sequences of points generated by the multivariate fixed point algorithm (using
the two initial conditions) converge to the equilibrium point ([X_1]* ≈ 0.10039, [X_2]* ≈
1.6173, [X_3]* = 0), then we can conclude that this equilibrium point is stable. If at least
one of the two sequences tends away from the equilibrium point, then we can
"approximately" conclude that the equilibrium point is unstable. We need to use the
two perturbed points to minimize the probability of converging to a saddle point. We
say "approximately conclude" because a point may seem unstable yet be stable
with only a very small basin of attraction.
Assuming a tolerance error of 10^-5, the multivariate fixed point algorithm finds that
the equilibrium point ([X_1]* = 0.10039, [X_2]* = 1.6173, [X_3]* = 0) is stable. Moreover, in
this example, the sequences of points generated by the multivariate fixed point algorithm
converge linearly.
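The procedure can be sketched compactly in Python (our own re-implementation; the thesis program is in Scilab). Default parameters below encode this example: β_i = 20, K_i = γ_ij = 1, ρ_i = 10, c_i = c_ij = 3, g_1 = 1 and g_2 = g_3 = 0.

```python
# Fixed point iteration x_i <- (beta*x_i^c/(K + x_i^c + sum_{j!=i} x_j^c) + g_i)/rho,
# started from the two perturbed copies of the candidate equilibrium point.
def fixed_point(x0, beta=20.0, K=1.0, rho=10.0, g=(1.0, 0.0, 0.0), c=3,
                tol=1e-5, max_steps=500000):
    x = list(x0)
    for _ in range(max_steps):
        new = [(beta * x[i]**c / (K + x[i]**c +
                sum(x[j]**c for j in range(len(x)) if j != i)) + g[i]) / rho
               for i in range(len(x))]
        if max(abs(a - b) for a, b in zip(new, x)) < tol:
            return new
        x = new
    return x

eq = (0.10039, 1.6173, 0.0)
eps = 1e-5
up = fixed_point([v + eps for v in eq])
down = fixed_point([max(v - eps, 0.0) for v in eq])
# both perturbed sequences stay near eq, supporting the stability conclusion
```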
Appendix C
More on Bifurcation of Parameters:
Illustrations
C.1 Adding g_i > 0, Illustration 1
Consider the following system

d[X_1]/dt = 3[X_1]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_1]
d[X_2]/dt = 3[X_2]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_2]   (C.1)
d[X_3]/dt = 3[X_3]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_3].
This system has the following equilibrium points (the following are approximate values):
• ([X_1]* = 0, [X_2]* = 0.6527, [X_3]* = 0) — unstable,
• ([X_1]* = 0, [X_2]* = 2.8794, [X_3]* = 0) — stable (terminal state),
• ([X_1]* = 0.6527, [X_2]* = 0, [X_3]* = 0) — unstable,
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 0.6527) — unstable,
• ([X_1]* = 2.8794, [X_2]* = 0, [X_3]* = 0) — stable (terminal state),
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 2.8794) — stable (terminal state),
• ([X_1]* = 1, [X_2]* = 1, [X_3]* = 0) — stable (bipotent),
• ([X_1]* = 1, [X_2]* = 0, [X_3]* = 1) — stable (bipotent),
• ([X_1]* = 0, [X_2]* = 1, [X_3]* = 1) — stable (bipotent), and
• ([X_1]* = 0, [X_2]* = 0, [X_3]* = 0) — stable (trivial case).
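These values can be checked by substituting them back into the right-hand sides of (C.1); a quick Python sketch (the residual tolerance is our choice):

```python
# Residuals of system (C.1) at some of the reported equilibrium points.
def rhs(x1, x2, x3):
    d = 1 + x1**3 + x2**3 + x3**3
    return (3*x1**3/d - x1, 3*x2**3/d - x2, 3*x3**3/d - x3)

for point in [(0.0, 2.8794, 0.0), (0.6527, 0.0, 0.0),
              (1.0, 1.0, 0.0), (0.0, 0.0, 0.0)]:
    assert max(abs(v) for v in rhs(*point)) < 1e-3
```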
Now, let us add g_1 = 1, that is, consider the following system

d[X_1]/dt = 3[X_1]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_1] + 1
d[X_2]/dt = 3[X_2]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_2]   (C.2)
d[X_3]/dt = 3[X_3]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_3].

This system has the following sole equilibrium point: ([X_1]* ≈ 3.9522, [X_2]* = 0, [X_3]* = 0),
which is stable.
If we also add g_2 = 1, that is, consider the following system

d[X_1]/dt = 3[X_1]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_1] + 1
d[X_2]/dt = 3[X_2]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_2] + 1   (C.3)
d[X_3]/dt = 3[X_3]^3/(1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_3],

then we have the following equilibrium points (the following are approximate values):
• ([X_1]* = 2.4507, [X_2]* = 2.4507, [X_3]* = 0) — stable (bipotent),
• ([X_1]* = 1.0581, [X_2]* = 3.8929, [X_3]* = 0) — stable (unipotent), and
• ([X_1]* = 3.8929, [X_2]* = 1.0581, [X_3]* = 0) — stable (unipotent).
C.2 Adding g_i > 0, Illustration 2
Consider the following system

d[X_1]/dt = 5[X_1]^2/(1 + [X_1]^2 + [X_2]^2) − 2[X_1]   (C.4)
d[X_2]/dt = 5[X_2]^2/(1 + [X_1]^2 + [X_2]^2) − 2[X_2].
We can add g_1 > 0 to the system if we want [X_1]* to sufficiently dominate [X_2]*. We
can do ad hoc geometric analysis to determine if the value of g_1 is enough to drive the
system to have a sole equilibrium point where [X_1]* > [X_2]*.
We first graph

Y = 5[X_1]^2/(1 + [X_1]^2) + g_1 and
Y = 2[X_1],

then we determine if the two curves have a sole intersection. If they have more than
one intersection, we increase the value of g_1. We find the value of the sole intersection
and denote it by [X_1]^(∗).
We substitute [X_1]^(∗) for [X_1] in Y = 5[X_2]^2/(1 + [X_1]^2 + [X_2]^2). Then we determine if

Y = 5[X_2]^2/(1 + ([X_1]^(∗))^2 + [X_2]^2) and
Y = 2[X_2]

intersect only at one point. If there is more than one intersection, we increase g_1 and
adjust [X_1]^(∗). If there is only one intersection, then [X_2]* = 0.
Figure C.1: Determining the adequate g_1 > 0 that would give rise to a sole equilibrium point where [X_1]* > [X_2]*.
The sole stable equilibrium point of the system with adequate g_i > 0 is the computed
([X_1]* = [X_1]^(∗), [X_2]* = 0). See Figure (C.1) for an illustration. If we suppose g_1 = 1, the
sole equilibrium point is ([X_1]* ≈ 2.698, [X_2]* = 0).
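The graphical steps above can be mimicked numerically. The Python sketch below (our own; the grid and helper names are assumptions) counts curve crossings on a grid and then locates [X_1]^(∗) by bisection for g_1 = 1:

```python
# Count sign changes of f(x) - g(x) on a grid: a crude intersection count.
def crossings(f, g, xs):
    signs = [1 if f(x) > g(x) else -1 for x in xs]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

g1 = 1.0
xs = [1e-9 + k * (10.0 / 200000) for k in range(200001)]

# Step 1: Y = 5x^2/(1 + x^2) + g_1 versus Y = 2x should meet exactly once.
f1 = lambda x: 5 * x**2 / (1 + x**2) + g1
assert crossings(f1, lambda x: 2 * x, xs) == 1

# Locate the sole intersection [X_1]^(*) by bisection on f1(x) - 2x.
lo, hi = 1e-9, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f1(mid) - 2 * mid > 0 else (lo, mid)
x1_star = 0.5 * (lo + hi)   # approximately 2.698, as reported in the text

# Step 2: with [X_1] fixed at x1_star, Y = 5y^2/(1 + x1_star^2 + y^2) versus
# Y = 2y has no crossing for y > 0, which forces [X_2]* = 0.
f2 = lambda y: 5 * y**2 / (1 + x1_star**2 + y**2)
assert crossings(f2, lambda y: 2 * y, xs) == 0
```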
C.3 g_i as a function of time
Making g_i a function of time (i.e., g_i changes through time) means that we are adding
another equation and state variable to our system of ODEs. We can think of g_i as an
additional node in our GRN, which we call the injection node. In this thesis, we
consider two types of functions: a linear function with negative slope and an exponential
function with negative exponent. Notice that both functions represent a g_i that
degrades through time.
Suppose that, given a specific initial condition, the solution to our system tends to an
equilibrium point with [X_i]* = 0. If we want our solution to escape [X_i]* = 0, then
one strategy is to add g_i > 0: the idea of adding a sufficient amount of g_i > 0 is
to make the solution of our system escape a certain equilibrium point. However, it
is sometimes impractical or infeasible to continuously introduce a constant amount of
exogenous stimulus to control cell fates, which is why we instead consider introducing a
g_i that degrades through time.
We numerically investigate cases where adding a degrading amount of g_i affects cell
fate. Note that this strategy is only applicable to systems with multiple stable equilibrium
points, where convergence of trajectories is sensitive to initial conditions.
C.3.1 As a linear function

Suppose

g_i(t) = −υ_i t + g_i(0), or   (C.5)
dg_i/dt = −υ_i,

where the degradation rate υ_i is positive. Whenever this formula gives g_i(t) < 0, we set g_i(t) = 0.
Here is an example where, without g_1, [X_1]* = 0 (see Figure (C.2)); but by adding
g_1(t) = −t + 5, the solution converges to a new equilibrium point with [X_1]* ≈ 2.98745
(see Figure (C.3)). Figure (C.2) shows the numerical solution to the system

d[X_1]/dt = 3[X_1]^5/(1 + [X_1]^5 + [X_2]^5) − [X_1]   (C.6)
d[X_2]/dt = 3[X_2]^5/(1 + [X_1]^5 + [X_2]^5) − [X_2].
Meanwhile, Figure (C.3) shows the numerical solution to the system (with g_i(t))

d[X_1]/dt = 3[X_1]^5/(1 + [X_1]^5 + [X_2]^5) − [X_1] + g_1
d[X_2]/dt = 3[X_2]^5/(1 + [X_1]^5 + [X_2]^5) − [X_2]   (C.7)
dg_1/dt = −1 (with the limit g_1 ≥ 0).

The initial values are [X_1]_0 = 0.5, [X_2]_0 = 1 and g_i(0) = 5.
By looking at Figure (C.4), we can see that [X_1]* shifted from a lower stable component
to a higher stable component.

Figure C.2: An example where, without g_1, [X_1]* = 0.
Figure C.3: [X_1]* escaped the zero state because of the introduction of g_1, which is a decaying linear function.

Figure C.4: An example of shifting from a lower stable component to a higher stable component through adding g_i(t) = −υ_i t + g_i(0).
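A forward-Euler sketch of system (C.7) (our own re-implementation; the step size and time horizon are assumptions) reproduces this escape: [X_1] leaves 0 and settles near 2.98745 while [X_2] decays to 0.

```python
# Forward Euler for (C.7): decaying linear g_1(t) = -t + 5, clipped at 0.
def simulate(x1, x2, g1, h=0.01, t_end=60.0):
    for _ in range(int(t_end / h)):
        d = 1 + x1**5 + x2**5
        dx1 = 3 * x1**5 / d - x1 + g1
        dx2 = 3 * x2**5 / d - x2
        x1, x2 = x1 + h * dx1, x2 + h * dx2
        g1 = max(g1 - h, 0.0)      # the limit g_1 >= 0
    return x1, x2

x1, x2 = simulate(0.5, 1.0, 5.0)   # initial values from the text
```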
C.3.2 As an exponential function

Suppose

g_i(t) = g_i(0) e^(−υ_i t), or   (C.8)
dg_i/dt = −υ_i g_i,

where the degradation rate υ_i is positive. Whenever this formula gives g_i(t) < 0, we set g_i(t) = 0.
Consider the system used in the previous subsection (C.3.1). The time series in Figure
(C.5) has the same behavior as that in Figure (C.3). Figure (C.5) shows the numerical simulation
of the system

d[X_1]/dt = 3[X_1]^5/(1 + [X_1]^5 + [X_2]^5) − [X_1] + g_1
d[X_2]/dt = 3[X_2]^5/(1 + [X_1]^5 + [X_2]^5) − [X_2]   (C.9)
dg_1/dt = −g_1 (with the limit g_1 ≥ 0),

where the initial values are [X_1]_0 = 0.5, [X_2]_0 = 1 and g_i(0) = 5.
Figure C.5: [X_1]* escaped the zero state because of the introduction of g_1, which is a decaying exponential function.
C.4 The effect of γ_ij

The parameter γ_ij can also affect the equilibrium points. Consider the system

d[X_1]/dt = [X_1]^2/(1 + [X_1]^2 + γ[X_2]^2 + γ[X_3]^2) − (1/21)[X_1]
d[X_2]/dt = [X_2]^2/(1 + γ[X_1]^2 + [X_2]^2 + γ[X_3]^2) − (1/21)[X_2]   (C.10)
d[X_3]/dt = [X_3]^2/(1 + γ[X_1]^2 + γ[X_2]^2 + [X_3]^2) − (1/21)[X_3].
Figure (C.6) shows a case where varying the value of γ affects the values of the equilibrium
points. Varying γ may induce cell differentiation (also refer to [113]).

Figure C.6: Parameter plot of γ, an example.
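The effect of γ can be made concrete on the symmetric states of (C.10). Setting [X_1] = [X_2] = [X_3] = x gives x^2/(1 + (1 + 2γ)x^2) = x/21, so nonzero symmetric states solve (1 + 2γ)x^2 − 21x + 1 = 0 and disappear once γ exceeds about 54.6 (this derivation and the sketch below are ours):

```python
import math

# Nonzero symmetric equilibria of (C.10): roots of (1 + 2*gamma)*x^2 - 21*x + 1 = 0.
def symmetric_equilibria(gamma):
    a = 1 + 2 * gamma
    disc = 21.0**2 - 4 * a
    if disc < 0:
        return []                      # the symmetric branch has vanished
    r = math.sqrt(disc)
    return [(21.0 - r) / (2 * a), (21.0 + r) / (2 * a)]

print(symmetric_equilibria(1.0))       # two symmetric states for gamma = 1
print(symmetric_equilibria(60.0))      # none: varying gamma removed them
```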
C.5 Bifurcation diagrams

The following are some of the bifurcation types that we found during our simulations.
We used the software Oscill8 [40] to draw the bifurcation diagrams. Note that the following
bifurcation diagrams depend on the given parameter values and initial values.

In a one-parameter bifurcation diagram, the horizontal axis represents the parameter
and the vertical axis represents [X_i]. Thick lines denote stable equilibrium points and
thin lines denote unstable equilibrium points.

In a two-parameter bifurcation diagram, the horizontal axis represents the first parameter
and the vertical axis represents the second. The lines in the diagram indicate the
presence of bifurcation.
C.5.1 Illustration 1

Suppose n = 1. Bifurcation may arise due to the sigmoidal shape of the Hill curve (see
Figure (C.7) for an illustration). The system initially has a sole equilibrium point, which
is stable; varying a parameter may then result in a change in the number of equilibrium
points, e.g., three equilibrium points where one is unstable and the other two are stable.
The bifurcation happens when there are two equilibrium points, one stable and the other
unstable.

Figure C.7: Intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c > 1 and g = 0; and an event of bifurcation.
Figures (C.8) to (C.10) illustrate the occurrence of saddle node bifurcations given the
equation

d[X_1]/dt = β_1[X_1]^c/(K_1 + [X_1]^c) − ρ_1[X_1] + g_1   (C.11)

with initial condition [X_1]_0 = 5 as well as parameter values c = 2, β_1 = 3, K_1 = 1, ρ_1 = 1
and g_1 = 0. Moreover, Figures (C.11) to (C.14) illustrate that cusp bifurcations may also
arise.
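For equation (C.11) the saddle node along β_1 can be located by hand: with c = 2, K_1 = ρ_1 = 1 and g_1 = 0, positive equilibria solve β_1 x^2/(1 + x^2) = x, i.e. x^2 − β_1 x + 1 = 0, so the two positive equilibria collide and vanish at β_1 = 2. A quick check by counting sign changes on a grid (grid choice is ours):

```python
# Count positive equilibria of (C.11) with c = 2, K_1 = 1, rho_1 = 1, g_1 = 0
# by counting sign changes of beta*x^2/(1 + x^2) - x on a grid.
def positive_equilibria(beta, n=200000):
    xs = [1e-6 + k * (20.0 / n) for k in range(n + 1)]
    signs = [1 if beta * x * x / (1 + x * x) - x > 0 else -1 for x in xs]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

print(positive_equilibria(1.9), positive_equilibria(3.0))  # 0 before the fold, 2 after
```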
Figure C.8: Saddle node bifurcation; β_1 is varied.

Figure C.9: Saddle node bifurcation; K_1 is varied.

Figure C.10: Saddle node bifurcation; ρ_1 is varied.

Figure C.11: Cusp bifurcation; β_1 and g_1 are varied.

Figure C.12: Cusp bifurcation; K_1 and c are varied.

Figure C.13: Cusp bifurcation; K_1 and g_1 are varied.

Figure C.14: Cusp bifurcation; ρ_1 and g_1 are varied.
C.5.2 Illustration 2

Suppose we have the system

d[X_1]/dt = β_1[X_1]^c/(K_1 + [X_1]^c + γ_12[X_2]^c) − ρ_1[X_1] + g_1   (C.12)
d[X_2]/dt = β_2[X_2]^c/(K_2 + [X_2]^c + γ_21[X_1]^c) − ρ_2[X_2] + g_2

with initial conditions [X_1]_0 = 5 and [X_2]_0 = 5 as well as parameter values c = 2, β_1 = 3,
β_2 = 3, γ_12 = 1, γ_21 = 1, K_1 = 1, K_2 = 1, ρ_1 = 1, ρ_2 = 1, g_1 = 1 and g_2 = 0.
A saddle node bifurcation arises when we vary the parameter ρ_2; the bifurcation
diagram is shown in Figure (C.15). Also, a saddle node bifurcation arises when we vary
the parameter g_2; the bifurcation diagram is shown in Figure (C.16).

Figure C.15: Saddle node bifurcation; ρ_2 is varied.

Figure C.16: Saddle node bifurcation; g_2 is varied.
C.5.3 Illustration 3

Suppose we have the system

d[X_1]/dt = β_1[X_1]^c/(K_1 + [X_1]^c + γ[X_2]^c + γ[X_3]^c + γ[X_4]^c) − ρ_1[X_1] + g_1
d[X_2]/dt = β_2[X_2]^c/(K_2 + [X_2]^c + γ[X_1]^c + γ[X_3]^c + γ[X_4]^c) − ρ_2[X_2] + g_2   (C.13)
d[X_3]/dt = β_3[X_3]^c/(K_3 + [X_3]^c + γ[X_1]^c + γ[X_2]^c + γ[X_4]^c) − ρ_3[X_3] + g_3
d[X_4]/dt = β_4[X_4]^c/(K_4 + [X_4]^c + γ[X_1]^c + γ[X_2]^c + γ[X_3]^c) − ρ_4[X_4] + g_4

with initial condition X_0 = (5, 5, 5, 5) as well as parameter values c = 2, β_i = 3 (i =
1, 2, 3, 4), γ = 1, K_i = 1 (i = 1, 2, 3, 4), ρ_i = 1 (i = 1, 2, 3, 4), g_1 = 1, g_2 = 0, g_3 = 0 and
g_4 = 0.
Figures (C.17) and (C.18) illustrate the possible occurrence of saddle node bifurcation.
Figure C.17: Saddle node bifurcation; ρ_2 is varied.

Figure C.18: Saddle node bifurcation; g_2 is varied.
Appendix D
Scilab Program for Euler-Maruyama
Algorithm 5 Euler-Maruyama with Euler method (1st Part)
//input parameters
n=input("Input n=")
for i=1:n
    disp(i, "FOR EQUATION")
    coeffbeta(i)=input("beta=")
    K(i)=input("K=")
    rho(i)=input("rho=")
    g(i)=input("g=")
    disp(i, "exponent of x")
    c(i)=input("ci=")
    for m=1:n
        if m~=i then
            disp(m, "coefficient of x")
            gam(i,m)=input("gamma=")
            disp(m, "exponent of x")
            z(i,m)=input("cij=")
        else
            gam(i,m)=1
        end
    end
    sig(i)=input("sigma=")
end
for i=1:n
    disp(i, "initial value for x")
    y(i,1)=input("=")
    x(i,1)=y(i,1)
end
Algorithm 6 Euler-Maruyama with Euler method (2nd Part)
//Euler-Maruyama process
tend=input("end time of simulation t_end=")
hstep=input("step size=")
j=1
while (j<tend/hstep+1) then
    for i=1:n
        summ(i)=0
        summx(i)=0
        for k=1:n
            if k~=i then
                summ(i)=summ(i)+gam(i,k)*y(k,j)^z(i,k)
                summx(i)=summx(i)+gam(i,k)*x(k,j)^z(i,k)
            end
        end
        G=sqrt(y(i,j)) //You can change G
        rand('normal')
        y(i,j+1)=y(i,j)+((coeffbeta(i)*y(i,j)^c(i))/(K(i)+y(i,j)^c(i)+..
            summ(i))+g(i)-rho(i)*y(i,j))*hstep+sig(i)*(G)*..
            ((sqrt(hstep))*rand())
        x(i,j+1)=x(i,j)+((coeffbeta(i)*x(i,j)^c(i))/(K(i)+x(i,j)^c(i)+..
            summx(i))+g(i)-rho(i)*x(i,j))*hstep
        if y(i,j+1)<0 then
            y(i,j+1)=0
        end
        t(1)=0
        t(j+1)=t(j)+hstep
    end
    j=j+1
end
plot(t,y)
plot(t,x,".")
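For readers without Scilab, the same scheme can be sketched in Python (our own translation; the noise term G = sqrt(x) and the set-negative-values-to-zero rule follow the listing above, and γ_ij = 1 is assumed):

```python
import numpy as np

# Euler-Maruyama for d[X_i] = (beta*x_i^c/(K + sum_j x_j^c) + g_i - rho*x_i) dt
#                             + sigma*sqrt(x_i) dW_i,
# run alongside the noise-free Euler path, as in Algorithms 5-6.
def euler_maruyama(x0, beta, K, rho, g, c, sigma, t_end, h, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(int(t_end / h)):
        total = np.sum(x**c)
        drift = beta * x**c / (K + total) + g - rho * x
        noise = sigma * np.sqrt(np.maximum(x, 0.0)) * np.sqrt(h)
        x = x + drift * h + noise * rng.standard_normal(len(x))
        x = np.maximum(x, 0.0)        # negative values are set to 0
    return x

# With sigma = 0 this recovers the deterministic run of system (C.6),
# where [X_1]* = 0 and [X_2]* is approximately 2.98745:
out = euler_maruyama([0.5, 1.0], beta=3.0, K=1.0, rho=1.0,
                     g=np.array([0.0, 0.0]), c=5, sigma=0.0, t_end=60.0, h=0.01)
```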
List of References
[1] Understanding stem cells: An overview of the science and issues from the na-
tional academies. Published by The National Academies, 2006. Available at
www.nationalacademies.org/stemcells.
[2] Reflection paper on stem cell-based medicinal products, Tech. Report
EMA/CAT/571134/2009, Committee for Advanced Therapies, European
Medicines Agency, London, United Kingdom, 2011.
[3] B. D. Aguda and A. Friedman, Models of Cellular Regulation, Oxford Univer-
sity Press, NY, 2008.
[4] R. Albert and A.-L. Barabási, Statistical mechanics of complex networks,
Reviews of Modern Physics, 74 (2002), pp. 47–95.
[5] E. Allen, A practical introduction to SDEs and SPDEs in mathematical biol-
ogy. Lecture notes for Joint 2011 MBI-NIMBioS-CAMBAM Summer Graduate
Workshop; available at http://www.mbi.osu.edu/eduprograms/2011materials/
eallenMBI2011.pdf, July-August 2011.
[6] M. Andrecut, Monte-carlo simulation of a multi-dimensional switch-like model
of stem cell differentiation, in Applications of Monte Carlo Methods in Biology,
Medicine and Other Fields of Science, InTech, 2011.
[7] M. Andrecut et al., A general model for binary cell fate decision gene circuits
with degeneracy: Indeterminacy and switch behavior in the absence of cooperativity,
PLoS ONE, 6 (2011), p. e19358.
[8] D. Angeli, J. E. Ferrel, Jr., and E. D. Sontag, Detection of multistability,
bifurcations, and hysteresis in a large class of biological positive-feedback systems,
PNAS, 101 (2004), pp. 1822–1827.
[9] J. Ansel et al., Cell-to-cell stochastic variation in gene expression is a complex
genetic trait, PLoS Genetics, 4 (2008), p. e1000049.
[10] A. Arbeláez et al., A generic framework to model, simulate and verify ge-
netic regulatory networks, in Conferencia Latinoamericana de Informatica (CLEI),
Santiago de Chile, Chile, August 2006.
[11] M. N. Artyomov, A. Meissner, and A. K. Chakraborty, A model for
genetic and epigenetic regulatory networks identifies rare pathways for transcription
factor induced pluripotency, PLoS Computational Biology, 6 (2010), p. e1000785.
[12] G. Balázsi, A. van Oudenaarden, and J. J. Collins, Cellular decision
making and biological noise: From microbes to mammals, Cell, 144 (2011), pp. 910–
925.
[13] P. Baldi and G. W. Hatfield, DNA Microarrays and Gene Expression: From
Experiments to Data Analysis and Modeling, Cambridge University Press, Cam-
bridge, 2002.
[14] D. Bergmann, Genetic Network Modelling and Inference, PhD thesis, The Uni-
versity of Nottingham, 2010.
[15] W. J. Blake et al., Noise in eukaryotic gene expression, Nature, 422 (2003),
pp. 633–637.
[16] B. R. Blazar, P. A. Taylor, and D. A. Vallera, Adult bone marrow-derived
pluripotent hematopoietic stem cells are engraftable when transferred in utero into
moderately anemic fetal recipients, Blood, 85 (1995), pp. 833–841.
[17] I. W. M. Bleylevens, Algebraic Polynomial System Solving and Applications,
PhD thesis, Universiteit Maastricht, 2010.
[18] R. Blossey, L. Cardelli, and A. Phillips, A compositional approach to
the stochastic dynamics of gene networks, Lecture Notes in Computer Science,
3653/2005 (2005).
[19] K. R. Boheler, Stem cell pluripotency: A cellular trait that depends on tran-
scription factors, chromatin state and a checkpoint deficient cell cycle, Journal of
Cellular Physiology, 221 (2009), pp. 10–17.
[20] A. Bongso and E. H. Lee, Stem cells: Their definition, classification and
sources, in Stem Cells - From Bench to Bedside, World Scientific, 2005.
[21] L. Bortolussi, Constraint-based approaches to stochastic dynamics of biological
systems, PhD thesis, Università degli Studi di Udine, 2006.
[22] T. Burdon, A. Smith, and P. Savatier, Signalling, cell cycle and pluripotency
in embryonic stem cells, Trends in Cell Biology, 12 (2002), pp. 432–438.
[23] L. A. Buttitta and B. A. Edgar, Mechanisms controlling cell cycle exit upon
terminal differentiation, Current Opinion in Cell Biology, 19 (2007), pp. 697–704.
[24] N. A. Campbell and J. B. Reece, Biology, Pearson, CA, sixth ed., 2002.
[25] H. H. Chang et al., Multistable and multistep dynamics in neutrophil differen-
tiation, BMC Cell Biology, 7:11 (2006).
[26] ———, Transcriptome-wide noise controls lineage choice in mammalian progenitor
cells, Nature, 453 (2008), pp. 544–548.
[27] M. Chaves, A. Sengupta, and E. D. Sontag, Geometry and topology of pa-
rameter space: investigating measures of robustness in regulatory networks, Journal
of Mathematical Biology, 59 (2009), pp. 315–358.
[28] B. Chen and C. Li, On the interplay between entropy and robustness of gene
regulatory networks, Entropy, 12 (2010), pp. 1071–1101.
[29] L. Chen et al., eds., Modeling Biomolecular Networks in Cells, Springer-Verlag,
London, 2010.
[30] L. Chen, R.-S. Wang, and X.-S. Zhang, Biomolecular Networks: Methods and
Applications in Systems Biology, Wiley, NJ, 2009.
[31] T. Chen, H. L. He, and G. M. Church, Modeling gene expression with differ-
ential equations, in Pacific Symposium on Biocomputing, 1999.
[32] V. Chickarmane et al., Transcriptional dynamics of the embryonic stem cell
switch, PLoS Computational Biology, 2 (2006), p. e123.
[33] S. Choi, ed., Introduction to Systems Biology, Humana Press, NJ, 2007.
[34] B. Christen et al., Regeneration and reprogramming compared, BMC Biology,
8 (2010).
[35] O. Cinquin, Horloges, gradients, et réseaux moléculaires : modèles mathématiques
de la morphogenèse, PhD thesis, Université Joseph Fourier - Grenoble, 2005.
[36] O. Cinquin and J. Demongeot, Positive and negative feedback: striking a bal-
ance between necessary antagonists, Journal of Theoretical Biology, 216 (2002),
pp. 229–241.
[37] ———, Roles of positive and negative feedback in biological systems, Comptes Rendus
Biologies, 325 (2002), pp. 1085–1095.
[38] ———, High-dimensional switches and the modelling of cellular differentiation, Jour-
nal of Theoretical Biology, 233 (2005), pp. 391–411.
[39] P. Collas and A. Hakelien, Teaching cells new tricks, Trends in Biotechnology,
21 (2003), pp. 354–361.
[40] E. Conrad, Oscill8. Software; version 2.0.11.24095.
[41] F. Crick, Central dogma of molecular biology, Nature, 227 (1970), pp. 561–563.
[42] A. Crombach and P. Hogeweg, Evolution of evolvability in gene regulatory
networks, PLoS Computational Biology, 4 (2008), p. e1000112.
[43] F. d'Alché-Buc et al., A dynamic model of gene regulatory networks based on
inertia principle, in Studies in Fuzziness and Soft Computing, Springer, 2005.
[44] L. David et al., Looking into the black box: Insights into the mechanisms of
somatic cell reprogramming, Genes, 2 (2011), pp. 81–106.
[45] H. de Jong, Modeling and simulation of genetic regulatory systems: A literature
review, Journal of Computational Biology, 9 (2002), pp. 67–103.
[46] H. de Jong et al., Qualitative simulation of genetic regulatory networks: Method
and application, in Proceedings of the Seventeenth International Joint Conference
on Artificial Intelligence, IJCAI-01, San Mateo, CA, 2001, pp. 67–73.
[47] H. de Jong and J. Geiselmann, Modeling and simulation of genetic regula-
tory networks by ordinary differential equations, in Genomic Signal Processing and
Statistics, Hindawi, 2005.
[48] W. Deng, Stem cells and drug discovery: the time has never been better, Trends
in Bio/Pharmaceutical Industry, 6 (2010), pp. 12–20.
[49] A. Diaz et al., Algebraic algorithms, in Handbook on Algorithms and Theory of
Computation, CRC Press, 1998.
[50] P. J. Donovan and J. Gearhart, The end of the beginning for pluripotent stem
cells, Nature, 414 (2001), pp. 92–97.
[51] D. Egli, G. Birkhoff, and K. Eggan, Mediators of reprogramming: tran-
scription factors and transitions through mitosis, Nature Reviews: Molecular Cell
Biology, 9 (2008), pp. 505–516.
[52] H. El Samad et al., Stochastic modelling of gene regulatory networks, Interna-
tional Journal of Robust and Nonlinear Control, 15 (2005), pp. 691–711.
[53] H. El Samad and M. Khammash, Stochastic stability and its application to the
analysis of gene regulatory networks, in 43rd IEEE Conference on Decision and
Control, vol. 3, 2004, pp. 3001–3006.
[54] ———, Regulated degradation is a mechanism for suppressing stochastic fluctuations
in gene regulatory networks, Biophysical Journal, 90 (2006), pp. 3749–3761.
[55] R. Erban et al., Gene regulatory networks: A coarse-grained, equation-free ap-
proach to multiscale computation, The Journal of Chemical Physics, 124 (2006),
p. 084106.
[56] J. Farlow et al., Differential Equations and Linear Algebra, Pearson, NJ, sec-
ond ed., 2007.
[57] W. L. Farrar, ed., Cancer Stem Cells, Cambridge University Press, Cambridge,
2010.
[58] B. Feng et al., Reprogramming of fibroblasts into induced pluripotent stem cells
with orphan nuclear receptor esrrb, Nature Cell Biology, 11 (2008), pp. 197–203.
[59] S. Filip et al., Stem cells and the phenomena of plasticity and diversity: a lim-
iting property of carcinogenesis, Stem Cells and Development, 17 (2008), pp. 1031–
1038.
[60] T. Fournier, Stochastic Models of a Self Regulated Gene Network, PhD thesis,
Université de Fribourg, 2008.
[61] P. François and V. Hakim, Design of genetic networks with specified functions
by evolution in silico, PNAS, 101 (2004), pp. 580–585.
[62] P. Fu and S. Panke, eds., Systems Biology and Synthetic Biology, Wiley, NJ,
2009.
[63] A. Funahashi et al., Celldesigner: A modeling tool for biochemical networks, in
Proceedings of the 2006 Winter Simulation Conference, 2006, pp. 1707–1712.
[64] C. Furusawa and K. Kaneko, Theory of robustness of irreversible differentia-
tion in a stem cell system: chaos hypothesis, Journal of Theoretical Biology, 209
(2001), pp. 395–416.
[65] R. Ganguly and I. K. Puri, Mathematical model for the cancer stem cell hy-
pothesis, Cell Proliferation, 39 (2006), pp. 3–14.
[66] T. S. Gardner and J. J. Faith, Reverse-engineering transcription control net-
works, Physics of Life Reviews, 2 (2005), pp. 65–88.
[67] A. Garg, Implicit Methods for Modeling Gene Regulatory Networks, PhD thesis,
École Polytechnique Fédérale de Lausanne, 2009.
[68] N. Geard and J. Wiles, A gene network model for developing cell lineages,
Artificial Life, 11 (2005), pp. 249–267.
[69] J. Gebert, N. Radde, and G.-W. Weber, Modeling gene regulatory networks
with piecewise linear differential equations, European Journal of Operational Re-
search, 181 (2007), pp. 1148–1165.
[70] I. Glauche, Theoretical studies on the lineage specification of hematopoietic stem
cells, PhD thesis, University of Leipzig, Germany, 2010.
[71] I. Glauche, M. Herberg, and I. Roeder, Nanog variability and pluripotency
regulation of embryonic stem cells — insights from a mathematical model analysis,
PLoS ONE, 5 (2010), p. e11238.
[72] D. Gonze, Stochastic simulations: Application to molecular networks. Lecture
notes for Spring School on Dynamical Modelling of Biological Regulatory Networks,
Les Houches, France; available at
http://homepages.ulb.ac.be/~dgonze/HOUCHES/noise.pdf, May 2007.
[73] S. Goutelle et al., The hill equation: a review of its capabilities in pharmaco-
logical modelling, Fundamental and Clinical Pharmacology, 22 (2008), pp. 633–648.
[74] T. Graf and T. Enver, Forcing cells to change lineages, Nature, 462 (2009),
pp. 587–594.
[75] R. Grima and S. Schnell, Modelling reaction kinetics inside cells, Essays in
Biochemistry, 45 (2008), pp. 41–56.
[76] R. Guantes and J. F. Poyatos, Multistable decision switches for flexible control
of epigenetic differentiation, PLoS Computational Biology, 4 (2008), p. e1000235.
[77] R. I. Hamed, S. I. Ahson, and R. Parveen, A new approach for modelling gene
regulatory networks using fuzzy petri nets, Journal of Integrative Bioinformatics, 7
(2010), p. 113.
[78] J. H. Hanna, K. Saha, and R. Jaenisch, Pluripotency and cellular reprogram-
ming: Facts, hypotheses, unresolved issues, Cell, 143 (2010), pp. 508–525.
[79] J. Hasty et al., Designer gene networks: Towards fundamental cellular control,
Chaos, 11 (2001), pp. 207–220.
[80] K. Hochedlinger and K. Plath, Epigenetic reprogramming and induced
pluripotency, Development, 136 (2009), pp. 509–523.
[81] M. Hoffmann et al., Noise-driven stem cell and progenitor population dynamics,
PLoS ONE, 3 (2008), p. e2922.
[82] S. Huang, Multistability and multicellularity: Cell fates as high-dimensional at-
tractors of gene regulatory networks, in Computational Systems Biology, 2006,
pp. 293–326.
[83] S. Huang, Non-genetic heterogeneity of cells in development: more than just noise,
Development, 136 (2009), pp. 3853–3862.
[84] S. Huang, Reprogramming cell fates: reconciling rarity with robustness, BioEssays,
31 (2009), pp. 546–560.
[85] S. Huang, Cell lineage determination in state space: A systems view brings flexibility
to dogmatic canonical rules, PLoS Biology, 8 (2010), p. e1000380.
[86] S. Huang et al., Bifurcation dynamics in lineage-commitment in bipotent pro-
genitor cells, Developmental Biology, 305 (2007), pp. 695–713.
[87] L. Ironi et al., Dynamics of actively regulated gene networks, Physica D, 240
(2011), pp. 779–794.
[88] L. Ironi and L. Panzeri, A computational framework for qualitative simulation
of nonlinear dynamical models of gene-regulatory networks, BMC Bioinformatics,
10(Suppl 12):S14 (2009).
[89] K. N. Ivey and D. Srivastava, MicroRNAs as regulators of differentiation and
cell fate decisions, Cell Stem Cell, 7 (2010), pp. 36–41.
[90] B. H. Junker and F. Schreiber, eds., Analysis of Biological Networks, Wiley-
Interscience, NJ, 2008.
[91] M. Kærn, W. J. Blake, and J. J. Collins, The engineering of gene regulatory
networks, Annual Review of Biomedical Engineering, 5 (2003), pp. 179–206.
[92] G. Karlebach and R. Shamir, Modelling and analysis of gene regulatory networks,
Nature Reviews Molecular Cell Biology, 9 (2008), pp. 770–780.
[93] Y. N. Kaznessis, Multi-scale models for gene network engineering, Chemical En-
gineering Science, 61 (2006), pp. 940–953.
[94] K. H. Kim and H. M. Sauro, Adjusting phenotypes by noise control, PLoS
Computational Biology, 8 (2012), p. e1002344.
[95] F. C. Klebaner, Introduction to Stochastic Calculus with Applications, Imperial
College Press, London, second ed., 2005.
[96] E. Klipp et al., Systems Biology in Practice, Wiley-VCH, Weinheim, 2005.
[97] G. Kögler et al., A new human somatic stem cell from placental cord blood
with intrinsic pluripotent differentiation potential, The Journal of Experimental
Medicine, 200 (2004), pp. 123–135.
[98] I. S. Kotsireas, Homotopies and polynomial system solving I: Basic principles,
ACM SIGSAM Bulletin, 35 (2001), pp. 19–32.
[99] A. Kriete and R. Eils, eds., Computational Systems Biology, Elsevier, MA,
2006.
[100] D. Kulasiri et al., A review of systems biology perspective on genetic regulatory
networks with examples, Current Bioinformatics, 3 (2008), pp. 197–225.
[101] A. Kurakin, Self-organization vs watchmaker: stochastic gene expression and cell
differentiation, Development Genes and Evolution, 215 (2005), pp. 46–52.
[102] Y. A. Kuznetsov, Elements of Applied Bifurcation Theory, Springer, NY, sec-
ond ed., 1998.
[103] U. Lakshmipathy and C. Verfaillie, Stem cell plasticity, Blood Reviews, 19
(2005), pp. 29–38.
[104] M. A. Leite and Y. Wang, Multistability, oscillations and bifurcations in feed-
back loops, Mathematical Biosciences and Engineering, 7 (2010), pp. 83–97.
[105] I. Lestas et al., Noise in gene regulatory networks, IEEE Special Issue on Sys-
tems Biology, (2008), pp. 189–200.
[106] M. Lewitzky and S. Yamanaka, Reprogramming somatic cells towards pluripo-
tency by defined factors, Current Opinion in Biotechnology, 18 (2007), pp. 467–473.
[107] L. Li et al., Signaling pathways involved in migration of mesenchymal stem cells,
Trends in Bio/Pharmaceutical Industry, 6 (2010), pp. 29–33.
[108] E. T. Liu and D. A. Lauffenburger, eds., Systems Biomedicine: Concepts
and Perspectives, Academic Press, CA, 2010.
[109] L. Liu and L. Wang, Induced pluripotent stem cells (iPS): Reverse the course of
life, Trends in Bio/Pharmaceutical Industry, 6 (2010), pp. 26–28.
[110] N. A. Lobo, Y. Shimono, D. Qian, and M. F. Clarke, The biology of cancer
stem cells, Annual Review of Cell and Developmental Biology, 23 (2007), pp. 675–
699.
[111] R. Losick and C. Desplan, Stochasticity and cell fate, Science, 320 (2008),
pp. 65–68.
[112] B. D. MacArthur, A. Ma’ayan, and I. R. Lemischka, Systems biology of
stem cell fate and cellular reprogramming, Nature Reviews: Molecular Cell Biology,
10 (2009), pp. 672–681.
[113] B. D. MacArthur, C. P. Please, and R. O. C. Oreffo, Stochasticity and
the molecular mechanisms of induced pluripotency, PLoS ONE, 3 (2008), p. e3086.
[114] R. I. Macey and G. F. Oster, Berkeley Madonna. Software; version 8.3.18.
[115] J. Macía, S. Widder, and R. Solé, Why are cellular switches Boolean? General
conditions for multistable genetic circuits, Journal of Theoretical Biology, 261
(2009), pp. 126–135.
[116] MacKichan Software, Inc., Scientific WorkPlace. Software; version 5.50.
[117] T. Magnus et al., Stem cell myths, Philosophical Transactions of the Royal
Society B, 363 (2008), pp. 9–22.
[118] S. Mandal and R. Sarpeshkar, Circuit models of stochastic genetic networks,
in IEEE Symposium on Biological Circuits and Systems (BioCAS), Beijing, China,
November 2009, pp. 109–112.
[119] J. M. Miró-Bueno and A. Rodríguez-Patón, A simple negative interaction in
the positive transcriptional feedback of a single gene is sufficient to produce reliable
oscillations, PLoS ONE, 6 (2011), p. e27414.
[120] B. Mourrain and M. Elkadi, Symbolic-numeric methods for solving polyno-
mial equations and applications, in Solving Polynomial Equations: Foundations,
Algorithms, and Applications, Springer, 2005.
[121] C. Muñoz et al., Fuzzy logic in genetic regulatory network models, International
Journal of Computers, Communications and Control, 4 (2009), pp. 363–373.
[122] J. D. Murray, Mathematical Biology I. An Introduction, Springer-Verlag, NY,
third ed., 2002.
[123] D. L. Myster and R. J. Duronio, Cell cycle: To differentiate or not to differ-
entiate?, Current Biology, 10 (2000), pp. R302–R304.
[124] H. Niwa, How is pluripotency determined and maintained?, Development, 134
(2007), pp. 635–646.
[125] R. Ogawa, The importance of adipose-derived stem cells and vascularized tissue
regeneration in the field of tissue transplantation, Current Stem Cell Research and
Therapy, 1 (2006), pp. 13–20.
[126] G. Orphanides and D. Reinberg, A unified theory of gene expression, Cell,
108 (2002), pp. 439–451.
[127] E. M. Ozbudak, Noise and Multistability in Gene Regulatory Networks, PhD
thesis, Massachusetts Institute of Technology, 2004.
[128] S. Palani and C. A. Sarkar, Integrating extrinsic and intrinsic cues into a
minimal model of lineage commitment for hematopoietic progenitors, PLoS Com-
putational Biology, 5 (2009), p. e1000518.
[129] G. Pan and J. A. Thomson, Nanog and transcriptional networks in embryonic
stem cell pluripotency, Cell Research, 17 (2007), pp. 42–49.
[130] P. R. Patnaik, External, extrinsic and intrinsic noise in cellular systems: analo-
gies and implications for protein synthesis, Biotechnology and Molecular Biology
Review, 1 (2006), pp. 121–127.
[131] J. Paulsson, Summing up the noise in gene networks, Nature, 427 (2004), pp. 415–
418.
[132] J. Paulsson, Models of stochastic gene expression, Physics of Life Reviews, 2
(2005), pp. 157–175.
[133] J. Peltier and D. V. Schaffer, Systems biology approaches to understanding
stem cell fate choice, IET Systems Biology, 4 (2010), pp. 1–11.
[134] L. Perko, Differential Equations and Dynamical Systems, Springer-Verlag, NY,
third ed., 2001.
[135] A. Polynikis, S. J. Hogan, and M. di Bernardo, Comparing different ODE
modelling approaches for gene regulatory networks, Journal of Theoretical Biology,
261 (2009), pp. 511–530.
[136] C. W. Pouton and J. M. Haynes, Embryonic stem cells as a source of models
for drug discovery, Nature Reviews: Drug Discovery, 6 (2007), pp. 605–616.
[137] A. Ribeiro, R. Zhu, and S. A. Kauffman, A general modeling strategy for gene
regulatory networks with stochastic dynamics, Journal of Computational Biology,
13 (2006), pp. 1630–1639.
[138] J. P. Richard et al., Direct in vivo cellular reprogramming involves transition
through discrete, non-pluripotent steps, Development, 138 (2011), pp. 1483–1492.
[139] N. Rodić, M. S. Rutenberg, and N. Terada, Cell fusion and reprogramming:
resolving our transdifferences, Trends in Molecular Medicine, 10 (2004), pp. 93–96.
[140] S. Rosenfeld, Characteristics of transcriptional activity in nonlinear dynamics
of genetic regulatory networks, Gene Regulation and Systems Biology, 3 (2009),
pp. 159–179.
[141] L. L. Rubin, Stem cells and drug discovery: The beginning of a new era?, Cell,
132 (2008), pp. 549–552.
[142] L. L. Rubin and K. M. Haston, Stem cell biology and drug discovery, BMC
Biology, 9:42 (2011).
[143] M. Sahimi, A. R. Mehrabi, and F. Naeim, Discrete stochastic model for
self-renewal and differentiation of progenitor cells, Physical Review E, 55 (1997),
pp. R2111–R2114.
[144] M. Santillán, On the use of the Hill functions in mathematical models of gene
regulatory networks, Mathematical Modelling of Natural Phenomena, 3 (2008),
pp. 85–97.
[145] S. A. Sarra, Direction field, n=2. http://www.scottsarra.org/applets/dirField2/dirField2.html.
Department of Mathematics, Marshall University, WV.
[146] S. Sastry, Nonlinear Systems: Analysis, Stability and Control, Springer-Verlag,
NY, 1999.
[147] T. Sauer, Numerical Analysis, Pearson, MA, 2006.
[148] W. Scheper and S. Copray, The molecular mechanism of induced pluripotency:
A two-stage switch, Stem Cell Reviews and Reports, 5 (2009), pp. 204–223.
[149] T. Schlitt and A. Brazma, Current approaches to gene regulatory network
modelling, BMC Bioinformatics, 8(Suppl 6):S9 (2007).
[150] Scilab Consortium, Scilab. Software; version 5.3.
[151] V. Selvaraj et al., Switching cell fate: the remarkable rise of induced pluripotent
stem cells and lineage reprogramming technologies, Trends in Biotechnology, 28
(2010), pp. 214–223.
[152] T. Shibata, Cell is noisy, in Networks of Interacting Machines: Production Or-
ganization in Complex Industrial Systems and Biological Cells, World Scientific,
2005.
[153] Y. Shin and L. Bleris, Linear control theory for gene network modeling, PLoS
ONE, 5 (2010), p. e12785.
[154] D. Siegal-Gaskins, E. Grotewold, and G. D. Smith, The capacity for
multistability in small gene regulatory networks, BMC Systems Biology, 3:96 (2009).
[155] P. Smolen, D. A. Baxter, and J. H. Bryne, Modeling transcriptional con-
trol in gene networks — methods, recent results, and future directions, Bulletin of
Mathematical Biology, 62 (2000), pp. 247–292.
[156] F. Sottile, Discrete and applicable algebraic geometry. Available at
http://www.math.tamu.edu/~sottile/conferences/Summer_IMA07/Sottile.pdf, October 2009.
[157] P. F. Stiller, An introduction to the theory of resultants. Available at
http://www.isc.tamu.edu/publications-reports/tr/9602.pdf.
[158] G. J. Sullivan et al., Induced pluripotent stem cells: epigenetic memories and
practical implications, Molecular Human Reproduction, 16 (2010), pp. 880–885.
[159] N. Suzuki, C. Furusawa, and K. Kaneko, Oscillatory protein expression dy-
namics endows stem cells with robust differentiation potential, PLoS ONE, 6 (2011),
p. e27232.
[160] P. S. Swain, M. B. Elowitz, and E. D. Siggia, Intrinsic and extrinsic con-
tributions to stochasticity in gene expression, PNAS, 99 (2002), pp. 12795–12800.
[161] K. Takahashi et al., Induction of pluripotent stem cells from adult human fibroblasts
by defined factors, Cell, 131 (2007), pp. 861–872.
[162] K. Takahashi and S. Yamanaka, Induction of pluripotent stem cells from
mouse embryonic and adult fibroblast cultures by defined factors, Cell, 126 (2006),
pp. 663–676.
[163] A. Takesue, A. Mochizuki, and Y. Iwasa, Cell-differentiation rules that gen-
erate regular mosaic patterns: Modelling motivated by cone mosaic formation in
fish retina, Journal of Theoretical Biology, 194 (1998), pp. 575–586.
[164] N. D. Theise and R. Harris, Postmodern biology: (adult) (stem) cells are
plastic, stochastic, complex, and uncertain, Handbook of Experimental Pharmacology,
174 (2006), pp. 389–408.
[165] M. Villani, A. Barbieri, and R. Serra, A dynamical model of genetic net-
works for cell differentiation, PLoS ONE, 6 (2011), p. e17703.
[166] J. Vohradský, Neural network model of gene expression, The FASEB Journal,
15 (2001), pp. 846–854.
[167] E. O. Voit, Computational analysis of biochemical systems: a practical guide
for biochemists and molecular biologists, Cambridge University Press, Cambridge,
2000.
[168] C. H. Waddington, The Strategy of the Genes, George Allen and Unwin, London,
1957.
[169] A. J. Wagers, J. L. Christensen, and I. L. Weissman, Cell fate determi-
nation from stem cells, Gene Therapy, 9 (2002), pp. 606–612.
[170] A. J. Wagers and I. L. Weissman, Plasticity of adult stem cells, Cell, 116
(2004), pp. 639–648.
[171] F. M. Watt and R. R. Driskell, The therapeutic potential of stem cells, Philo-
sophical Transactions of the Royal Society B, 365 (2010), pp. 155–163.
[172] Y. Welte et al., Cancer stem cells in solid tumors: elusive or illusive?, Cell
Communication and Signaling, 8 (2010).
[173] L. F. A. Wessels, E. P. van Someren, and M. J. T. Reinders, A com-
parison of genetic network models, in Pacific Symposium on Biocomputing, vol. 6,
2001, pp. 508–519.
[174] Z. Xie, Modelling Genetic Regulatory Networks: A New Model for Circadian
Rhythms in Drosophila and Investigation of Genetic Noise in a Viral Infection
Process, PhD thesis, Lincoln University, New Zealand, 2007.
[175] R. Xu, G. K. Venayagamoorthy, and D. C. Wunsch II, Modeling of gene
regulatory networks with hybrid differential evolution and particle swarm optimiza-
tion, Neural Networks, 20 (2007), pp. 917–927.
[176] S. Yamanaka, Elite and stochastic models for induced pluripotent stem cell gen-
eration, Nature, 460 (2009), pp. 49–52.
[177] S. Yamanaka and H. M. Blau, Nuclear reprogramming to a pluripotent state
by three approaches, Nature, 465 (2010), pp. 704–712.
[178] C. K. Yap, Fundamental Problems of Algorithmic Algebra, Oxford University
Press, NY/Oxford, 2000.
[179] B. Yener et al., Multiway modeling and analysis in stem cell systems biology,
BMC Systems Biology, 2:63 (2008).
[180] C. Zhao, R. F. Xu, and R. Jiang, Tissue engineering and stem cell therapy,
Trends in Bio/Pharmaceutical Industry, 6 (2010), pp. 21–25.
[181] Y. Zhao, D. Glesne, and E. Huberman, A human peripheral blood monocyte-
derived subset acts as pluripotent stem cells, PNAS, 100 (2003), pp. 2426–2431.
[182] V. P. Zhdanov, Stochastic model of the formation of cancer metastases via cancer
stem cells, European Biophysics Journal, 37 (2008), pp. 1329–1334.