
Differentiable Model of Morphogenesis

Karen Natalia Pulido Rodríguez


Kpulidor1@ucentral.edu.co

Abstract
Most multicellular organisms begin their life as a single egg cell - a single cell whose
progeny reliably self-assemble into highly complex anatomies with many organs and
tissues in precisely the same arrangement each time. The ability to build their own bodies
is probably the most fundamental skill every living creature possesses. This process is
extremely robust to perturbations. Even when the organism is fully developed, some
species still have the capability to repair damage - a process known as regeneration.
Some creatures, such as salamanders, can fully regenerate vital organs, limbs, eyes, or
even parts of the brain! Morphogenesis is a surprisingly adaptive process. Thus, one
major linchpin of future work in biomedicine is the discovery of the process by which
large-scale anatomy is specified within cell collectives, and how we can rewrite this
information to gain rational control of growth and form. It is also becoming clear that the
software of life possesses numerous modules or subroutines, such as «build an eye here»,
which can be activated with simple signal triggers.
Keywords: Cell, Cells, Organs, Life, Biology, Complex

Introduction

We will focus on Cellular Automata (CA) models, following Growing Neural Cellular Automata [1],
as a roadmap for the effort of identifying cell-level rules which give rise to complex,
regenerative behavior of the collective. CAs typically consist of a grid of cells being
iteratively updated, with the same set of rules applied to each cell at every step. The new
state of a cell depends only on the states of the few cells in its immediate neighborhood.
Let us try to develop a CA update rule that, starting from a single cell, will produce a
predefined multicellular pattern on a 2D grid; a minimal sketch of this kind of local,
uniform update appears below.
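As a minimal illustration of this kind of local, uniform update, here is a sketch in Python (using numpy, with Conway's Game of Life rules as a stand-in for the learned rule developed later; the rule itself is illustrative, not the model's):

import numpy as np

def step(grid):
    # Count the 8 Moore neighbors of every cell by shifting the whole grid.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # The same rule is applied to each cell: a cell is alive next step if it
    # has exactly 3 live neighbors, or if it is alive and has exactly 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

grid = np.zeros((64, 64), np.uint8)
grid[32, 31:34] = 1  # a small seed pattern in the center
for _ in range(10):
    grid = step(grid)  # iterate the update rule

Every cell sees only its immediate neighborhood, yet complex global patterns can emerge from repeating this step.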

To design the CA, we must specify the possible cell states, and their update function. Typical CA
models represent cell states with a set of discrete values, although variants using vectors of
continuous values exist. The use of continuous values has the virtue of allowing the update rule to
be a differentiable function of the states of the cell's neighborhood, as sketched below. The rules
that guide individual cell behavior based on the local environment are analogous to the low-level
hardware specification encoded by the genome of an organism.
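Here is a hedged sketch of the continuous variant, assuming TensorFlow (imported later in the Programs section); the channel count and layer sizes are assumptions for illustration, not prescribed by the model:

import tensorflow as tf

CHANNEL_N = 16  # assumed: 4 visible channels (RGBA) plus 12 hidden channels

# The update rule as a small convolutional network: each cell's new state is a
# differentiable function of the continuous states in its 3x3 neighborhood.
update_rule = tf.keras.Sequential([
    tf.keras.layers.Conv2D(128, 3, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(CHANNEL_N, 1, kernel_initializer='zeros'),
])

def ca_step(state):
    # Residual update: the network predicts an increment to the current state.
    return state + update_rule(state)

Because every operation here is differentiable, the parameters of update_rule can be trained by gradient descent, which is exactly what the Results section exploits.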
Methods
The techniques and topics necessary for a correct understanding of the model (it is
assumed that the reader is familiar with algebraic operations and functions) are:

• Definition of neighbor: pixels that satisfy the adjacency definition; two pixels are
neighbors if they share an edge or a corner.

• Definition of kernel or neighborhood: a set of neighboring pixels in the shape of a square or rectangle.



• Vector calculus: knowledge of first-, second-, and third-order derivatives in more
than one variable, and of the gradient.
• Partial differential equations: for the understanding of some filters based on these.
• RGB model: for understanding the coordinates in the matrix representation of color
images.

• Cellular Automata: a mathematical and computational model of a dynamic system that
evolves in discrete steps.

• Convolution matrix: the treatment of one matrix by another, called the "kernel". The
convolution matrix filter uses a first matrix, the image that will be treated, and is given
by the formula

(I * K)(x, y) = Σ_i Σ_j I(x − i, y − j) · K(i, j),

where I is the image, K is the kernel, and the sum runs over the kernel entries; a worked
sketch appears after this list.

• Residual neural networks: inspired by the pyramidal cells of the cerebral cortex,
residual neural networks use shortcuts, or "skip connections", to jump over one or more
layers.

• Machine Learning: a discipline in the field of Artificial Intelligence that, through
algorithms, provides computers with the ability to identify patterns in massive data and
make predictions (predictive analysis).

• Neural networks: the figure originally shown here compared two models, a simple neural
network and a deep neural network. The output branches of some neurons are the input
branches of others, and so on, but there are a couple of differences between the layers.
Input neurons (red in the figure) have no input branches; they carry information given to
the network "from the outside", the initial stimulus. Output neurons (blue) have outgoing
branches that are not connected to other neurons; they carry the output information of the
network, the final stimulus. Depending on the number of hidden layers (yellow), we speak of
a simple or a deep neural network.

• Activation (trigger) functions: first, the linear combination of the weights and inputs
is calculated. Activation functions are simply the way this information is transmitted over
the output connections. We may want to transmit the information unmodified, in which case
we use the identity function; in other cases we may not want any information transmitted,
that is, the neuron outputs zero. In general, activation functions give the model its
non-linearity and make the network capable of solving more complex problems: if all
activation functions were linear, the resulting network would be equivalent to a network
with no hidden layers. The four most used families of activation functions are illustrated
in the sketch after this list. Neural networks themselves can be classified in various
ways; among the most popular classifications are by number of layers, by type of
connections, and by degree of connectivity. There are also different ways of building
neural networks, based on different algorithms and developments. The main types are:
multilayer perceptrons, convolutional neural networks, recurrent neural networks, and
radial basis function networks.
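To make the convolution-matrix and activation-function items above concrete, here is a small numpy sketch (the Sobel kernel and the random image are illustrative choices, not part of the model):

import numpy as np

def conv2d(image, kernel):
    # Discrete 2D convolution (cross-correlation form, 'valid' borders):
    # out(y, x) = sum_i sum_j image(y + i, x + j) * kernel(i, j)
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Four common families of activation functions.
identity = lambda z: z
relu = lambda z: np.maximum(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
tanh = np.tanh

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # edge-detecting kernel
image = np.random.rand(8, 8)  # stand-in for one channel of an RGB image
features = relu(conv2d(image, sobel_x))

Applying a non-linear activation such as relu after the convolution is what gives a network built from such layers more expressive power than a single linear map.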



Programs
This model will be implemented in the Python programming language, developed in Google's
Colab environment. Several packages are used to develop the model, imported as follows:

import os
os.environ['FFMPEG_BINARY'] = 'ffmpeg'
import io
import base64
import zipfile
import json
import glob
import requests
import numpy as np
import matplotlib.pylab as pl
import PIL.Image, PIL.ImageDraw
import tensorflow as tf
import tqdm
from IPython.display import Image, HTML, clear_output
import moviepy.editor as mvp
from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter
clear_output()

These packages handle the matrix representation of the images and their representation in
RGB, and display the initial image and the resulting image; PIL's Image package is used to
represent images in color using RGB, and packages like TensorFlow are used for the neural
networks.

Results

In our first experiment, we simply train the CA to achieve a target image after a random
number of updates. This approach is quite naive and will run into issues. We initialize
the grid with zeros, except for a single seed cell in the center, which has all channels
except RGB set to one. Once the grid is initialized, we iteratively apply the update
rule. We sample a random number of CA steps from the [64, 96] range for each training step,
as we want the pattern to be stable across a number of iterations. At the last step we
apply a pixel-wise L2 loss between the RGBA channels in the grid and the target pattern. This
loss can be differentiably optimized with respect to the update rule parameters by
backpropagation through time, the standard method of training recurrent neural
networks; a sketch of this training step follows.
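Here is a hedged sketch of this training step, reusing the ca_step and update_rule pieces sketched in the Introduction; the helper names are hypothetical, and the [64, 96] step range is the one used in the Growing Neural Cellular Automata article [1]:

import numpy as np
import tensorflow as tf

def make_seed(h, w, channel_n=16):
    # Zeros everywhere except a single seed cell in the center, which has all
    # channels except RGB set to one (the seed is "alive" but initially black).
    seed = np.zeros([1, h, w, channel_n], np.float32)
    seed[0, h // 2, w // 2, 3:] = 1.0
    return tf.constant(seed)

optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(target):  # target: float array of shape (H, W, 4), RGBA in [0, 1]
    state = make_seed(target.shape[0], target.shape[1])
    n_steps = np.random.randint(64, 96)  # random number of CA steps
    with tf.GradientTape() as tape:
        for _ in range(n_steps):
            state = ca_step(state)  # unrolled, so gradients flow through time
        # Pixel-wise L2 loss between the RGBA channels and the target pattern.
        loss = tf.reduce_mean(tf.square(state[0, ..., :4] - target))
    # Backpropagation through time over the unrolled steps.
    grads = tape.gradient(loss, update_rule.trainable_variables)
    optimizer.apply_gradients(zip(grads, update_rule.trainable_variables))
    return loss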
Once the optimization converges, we can run simulations to see how our learned CAs
grow patterns starting from the seed cell.

We can see that different training runs can lead to models with drastically different long-term
behaviors. Some tend to die out, some don't seem to know how to stop growing, but some happen
to be almost stable! How can we steer the training towards producing persistent patterns every
time?

We can consider every cell to be a dynamical system, with each cell sharing the same dynamics, and
all cells being locally coupled amongst themselves. Initially, we wanted the system to evolve from
the seed pattern to the target pattern - a trajectory which we achieved in Experiment 1. Now, we
want to avoid the instability we observed - which, in our dynamical system metaphor, amounts to
making the target pattern an attractor. One strategy to achieve this is to let the CA iterate for
a much longer time, periodically applying the loss against the target and training the system by
backpropagation through these longer time intervals.

Intuitively we claim that with longer time intervals and several applications of loss, the model is
more likely to create an attractor for the target shape, as we iteratively mold the dynamics to return
to the target pattern from wherever the system has decided to venture.

Early on in the training process, the random dynamics in the system allow the model to end up in
various incomplete and incorrect states. As these states are sampled from a pool of previous
final states, we refine the dynamics to be able to recover from such states. Essentially, we use
the previous final states as new starting points to force our CA to learn how to persist, or even
improve, an already formed pattern, in addition to being able to grow it from a seed. This makes
it possible to apply a periodic loss over significantly longer time intervals than would otherwise
be possible, encouraging the target shape to become an attractor of our coupled system. (The
figure that originally appeared here showed typical training progress of a CA rule.) A minimal
sketch of the pool strategy follows.
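This is a minimal sketch of the pool strategy, under the same assumptions as the earlier training sketch (names and sizes are illustrative, not the exact implementation):

POOL_SIZE, H, W = 1024, 64, 64
pool = np.repeat(make_seed(H, W).numpy(), POOL_SIZE, axis=0)  # start as all seeds

def pool_train_step(target, batch_size=8):
    # Sample previous final states from the pool as new starting points.
    idx = np.random.choice(POOL_SIZE, batch_size, replace=False)
    batch = pool[idx]
    batch[0] = make_seed(H, W).numpy()[0]  # re-inject one fresh seed per batch
    state = tf.constant(batch)
    with tf.GradientTape() as tape:
        for _ in range(np.random.randint(64, 96)):
            state = ca_step(state)
        loss = tf.reduce_mean(tf.square(state[..., :4] - target))
    grads = tape.gradient(loss, update_rule.trainable_variables)
    optimizer.apply_gradients(zip(grads, update_rule.trainable_variables))
    pool[idx] = state.numpy()  # the final states become future starting points
    return loss

Writing the final states back into the pool is what forces the CA to learn to persist and repair patterns, not just to grow them once.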

Scope
This work is limited in scope to avoiding the instability observed in the first experiment -
which, in our dynamical system metaphor, amounts to making the target pattern an attractor.



References
[1] A. Mordvintsev, E. Randazzo, E. Niklasson, M. Levin. Growing Neural Cellular Automata. Distill, 2020. https://distill.pub/2020/growing-ca/

[2] HORA 14 y 15 - Deep Learning y Autómatas Celulares - #100HorasDeML. https://www.youtube.com/watch?v=oTspvhgmL9I&ab_channel=NotCSV

[3] HORA 16-18 - Programando Autómata Celular Neural - #100HorasDeML. https://www.youtube.com/watch?v=sN2pbLveprc&ab_channel=NotCSV

[4] HORA 23-24 - ¡El Autómata Celular Neural YA CRECE! - #100HorasDeML. https://www.youtube.com/watch?v=4SVEWo1GnHY&ab_channel=NotCSV

[5] HORA 25-26 - Arreglando al Autómata Celular Neural - #100HorasDeML. https://www.youtube.com/watch?v=ea91EqkF6XI&ab_channel=NotCSV

[6] HORA 27-28 - El Aut. Celular Neural... ¡YA FUNCIONA! - #100HorasDeML. https://www.youtube.com/watch?v=CCwvzDFbT7U&ab_channel=NotCSV

