
Journal of Pharmaceutical Sciences 106 (2017) 176-182


Pharmaceutics, Drug Delivery and Pharmaceutical Technology

Application of Artificial Neural Networks in the Design and Optimization of a Nanoparticulate Fingolimod Delivery System Based on Biodegradable Poly(3-Hydroxybutyrate-Co-3-Hydroxyvalerate)
Shadab Shahsavari 1, Leila Rezaie Shirmard 2, 3, Mohsen Amini 4, Farid Abedin Dorkoosh 2, 5, *

1 Department of Chemical Engineering, Varamin-Pishva Branch, Islamic Azad University, Varamin, Iran
2 Department of Pharmaceutics, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran
3 Department of Pharmaceutics, Faculty of Pharmacy, Ardebil University of Medical Science, Ardebil, Iran
4 Department of Medicinal Chemistry, Drug Design and Development Research Center, Tehran University of Medical Sciences, Tehran, Iran
5 Medical Biomaterials Research Center (MBRC), Tehran University of Medical Sciences, Tehran, Iran

Article history:
Received 4 March 2016
Revised 1 July 2016
Accepted 26 July 2016
Available online 22 September 2016

Keywords: artificial neural network; drug delivery; Fingolimod; poly(3-hydroxybutyrate-co-3-hydroxyvalerate); response surface methodology; training algorithms

Abstract: The formulation of a nanoparticulate Fingolimod delivery system based on biodegradable poly(3-hydroxybutyrate-co-3-hydroxyvalerate) was optimized using artificial neural networks (ANNs). The concentrations of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) and PVA and the amount of Fingolimod were considered as the input values, and the particle size, polydispersity index, loading capacity, and entrapment efficiency as the output data in the experimental design study. An in vitro release study was carried out for the best formulation according to statistical analysis. ANNs were employed to generate the best model for determining the relationships between these variables. In order to specify the model with the best accuracy and proficiency for the in vitro release, a multilayer perceptron with different training algorithms was examined. Three training algorithms, Levenberg-Marquardt (LM), gradient descent, and Bayesian regularization, were employed for training the ANN models. It is demonstrated that the predictive ability of the training algorithms is in the order LM > gradient descent > Bayesian regularization. The optimum formulation was achieved by the LM training function with 15 hidden layers and 20 neurons; the transfer functions of the hidden layers and the output layer were tansig and purelin, respectively. The optimization process minimized the error between the predicted and observed values of the training algorithm (about 0.0341).
© 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

Introduction

Fingolimod, 2-amino-2-[2-(4-octylphenyl)ethyl]-1,3-propanediol, a novel immunosuppressive modulator, is characterized by a reversible reduction of circulating peripheral blood lymphocyte counts. In experimental allogenic organ transplantation studies, Fingolimod exhibited mild immunosuppression; however, it has been clinically effective only at concentrations 5-fold higher than the normally administered dose, and higher steady-state levels will need to be achieved. The currently available capsule formulation supports daily administration with dose-proportional pharmacokinetics to achieve active steady-state levels in multiple sclerosis patients at 0.5 mg daily. Furthermore, alternative formulations that enable decreasing the lymphocyte count with reduced impact on other, non-target tissues will be necessary.1

The polymeric nanoparticle (NP)-based drug delivery system is ideal for controlled-release drug delivery due to the increased solubility of hydrophobic drugs, enhanced drug half-life, improved drug efficiency, reduced drug toxicity, and the minimal side effects associated with multiple drug dosing.2

Poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) is the most commonly studied polymer of the polyhydroxyalkanoate family. It is a biodegradable, non-toxic polyester with a low production cost; PHBV has been intensively investigated as a biomaterial for tissue engineering and for NP-based drug delivery systems.3

Artificial neural network (ANN) is a trainable computational method, based on simulating the neurological behavior of the human brain, used to design the most appropriate model.4 This model works according to non-linear relationships between input factors and pharmaceutical outputs, iteratively training on data and optimizing the outcomes to minimize errors.5

* Correspondence to: Farid Abedin Dorkoosh (Telephone: +982188009440; Fax: +982188026734).
E-mail address: dorkoosh@tums.ac.ir (F. Abedin Dorkoosh).

http://dx.doi.org/10.1016/j.xphs.2016.07.026

ANNs have many uses in the field of pharmaceutics; they have been employed for modeling the responses in drug delivery systems and for evaluating the ANN model against multiple linear regression.6,7 ANNs are used as a parallel system consisting of interactions between a huge number of simple calculation elements, termed nodes or neurons, which carry out intricate information processing and learn from examples.8 The most important feature of this model is the novel structure of its data processing system: it is a combination of a large number of highly interconnected processing elements (neurons) functioning together to solve specific problems. An ANN is configured for a specific application, such as pattern identification or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections within the neuron networks.9

In this work, based on the suitable properties of PHBV, Fingolimod-loaded PHBV NPs are formulated to control the release of hydrophobic Fingolimod by encapsulating it within hydrophobic PHBV NPs. Moreover, a feed-forward, multi-layer perceptron (MLP) type of ANN is discussed for predicting the mechanism of release of Fingolimod from the NPs.

Materials and Methods

Materials

PHBV with 3 wt.% PHV and polyvinyl alcohol (PVA) (average molecular weight 30,000-70,000) were purchased from Sigma-Aldrich (St. Louis, MO). Fingolimod was synthesized at the Danish Pharmaceutical Development Company. Analytical grade LiChrosolv acetonitrile was purchased from Merck (Darmstadt, Germany), and the other chemicals used for the analytical methods were of analytical grade and purchased from Sigma Chemical Company (St. Louis, MO).

Preparation and Characterization of Nanoparticles

Fingolimod-loaded PHBV NPs were prepared by an emulsification and solvent evaporation technique, which can be useful for poorly water-soluble drugs. Different formulation variables, such as the concentrations of PHBV and PVA, the amount of Fingolimod, and the speed of the magnetic stirrer (300, 600, 1000 rpm), were varied, and the effects on particle size, polydispersity index (PdI), loading capacity, and loading efficiency were considered. Only one parameter was changed at a time in each set of experiments. Briefly, chloroform (1 mL) containing PHBV (14.1 mg), the oil phase, with Fingolimod (5 mg) was sonicated for 2 min, and the oil phase was added dropwise to 5 mL of PVA solution (the aqueous phase). Probe ultrasonication was continued for 10 min, and the mixture was then gently stirred at room temperature for 4 h to allow complete evaporation of the organic solvent. The obtained suspension was centrifuged at 14,000 rpm for 30 min; the settled NPs were lyophilized for further studies, and the supernatant was used for determination of the loading capacity and loading efficiency and for the study of drug released from the NPs.

Experimental Design

Beyond the information obtained in preliminary studies, which defined the most significant factors, the optimum levels of PHBV concentration (A), PVA concentration (B), and Fingolimod amount (C) that can significantly affect the particle size, PdI, loading, and encapsulation efficiency were analyzed using response surface methodology. The size and zeta potential of the particles were determined using photon correlation spectroscopy (Malvern Instruments, Malvern, UK).

The mathematical relationship between the responses and the independent variables was modeled by a second-order polynomial equation. In order to graphically show the interactions between the variables and the responses, three-dimensional (3D) surface plots were used in this study.
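The full quadratic model for the three factors has the following general form; this is the standard second-order response surface polynomial (a generic sketch — the fitted coefficients are not reproduced here):

\[ Y = b_0 + b_1 A + b_2 B + b_3 C + b_{12} AB + b_{13} AC + b_{23} BC + b_{11} A^2 + b_{22} B^2 + b_{33} C^2 \]

where Y is a measured response (e.g., particle size), A, B, and C are the PHBV concentration, PVA concentration, and Fingolimod amount, respectively, and the b terms are the regression coefficients estimated from the design points.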
In Vitro Drug Release Study

The cumulative release of Fingolimod from the PHBV NPs under non-biological conditions was followed for 30 days using a dialysis bag (cutoff 12 kDa) in phosphate buffered saline (PBS) at pH 7.4 and 37 °C (under sink conditions).

At specified time intervals (30 min; 1, 2, 4, 8, 12, 24, 48, 72, and 120 h; 1, 2, 3, and 4 weeks), 1 mL of the medium was removed for analysis and replaced with an equal volume of fresh PBS. The in vitro release of Fingolimod was measured in triplicate. The samples were analyzed by an HPLC method using a C8, 125 × 4.6 mm, 5 µm column, acetonitrile-phosphate buffer pH 3 (45:55) as the mobile phase, and UV detection at 220 nm.
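Because 1 mL of medium is withdrawn and replaced at each sampling point, cumulative release calculations of this kind typically correct for the drug removed in earlier samples. The MATLAB sketch below illustrates that standard correction; the concentration values, total medium volume, and dose are illustrative assumptions, not data from this study:

```matlab
% Cumulative release with correction for sampled-and-replaced medium.
conc    = [0.9 1.3 1.8 2.5 3.0];  % measured concentrations (ug/mL); assumed values
Vtotal  = 50;                     % total volume of release medium (mL); assumed
Vsample = 1;                      % volume withdrawn and replaced each time (mL)
dose    = 5000;                   % total drug loaded into the dialysis bag (ug); assumed

released = zeros(size(conc));
for i = 1:numel(conc)
    removed     = Vsample * sum(conc(1:i-1));  % drug removed in earlier samples
    released(i) = conc(i) * Vtotal + removed;  % cumulative amount released (ug)
end
pctRelease = 100 * released / dose;            % cumulative % drug released
```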
Prediction and Optimization Functions of ANN

Commercially available software, MATLAB® R2008a (MathWorks, Inc., Natick, MA), was used to write the mathematical code for training and evaluating the ANN developed for formulation optimization. MATLAB is commercial software developed by MathWorks, Inc.; it is a mathematical software package for theoretical and simulation-oriented numerical calculation.10 Backpropagation, an abbreviation for "backward propagation of errors," is a common method of training ANNs, applied in conjunction with an optimization procedure such as gradient descent (GD). The method calculates the gradient of a loss function with respect to all the weights in the network; for this reason the multi-layer network is sometimes referred to as a backpropagation network. However, the backpropagation technique that is used to compute gradients and Jacobians in a multilayer network can also be applied to many different network architectures.11

A neural network comprises an interconnected group of artificial neurons working in unison to solve particular problems. ANNs, like humans, learn by example. A neuron has 2 modes of operation: the training/learning mode and the using/testing mode. Typically, an ANN is an adaptive system that changes its structure based on external or internal information flowing through the network during the learning phase. Modern neural networks are non-linear statistical data modeling tools; they are commonly used to model complex relationships among inputs and outputs or to find patterns in data. The MLP learning algorithm is a supervised learning algorithm and one of the most important developments in neural networks. It is used in multilayer feed-forward networks composed of processing elements (neurons) with continuous, differentiable activation functions (tan-sigmoid and log-sigmoid). Given a set of training input-output pairs, the algorithm provides a procedure for changing the weights of a backpropagation network so that inputs are classified correctly. The basis of this weight update algorithm is the GD method as applied to simple perceptron networks with differentiable units; the error is propagated back to the hidden units. The aim of the neural network is to train the net to attain a balance between its ability to respond to the training inputs (memorization) and its capability to give reasonable responses to inputs that are similar, but not identical, to those used in training.12
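To make this setup concrete, the sketch below builds a small feed-forward backpropagation network of the kind described here, using MATLAB Neural Network Toolbox calls of the R2008a era; the hidden layer size and training parameters are illustrative assumptions, not the authors' actual script:

```matlab
% Release time (days) -> cumulative % drug released (data from Table 2).
t = [0.02 0.04 0.08 0.16 0.32 0.5 1 2 3 4 5 6 7 10 15 20 23 25 27 30];
y = [3.58 5.36 7.44 10.30 12.38 23.60 24.31 24.98 25.21 25.59 ...
     26.87 29.54 42.44 46.35 50.84 58.12 60.98 62.52 64.87 65.6];

% Feed-forward MLP: one hidden layer of 20 tansig neurons, purelin output.
% The training function is swappable: 'traingd' (GD), 'trainbr' (BR), 'trainlm' (LM).
net = newff(t, y, 20, {'tansig', 'purelin'}, 'traingd');
net.trainParam.epochs = 500;   % maximum number of training cycles (epochs)
net = train(net, t, y);        % backpropagation training

yhat = sim(net, t);            % network predictions on the training inputs
fprintf('training MSE = %.4f\n', mse(yhat - y));
```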

Table 1
Box-Behnken Experimental Design

Formula No. | X1: PHBV (%) | X2: PVA (%) | X3: Fingolimod (mg) | Particle Size (nm) | Polydispersity Index | Loading (%) | Entrapment Efficiency (%)
1 | 1.50 | 0.25 | 5.00 | 268 | 0.23 | 22.53 | 73.80
2 | 1.50 | 1.13 | 2.75 | 286 | 0.20 | 10.19 | 56.00
3 | 0.50 | 2.00 | 2.75 | 495 | 0.48 | 7.63 | 5.06
4 | 0.50 | 1.13 | 5.00 | 300 | 0.23 | 19.96 | 14.32
5 | 1.50 | 1.13 | 2.75 | 218 | 0.13 | 10.19 | 65.00
6 | 0.50 | 1.13 | 0.50 | 368 | 0.27 | 4.17 | 12.00
7 | 1.50 | 2.00 | 5.00 | 292 | 0.24 | 13.00 | 51.48
8 | 1.50 | 2.00 | 0.50 | 351 | 0.30 | 0.01 | 51.52
9 | 2.50 | 1.13 | 5.00 | 245 | 0.17 | 14.41 | 58.00
10 | 0.50 | 0.25 | 2.75 | 350 | 0.40 | 16.50 | 35.86
11 | 1.50 | 1.13 | 2.75 | 239 | 0.19 | 10.19 | 67.00
12 | 2.50 | 1.13 | 0.50 | 261 | 0.21 | 0.42 | 17.80
13 | 1.50 | 1.13 | 2.75 | 281 | 0.19 | 10.19 | 67.86
14 | 1.50 | 0.25 | 0.50 | 324 | 0.30 | 2.10 | 57.55
15 | 1.50 | 1.13 | 2.75 | 241 | 0.19 | 10.19 | 67.86
16 | 2.50 | 0.25 | 2.75 | 330 | 0.40 | 6.52 | 56.73
17 | 2.50 | 2.00 | 2.75 | 270 | 0.23 | 2.89 | 24.12

X1, X2, and X3 are the independent variables (PHBV concentration, PVA concentration, and Fingolimod amount, respectively).

In this research, a type of ANN called the MLP was used for the prediction of the mechanism of preparation of the Fingolimod NPs. The MLP is one of the most studied neural network architectures and consists of an input layer, one or more hidden layers, and an output layer. The neurons of the input layer take the input values and pass them on to the neurons in the hidden layer without any computation.

The connections among the neurons carry weighting variables, which determine the strength of the input data. The delta rule (the learning algorithm of the MLP), or backward propagation, is applied to optimize the weights during the learning process. In the training data batch, the ANN reads the input and output values and adjusts the values of the weighted links to minimize the difference between the predicted and observed values. To achieve the specified accuracy, the error is minimized through many training cycles.

The primary parameter in training networks is the train operation. A trained ANN model can be applied to predict the response by providing a batch of input values. To train an ANN model, the data provided from experiments are categorized into 3 sets: training, test, and validation sets. The training set, which consists of data covering all of the experimental space, is used to train the ANN model by adjusting the link weights of the network model.

To forecast or optimize various kinds of controlled release formulations, many researchers have used different types of ANN models and learning algorithms.
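For a single neuron j with activation function f, the delta rule mentioned above takes the standard textbook form (not an equation given in the paper):

\[ \Delta w_{ij} = \eta \, (z_j - y_j) \, f'(\mathrm{net}_j) \, x_i \]

where η is the learning rate, z_j and y_j are the observed and predicted outputs, net_j is the neuron's weighted input, and x_i is the i-th input; the notation follows Equation 1 below, with y predicted and z observed.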

Figure 1. The 3D response surface plots of Fingolimod nanoparticles obtained from optimum formulation for (a) particle size, (b) PdI, (c) loading capacity, and (d) entrapment
efficiency.

Figure 2. % Drug release of Fingolimod from the PHBV nanoparticles for optimum formulation.

In this study, various algorithms were used for training the ANN: GD, Bayesian regularization (BR), and the Levenberg-Marquardt (LM) training model.
Gradient Descent

In this research, the GD local search method, which is the simplest and most popular training optimization algorithm, was used. The procedure operates by evaluating the output errors, calculating their gradient, and optimizing the weights and biases accordingly.13
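In its basic form, the weight update performed at each GD iteration can be written as follows (a standard textbook expression, not an equation from the paper), where η is the learning rate and E the error function:

\[ w_{k+1} = w_k - \eta \, \nabla E(w_k) \]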
Bayesian Regularization

BR is a mathematical process which converts a nonlinear regression into a well-posed statistical problem in the manner of ridge regression. This training model also automatically sets optimum values for the function parameters and produces a smooth network using the mean sum of squared errors.14
Levenberg-Marquardt

The LM algorithm is a curve-fitting method and the most widely used optimization algorithm; it seeks the minimum of a multivariate function expressed as the sum of squares of the errors between the model function and the measured data points.15 LM16 is in fact a combination of 2 minimization procedures: the GD method, which updates the parameter values in the direction of greatest reduction of the least-squares error, and the Gauss-Newton method, which approximates the least-squares function by a quadratic and finds the minimum of that quadratic.
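The standard LM weight update, which interpolates between these two procedures, can be written as follows (a textbook form, not an equation given in the paper); J is the Jacobian of the network errors e with respect to the weights, and μ is the damping factor:

\[ \Delta w = \left( J^{\top} J + \mu I \right)^{-1} J^{\top} e \]

For large μ the step approaches gradient descent with a small step size, while as μ → 0 it approaches the Gauss-Newton step.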
In this research, the focus was on the learning case known as supervised (directed) learning, in which a set of input-output data is available. The ANN then has to be trained to produce the desired output in accordance with the examples. In order to follow supervised training, a method for measuring the ANN output error between the real and the expected output is needed. A proper formula is the mean squared error (MSE), presented in Equation 1:

\[ \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - z_i \right)^2 \qquad (1) \]

where y_i is the predicted value, z_i the observed value, and n the number of data points.

In this research, the data from the in vitro release study were used to model the ANN. For this purpose, time was applied as the input and the percent of drug released from the NPs was used as the expected output of the network. The algorithms applied in the MATLAB program (mbackprop) use a small number of iterations to speed up the process. The training set was applied to train the network, and the test set was employed to specify the level of generalization produced by the training set and to monitor overtraining of the network, with a corresponding MSE.

Results and Discussion

Characterization of Nanoparticles

The particle size, PdI, loading capacity, and loading efficiency of the NPs were measured by dynamic light scattering using a Malvern Zetasizer ZS (Worcestershire, UK) (Table 1). Every sample was measured 3 times and the data are represented as mean ± SD.17

Experimental Design

The Box-Behnken design was used to determine the optimum levels of the independent variables, including PHBV concentration (%), PVA concentration (%), and Fingolimod amount (mg), affecting the particle size, PdI, loading, and encapsulation efficiency of the Fingolimod-loaded NPs.

Table 2
In Vitro Release Data of Fingolimod From the PHBV Nanoparticles in the Optimum Formulation

Time (day) | Drug Release (%)
0.02 | 3.58 ± 0.74
0.04 | 5.36 ± 2.57
0.08 | 7.44 ± 1.54
0.16 | 10.30 ± 4.26
0.32 | 12.38 ± 3.51
0.5 | 23.60 ± 1.97
1 | 24.31 ± 1.48
2 | 24.98 ± 1.25
3 | 25.21 ± 0.74
4 | 25.59 ± 1.19
5 | 26.87 ± 0.82
6 | 29.54 ± 4.51
7 | 42.44 ± 3.90
10 | 46.35 ± 0.89
15 | 50.84 ± 2.63
20 | 58.12 ± 1.38
23 | 60.98 ± 1.22
25 | 62.52 ± 3.45
27 | 64.87 ± 1.06
30 | 65.6 ± 0.53

Table 3
Training Parameters of Various Structures With the GD Algorithm

Case | Number of Layers | Transfer Function of Hidden Layer | Transfer Function of Output Layer | Number of Neurons | Correlation Coefficient (Training) | Correlation Coefficient (Test) | Correlation Coefficient (Validation) | MSE of Training Set
A | 10 | tansig | purelin | 20 | 0.8879 | 0.8963 | 0.8312 | 0.0874
B | 10 | tansig | purelin | 30 | 0.8295 | 0.8653 | 0.7485 | 0.1205
C | 10 | tansig | purelin | 40 | 0.7692 | 0.7962 | 0.7340 | 0.1341
D | 10 | logsig | purelin | 20 | 0.9193 | 0.9773 | 0.8545 | 0.0521
E | 10 | logsig | purelin | 30 | 0.8793 | 0.8891 | 0.8389 | 0.0920
Particle Size and Polydispersity Index

ANOVA results indicated that only the PHBV concentration (X1) has an effect on the size of the NPs (p = 0.0008); the PVA concentration (X2) and Fingolimod amount (X3) have no effect on particle size (their p values are higher than 0.05). The 3D response surface plot of the particle size variation for the optimum formulation is shown in Figure 1a.

It was demonstrated that the concentration of PHBV is the most significant factor affecting the PdI (p = 0.0052). As shown in the 3D plots (Fig. 1b), reducing the concentrations of both PHBV and PVA leads to a decrease in PdI.

Loading and Encapsulation Efficiency

The 3D response surface plots of the loading capacity and entrapment efficiency of the optimum formulation are illustrated in Figures 1c and 1d. Regression analysis of variance revealed that the concentrations of PHBV and PVA and the amount of Fingolimod affected the loading (p < 0.0001), and the concentration of PHBV and the amount of Fingolimod affected the loading efficiency.

In Vitro Drug Release Study

The in vitro release of Fingolimod from the PHBV NPs was studied in PBS at pH 7.4 and 37 °C according to the United States Pharmacopeia, and the results over 30 days are shown in Figure 2. Moreover, the in vitro release data for the NPs under optimum conditions are shown in Table 2. Experiments were conducted 3 times and data are represented as mean ± SD.

The in vitro drug release profile showed a triphasic pattern: an initial burst release in the first 24 h followed by sustained release for up to 4 weeks. The first stage (burst effect) is caused by drug adsorbed on the surface of the nanospheres, and the second stage (6 days) appears to follow a slow release, because of drug embedded inside the molecular chains of the polymer. In the last phase, over 22 days, drug release follows a linear relationship with time, owing to diffusion of the drug via the channels created by bulk degradation.

ANN Modeling

A neural network is composed of an input and an output layer with various numbers of layers and neurons; here, a backpropagation network was chosen for the purposes of this study. Time of release was used as the input to the network and, corresponding to it, the expected output, the percent of drug released, was applied, using 20 experimental data points generated from the Box-Behnken design.

The training group was used for training the network and learning models with different training functions.18 In order to determine the degree of generalization achieved and to evaluate the qualification of the trained network, the test batch was applied.

Figure 3. Scatter plots of observed versus ANN predicted factorial variables: (a) GD, (b) BR, and (c) LM.

Table 4
Training Parameters of Various Structures With the BR Algorithm

Case | Number of Layers | Transfer Function of Hidden Layer | Transfer Function of Output Layer | Number of Neurons | Correlation Coefficient (Training) | Correlation Coefficient (Test) | Correlation Coefficient (Validation) | MSE of Training Set
A | 10 | tansig | purelin | 20 | 0.7849 | 0.7487 | 0.7839 | 0.2019
B | 15 | tansig | purelin | 20 | 0.8209 | 0.8581 | 0.8056 | 0.1912
C | 20 | tansig | purelin | 20 | 0.9541 | 0.8829 | 0.8455 | 0.0544
D | 20 | logsig | purelin | 20 | 0.9134 | 0.8815 | 0.8392 | 0.0698
E | 15 | logsig | purelin | 20 | 0.7685 | 0.7392 | 0.7620 | 0.1983

A final check on the validation of the trained network was completed using the verification data sets. Therefore, all the data were classified into 3 groups: training (70%), testing (15%), and verification (15%) sets.

To prevent saturation of the transfer function in the MATLAB process, the data were scaled to the range 0-1. The number of hidden nodes was chosen to attain the minimum MSE, because the error grew as the number of nodes increased. Also, different numbers of hidden layers were examined for each training algorithm.
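The data division and scaling described here map onto standard Neural Network Toolbox settings. A minimal sketch is given below (it reuses the t and y vectors from the earlier sketch; the exact script used by the authors is not reported):

```matlab
% Scale input and target to 0-1 to avoid saturating the sigmoid transfer functions.
% Assumes t (time) and y (% released) as defined in the earlier sketch.
[tn, tps] = mapminmax(t, 0, 1);
[yn, yps] = mapminmax(y, 0, 1);

% 70/15/15 split into training, validation (used here for the final check), and test sets.
net = newff(tn, yn, 20, {'tansig', 'purelin'}, 'trainlm');
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

net  = train(net, tn, yn);                       % stops early on rising validation error
yhat = mapminmax('reverse', sim(net, tn), yps);  % predictions mapped back to % released
```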
ANN Model Training Using the Gradient Descent Algorithm

In the GD algorithm, conceptually the simplest training method for an ANN, there was no significant difference in proficiency when the number of hidden layers was decreased or increased; the number of hidden layers was therefore fixed at 10. Log-sigmoid (logsig) and tangent sigmoid (tansig) were chosen as the hidden layer transfer functions and linear (purelin) as the output layer transfer function. Several trainings with the GD training algorithm are displayed in Table 3.

It was seen that on increasing the number of neurons beyond 20, for both the logsig and tansig hidden layers, the error increased and R2 decreased. Overall, logsig showed the better performance. The observed versus predicted values using the GD methodology are shown in Figure 3a, where R2 = 0.9023.

ANN Model Training Using the Bayesian Regularization Algorithm

For the BR training algorithm, tansig and logsig were chosen as the hidden layer transfer functions and purelin as the output layer transfer function, with 20 neurons. On increasing the number of hidden layers (up to 20) for the tansig hidden layer transfer function, the minimum error was obtained; when logsig was selected with 20 layers, no improvement in the correlation coefficient was observed. As shown in Figure 3b, the observed versus predicted values using the BR methodology give R2 = 0.8326. The modeling outcomes for various structures with the BR algorithm are reported in Table 4.

ANN Model Training Using the Levenberg-Marquardt Algorithm

In this research, the LM algorithm, which is commonly applied to least-squares problems and function approximation, was also used as a training function with 15 layers; tansig and logsig were selected as the hidden layer transfer functions and purelin as the output layer transfer function. With an increase in the number of neurons (beyond 20), the error increased for the tansig hidden layer transfer function. Moreover, tansig with 15 layers and 20 neurons showed better performance than logsig.

As shown in Figure 3c, the observed versus predicted values using the LM methodology give R2 = 0.9291. The training parameters of the various networks with the LM training model are presented in Table 5.

Performance Criteria

An ANN operates by receiving the input and output information in the training group and changing the values of the weighted links to diminish the difference between the observed and predicted values. Over a number of training cycles, called epochs, the error is minimized to achieve acceptable network precision. In this research, the number of iterations (epochs) giving the minimum MSE (the optimum epoch) was selected. The comparison between the 3 training models was also made by recording the central processing unit time elapsed at the end of training.

The process of minimizing errors in the network is adjustable; in contrast, the error in response surface methodology depends on the experimental values and cannot be improved, even with the best optimization.19

Table 6 shows the statistical measures and performance indexes for each of the 3 training algorithms. As displayed in Table 6, the LM algorithm showed a better average MSE compared with GD and BR (0.0341 vs. 0.0521 and 0.0544). Moreover, LM showed a smaller average mean prediction error than the others (0.2571 vs. 0.2782 and 0.3545).

As explained previously, training ended when the minimum root-mean-square error on the test dataset was obtained. The number of training epochs and the time elapsed for the total epochs indicated when training ended, which varied drastically among the training modes. LM outperformed GD and BR in terms of prediction and generalization capability; LM also proved to be less biased and more precise in comparison with GD and BR.

Table 5
Training Parameters of Various Structures With the LM Algorithm

Case | Number of Layers | Transfer Function of Hidden Layer | Transfer Function of Output Layer | Number of Neurons | Correlation Coefficient (Training) | Correlation Coefficient (Test) | Correlation Coefficient (Validation) | MSE of Training Set
A | 15 | tansig | purelin | 20 | 0.9526 | 0.9102 | 0.9178 | 0.0341
B | 15 | tansig | purelin | 30 | 0.8954 | 0.8912 | 0.9055 | 0.0508
C | 15 | tansig | purelin | 40 | 0.7758 | 0.8506 | 0.8764 | 0.0965
D | 15 | logsig | purelin | 20 | 0.8025 | 0.8865 | 0.8930 | 0.0732
E | 15 | logsig | purelin | 30 | 0.7928 | 0.8619 | 0.8853 | 0.0880

Table 6
Statistical Comparison Between the Performance Indexes of the Three Training Algorithms

Performance Index | GD | BR | LM
Average MSE for training set | 0.0521 | 0.0544 | 0.0341
Average mean prediction error | 0.2782 | 0.3545 | 0.2571
Average number of epochs at the end of training | 231 | 205 | 379
Average central processing unit time elapsed at the end of training (s) | 8.5 | 7.2 | 10.6

Conclusion

In this study, a Box-Behnken experimental design was used to optimize and reach the best formulation of Fingolimod NPs based on biodegradable PHBV. The formulation variables comprised the concentrations of PHBV and PVA and the amount of Fingolimod, and the NP characteristics comprised the particle size, PdI, loading capacity, and entrapment efficiency. The in vitro release study of the optimized Fingolimod NPs was then carried out.

The ANN was employed to find the best model for the in vitro release study of the NPs. Feed-forward backpropagation was applied using the MATLAB program to appraise the effect of the input variable (time) on the response (amount of drug released from the NPs). Three training models, LM, GD, and BR, were employed for training the ANN models, and the generalization and predictability of the models were investigated.

It is demonstrated that the predictive ability of the training algorithms is in the order LM > GD > BR. In summary, the optimum formulation was achieved by the LM training function with 15 hidden layers and 20 neurons. The transfer functions of the hidden layer and the output layer for this formulation were tansig and purelin, respectively. The MSE of training was 0.0341.

Finally, the results of this research demonstrate that the ANN is a useful tool for modeling and predicting a nanoparticulate Fingolimod delivery system.

References

1. Chen J, Davis S. The release of diazepam from poly(hydroxybutyrate-hydroxyvalerate) microspheres. J Microencapsul. 2002;19(2):191-201.
2. Pacheco DP, Amaral MH, Reis RL, Marques AP, Correlo VM. Development of an injectable PHBV microparticles-GG hydrogel hybrid system for regenerative medicine. Int J Pharm. 2015;478(1):398-408.
3. Masood F, Chen P, Yasin T, Fatima N, Hasan F, Hameed A. Encapsulation of Ellipticine in poly-(3-hydroxybutyrate-co-3-hydroxyvalerate) based nanoparticles and its in vitro application. Mater Sci Eng C Mater Biol Appl. 2013;33(3):1054-1060.
4. Stanwick JC, Baumann MD, Shoichet MS. Enhanced neurotrophin-3 bioactivity and release from a nanoparticle-loaded composite hydrogel. J Control Release. 2012;160(3):666-675.
5. Duan B, Wang M. Encapsulation and release of biomolecules from Ca-P/PHBV nanocomposite microspheres and three-dimensional scaffolds fabricated by selective laser sintering. Polym Degrad Stab. 2010;95(9):1655-1664.
6. Peh KK, Lim CP, Quek SS, Khoh KH. Use of artificial neural network to predict drug dissolution profiles and evaluation of network performance using similarity factor. Pharm Res. 2000;17:1384-1388.
7. Barmplexis P, Kanaze FI, Kachrimanis K, Georgarakis E. Artificial neural networks in the optimization of a nimodipine controlled release tablet formulation. Eur J Pharm Biopharm. 2010;74:316-323.
8. Pham DT. An introduction to artificial neural networks. In: Bulsari AB, ed. Neural Networks for Chemical Engineering. Amsterdam: Elsevier; 1995. Chapter 1.
9. Shahsavari S, Vasheghani-Farahani E, Ardjmand M, Dorkoosh F. Modeling of drug released from acyclovir nanoparticles based on artificial neural networks. Lett Drug Des Discov. 2014;11(2):174-183.
10. The MathWorks Inc. Pro-Matlab for Sun Workstations, User's Guide. Natick, MA: The MathWorks Inc.; 1990.
11. Chaibva F, Burton M, Walker RB. Optimization of salbutamol sulfate dissolution from sustained release matrix formulations using an artificial neural network. Pharmaceutics. 2010;2:182-198.
12. Jain S. Brain cancer classification using GLCM based feature extraction in artificial neural network. Int J Comput Sci Eng Technol. 2013;4(7):966-970.
13. Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323:533-536.
14. MacKay DJC. A practical Bayesian framework for backpropagation networks. Neural Comput. 1992;4(3):448-472.
15. Hecht-Nielsen R. Kolmogorov's mapping neural network existence theorem. In: Proceedings of the First IEEE International Joint Conference on Neural Networks; San Diego, CA; 1987:11-14.
16. Levenberg K. A method for the solution of certain non-linear problems in least squares. Q Appl Math. 1944;2:164-168.
17. De Campos AM, Sánchez A, Alonso MJ. Chitosan nanoparticles: a new ophthalmic vehicle. Int J Pharm. 2001;224:159-168.
18. Zhang L. Parallel Training Algorithms for Analogue Hardware Neural Nets. Ph.D. thesis. Brisbane, Australia: School of Software Engineering and Data Communication, Queensland University of Technology; 2007.
19. Shahsavari S, Bagheri G, Mahjub R, et al. Application of artificial neural networks for optimization of preparation of insulin nanoparticles composed of quaternized aromatic derivatives of chitosan. Drug Res. 2014;64:151-158.
