Article history:
Received 4 March 2016
Revised 1 July 2016
Accepted 26 July 2016
Available online 22 September 2016

Keywords: artificial neural network; drug delivery; Fingolimod; poly(3-hydroxybutyrate-co-3-hydroxyvalerate); response surface methodology; training algorithms

Abstract

Formulation of a nanoparticulate Fingolimod delivery system based on biodegradable poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) was optimized using artificial neural networks (ANNs). The concentrations of PHBV and PVA and the amount of Fingolimod were considered as the input values, and the particle size, polydispersity index, loading capacity, and entrapment efficiency as the output data in the experimental design study. An in vitro release study was carried out for the best formulation according to statistical analysis. ANNs were employed to generate the best model to determine the relationships between the various values. In order to specify the model with the best accuracy and proficiency for the in vitro release, a multilayer perceptron with different training algorithms was examined. Three training algorithms, Levenberg-Marquardt (LM), gradient descent (GD), and Bayesian regularization (BR), were employed for training the ANN models. The predictive ability of the training algorithms was in the order LM > GD > BR. The optimum formulation was achieved by the LM training function with 15 hidden layers and 20 neurons; the transfer functions of the hidden layer and the output layer for this formulation were tansig and purelin, respectively. The optimization process minimized the error between the predicted and observed values of the training algorithm (about 0.0341).

© 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.xphs.2016.07.026
S. Shahsavari et al. / Journal of Pharmaceutical Sciences 106 (2017) 176-182 177
pharmaceutical outputs through iterative training on data and optimization of the outcomes to minimize errors.5

ANNs have many uses in the field of pharmaceutics; they have been employed for modeling the responses in drug delivery systems and for evaluating the ANN model against multiple linear regression.6,7 ANNs are parallel systems consisting of interactions between a large number of simple computational elements, called nodes or neurons, which carry out intricate information processing and learn from examples.8 The most important instance of this model is a data-processing system built from a large number of highly interconnected processing elements (neurons) functioning together to solve specific problems. An ANN is employed for a particular application, such as pattern recognition or data classification, by means of a learning process. Learning in biological systems consists of adjustments to the synaptic connections within the neuron networks.9

In this work, based on the suitable properties of PHBV, Fingolimod-loaded PHBV NPs were formulated to control the release of the hydrophobic drug Fingolimod by encapsulating it within hydrophobic PHBV NPs. Moreover, a feed-forward, multilayer perceptron (MLP) type of ANN is discussed for predicting the release mechanism of the Fingolimod NPs.

Particle size and polydispersity index were determined using photon correlation spectroscopy (Malvern Instruments, Malvern, UK).

The mathematical relationship between the responses and the independent variables was modeled by a second-order polynomial equation. In order to graphically show the interactions between the variables and the response, three-dimensional (3D) surface plots were used in this study.

In Vitro Drug Release Study

The cumulative release of Fingolimod from the PHBV NPs under non-biological conditions was measured over 30 days using a dialysis bag (cutoff 12 kDa) in phosphate-buffered saline (PBS) at pH 7.4 and 37 °C (under sink conditions).

At specified time intervals (30 min; 1, 2, 4, 8, 12, 24, 48, 72, and 120 h; 1, 2, 3, and 4 weeks), 1 mL of the medium was removed for analysis and replaced with an equal volume of fresh PBS. The in vitro release of Fingolimod was measured in triplicate. The samples were analyzed by an HPLC method using a C8 column (125 × 4.6 mm, 5 µm), acetonitrile and phosphate buffer pH 3 (45:55) as the mobile phase, and UV detection at 220 nm.
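The second-order polynomial used for the response surface modeling is not written out in the text; for three factors X1 (PHBV concentration), X2 (PVA concentration), and X3 (Fingolimod amount) it takes the general form below, where the coefficients β are fitted by regression:

```latex
Y = \beta_0 + \sum_{i=1}^{3} \beta_i X_i + \sum_{i=1}^{3} \beta_{ii} X_i^2
  + \sum_{i=1}^{2} \sum_{j=i+1}^{3} \beta_{ij} X_i X_j
```

Here Y is any one of the four responses (particle size, PdI, loading capacity, entrapment efficiency), and the quadratic and cross terms capture curvature and factor interactions, respectively.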
Table 1
Box-Behnken Experimental Design
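As a sketch of how a 3-factor Box-Behnken design is constructed: each pair of factors is set to the coded levels ±1 while the remaining factor stays at its center level, plus center-point replicates. The number of center points below is an illustrative assumption; the actual run table is given in Table 1.

```python
from itertools import combinations
import numpy as np

def box_behnken(k, n_center=3):
    """Box-Behnken design for k factors in coded units (-1, 0, +1)."""
    runs = []
    # Edge midpoints: every pair of factors at (+/-1, +/-1), others at 0.
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k] * n_center  # center-point replicates
    return np.array(runs)

design = box_behnken(3)
print(design.shape)  # 12 edge runs + 3 center points -> (15, 3)
```

Each coded row is then mapped back to the real levels of PHBV concentration, PVA concentration, and Fingolimod amount before running the experiments.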
In this research, a type of ANN called the MLP was used for predicting the mechanism of preparation of the Fingolimod NPs. The MLP is one of the most studied neural network architectures and consists of an input layer, one or more hidden layers, and an output layer. The neurons of the input layer take the input values and pass them on to the neurons in the hidden layer without any computation.

The connections among the neurons carry weighting variables, which determine the strength of the input data. The delta rule (the learning algorithm of the MLP), or backward propagation, is applied to optimize the weights during the learning process. On the training data batch, the ANN reads the input and output values and adjusts the values of the weighted links to minimize the difference between the predicted and observed values. To achieve the specified accuracy, the error is minimized through many training cycles.

The primary parameter in training networks is the training function. A trained ANN model can be applied to predict the response for a given batch of input values. To train an ANN model, the data provided from experiments are categorized into 3 sets: training, test, and validation sets. The training set, which consists of data covering the whole experimental space, is used to train the ANN model by adjusting the link weights of the network model.

To forecast or optimize various kinds of controlled-release formulations, many researchers have used different types of ANN models and learning algorithms.
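The training loop described above (read inputs and targets, adjust the link weights by backpropagation over many cycles, hold out test and validation sets) can be sketched in plain NumPy. The network size, learning rate, and the synthetic release curve below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the release data: input = scaled time, output = fraction released.
t = np.linspace(0, 1, 20).reshape(-1, 1)
y = 1 - np.exp(-4 * t)  # saturating "release" curve

# Split into training (70%), test (15%), validation (15%), as in the paper.
idx = rng.permutation(len(t))
tr, te, va = idx[:14], idx[14:17], idx[17:]  # va: held out, unused in this sketch

# One hidden layer: tansig (tanh) activation, linear (purelin) output.
n_hidden = 5
W1 = rng.normal(0, 1, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)

lr = 0.1
for epoch in range(10000):            # training cycles (epochs)
    h = np.tanh(t[tr] @ W1 + b1)      # hidden-layer activations
    pred = h @ W2 + b2                # linear output layer
    err = pred - y[tr]
    # Backpropagation (delta rule): gradients of the MSE w.r.t. the weights.
    dW2 = h.T @ err / len(tr)
    db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)    # tanh derivative
    dW1 = t[tr].T @ dh / len(tr)
    db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

mse_test = float(np.mean((predict(t[te]) - y[te]) ** 2))
print(f"test MSE: {mse_test:.4f}")
```

The test-set MSE plays the role described in the text: it measures how well the adjusted weights generalize beyond the training batch.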
Figure 1. The 3D response surface plots of Fingolimod nanoparticles obtained from the optimum formulation for (a) particle size, (b) PdI, (c) loading capacity, and (d) entrapment efficiency.
Figure 2. Percent drug release of Fingolimod from the PHBV nanoparticles for the optimum formulation.
Table 2
In Vitro Release Data of Fingolimod From the PHBV Nanoparticles in the Optimum Formulation

Table 3
Training Parameters of Various Structures With the GD Algorithm
Columns: Case; Number of Layers; Transfer Function of Hidden Layer; Transfer Function of Output Layer; Number of Neurons; Correlation Coefficient (Training, Test, Validation); MSE of Training Set.
concentration (X2) and Fingolimod amount (X3) have no effect on particle size (their p values are higher than 0.05). The 3D response surface plot of the particle size variation for the optimum formulation is shown in Figure 1a.

It was demonstrated that the concentration of PHBV is the most significant factor affecting the PdI (p = 0.0052). As shown in the 3D plots (Fig. 1b), reducing the concentrations of both PHBV and PVA leads to a decrease in PdI.

Loading and Encapsulation Efficiency

The 3D response surface plots of loading capacity and entrapment efficiency for the optimum formulation are illustrated in Figures 1c and 1d. Regression analysis of variance revealed that the concentrations of PHBV and PVA and the amount of Fingolimod affected the loading (p < 0.0001), and that the concentration of PHBV and the amount of Fingolimod affected the loading efficacy.

In Vitro Drug Release Study

The in vitro release of Fingolimod from the PHBV NPs was studied in PBS at pH 7.4 and 37 °C according to the United States Pharmacopeia, and the results over 30 days are shown in Figure 2. Moreover, the in vitro release data for the NPs under optimum conditions are shown in Table 2. Experiments were performed 3 times and data are represented as mean ± SD.

The in vitro drug release profile showed a triphasic pattern: an initial burst release in the first 24 h followed by sustained release for up to 4 weeks. The first stage (burst effect) is caused by drug adsorbed on the surface of the nanospheres, and the second stage (6 days) appears to follow a slow release because of drug embedded within the molecular chains of the polymer. In the last phase, over 22 days, drug release follows a linear relationship with time, owing to diffusion of the drug through the channels created by bulk degradation.

ANN Modeling

A neural network is composed of an input and an output layer with various numbers of layers and neurons; a backpropagation network was chosen for the purposes of this study. Time of release was used as the input to the network and, correspondingly, the expected output, the percent of drug released, was applied, using 20 experimental data points generated from the Box-Behnken design.

The training group was used for training the network and learning the models with different training functions.18 To determine the degree of generalization provided by the sets and to evaluate the qualification of the trained network, the test set was applied.
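When 1 mL samples are withdrawn and replaced with fresh PBS, the cumulative amount released is typically corrected for the drug carried away in earlier samples. A minimal sketch of that bookkeeping, under the assumption of a fixed total medium volume (the paper does not state it; 50 mL here is purely illustrative):

```python
def cumulative_release(concs_mg_per_ml, v_total_ml=50.0, v_sample_ml=1.0):
    """Cumulative drug released (mg) at each sampling time, corrected for
    the drug withdrawn with previous samples and replaced by fresh medium."""
    released = []
    withdrawn = 0.0  # drug carried away in earlier samples (mg)
    for c in concs_mg_per_ml:
        released.append(c * v_total_ml + withdrawn)
        withdrawn += c * v_sample_ml
    return released
```

For example, measured concentrations of 0.1 and 0.2 mg/mL in a 10 mL medium give cumulative amounts of 1.0 and 2.1 mg, the extra 0.1 mg accounting for drug removed with the first sample.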
Figure 3. Scatter plots of observed versus ANN predicted factorial variables: (a) GD, (b) BR, and (c) LM.
Table 4
Training Parameters of Various Structures With the BR Algorithm
Columns: Case; Number of Layers; Transfer Function of Hidden Layer; Transfer Function of Output Layer; Number of Neurons; Correlation Coefficient (Training, Test, Validation); MSE of Training Set.
A final check on the validation of the trained network was completed using the verification data sets. Therefore, all the data were classified into 3 groups: training (70%), testing (15%), and verification (15%).

To prevent saturation of the transfer functions in the MATLAB process, the data were scaled to the range 0-1. The number of hidden nodes was chosen to attain the minimum MSE, because the error grew with an increasing number of nodes. Also, different numbers of hidden layers were examined for each training algorithm.

ANN Model Training Using the Gradient Descent Algorithm

In the GD algorithm, which is conceptually the simplest training method for an ANN, there was no significant difference in proficiency when decreasing or increasing the number of hidden layers; therefore, the number of hidden layers was fixed at 10. Log-sigmoid (logsig) and tangent sigmoid (tansig) were chosen as the hidden-layer transfer functions and the linear function (purelin) as the output-layer transfer function. Several trainings with the GD algorithm are displayed in Table 3.

It was seen that when the number of neurons increased beyond 20, for both logsig and tansig hidden layers, the error increased and R² decreased. Overall, logsig showed better performance. The observed versus predicted values using the GD methodology are shown in Figure 3a, where R² = 0.9023.

ANN Model Training Using the Bayesian Regularization Algorithm

For the BR training algorithm, tansig and logsig were chosen as the hidden-layer transfer functions and purelin as the output-layer transfer function, with 20 neurons. With an increasing number of hidden layers (up to 20) for the tansig hidden-layer transfer function, the minimum error was obtained. When logsig was selected with 20 layers, no improvement of the correlation coefficient was observed. As shown in Figure 3b, the observed versus predicted values using the BR methodology give R² = 0.8326. The modeling outcomes for various structures with the BR algorithm are reported in Table 4.

ANN Model Training Using the Levenberg-Marquardt Algorithm

In this research, the LM algorithm, which is commonly applied to least-squares problems and function approximation, was also used as a training function with 15 layers; tansig and logsig were selected as the hidden-layer transfer functions and purelin as the output-layer transfer function. With an increasing number of neurons (up to 20), the error increased for the tansig hidden-layer transfer function. Moreover, tansig with 15 layers and 20 neurons showed better performance compared to logsig.

As shown in Figure 3c, the observed versus predicted values using the LM methodology give R² = 0.9291. The training parameters of various networks with the LM training model are shown in Table 5.

Performance Criteria

An ANN operates by receiving the input and output information in the training set and changing the values of the weighted links to diminish the difference between the observed and predicted values. Over a number of training cycles, called epochs, the error is minimized to achieve acceptable network precision. In this research, the number of iterations (epochs) with minimum MSE (the optimum epoch) was selected. A comparison among the 3 models was also made by recording the central processing unit (CPU) time elapsed at the end of training.

The process of minimizing errors in a network is adjustable; in contrast, the error in response surface methodology depends on the experimental values and cannot be improved, even with the best optimization.19

Table 6 shows the statistical measures and performance indexes for all 3 training algorithms. As displayed in Table 6, the LM algorithm showed a better average MSE compared with GD and BR (0.0341 vs. 0.0521 and 0.0544). Moreover, LM showed a smaller average mean prediction error than the others (0.2571 vs. 0.2782 and 0.3545).

As explained previously, training ended when the minimum root-mean-square error on the test dataset was obtained. The number of training epochs and the time elapsed for the total epochs indicated when the training ended, which varied drastically between the models.

LM outperformed GD and BR in terms of prediction and generalization capability. Moreover, LM proved to be less biased and more precise in comparison with GD and BR.
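As a sketch of why LM suits this setting: training a small network to a release curve is a nonlinear least-squares problem, which SciPy's `least_squares` can solve directly with `method="lm"`. The network size and the synthetic target below are illustrative assumptions, not the paper's data or MATLAB setup:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 20)
y = 1 - np.exp(-4 * t)  # synthetic release curve

n_hidden = 5

def residuals(p):
    # Unpack the weights of a 1 -> n_hidden -> 1 network:
    # tansig (tanh) hidden layer, purelin (linear) output layer.
    W1 = p[:n_hidden]
    b1 = p[n_hidden:2 * n_hidden]
    W2 = p[2 * n_hidden:3 * n_hidden]
    b2 = p[-1]
    h = np.tanh(np.outer(t, W1) + b1)
    return h @ W2 + b2 - y  # residual vector minimized by LM

p0 = rng.normal(0, 0.5, 3 * n_hidden + 1)     # random initial weights
fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
mse = float(np.mean(fit.fun ** 2))
print(f"LM training MSE: {mse:.6f}")
```

LM interpolates between gradient descent and Gauss-Newton steps, which is why it typically converges in far fewer iterations than plain gradient descent on problems of this size, at the cost of more work per iteration (consistent with the epoch and CPU-time averages reported in Table 6).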
Table 5
Training Parameters of Various Structures With the LM Algorithm
Columns: Case; Number of Layers; Transfer Function of Hidden Layer; Transfer Function of Output Layer; Number of Neurons; Correlation Coefficient (Training, Test, Validation); MSE of Training Set.
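The two performance indexes reported in Table 6 can be computed from observed and predicted values as sketched below. The paper does not define "mean prediction error" explicitly; the mean absolute error used here is an assumption, and the arrays are illustrative:

```python
import numpy as np

def mse(observed, predicted):
    """Mean squared error between observed and ANN-predicted values."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.mean((observed - predicted) ** 2))

def mean_prediction_error(observed, predicted):
    """Mean absolute prediction error (assumed definition)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.mean(np.abs(observed - predicted)))

obs = [0.10, 0.35, 0.60, 0.85]   # illustrative observed release fractions
pred = [0.12, 0.30, 0.65, 0.80]  # illustrative ANN predictions
print(mse(obs, pred), mean_prediction_error(obs, pred))
```

For these illustrative arrays the MSE is about 0.002 and the mean prediction error about 0.0425; in the study these indexes were averaged over training runs for each algorithm.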
Table 6
Statistical Comparison Between the Performance Indexes of the Three Training Algorithms

Performance Index                                      GD       BR       LM
Average MSE for training set                           0.0521   0.0544   0.0341
Average mean prediction error                          0.2782   0.3545   0.2571
Average number of epochs at the end of training        231      205      379
Average CPU time elapsed at the end of training (s)    8.5      7.2      10.6

Conclusion

In this study, a Box-Behnken experimental design was used to optimize and reach the best formulation of Fingolimod NPs based on biodegradable PHBV. The formulation variables were the concentrations of PHBV and PVA and the amount of Fingolimod, and the NP characteristics were the particle size, PdI, loading capacity, and entrapment efficiency. The in vitro release study of the optimized Fingolimod NPs was then carried out.

The ANN was employed to predict the best model for the in vitro release study of the NPs. Feed-forward backpropagation was applied using the MATLAB program to appraise the effect of the input variable (time) on the response (the amount of drug released from the NPs). Three training algorithms, LM, GD, and BR, were employed for training the ANN models, and the generalization and predictability of the models were investigated.

It was demonstrated that the predictive ability of the training algorithms is in the order LM > GD > BR. In summary, the optimum formulation was achieved by the LM training function with 15 hidden layers and 20 neurons. The transfer functions of the hidden layer and the output layer for this formulation were tansig and purelin, respectively. Also, the MSE of training was 0.0341.

Finally, the results of this research demonstrate that the ANN is a useful tool for modeling and predicting a nanoparticulate Fingolimod delivery system.

References

1. Chen J, Davis S. The release of diazepam from poly(hydroxybutyrate-hydroxyvalerate) microspheres. J Microencapsul. 2002;19(2):191-201.
2. Pacheco DP, Amaral MH, Reis RL, Marques AP, Correlo VM. Development of an injectable PHBV microparticles-GG hydrogel hybrid system for regenerative medicine. Int J Pharm. 2015;478(1):398-408.
3. Masood F, Chen P, Yasin T, Fatima N, Hasan F, Hameed A. Encapsulation of Ellipticine in poly-(3-hydroxybutyrate-co-3-hydroxyvalerate) based nanoparticles and its in vitro application. Mater Sci Eng C Mater Biol Appl. 2013;33(3):1054-1060.
4. Stanwick JC, Baumann MD, Shoichet MS. Enhanced neurotrophin-3 bioactivity and release from a nanoparticle-loaded composite hydrogel. J Control Release. 2012;160(3):666-675.
5. Duan B, Wang M. Encapsulation and release of biomolecules from Ca-P/PHBV nanocomposite microspheres and three-dimensional scaffolds fabricated by selective laser sintering. Polym Degrad Stab. 2010;95(9):1655-1664.
6. Peh KK, Lim CP, Quek SS, Khoh KH. Use of artificial neural network to predict drug dissolution profiles and evaluation of network performance using similarity factor. Pharm Res. 2000;17:1384-1388.
7. Barmplexis P, Kanaze FI, Kachrimanis K, Georgarakis E. Artificial neural networks in the optimization of a nimodipine controlled release tablet formulation. Eur J Pharm Biopharm. 2010;74:316-323.
8. Pham DT. An introduction to artificial neural networks. In: Bulsari AB, ed. Neural Networks for Chemical Engineering. Amsterdam: Elsevier; 1995. Chapter 1.
9. Shahsavari S, Vasheghani-Farahani E, Ardjmand M, Dorkoosh F. Modeling of drug release from acyclovir nanoparticles based on artificial neural networks. Lett Drug Des Discov. 2014;11(2):174-183.
10. The MathWorks Inc. Pro-Matlab for Sun Workstations, User's Guide. Natick, MA: The MathWorks Inc.; 1990.
11. Chaibva F, Burton M, Walker RB. Optimization of salbutamol sulfate dissolution from sustained release matrix formulations using an artificial neural network. Pharmaceutics. 2010;2:182-198.
12. Jain S. Brain cancer classification using GLCM based feature extraction in artificial neural network. Int J Comput Sci Eng Technol. 2013;4(7):966-970.
13. Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323:533-536.
14. MacKay DJC. A practical Bayesian framework for backpropagation networks. Neural Comput. 1992;4(3):448-472.
15. Hecht-Nielsen R. Kolmogorov's mapping neural network existence theorem. Proc First IEEE Int Joint Conf Neural Networks; San Diego, CA; 1987:11-14.
16. Levenberg K. A method for the solution of certain non-linear problems in least squares. Q Appl Math. 1944;2:164-168.
17. De Campos AM, Sánchez A, Alonso MJ. Chitosan nanoparticles: a new ophthalmic vehicle. Int J Pharm. 2001;224:159-168.
18. Zhang L. Parallel Training Algorithms for Analogue Hardware Neural Nets [PhD thesis]. School of Software Engineering and Data Communication, Queensland University of Technology, Australia; 2007.
19. Shahsavari S, Bagheri G, Mahjub R, et al. Application of artificial neural networks for optimization of preparation of insulin nanoparticles composed of quaternized aromatic derivatives of chitosan. Drug Res. 2014;64:151-158.