For the researcher and the financial analyst, the main advantage of ANNs is that there is no need to specify the functional relation between variables. Since they are connectionist learning machines, the knowledge is directly embedded in a set of weights through the linking arcs among the processing nodes. In order to train a neural network properly, one needs a large set of representative, good-quality examples. In the case of bankruptcy problems, the researcher should be cautious when drawing conclusions from neural networks trained with only one or two hundred cases, as observed in most previous studies.
Generalized Regression Neural Network
The GRNN has been applied to solve a variety of problems such as prediction, control, plant process modeling, and general mapping problems. The general regression neural network (Specht; Nadaraya; Watson) does not require an iterative training procedure as the back-propagation method does. The GRNN is used for the estimation of continuous variables, as in standard regression techniques. It is related to the radial basis function network and is based on a standard statistical technique called kernel regression. By definition, the regression of a dependent variable y on an independent variable x estimates the most probable value of y, given x and a training set. The regression method produces the estimated value of y that minimizes the mean-squared error. GRNN is a method for estimating the joint probability density function (pdf) of x and y, given only a training set. Because the pdf is derived from the data with no preconceptions about its form, the system is perfectly general. Furthermore, it is consistent; that is, as the training set size becomes large, the estimation error approaches zero, with only mild restrictions on the function. In GRNN, instead of training the weights, one simply assigns to $w_{ij}$ the target value directly from the training set associated with input training vector i and component j of its corresponding output vector. The GRNN architecture is given in Fig. 2. GRNN is based on the following formula:

$$\hat{y}(x) = E[y \mid x] = \frac{\int_{-\infty}^{\infty} y \, f(x, y) \, dy}{\int_{-\infty}^{\infty} f(x, y) \, dy} \qquad (1)$$

where $\hat{y}(x)$ is the output of the estimator, $x$ is the estimator input vector, $E[y \mid x]$ is the expected output value given the input vector $x$, and $f(x, y)$ is the joint probability density function (pdf) of $x$ and $y$. The function value is estimated optimally as follows:

$$\hat{y}(x) = \frac{\sum_{i=1}^{n} y_i \, h_i}{\sum_{i=1}^{n} h_i}, \qquad h_i = \exp\!\left( -\frac{D_i^2}{2\,\mathrm{spread}^2} \right) \qquad (2)$$

where $y_i$ is the target output corresponding to input training vector $u_i$; $h_i$ is the output of hidden-layer neuron $i$; $D_i^2 = (x - u_i)^{\top}(x - u_i)$ is the squared distance between the input vector $x$ and the training vector $u_i$; $x$ is the input vector; $u_i$ is training vector $i$, the center of neuron $i$; and spread is a constant controlling the size of the receptive region.
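To make Eq. (2) concrete, the following is a minimal Matlab sketch of the estimator; the function name grnn_predict and the variable layout are illustrative choices, not code from the study:

    function yhat = grnn_predict(x, U, y, spread)
    % GRNN estimate (Eq. 2) for a single input column vector x.
    %   U      : d-by-n matrix, one training vector u_i per column
    %   y      : 1-by-n row vector of target outputs y_i
    %   spread : constant controlling the receptive-region size
    D2 = sum(bsxfun(@minus, U, x).^2, 1);   % squared distances D_i^2
    h  = exp(-D2 / (2 * spread^2));         % hidden-neuron outputs h_i
    yhat = sum(y .* h) / sum(h);            % kernel-weighted average of targets
    end

Note that the estimate is a convex combination of the training targets: an input close to a training vector receives nearly that vector's target, while a large spread averages over many neighbors.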
Figure 2. Generalized Regression Neural Network (GRNN) Architecture
This study was conducted at the Faculty of Economics and Administrative Sciences, Al-Zaytoonah University of Jordan, in the Hashemite Kingdom of Jordan. Our sample consists of 208 students belonging to the accounting department. The information for this study was obtained from the registrar's office at Al-Zaytoonah University of Jordan, where it is maintained in a computerized database. The Cumulative Grade Point Average (CGPA) is used as an indicator to measure the performance of the university students. The students' overall performance was hypothesized to be a function of the following factors: (1) secondary school performance, measured by the score in the secondary school certificate examination, expressed as a percentage; (2) type of secondary school branch; (3) gender; and (4) boarding or non-boarding student.
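As an illustration only (the numeric codings below are assumptions, not the coding scheme of the registrar's database), the four factors can be arranged with one input column per student:

    % Illustrative factor coding (assumed values, not the study's data).
    secondary = [85.2 72.5 91.0];   % secondary school certificate score (%)
    branch    = [1 2 1];            % secondary school branch, coded numerically
    gender    = [0 1 1];            % e.g. 0 = male, 1 = female
    boarding  = [1 0 0];            % 1 = boarding, 0 = non-boarding
    P = [secondary; branch; gender; boarding];   % one student per column
    T = [3.2 2.4 3.6];              % corresponding CGPA targets (made up)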
A generalized regression neural network (GRNN), with a radial basis layer and a special linear layer of output neurons, was created using the Neural Network Toolbox of Matlab 7.9, as shown in Fig. 2. Generalized regression neural networks are a kind of radial basis network that is often used for function approximation. The first layer has as many neurons as there are input/target vectors. Each neuron's weighted input is the distance between the input vector and its weight vector. Each neuron's net input is the product of its weighted input and its bias. Each neuron's output is its net input passed through a radial basis transfer function, a neural transfer function that calculates a layer's output from its net input. If a neuron's weight vector is equal to the input vector (transposed), its weighted input will be 0, its net input will be 0, and its output will be 1.
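A minimal sketch of this step, reusing the illustrative P and T above (the spread value is a placeholder, not the study's actual setting):

    spread = 0.8;                    % receptive-region constant (assumed)
    net  = newgrnn(P, T, spread);    % one radial basis neuron per training case
    cgpa = sim(net, [78; 1; 1; 0]);  % estimated CGPA for a new student

Because newgrnn builds both layers directly from the training data, no iterative weight training takes place.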