
Optical Character Recognition System: An Implementation of Human Handwriting Recognition Software with KohoNet

The Challenges Behind
- To build a Human and Computer Interactive Agent for handwritten character extraction.
- To build a Character Dictionary for the initial character sets as well as the experienced character sets.
- To build a Sample Work Space in which to draw the characters that are to be recognized by the system.
- To build a KohoNet network whose purpose is to trace the edges of the human handwritten characters.
- To build a Knowledge Model that upgrades the system's level of experience for different varieties of patterns.

Motivated Solutions to the Challenges

As a solution to the above problems, this system is motivated to provide a simulation prototype for English character recognition (alphabets in both upper and lower case, numbers, and perhaps some special characters).

Relevant Buzzwords

OCR: Optical Character Recognition
SWP: Sample Work Space
HMM: Hidden Markov Model
CPM: Character Pattern Matrix
TTF: True Type Font
KohoNet: Kohonen Neural Network
DFT: Discrete Fourier Transformation
DCT: Discrete Cosine Transformation
WVS: Word Vector Space

With the advancement of database technology and the advent of the data mining paradigm, the storage and retrieval of information has become easier beyond imagination. But the current state of information technology is still inadequate for the analysis of data. On the one hand we use efficient technologies like Java, .NET, SAS, SAP and advanced Excel; on the other, we are still re-entering data from handwritten forms into databases manually. One way to overcome this flaw is to introduce a Hand Written Character Recognition (HWCR) system, but the task is far from simple.

1. INTRODUCTION

The term OCR is an abbreviation for Optical Character Recognition. OCR is basically a paradigm whose purpose is to locate and extract characters from a text image into computer-editable text. Text extraction may be done in different ways: through a scanner, through a graphics mat (a device on which text is written using a photo pen), or by any other equivalent means. The converted text is produced either in ASCII format or in Unicode Text Format (UTF). Such systems are basically categorized as embedded applications, because they are intentionally developed for special kinds of dedicated purposes.

This paradigm is beneficial in different ways. It can be used for large amounts of data entry; it is useful in word processing systems to extract characters from a graphics mat or scanned document; it can be used for digital signature manipulation; and many other relevant cases.

In this report the OCR approach is defined for English character symbols, classified as upper case alphabets (A-Z), lower case alphabets (a-z) and numbers (0-9).

Table 1: List of English Characters

Upper Case Characters: ABCDEFGHIJKLMNOPQRSTUVWXYZ
Lower Case Characters: abcdefghijklmnopqrstuvwxyz
Numeric Characters:    0123456789

The prime objective behind these experiments is to build simulation software capable of performing basic OCR functionalities. The principal idea behind this simulation software is to convert a scanned character image into the corresponding editable text. The contribution of this paper is to demonstrate the design and development scheme of an OCR that would perhaps solve the above problems, at least partially.

The Work Space will consist of a Kohonen Neural Network that primarily guides the interface for character extraction and recognition. The Kohonen Neural Network is used here to provide the training and recognition procedures, which in turn classify the character pattern stages in the pattern matrix. The construction of words is carried out by a probabilistic model called the Hidden Markov Model (HMM); the HMM scheme represents each word through consecutive frames, where each frame represents a unit character matrix.

2. THE PRINCIPLE ARCHITECTURE

The principle architecture of this OCR paradigm has three successive levels of data processing units, and there exist seven different agents in the system to serve different dedicated purposes. These agents are:

User Interface: This system has a very rich graphical user interface, designed keeping in mind the convenience of the users. All operations, from dictionary management to data analysis, are done through this GUI. Buttons, toolbars, status bars, etc. are used to inform users about the different ongoing behaviours and the status of the system.

Scanner Manager: The unit used to extract pixels from a handwritten form using a scanning device. Once the pixels are extracted, they are stored in a file as a Bit Map (.bmp) image in monochrome (black and white) format.

Form Processor: The purpose of the Form Processor in an OCR is to acquire the pixels from the .bmp file created in the scanning stage. Once the pixels are extracted from the form, they are stored in an image vector, a byte array from which the actual pixels can be read.

Image Processor: The main processing agent. Its purpose is to execute the set of complex image processing algorithms, such as the Hidden Markov scheme, Kohonen network management, image vector processing, the Discrete Fourier Transformation and the Discrete Cosine Transformation.

UpLoader and X Form Parser: These two agents upload the form data to the web server so that it can be fed to the web form and parsed by the web form, respectively. These agents, however, are not in the scope of these experiments.

3. CHARACTER RECOGNITION PROCEDURE

Like all other recognition procedures, character recognition involves several preliminary steps. First of all we need a large amount of collected data, which we preserve for future training purposes. The most important aspect is that the data must be chosen with great care and be specific to the system: the choice of irrelevant data may lead the system into a misguided training stage, while the selection of overestimated data may increase the complexity of the system.

The very next stage is the pre-processing stage. Here mainly image processing procedures are required: grey scale image conversion, binary image conversion and skew correction. All of the later stages depend on the pre-processing stages; in other words, these are the basic building blocks for all recognition systems, so great care should be taken while designing them.

During the extraction stage each character pattern is stored in a set of 5(C) x 7(R) matrices (i.e. Mat(?)), which together are considered a Word Vector Space (WVS). For example, the word COBALT would have six frames: F0:Mat(C), F1:Mat(O), F2:Mat(B), F3:Mat(A), F4:Mat(L) and F5:Mat(T). These are mandatory general steps, followed by any kind of recognition system.
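The binary image conversion step mentioned above can be sketched as follows. This is only an illustrative sketch: the threshold value of 128 and the function name are my assumptions, not details given in the text.

```python
def to_binary(grey_pixels, threshold=128):
    """Map each grey-scale intensity (0-255) to a truth value:
    1 for a black (dark) pixel, 0 for a white (light) pixel."""
    return [[1 if px < threshold else 0 for px in row] for row in grey_pixels]

# A tiny 2x3 grey-scale fragment: dark pixels become 1, light ones become 0.
grey = [[10, 200, 30],
        [250, 5, 180]]
print(to_binary(grey))  # [[1, 0, 1], [0, 1, 0]]
```

The same truth-value convention (1 for black, 0 for white) reappears later when the character bit patterns are built.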

4. IMPLEMENTATION OF CHARACTER RECOGNITION STEPS

These steps are basically followed to convert the set of raw data into trainable components. In this design scheme we have chosen the KohoNet as the classification scheme. My primary experiments begin with the single unit character matrix, and the scheme is also intended to extend to word recognition. In the case of the single unit character matrix, recognition of the characters takes place using the following steps:

1. A printed or handwritten English character is acquired by means of a scanner, and the corresponding bytes are taken as the raw data for processing. Extracted characters may be found either in grey scale or in black & white; grey-scale pixels must be converted to the black & white scale during pre-processing so that they can be represented by a truth value (0 or 1). In the general scene, the truth value 1 is used to represent black pixels and 0 white pixels.

2. Next the pixels are examined and mapped into a specific area, and a vector is extracted from the image containing the English word or character.

3. In the third stage we concentrate on activities like pixel ratio analysis, pixel mapping, edge detection, thinning, chain code, histogram analysis, etc. This part is considered the processing stage.

4.1. Data Collection (Dictionary Management)

As discussed earlier, we have to choose the data types for the system with great care, because the flexibility as well as the durability of the system absolutely depend on it. Here the choices are English characters in upper case, lower case and numeric types. Initially the choice began with a dictionary of 26 fixed character bit matrices (for upper case A-Z only), stored in a Microsoft Access database file in .mdb format. I have chosen a 7R x 5C matrix for each individual character. The list is as follows:

Table 2: List of Initial Character Bit Patterns

Char  Matrix Bit Pattern
A     00110001100111001010111111100110001
B     11111100011000111111100111000110111
C     11111100001000010000100001100001111
D     11111100011000110000100011000111111
E     11111100001000011111100001000011111
F     11111100001000011110100001000010000
G     01110110001000010111100011000111111
H     10001100001000111001111111000110001
I     11111001000010000100001000010000111
J     11111001000010000100101001010011100
K     10001100111111011010100101001110011
L     10000100001000010000100001000011111
M     10000100001000010000100001000011111
N     11111110111000110001100001000110001
O     11111100011000110000100011000111111
P     11111100011001111110100001000010000
Q     01111110011000110001100111101101111
R     11111100011000111011111101001110001
S     01111110001100000111100001000011111
T     11111001000010000100001000010000100
U     10001100001000110001100011001111110
V     10001100011101101011011100111000110
W     10101101011010110101101011011111111
X     10011110100111001100111001011010010
Y     10001110110111001100010000100001000
Z     11111000110011001100110001000011111

As we can see, each bit pattern mentioned in Table 2 is a 35-bit pattern. These bit patterns are preserved for the purpose of representing the unit character matrices of the corresponding characters: if a cell in the matrix holds 1, the cell contains a black pixel, and 0 a white pixel. For example, if we consider the character A, it has the pattern shown below.

Fig. 1: Sample Character Pattern (Actual Character, Character Matrix, Pattern Matrix)

All the above characters are represented with a fixed ratio. As this system is an embedded application and we have chosen the characters in a fixed matrix pattern, we do not need to concentrate on the skew correction technique. During the design of the Character Entry Palate, however, we have to keep one thing in mind: the palate must be designed on the basis of a 7(H):5(W) proportion. It is mandatory to follow this size ratio during the design of the character palate; if the size of the palate is not in this ratio, the recognition scheme will be misguided.

In the secondary stage of the data collection we have implemented a Work Space from which we can enter any English character pattern of our preference.
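Each 35-bit entry in Table 2 maps row by row onto the 7(R) x 5(C) character matrix. The pattern for A below is copied from Table 2; the small rendering helper itself is only an illustration.

```python
PATTERN_A = "00110001100111001010111111100110001"  # 'A' entry from Table 2

def to_matrix(bits, rows=7, cols=5):
    """Split a 35-bit dictionary pattern into its 7 rows of 5 cells."""
    assert len(bits) == rows * cols
    return [bits[r * cols:(r + 1) * cols] for r in range(rows)]

# Render the matrix: '#' marks a black pixel (1), '.' a white pixel (0).
for row in to_matrix(PATTERN_A):
    print(row.replace("1", "#").replace("0", "."))
```

Printed this way, the 1-cells trace out the shape of the letter on the 7 x 5 grid, which is exactly what the pattern matrix palate displays.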

We have named this scheme the Dictionary Collection Entry. Through it we can store any type of pattern, which can then be used by the recognition technique. For instance, if we need to recognize lower-case letters like a, b, c, d ...... then we can enter their patterns into the dictionary before executing the recognition technique. This improves the recognition scheme for identifying every type of English character. The implemented workspace is as follows:

Fig. 2: New Character Pattern Creation Scheme

The example has been plotted for the new character 6. This editor allows one to enter an English character along with its pattern. To create the pattern for 6, we enter 6 in the Character field and the bit pattern, in terms of 0s and 1s, into the Bit Pattern field. While entering the pattern, the editor simultaneously shows the character pattern in the rightmost pattern matrix/palate. The pattern can be saved to the existing dictionary by clicking the Save button.

Finally, the collection scheme ends with pattern editing functionality. What is pattern editing? To explain this, consider that somebody intended to create the pattern of 3 but actually entered the pattern for 7; the editor lets the wrong pattern be corrected so that the character can be recognized properly.
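The save and edit behaviour of the Dictionary Collection Entry can be sketched roughly as below. The real system persists entries in an .mdb file; here a plain in-memory dict stands in, and the 35-bit pattern for 6 is a hypothetical illustration, not a value from this report.

```python
dictionary = {}

def save_pattern(char, bits):
    """Store a 35-bit character pattern, validating its shape first."""
    if len(bits) != 35 or set(bits) - {"0", "1"}:
        raise ValueError("pattern must be exactly 35 bits of 0/1")
    dictionary[char] = bits

def edit_pattern(char, bits):
    """Replace a wrongly entered pattern, as the pattern editor allows."""
    save_pattern(char, bits)

six = "01110100011000001110100011000101110"   # hypothetical pattern for '6'
save_pattern("6", six)
fixed = six[:17] + "0" + six[18:]              # flip one wrongly set cell
edit_pattern("6", fixed)
print(dictionary["6"] == fixed)  # True
```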

The system provides a work space from which one can change the wrong pattern of a character by replacing the 0s and 1s to produce the required pattern. The above discussion is illustrated in the figure below.

Fig. 3: Wrong Character Pattern for 3

Now, to correct the wrongly entered pattern, we simply modify it.

5. IMAGE SEGMENTATION PROCEDURE

Image segmentation is a very crucial stage in an OCR system. This stage recognizes the character frames from the given document image (.bmp) fetched after the scanning stage. The entire image segmentation procedure is divided into three basic steps.

5.1. Frame Recognition

This step is responsible for finding the character frames in the scanned document image. In my system each character frame has a height-to-width ratio of 7:5, and the actual height and width of the frames are calculated according to this statistical ratio. For example, if I consider the word "SEG", it would have 3 frames, each with the 7:5 H:W ratio. By this we can determine the total number of states present in the HMM model of the system; the recognition of frames in turn poses the Hidden Markov Model of the system.
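The 7:5 ratio makes the frame count easy to derive: the image height fixes each frame's width, which in turn fixes the number of frames. A rough sketch, with illustrative function names and an assumed tightly cropped input image:

```python
def count_frames(image_width, image_height):
    """Number of character frames in a word image whose frames keep
    the 7(H):5(W) ratio; assumes the image is tightly cropped."""
    frame_width = image_height * 5 // 7
    return image_width // frame_width

# A 75px-wide, 35px-high scan of "SEG": 35 * 5/7 = 25px per frame.
frames = count_frames(75, 35)
print(frames)      # 3 character frames
print(frames + 2)  # 5 HMM states once start and end states are added
```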

Fig. 4: Word Frame Recognition (SEG); Frame:1(S), Frame:2(E), Frame:3(G), each frame in 7:5 R:C ratio

5.2. Character Feature Calculation

In this phase each frame is taken separately and character feature calculation is performed on each individual pixel of the frame by applying the Discrete Cosine Transformation (DCT).

5.3. Discrete Cosine Transformation (DCT)

The Discrete Cosine Transformation (DCT) converts a signal into its elementary frequency components; it represents the signal as a sum of sinusoids of varying magnitudes and frequencies. The scheme is closely related to the Discrete Fourier Transform (DFT). It can capture most of the visually significant information of a character frame using only a small number of coefficients.

Assumptions: A and B are the input and output images, each consisting of R rows and C columns, and x and y are the coordinate positions of the pixels. The DCT of the image is then given by

B(u, v) = a(u) a(v) * sum_{x=0..R-1} sum_{y=0..C-1} A(x, y) cos[pi (2x+1) u / (2R)] cos[pi (2y+1) v / (2C)],
for 0 <= u <= R-1 and 0 <= v <= C-1,

where the normalization factors are

a(u) = 1/sqrt(R) for u = 0, and sqrt(2/R) for 1 <= u <= R-1;
a(v) = 1/sqrt(C) for v = 0, and sqrt(2/C) for 1 <= v <= C-1.
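The separable computation suggested by the formula (a 1-D DCT column wise, then row wise) can be written out directly. This is a plain reference sketch of the standard orthonormal DCT-II, not optimised code:

```python
import math

def dct_1d(signal):
    """Orthonormal 1-D DCT-II of a list of samples."""
    n = len(signal)
    out = []
    for u in range(n):
        alpha = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
        s = sum(signal[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                for x in range(n))
        out.append(alpha * s)
    return out

def dct_2d(image):
    """2-D DCT via the separable property: columns first, then rows."""
    cols = [dct_1d(list(col)) for col in zip(*image)]  # column-wise pass
    rows = [dct_1d(list(row)) for row in zip(*cols)]   # then row-wise
    return rows

frame = [[1] * 5 for _ in range(7)]      # a flat 7 x 5 frame
coeffs = dct_2d(frame)
print(round(coeffs[0][0], 3))            # 5.916: all energy in the DC term
```

For a constant frame, every coefficient except the DC term B(0, 0) is (numerically) zero, which illustrates the energy-compaction property mentioned above.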

Fig. 5: Discrete Cosine Transformation (8-bit input data, 7 x 5 blocks -> column-wise 1-D DCT -> row-wise 1-D DCT -> DCT coefficients as 64-bit output data)

Since the Discrete Cosine Transformation is a linearly separable transformation process, a two-dimensional DCT can be computed easily by applying one-dimensional DCTs column wise and then row wise, as above. Once the DCT process is complete, the frame matrix is updated and the information is stored in a file, which will be given as the HMM model input for the particular character or word to be trained. Hence the basic image processing steps are complete.

6. PATTERN RECOGNITION WITH KOHONET

This is the most sensitive stage of the Optical Character Recognition System. In this stage the system actually finds out the bit pattern of the individual character matrices. Once the bit pattern is predicted, the system compares the pattern with the existing character patterns stored in the character pattern dictionary. During this recognition process, if the input pattern matches about 70% or more of a dictionary character pattern, it is recognized with success. Otherwise the input pattern is added to the character dictionary as a new character pattern, or it may be treated as a variety of an existing pattern, depending on the user's preference. The steps involved in pattern recognition with KohoNet are as follows.

6.1. Creation of the Pattern Vector

Once we have found the Hidden Markov Model from the system and the model is sampled, we have two distinct areas in the vector: one is the white area and the other is the black area. For example, see Fig. 6, which describes the black and white areas of the character B.

Fig. 6: Character Vector of B
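The 70% matching rule stated above can be sketched as a straightforward cell-by-cell comparison of 35-bit patterns. The threshold follows the text; the helper names and the single-entry dictionary are illustrative, with the T pattern taken from Table 2:

```python
def similarity(a, b):
    """Fraction of matrix cells on which two bit patterns agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def recognize(pattern, dictionary, threshold=0.70):
    """Return the best-matching dictionary character, or None if no
    stored pattern reaches the ~70% agreement threshold."""
    best_char, best_score = None, 0.0
    for char, stored in dictionary.items():
        score = similarity(pattern, stored)
        if score > best_score:
            best_char, best_score = char, score
    return best_char if best_score >= threshold else None

dictionary = {"T": "11111001000010000100001000010000100"}   # from Table 2
noisy_t    =       "11111001000010000100001000010001100"   # one bit flipped
print(recognize(noisy_t, dictionary))  # T  (34/35 = 97% agreement)
```

An unmatched input (below the threshold for every stored pattern) returns None, corresponding to the case where the pattern is added to the dictionary as new.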

Next the system recognizes the pattern vector corresponding to the pattern area: if a white area is found in the input character matrix, the corresponding pattern vector element is marked 0, and for a black area it is marked 1. For example, consider the scheme for Fig. 6; its pattern vector will be as per Fig. 7.

Fig. 7: Pattern Vector of B

Each of these vectors is then given a new unique ID and stored in the database so that it can be utilized during future recognition. The procured vector is now considered as the input to the Kohonen Neural Network.

Introduction to the Kohonen Neural Network

So far we have been through a lot of pre-processing stages to procure the pattern vector. But the question may arise: what exactly is the Kohonen Neural Network? The answer is simple: it is a neural network architecture invented by Dr. Teuvo Kohonen and named after its inventor. It takes the absolute responsibility of grouping data, i.e. if the input of the system does not provide a specification for the data, it can identify one automatically. It can be trained in an unsupervised mode, i.e. it does not require any supporting artificial intelligence to recognize the network. The Kohonen Neural Network differs from a classical neural network (e.g. the feed-forward back-propagation neural network) in several aspects:

- The way of training and recall of the patterns.
- It does not output all the input neurons successively. Whenever an input vector is presented, the algorithm analyzes the pixel neurons by introducing a path detection scheme, during which one of the input neurons is selected as the "winner". Finally, if all the neurons are traversed successfully, the winner neuron is the output of the system.
- It does not use any sort of bias weight.
- It does not require any kind of activation function.

For example, let us consider the scheme for Fig. 11: here the input vector has 7 rows and 5 columns, so the system has 7 x 5 = 35 input pixels, and therefore 35^2 = 1225 input-neuron connections. Now if the system selects 8 sets of neurons as outputs, the scheme will be as follows (Ref. Fig. 8).
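The winner-takes-all selection described above can be sketched as follows. The weights here are random placeholders rather than trained values, and the Euclidean-distance criterion is the usual Kohonen choice, assumed rather than stated explicitly in the text:

```python
import random

def winner(input_vec, weight_rows):
    """Index of the output neuron whose weight vector lies closest
    (in squared Euclidean distance) to the input vector."""
    def dist(w):
        return sum((x - wi) ** 2 for x, wi in zip(input_vec, w))
    return min(range(len(weight_rows)), key=lambda i: dist(weight_rows[i]))

random.seed(0)
inputs = [random.choice([0, 1]) for _ in range(35)]  # a 7 x 5 pattern vector
weights = [[random.random() for _ in range(35)] for _ in range(8)]  # 8 outputs
print(winner(inputs, weights))  # index of the winning output neuron
```

In a full self-organizing map, training would then pull the winner's weights (and its neighbours') toward the input vector; only the selection step is shown here.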

Fig. 8: KohoNet Output Selection Scheme

As per Fig. 8, only four neurons amongst the input pixels have been chosen as the winners.

Deployment of the Hidden Markov Model

Once the recognition stage is complete, we can proceed to the next stage, HMM deployment. To describe this, let us reconsider the word "SEG" once again. We can observe (Ref. 07) that the word has three states, without the start and end states. Now if we add the start and end states to the word, we get five consecutive states. In this scenario the HMM model will clearly represent the extracted frames and states of the word "SEG". The state transition of the word "SEG" is shown in the figure below.

How Training & Recognition Differ

The image processing methodology is exactly the same for the training and recognition operations. The only difference is that the training mechanism uses a certain number of samples for modeling a particular character or word, which are used for estimating the model parameters, whereas the recognition operation creates a model for the particular image character or word to be recognized.
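The state count described above can be captured in a one-line sketch: a word of N character frames yields N states, plus the HMM's start and end states. The state labels are illustrative:

```python
def hmm_states(word):
    """States of the word-level HMM: one per character frame,
    plus explicit start and end states."""
    return ["START"] + list(word) + ["END"]

states = hmm_states("SEG")
print(states)       # ['START', 'S', 'E', 'G', 'END']
print(len(states))  # 5 states, as noted for "SEG"
```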

Fig. 9: HMM State Transition for "SEG"

Finally we have got the Hidden Markov Model for the word "SEG". Next this model is taken as the input to the Kohonen network model for the purpose of pattern recognition of the characters written in the above three frames.

7. Conclusion

I have written this report to demonstrate some basic steps involved in Optical Character Recognition systems. I have chosen the Kohonen Neural Network because I have found it to be an innovative and independent pattern recognition process. The Kohonen scheme described above covers the basic principles of the architecture. So far I have tried to demonstrate the major functionalities involved in this system; due to the short time span and the limitations of the write-up, I could only arrange the above limited material. I hope the reader has enjoyed the learning.

8. Future Work

As stated above, this paradigm has a wide area of research work and associated development schemes. Topics like the KohoNet structure, weight and height adjustment, rate of accuracy, and error detection and correction are still under research, as is the introduction of new algorithms to reduce the present complexity issues. There are also multiple ways to implement the OCR paradigm for different language character recognition, digital signature recognition, etc. Along with these, there are many other related mathematical and technical theories to learn and develop.

