Question

TASK 1. Neural Network

Using any programming language, you are to prepare a neural network program to solve and train a simple neural network that solves an XOR function. Follow exactly the steps used in the class for the forward and backward propagation. The guidelines are as follows:

1. Target error = 0.09.
2. Elaborate in detail the process and source code on how you achieved the following. Support your discussion with screenshots.
a. Epoch or the number of iterations should be set by the user.
b. Inputs (should be editable)
c. Weights (should be editable)
d. Output and Error
e. Sigmoid activation functions and gradient functions
f. Forward and Backward Propagation processes

Expert Answer

Step 1/4

The term neural network refers to an arrangement of neurons, either biological or artificial in nature. In the machine learning context, neural networks are a set of algorithms designed to recognize patterns the way a human brain does. They interpret sensory data through a kind of machine perception, labeling, or clustering of raw input. The patterns they recognize are numerical and stored in vectors, into which all real-world data, be it images, sound, text, or time series, must be translated. A neural network can be pictured as a system of many highly interconnected nodes, called 'neurons', organized in layers that process information using dynamic state responses to external inputs. Before looking at the working and architecture of neural networks, let us first understand what artificial neurons actually are.

Artificial Neurons

Perceptron: Perceptrons are a type of artificial neuron developed in the 1950s and 1960s by the scientist Frank Rosenblatt, inspired by earlier work by Warren McCulloch and Walter Pitts. So how does a perceptron work? A perceptron takes several binary inputs x1, x2, ..., and produces a single binary output.

[Figure: a perceptron with inputs x1, x2, x3 feeding into a single output]

It can have more or fewer inputs. Weights play an important part in computing the output: the weights w1, w2, ..., are real numbers expressing the importance of the respective inputs to the output. The neuron's output (0 or 1) depends entirely on a threshold value and is computed as:

output = 0 if Σj wj·xj ≤ t0
output = 1 if Σj wj·xj > t0

Here t0 is the threshold value, a real number that is a parameter of the neuron. That is the basic mathematical model: the perceptron is a device that makes decisions by weighing up evidence. By varying the weights and the threshold, we get different models of decision-making. A small sketch of this rule is shown below.
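To make the threshold rule concrete, here is a minimal sketch of it in R (the language used in Step 4 of this answer). The weights, threshold, and inputs are made-up example values, not values from the question.

# A perceptron fires (outputs 1) only when the weighted sum of its
# inputs exceeds the threshold t0; otherwise it outputs 0.
perceptron <- function(x, w, t0) {
  if (sum(w * x) > t0) 1 else 0
}

# Example with arbitrary weights and threshold:
w  <- c(0.5, 0.5, -0.3)        # importance of each input
t0 <- 0.6                      # threshold value
perceptron(c(1, 1, 0), w, t0)  # 1, since 0.5 + 0.5 = 1.0 > 0.6
perceptron(c(1, 0, 1), w, t0)  # 0, since 0.5 - 0.3 = 0.2 <= 0.6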
Step 2/4

Sigmoid Neurons: Sigmoid neurons are very similar to perceptrons, but modified so that small changes in their weights and bias cause only a small change in their output. This allows a network of sigmoid neurons to learn much more easily. Just like a perceptron, the sigmoid neuron has inputs x1, x2, .... But instead of being only 0 or 1, these inputs can take any value between 0 and 1; for instance, 0.567... is a valid input for a sigmoid neuron. A sigmoid neuron also has a weight for each input, w1, w2, ..., and an overall bias, b. However, the output is not 0 or 1. Instead it is σ(w·x + b), where σ is the sigmoid function:

σ(z) = 1 / (1 + e^(−z))

The output of a sigmoid neuron with inputs x1, x2, ..., weights w1, w2, ..., and bias b is therefore:

output = 1 / (1 + exp(−Σj wj·xj − b))
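This formula translates directly into R. The following is a small sketch with made-up weights and bias, shown only to illustrate it:

sigmoid <- function(z) 1 / (1 + exp(-z))   # squashes any real z into (0, 1)

# Output of a sigmoid neuron: sigmoid of the weighted sum plus the bias.
sigmoid_neuron <- function(x, w, b) sigmoid(sum(w * x) + b)

sigmoid(0)                                            # 0.5
sigmoid_neuron(c(0.567, 0.2), c(0.8, -0.4), b = 0.1)  # about 0.62

Note that, unlike the perceptron, this output varies smoothly with the weights and bias, which is exactly what makes gradient-based learning possible.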
Step 3/4

The Architecture of Neural Networks

A neural network consists of three kinds of layers:

Input layer: takes inputs based on existing data.
Hidden layer: uses backpropagation to optimize the weights of the input variables in order to improve the predictive power of the model.
Output layer: produces the predictions based on the data from the input and hidden layers.

The input data is introduced to the neural network through the input layer, which has one neuron for each component present in the input data, and is passed on to the hidden layers (one or more) present in the network. They are called 'hidden' only because they do not constitute the input or output layer. In the hidden layers, all the actual processing happens through a system of connections characterized by weights and biases (as discussed earlier). Once the input is received, the neuron computes a weighted sum, adds the bias, and, according to the result and an activation function (the most common one is the sigmoid), decides whether it should be 'fired' or 'activated'. The neuron then transmits the information downstream to the other connected neurons in a process called the 'forward pass'. At the end of this process, the last hidden layer is linked to the output layer, which has one neuron for each possible desired output.

Step 4/4

Implementing a Neural Network in R

It is particularly easy to implement a neural network in R because of the excellent libraries available for it. Before implementing a neural network in R, we must first understand the structure of the data.

Understanding the structure of the data

Here let us use the binary dataset. The objective is to predict whether a candidate will be admitted to a university based on variables such as gre, gpa, and rank. The R script is given alongside and is commented for the reader's better understanding. The data is in .csv format. We get the working directory with the getwd() function and place the binary.csv dataset inside it to proceed further. Kindly download the csv file here.

# preparing the dataset
getwd()
data <- read.csv("binary.csv")
str(data)

Output:

'data.frame': 400 obs. of 4 variables:
 $ admit: int 0 1 1 1 0 1 1 0 1 0 ...
 $ gre  : int 380 660 800 640 520 760 560 400 540 700 ...
 $ gpa  : num 3.61 3.67 4 3.19 2.93 3 2.98 3.08 3.39 3.92 ...
 $ rank : int 3 3 1 4 4 2 1 2 3 2 ...

Looking at the structure of the dataset, we can observe that it has 4 variables: admit tells whether a candidate was admitted (1 if admitted and 0 if not), while gre, gpa, and rank give the candidate's GRE score, his/her GPA at the previous college, and the previous college's rank, respectively. We use admit as the dependent variable and gre, gpa, and rank as the independent variables. Now let us understand the whole process in a stepwise manner.

Step 1: Scaling of the data

To fit a neural network to a dataset, it is very important to ensure proper scaling of the data. Scaling is essential because otherwise a variable may have a large impact on the prediction variable only because of its scale, and using unscaled data may lead to meaningless results. Common techniques to scale data are min-max normalization, Z-score normalization, median and MAD, and tanh estimators. Min-max normalization transforms the data into a common range, removing the scaling effect from all the variables. Here we use min-max normalization to scale the data (a small R sketch of this transformation is given after the final answer below).

# Draw a histogram for the gre data
hist(data$gre)

Explanation

Please refer to the solution in the steps above.

Final answer

I have tried to explain what I know about your question; I hope it helps you. Thank you.
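As a short addendum to Step 1 of the R walkthrough above: the min-max normalization mentioned there can be written in a few lines. This is a generic sketch, not code from the original solution, and it assumes the data frame loaded earlier with read.csv:

# Min-max normalization: rescale a variable into the [0, 1] range,
# removing the effect of its original scale.
min_max <- function(x) (x - min(x)) / (max(x) - min(x))

data$gre  <- min_max(data$gre)   # GRE scores now lie in [0, 1]
data$gpa  <- min_max(data$gpa)
data$rank <- min_max(data$rank)
summary(data$gre)                # min is 0, max is 1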

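Finally, to connect back to TASK 1 itself: below is one possible sketch in R of a 2-2-1 network trained on XOR with sigmoid activations and gradient-descent backpropagation. It is a minimal illustration under stated assumptions, not the exact procedure from the class that the question refers to: the target is interpreted here as mean absolute error, and the epoch count, learning rate, inputs, and initial weights are placeholders the user can edit, as the guidelines require. Depending on the random starting weights, the network may need more epochs or a re-run to reach the 0.09 target.

sigmoid      <- function(z) 1 / (1 + exp(-z))
sigmoid_grad <- function(a) a * (1 - a)   # derivative, written in terms of the activation

# Editable inputs and targets: the four XOR cases.
X <- matrix(c(0,0, 0,1, 1,0, 1,1), ncol = 2, byrow = TRUE)
Y <- c(0, 1, 1, 0)

# Editable starting weights and biases (small random values).
set.seed(1)
W1 <- matrix(runif(4, -1, 1), nrow = 2)   # input -> hidden (2 x 2)
b1 <- runif(2, -1, 1)
W2 <- runif(2, -1, 1)                     # hidden -> output
b2 <- runif(1, -1, 1)

epochs <- 10000   # number of iterations, set by the user
lr     <- 0.5     # learning rate
target <- 0.09    # target error from the question

for (epoch in 1:epochs) {
  total_err <- 0
  for (i in 1:nrow(X)) {
    # Forward propagation
    h   <- sigmoid(X[i, ] %*% W1 + b1)    # hidden activations (1 x 2)
    out <- sigmoid(sum(h * W2) + b2)      # network output (scalar)

    # Backward propagation of the error
    err       <- Y[i] - out
    total_err <- total_err + abs(err)
    d_out     <- err * sigmoid_grad(out)          # output delta
    d_hid     <- (d_out * W2) * sigmoid_grad(h)   # hidden deltas

    # Gradient-descent weight updates
    W2 <- W2 + lr * d_out * as.vector(h)
    b2 <- b2 + lr * d_out
    W1 <- W1 + lr * outer(X[i, ], as.vector(d_hid))
    b1 <- b1 + lr * as.vector(d_hid)
  }
  if (total_err / nrow(X) < target) {
    cat("Reached target error at epoch", epoch, "\n")
    break
  }
}

# Check the trained network on all four XOR inputs.
for (i in 1:nrow(X)) {
  h <- sigmoid(X[i, ] %*% W1 + b1)
  cat(X[i, ], "->", round(sigmoid(sum(h * W2) + b2), 3), "\n")
}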