
1. What do you mean by semantic networks? Explain inheritance in semantic networks?

Ans: - A semantic net (or semantic network) is a knowledge representation technique used for propositional information, so it is also called a propositional net. Semantic nets convey meaning and are two-dimensional representations of knowledge. Mathematically, a semantic net can be defined as a labelled directed graph. Semantic nets consist of nodes, links (edges) and link labels. In a semantic network diagram, nodes appear as circles, ellipses or rectangles and represent objects such as physical objects, concepts or situations; links appear as arrows expressing the relationships between objects, and link labels specify particular relations. Relationships provide the basic structure for organizing knowledge. The objects and relations involved need not be concrete. Because nodes are associated with other nodes, semantic nets are also referred to as associative nets.

Inheritance Reasoning

In the above figure all the objects are shown within ovals and connected using labelled arcs. Note that there is a link between Jill and FemalePersons with the label MemberOf; similarly, there is a MemberOf link between Jack and MalePersons and a SisterOf link between Jill and Jack. The MemberOf link between Jill and FemalePersons indicates that Jill belongs to the category of female persons. Unless there is specific evidence to the contrary, it is assumed that all members of a class (category) inherit all the properties of their super classes, so a semantic network allows us to perform inheritance reasoning. For example, Jill inherits the property of having two legs because she belongs to the category FemalePersons, which in turn belongs to the category Persons, which has a boxed Legs link with value 2. Semantic nets allow multiple inheritance: an object can belong to more than one category, and a category can be a subset of more than one other category.

Inverse Links

Semantic networks allow a common form of inference known as inverse links. For example, we can have a HasSister link which is the inverse of the SisterOf link. Inverse links make the job of inference algorithms much easier when answering queries such as who the sister of Jack is: on discovering that HasSister is the inverse of SisterOf, the inference algorithm can follow the HasSister link from Jack to Jill and answer the query.

Disadvantages of Semantic Nets

One drawback of semantic networks is that links between objects can represent only binary relations. For example, the sentence Run(ChennaiExpress, Chennai, Bangalore, Today) cannot be asserted directly. There is also no standard definition of link names.

Advantages of Semantic Nets

Semantic nets can represent default values for categories. In the above figure Jack has one leg even though he is a person and all persons have two legs; "persons have two legs" has only default status, and it can be overridden by a specific value. Semantic nets convey some meaning in a transparent manner. They are simple and easy to understand, and they are easy to translate into PROLOG.
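To make the inheritance idea concrete, the following minimal Python sketch (not part of the original answer) stores the figure's MemberOf/SubsetOf links and boxed property values in plain dictionaries, and looks a property up by walking the category links so that a specific value such as Jack's one leg overrides the inherited default of two legs:

# Minimal sketch: a semantic net stored as labelled edges, with a property
# lookup that walks MemberOf/SubsetOf links for inheritance reasoning.

links = {                     # node -> list of (label, target) edges
    "Jill": [("MemberOf", "FemalePersons"), ("SisterOf", "Jack")],
    "Jack": [("MemberOf", "MalePersons")],
    "FemalePersons": [("SubsetOf", "Persons")],
    "MalePersons": [("SubsetOf", "Persons")],
}

properties = {                # boxed (default) values attached to nodes
    "Persons": {"Legs": 2},
    "Jack": {"Legs": 1},      # specific value overriding the default
}

def lookup(node, prop):
    """Return the property value found on the node or inherited from its categories."""
    if prop in properties.get(node, {}):
        return properties[node][prop]
    for label, target in links.get(node, []):
        if label in ("MemberOf", "SubsetOf"):
            value = lookup(target, prop)
            if value is not None:
                return value
    return None

print(lookup("Jill", "Legs"))   # 2 -- inherited via FemalePersons -> Persons
print(lookup("Jack", "Legs"))   # 1 -- local value overrides the default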

2. Explain Partitioned semantic networks with an example.

3. What are the advantages and disadvantages of CD?

Ans: - The advantages and disadvantages of CD are given below:

Advantages:
- Can read discs made by different CD-R and CD-RW devices
- CD-RW media can be written and erased over and over
- CD-RW devices can write to both CD-R and CD-RW media

Disadvantages:
- Increased overhead – less usable space due to the multiple tables of contents and fixed or variable length packets
- CD-RW drives are slower than CD-R drives by 2X
- Backward compatibility issues due to lower emulation of the pits and lands
- Areas can get burned out

4. Define expert system and explain applications and architectural principles of expert system.

Ans: - An expert system, also called a rule-based system, is an artificial intelligence based system that converts the knowledge of an expert in a specific subject into software code. This code can be merged with other such codes (based on the knowledge of other experts) and used for answering questions (queries) submitted through a computer. Expert systems typically consist of three parts: (1) a knowledge base, which contains the information acquired by interviewing experts and the logic rules that govern how that information is applied; (2) an inference engine, which interprets the submitted problem against the rules and logic of the information stored in the knowledge base; and (3) an interface, which allows the user to express the problem in a human language such as English. Despite its earlier high hopes, expert systems technology has found application only in areas where information can be reduced to a set of computational rules, such as insurance underwriting or some aspects of securities trading.

5. What do you mean by the term “Robotics”? Explain.

Ans: - Robotics: The simplest kind of animal response to its environment is the spinal reflex arc. Probably the best known reflex in people is the patellar reflex or "knee jerk" reaction. In this case, a sensory neuron just below the knee connects directly to a motor neuron in the quadriceps, which causes the lower leg to kick outward. The figure below illustrates the situation: reading the figure from top to bottom, we see that physical energy stimulates the input neuron, which makes a connection with the output neuron. If the input neuron's activity exceeds the output neuron's threshold, the output neuron fires and a motor response is generated. This simple circuit has nearly all the ingredients we will need to build more complicated artificial neural networks.

In mathematical or engineering terms, we represent the activity of the input neuron by a variable x, while the activity of the output neuron is symbolized by y. The synaptic strength, or weight, between the input neuron and the output neuron is represented by w. For a given level of activity of the input neuron, the activity of the output neuron is then given by the equation:

y = w·x - b

where b is the output neuron's bias. The final response of the network is then given by:

r = a(y)

where a is called the activation function. The activation function can take almost any form, but the most commonly used are the step function and the sigmoid function. The step function simply holds the final output at 0 until y exceeds a threshold value, at which point the output is set to 1. The step function looks like this: the particular step function shown has a threshold value of 0, at which point the function transitions from a value of 0 to a value of 1. This is similar to the way the patellar knee reflex works: if the mallet doesn't hit the base of the knee just right, there is no reflex. But hit the right spot, and the leg kicks forward.
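As an illustration only (the weight w = 1.0 and bias b = 0.5 below are assumed values, not taken from the text), the single input/output reflex unit with a step activation can be sketched in Python as:

# Sketch of the single input/output "reflex" unit described above:
# y = w*x - b, passed through a step activation with threshold 0.

def step(y, threshold=0.0):
    """Step activation: output 1 once y exceeds the threshold, else 0."""
    return 1 if y > threshold else 0

def reflex_response(x, w=1.0, b=0.5):
    """Net input of the output neuron for sensory input x, then the motor response."""
    y = w * x - b          # weighted input minus the output neuron's bias
    return step(y)

print(reflex_response(0.2))   # 0 -- stimulus too weak, no reflex
print(reflex_response(1.0))   # 1 -- threshold exceeded, the "leg kicks"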

The sigmoid function is a less drastic version of the threshold function and is also called a squashing function. It looks like the picture to the left: as the figure illustrates, large negative values of x result in a value of f(x) near 0, while large positive values of x yield an f(x) close to 1. The mathematical formula for a sigmoidal function is as follows:

f(x) = 1 / (1 + exp(-x))

where exp() is the exponential function. As you can see by playing with different values of x, the sigmoid function is roughly linear in its middle range. This means that changes in the x value lead to roughly proportional changes in the y value in this region, which is consistent with the graph above. But outside of this range, large negative or positive values of the input produce asymptotically smaller changes in the output. If the patellar reflex worked this way, there would be a range of impact values that cause a proportionally smaller or larger kick of the leg; however, outside of this range the kick would not get appreciably smaller or larger. This type of activation function is particularly useful in robotics since it can put an automatic upper and lower value on control signals, such as the voltage being sent to a motor, which we would not want to exceed a certain value.

The non-linear property of both the step and sigmoid functions turns out to be of critical significance in artificial neural networks. The reason is that non-linearity enables the network to make "decisions" in a way that is not possible in purely linear networks. This will be fully explained in a later section on categorization.

By the way, if you happened to be wondering how a neuron's activity level can be negative, well, it can't, at least not in real neurons. However, when we are talking about artificial neurons, we can use any range of values we like, positive or negative. There is one way that a real neuron's activity can be considered negative: most neurons have a base level of activity—in other words, they will fire at some frequency even if they are receiving no input. If this base level activity is suppressed by an input, then the lower value could be considered "negative" relative to the baseline. In any case, our goal is not to model real neurons exactly but to borrow as many concepts from them as we find useful. For this reason, artificial neurons are sometimes referred to simply as units or nodes.

Go Into The Light! A Four-Neuron Light Following Robot

Suppose we would like our robot to follow a patch of light. You could use such a method to have your robot come to you from across the room by simply shining a flash light in front of it and guiding it across the floor. By adding just two more neurons to our simple reflex circuit, we can use it to drive our robot. Our new network looks like the figure below: we now have two input units and two output units.

Let us introduce a new notation for keeping track of inputs, outputs and connections. For a network with N input nodes and M output nodes, we will represent a typical input node by xi, where i can range from 1 to N. Similarly, an output node will be represented by yj, where j can range from 1 to M. The connection between input unit xi and output unit yj is then written as wji. From the figure above, the input nodes are labeled x1 and x2, while the output nodes are represented by y1 and y2. Consequently, we now have four connections: two straight-through connections and two cross connections. This means that the activity of the left motor will depend on the readings from both the left and right light sensors, as will the activity of the right motor. From the figure, we see that the total input into the left motor unit, y1, is given by the sum:

y1 = w11·x1 + w12·x2 - b1

while the input to the right motor unit is given by:

y2 = w21·x1 + w22·x2 - b2

where b1 and b2 are the biases on our two output units.
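A small Python sketch of these two pieces follows; the weights, biases and sensor readings used here are placeholder assumptions, chosen only to show how yj = Σi wji·xi - bj and the sigmoid are computed:

# Sketch of the squashing function and the two-input, two-output net described above.

import math

def sigmoid(x):
    """Squashing function f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def net_outputs(x, w, b):
    """Compute y_j = sum_i w[j][i] * x[i] - b[j] for every output unit j."""
    return [sum(w_ji * x_i for w_ji, x_i in zip(row, x)) - b_j
            for row, b_j in zip(w, b)]

x = [0.3, 0.9]                      # two input units (e.g. light sensor readings)
w = [[0.5, -0.2], [-0.2, 0.5]]      # w[j][i]: straight-through and cross connections
b = [0.0, 0.0]                      # output-unit biases

print([sigmoid(y) for y in net_outputs(x, w, b)])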

We can write both equations together using a more compact matrix notation:

y = w·x - b

This equation states that the vector of values across the output units is given by the matrix product of the connection strengths times the vector of input values, minus the vector of bias values. In our current example, the matrix version of the two output equations above is:

[y1]   [w11  w12] [x1]   [b1]
[y2] = [w21  w22] [x2] - [b2]

One nice thing about these equations is that they generalize to any number of input and output units. So our network can have thousands of nodes all cross connected to one another, yet we still just use matrix multiplication to get the output values from the input values. The activation function then generalizes to:

r = a(y)

where the response vector r can now be a function of all the output units. For example, one very common practice is to let a(y) select only the most active output unit in a process called winner take all. This will become important in later chapters when we discuss how neural networks can be used to make choices between alternative actions. We will return to the matrix formulation of our problem in a little bit. But first, let's just play with some of the numbers to get a better feel for our network.

It is easy to see that if the left light sensor is receiving more light than the right sensor, then we should turn towards the left, which means we must turn on the right motor more than the left motor. Referring to our network diagram, we can make this happen if the connection weight w12 is a number greater than 0 and the weight w11 is less than 0 so that it suppresses the left motor. Let's set w12 = 1 and w11 = -0.5. Just the opposite argument holds when the light is stronger on the right, so we set w21 = 1 and w22 = -0.5. Let's now re-draw our network diagram substituting these values for the connection weights.

To see if this works, suppose the left light sensor is giving a reading of 300 units while the right sensor registers only 100 units, meaning the light is brighter to the robot's left side. Let's now return to our more general matrix notation, which we show again below:

[y1]   [w11  w12] [x1]   [b1]
[y2] = [w21  w22] [x2] - [b2]

If both light sensors are reading 0—i.e., the robot is sitting in the dark—then we want both motors to be off. This means that when x1 = x2 = 0, the above equation must give y1 = y2 = 0. The only way this can happen for non-zero connection weights is for both bias values, b1 and b2, to be 0. Setting both the bias values to 0, our simplified control equation becomes:

[y1]   [w11  w12] [x1]
[y2] = [w21  w22] [x2]

And plugging in our values for the connection weights we have:

[y1]   [-0.5   1.0] [x1]
[y2] = [ 1.0  -0.5] [x2]

The total input to the left output unit y1 is:

y1 = -0.5 × 300 + 1 × 100 = -50

while the net input to the right output unit y2 is:

y2 = 1 × 300 - 0.5 × 100 = 250

This means our left motor will turn backwards with a speed of -50 units while the right motor turns forward with a speed of 250 units. Consequently, our robot will spin in place to the left, and the robot turns toward the brighter light on the left as we hoped.
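The arithmetic above can be checked with a few lines of Python (a sketch using the same weights and sensor readings as in the worked example):

# Worked check of the light-following controller: w11 = w22 = -0.5, w12 = w21 = 1,
# biases 0, sensor readings x1 = 300 (left) and x2 = 100 (right).

def motor_inputs(x1, x2, w=((-0.5, 1.0), (1.0, -0.5)), b=(0.0, 0.0)):
    """Return (y1, y2): net inputs to the left and right motor units."""
    y1 = w[0][0] * x1 + w[0][1] * x2 - b[0]
    y2 = w[1][0] * x1 + w[1][1] * x2 - b[1]
    return y1, y2

print(motor_inputs(300, 100))   # (-50.0, 250.0): left motor backwards, right forward,
                                # so the robot turns toward the brighter light on its left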

Now you may be wondering if we could have chosen other connection weights that would also work. And the answer is yes: there are an infinite number of ways we can choose the weights and get similar behavior. For example, the following matrix would also work; in this case, the robot will turn more quickly toward a difference in light values than in the first case. So in the end, the actual choice of coefficients will come down to the nuances of how you want your robot to behave. The real power of artificial neural networks lies in their ability to learn an optimal set of connection weights from experience. We will explore this potential at great length in the section on neural network learning.

The final step in preparing the neural controller for our light-following robot is choosing an activation function to map the values of the output units into actual motor control signals. Let's represent the maximum speed of our motors by the letter S and the maximum value the light sensors can take as L. The maximum differential we can expect between the two sensors occurs when one of them registers L and the other reads 0. Plugging these values into our matrix equation for x1 and x2 yields output values of y1 = -0.5L and y2 = L. Assuming we want the maximum output value L to map into the maximum motor speed S, we need to multiply the output values by S/L. In essence we are simply scaling the output values from the units of our light sensors to those of our motor controller. So the first part of our activation function is simply:

r(yi) = S/L · yi

In addition, we only want our robot to follow a light that is brighter than its surroundings, so we need to set a minimum output needed for the robot to react. Let's call this minimum value T, for threshold. Anything less than this and we want to set the motor control signal to 0 so the robot does not move. We can accomplish this with the function:

yi = H(max(y1, y2) – T) · yi

where H(x) is the step function we met earlier and evaluates to 1 if x > 0 and 0 if x ≤ 0. Combining this with our scaling function yields our final activation function for our motor signals:

r(yi) = S/L · H(max(y1, y2) – T) · yi

This is actually much simpler than it looks. We simply find the maximum value given by our two output units; if this value is smaller than our threshold, we set both outputs to 0, otherwise we scale the outputs appropriately and send them on to the motors. So much for all the theory. How does our neural controller stack up against the real world?
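A Python sketch of this final activation function is shown below; the values chosen for S, L and T are illustrative assumptions, since the text leaves them unspecified:

# Sketch of the final motor activation r(yi) = S/L * H(max(y1, y2) - T) * yi,
# with assumed values for S (max motor speed), L (max sensor value) and T (threshold).

def H(x):
    """Step function: 1 if x > 0, else 0."""
    return 1 if x > 0 else 0

def motor_signals(y1, y2, S=255.0, L=1000.0, T=50.0):
    """Scale the output-unit values to motor units, gated by the brightness threshold."""
    gate = H(max(y1, y2) - T)
    return (S / L) * gate * y1, (S / L) * gate * y2

print(motor_signals(-50, 250))   # bright enough: outputs scaled by S/L
print(motor_signals(5, 10))      # below threshold T: both motors stay at 0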

6. Explain general learning model with necessary diagrams?

Ans: - General Learning Model: As noted earlier, learning can be accomplished using a number of different methods, such as by memorizing facts, by being told, or by studying examples like problem solutions. Learning requires that new knowledge structures be created from some form of input stimulus. This new knowledge must then be assimilated into a knowledge base and be tested in some way for its utility. Testing means that the knowledge should be used in the performance of some task from which meaningful feedback can be obtained, where the feedback provides some measure of the accuracy and usefulness of the newly acquired knowledge.

Figure 4.1: General Learning Model

The general learning model is depicted in figure 4.1, where the environment has been included as a part of the overall learner system. The environment may be regarded either as a form of nature which produces random stimuli or as a more organized training source, such as a teacher which provides carefully selected training examples for the learner component. The actual form of environment used will depend on the particular learning paradigm. For some systems the environment may be a user working at a keyboard; other systems will use program modules to simulate a particular environment. In even more realistic cases, the system will have real physical sensors which interface with some world environment.

Inputs to the learner component may be physical stimuli of some type or descriptive, symbolic training examples. The information conveyed to the learner component is used to create and modify knowledge structures in the knowledge base. This same knowledge is used by the performance component to carry out some task, such as solving a problem, playing a game, or classifying instances of some concept. Given a task, the performance component produces a response describing its actions in performing the task. The critic module then evaluates this response relative to an optimal response. Feedback, indicating whether or not the performance was acceptable, is then sent by the critic module to the learner component for its subsequent use in modifying the structures in the knowledge base. If proper learning was accomplished, the system's performance will have improved with the changes made to the knowledge base. The cycle described above may be repeated a number of times until the performance of the system has reached some acceptable level, until a known learning goal has been reached, or until changes cease to occur in the knowledge base after some chosen number of training examples have been observed.

There are several important factors which influence a system's ability to learn, in addition to the form of representation used. They include the types of training provided, the form and extent of any initial background knowledge, the type of feedback provided, and the learning algorithms used. The type of training used in a system can have a strong effect on performance, much the same as it does for humans. Training may consist of randomly selected instances or of examples that have been carefully selected and ordered for presentation. The instances may be positive examples of some concept or task being learned, they may be negative, or they may be a mixture of both positive and negative. The instances may be well focused, using only relevant information, or they may contain a variety of facts and details including irrelevant data.

Feedback is essential to the learner component, since otherwise it would never know whether the knowledge structures in the knowledge base were improving or whether they were adequate for the performance of the given tasks. The feedback may be a simple yes-or-no type of evaluation, or it may contain more useful information describing why a particular action was good or bad. Also, the feedback may be completely reliable, providing an accurate assessment of the performance, or it may contain noise; that is, the feedback may actually be incorrect some of the time. Intuitively, the feedback must be accurate more than 50% of the time, otherwise it carries no useful information; on the other hand, if the feedback is noisy or unreliable, the learning process may be very slow and the resultant knowledge incorrect. To be effective, the learner should also be able to build up a useful corpus of knowledge quickly.

In any case, some representation language must be assumed for communication between the environment and the learner. The language may be the same representation scheme as that used in the knowledge base (such as a form of predicate calculus). When they are chosen to be the same, we say the single representation trick is being used. This usually results in a simpler implementation, since it is not necessary to transform between two or more different representations.

Many forms of learning can be characterized as a search through a space of possible hypotheses or solutions. To make learning more efficient, it is necessary to constrain this search process or reduce the search space. One method of achieving this is through the use of background knowledge, which can be used to constrain the search space or to exercise control operations which limit the search process.
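The cycle can be made concrete with a small, self-contained Python sketch; the toy task, component names and update rule below are purely illustrative and are not part of the textbook model:

# Schematic sketch of the general learning model's cycle: environment -> learner ->
# knowledge base -> performance component -> critic -> feedback back to the learner.

import random

def environment():
    """Teacher supplying a labelled training example: positive if x >= 0.6."""
    x = random.random()
    return x, x >= 0.6

def perform(x, knowledge_base):
    """Performance component: classify x using the current knowledge (a threshold)."""
    return x >= knowledge_base["threshold"]

def critic(response, label):
    """Critic: compare the response with the optimal response (the label)."""
    return {"correct": response == label, "label": label}

def learner(feedback, knowledge_base):
    """Learner: nudge the stored threshold whenever the critic reports an error."""
    if not feedback["correct"]:
        knowledge_base["threshold"] += -0.05 if feedback["label"] else 0.05

knowledge_base = {"threshold": 0.1}
for _ in range(200):                      # repeat the cycle a number of times
    x, label = environment()
    response = perform(x, knowledge_base)
    feedback = critic(response, label)
    learner(feedback, knowledge_base)

print(round(knowledge_base["threshold"], 2))   # drifts toward the true boundary 0.6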