CONTENTS

1. Abstract
2. Definition & Scope
3. What is an ontology (Conceptualization)?
4. What is a biological ontology?
5. Components of Ontology
   - Individuals
   - Classes
   - Attributes
   - Relationships
6. Classification: Traditional ontology languages
7. Principles for the Design of Ontologies Used for Knowledge Sharing
8. Design criteria for ontologies
9. Ontologies Advantages
10. Ontology Example
11. Use of ontologies
12. Conclusion
13. References
2|Page
Abstract
The current interest in ontologies is the latest version of AI's alternation of focus between content theories and mechanism theories. Sometimes the AI community gets excited by some mechanism such as rule systems, frame languages, neural nets, fuzzy logic, constraint propagation, or unification. The mechanisms are proposed as the secret of making intelligent machines. At other times, we realize that, however wonderful the mechanism, it cannot do much without a good content theory of the domain on which it is to work. Moreover, we often recognize that once a good content theory is available, many different mechanisms might be used equally well to implement effective systems, all using essentially the same content.
What is an ontology (Conceptualization)?
An ontology is a specification of a conceptualization. A conceptualization is a system of categories accounting for a particular view of the world. Gruber: a conceptualization is a structure <D, R>, where D is a domain and R ⊆ D^n is a set of relations on D. Guarino: an intensional account of conceptualization.
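Gruber's structure <D, R> can be sketched in a few lines of Python. The domain elements and the "teaches" relation below are invented purely for illustration; they are not taken from the text:

```python
# A minimal sketch of Gruber's structure <D, R>: a domain D and a set of
# relations R on D. All names here are illustrative, not from the text.

# D: the things in our view of the world
D = {"alice", "bob", "course_ai"}

# R: relations over D, here one binary relation "teaches" as a set of pairs
teaches = {("alice", "course_ai")}
R = {"teaches": teaches}

def holds(relation_name, *args):
    """Check whether a tuple is in the extension of a relation."""
    return tuple(args) in R[relation_name]

print(holds("teaches", "alice", "course_ai"))  # True
print(holds("teaches", "bob", "course_ai"))    # False
```

A relation is modelled extensionally here, as the set of tuples for which it holds; this matches the set-theoretic reading of R ⊆ D^n.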
An ontology (in artificial intelligence) is based on a vocabulary L. An ontological commitment relates the vocabulary to a conceptualization: a language L commits to a conceptualization C by means of an ontological commitment K. How does a term t relate to the domain?
Components of Ontology
Common components of ontologies include:
- Classes: sets, collections, concepts, types of objects, or kinds of things.[1]
- Attributes: aspects, properties, features, characteristics, or parameters that objects (and classes) can have
- Relations: ways in which classes and individuals can be related to one another
- Function terms: complex structures formed from certain relations that can be used in place of an individual term in a statement
- Restrictions: formally stated descriptions of what must be true in order for some assertion to be accepted as input
- Rules: statements in the form of an if-then (antecedent-consequent) sentence that describe the logical inferences that can be drawn from an assertion in a particular form
- Axioms: assertions (including rules) in a logical form that together comprise the overall theory that the ontology describes in its domain of application. This definition differs from that of "axioms" in generative grammar and formal logic. In these disciplines, axioms include only statements asserted as a priori knowledge. As used here, "axioms" also include the theory derived from axiomatic statements.
- Events: the changing of attributes or relations
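Several of these components can be sketched with plain Python structures. All class, individual, and relation names below are invented for illustration:

```python
# Illustrative sketch of ontology components as plain Python structures.
# Every name here is invented for the example, not taken from the text.

classes = {"Person", "Vehicle"}
individuals = {"john": "Person", "explorer_1": "Vehicle"}  # individual -> class
attributes = {"explorer_1": {"colour": "blue"}}            # attribute values
relations = {("john", "owns", "explorer_1")}               # relations between individuals

# A rule (if-then): if X owns Y and Y is a Vehicle, then X is a driver candidate
def infer_driver_candidates():
    return {s for (s, p, o) in relations
            if p == "owns" and individuals.get(o) == "Vehicle"}

print(infer_driver_candidates())  # {'john'}
```

The rule illustrates the if-then component: an inference drawn from assertions of a particular form.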
Individuals
Individuals (instances) are the basic, "ground level" components of an ontology. The individuals in an ontology may include concrete objects such as people, animals, tables, automobiles, molecules, and planets, as well as abstract individuals such as numbers and words (although there are differences of opinion as to whether numbers and words are classes or individuals). Strictly speaking, an ontology need not include any individuals, but one of the general purposes of an ontology is to provide a means of classifying individuals, even if those individuals are not explicitly part of the ontology. In formal extensional ontologies, only the utterances of words and numbers are considered individuals; the numbers and names themselves are classes. In a 4D ontology, an individual is identified by its spatio-temporal extent. Examples of formal extensional ontologies are ISO 15926 and the model in development by the IDEAS Group.
Classes
Classes (concepts that are also called type, sort, category, and kind) can be defined as an extension or an intension. According to an extensional definition, they are abstract groups, sets, or collections of objects. According to an intensional definition, they are abstract objects that are defined by values of aspects that are constraints for being a member of the class. The first definition of class results in ontologies in which a class is a subclass of collection. The second definition of class results in ontologies in which collections and classes are more fundamentally different. Classes may classify individuals, other classes, or a combination of both. Some examples of classes:
- Person, the class of all people, or the abstract object that can be described by the criteria for being a person.
- Vehicle, the class of all vehicles, or the abstract object that can be described by the criteria for being a vehicle.
- Car, the class of all cars, or the abstract object that can be described by the criteria for being a car.
- Class, representing the class of all classes, or the abstract object that can be described by the criteria for being a class.
- Thing, representing the class of all things, or the abstract object that can be described by the criteria for being a thing (and not nothing).
Ontologies vary on whether classes can contain other classes, whether a class can belong to itself, whether there is a universal class (that is, a class containing everything), etc. Sometimes restrictions along these lines are made in order to avoid certain well-known paradoxes.

The classes of an ontology may be extensional or intensional in nature. A class is extensional if and only if it is characterized solely by its membership. More precisely, a class C is extensional if and only if for any class C', if C' has exactly the same members as C, then C and C' are identical. If a class does not satisfy this condition, then it is intensional. While extensional classes are more well-behaved and well-understood mathematically, as well as less problematic philosophically, they do not permit the fine-grained distinctions that ontologies often need to make. For example, an ontology may want to distinguish between the class of all creatures with a kidney and the class of all creatures with a heart, even if these classes happen to have exactly the same members. In most upper ontologies, the classes are defined intensionally. Intensionally defined classes usually have necessary conditions associated with membership in each class. Some classes may also have sufficient conditions, and in those cases the combination of necessary and sufficient conditions makes that class a fully defined class.

Importantly, a class can subsume or be subsumed by other classes; a class subsumed by another is called a subclass (or subtype) of the subsuming class (or supertype). For example, Vehicle subsumes Car, since (necessarily) anything that is a member of the latter class is a member of the former.
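The kidney/heart example can be sketched directly: under an extensional reading the two classes collapse into one, while an intensional reading keeps them apart. The defining-criterion strings below are invented for illustration:

```python
# Extensional view: a class is nothing but its set of members.
creatures_with_kidney = frozenset({"dog", "cat", "human"})
creatures_with_heart = frozenset({"dog", "cat", "human"})

# Same members, so extensionally the two classes are one and the same class:
print(creatures_with_kidney == creatures_with_heart)  # True

# Intensional view: the class also carries its defining criterion, so the two
# classes stay distinct even with identical membership (criteria invented):
class_a = {"criterion": "has a kidney", "members": creatures_with_kidney}
class_b = {"criterion": "has a heart", "members": creatures_with_heart}
print(class_a == class_b)  # False: different criteria, same members
```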
Attributes
Objects in an ontology can be described by relating them to other things, typically aspects or parts. These related things are often called attributes, although they may be independent things. Each attribute can be a class or an individual. The kind of object and the kind of attribute determine the kind of relation between them. A relation between an object and an attribute expresses a fact that is specific to the object to which it is related. For example, the Ford Explorer object has attributes such as:
<has as name> Ford Explorer
<has by definition as part> door (with as minimum and maximum cardinality: 4)
<has by definition as part one of> {4.0L engine, 4.6L engine}
<has by definition as part> 6-speed transmission
The value of an attribute can be a complex data type; in this example, the related engine can only be one of a list of subtypes of engines, not just a single thing. Ontologies are only true ontologies if concepts are related to other concepts (i.e., the concepts have attributes). If that is not the case, then you would have either a taxonomy (if hyponym relationships exist between concepts) or a controlled vocabulary. These are useful, but are not considered true ontologies.
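The attribute facts above can be sketched as (subject, relation, object) triples. The relation names follow the text; the checking function and invalid engine value are invented for the example:

```python
# The Ford Explorer facts as triples; relation names follow the text.
facts = [
    ("Ford Explorer", "has as name", "Ford Explorer"),
    ("Ford Explorer", "has by definition as part", "door"),
    ("Ford Explorer", "has by definition as part one of",
     {"4.0L engine", "4.6L engine"}),
    ("Ford Explorer", "has by definition as part", "6-speed transmission"),
]

# Cardinality restriction on doors: minimum and maximum are both 4.
door_cardinality = {"min": 4, "max": 4}

def engine_is_valid(engine):
    """A complex attribute value: the engine must be one of a set of subtypes."""
    allowed = next(o for (s, p, o) in facts
                   if p == "has by definition as part one of")
    return engine in allowed

print(engine_is_valid("4.0L engine"))  # True
print(engine_is_valid("2.0L engine"))  # False (invented invalid value)
```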
Relationships
Relationships (also known as relations) between objects in an ontology specify how objects are related to other objects. Typically a relation is of a particular type (or class) that specifies in what sense the object is related to the other object in the ontology. For example, in an ontology that contains the concepts Ford Explorer and Ford Bronco, the two might be related by a relation of type <is defined as a successor of>. The full expression of that fact then becomes:

Ford Explorer <is defined as a successor of> Ford Bronco
This tells us that the Explorer is the model that replaced the Bronco. This example also illustrates that the relation has a direction of expression. The inverse expression expresses the same fact, but with a reverse phrase in natural language. Much of the power of ontologies comes from the ability to describe relations. Together, the set of relations describes the semantics of the domain. The set of used relation types (classes of relations) and their subsumption hierarchy describe the expression power of the language in which the ontology is expressed.
An important type of relation is the subsumption relation (is-a-superclass-of, the converse of is-a, is-a-subtype-of, or is-a-subclass-of). This defines which objects are classified by which class. For example, we have already seen that the class Ford Explorer is-a-subclass-of 4-Wheel Drive Car, which in turn is-a-subclass-of Car. The addition of the is-a-subclass-of relationships creates a taxonomy: a tree-like structure (or, more generally, a partially ordered set) that clearly depicts how objects relate to one another. In such a structure, each object is the 'child' of a 'parent class' (some languages restrict the is-a-subclass-of relationship to one parent for all nodes, but many do not). Another common type of relation is the mereology relation, written as part-of, that represents how objects combine together to form composite objects. For example, if we extended our example ontology to include concepts like Steering Wheel, we would say that a "Steering Wheel is-by-definition-a-part-of-a Ford Explorer", since a steering wheel is always one of the components of a Ford Explorer. If we introduce meronymy relationships to our ontology, the hierarchy that emerges is no longer able to be held in a simple tree-like
structure, since now members can appear under more than one parent or branch. Instead, the new structure that emerges is known as a directed acyclic graph.
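These two hierarchy-building relations can be sketched in a few lines, using the text's class names (the helper function and dictionary encoding are assumptions of this sketch):

```python
# is-a-subclass-of links form a taxonomy; class names follow the text's example.
subclass_of = {
    "Ford Explorer": {"4-Wheel Drive Car"},
    "4-Wheel Drive Car": {"Car"},
    "Car": {"Vehicle"},
}

def is_subclass(cls, ancestor):
    """Transitive subsumption check over the is-a-subclass-of relation."""
    parents = subclass_of.get(cls, set())
    return ancestor in parents or any(is_subclass(p, ancestor) for p in parents)

print(is_subclass("Ford Explorer", "Vehicle"))  # True, via two hops

# part-of can give a node more than one parent, so the combined structure is
# a directed acyclic graph rather than a tree:
part_of = {"Steering Wheel": {"Ford Explorer", "Ford Bronco"}}
print(len(part_of["Steering Wheel"]) > 1)  # True: two parents
```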
As well as the standard is-a-subclass-of and is-by-definition-a-part-of-a relations, ontologies often include additional types of relations that further refine the semantics they model. Ontologies might distinguish between different categories of relation types. For example:
- relation types for relations between classes
- relation types for relations between individuals
- relation types for relations between an individual and a class
- relation types for relations between a single object and a collection
- relation types for relations between collections
Classification

By syntax

Traditional ontology languages:
- CycL
- DOGMA (Developing Ontology-Grounded Methods and Applications)
- F-Logic (Frame Logic)
- KIF (Knowledge Interchange Format)
  - Ontolingua, based on KIF
- KL-ONE
- KM programming language
- LOOM (ontology)
- OCML (Operational Conceptual Modelling Language)
- OKBC (Open Knowledge Base Connectivity)
- PLIB (Parts LIBrary)
- RACER

Markup ontology languages:
- DAML+OIL
- Ontology Inference Layer (OIL)
- Web Ontology Language (OWL)
- Resource Description Framework (RDF)
- RDF Schema
- SHOE

By structure

Frame-based
F-Logic, OKBC, and KM are completely or partially frame-based languages.

Description logic-based

Description logic provides an extension of frame languages, without going so far as to take the leap to first-order logic and support for arbitrary predicates. Examples include KL-ONE, RACER, and OWL.
Gellish is an example of a combined ontology language and ontology that is description logic based. Among other things, it distinguishes between:
- relation types for relations between concepts (classes)
- relation types for relations between individuals
- relation types for relations between individuals and classes
First-order logic-based
Common Logic, CycL, and KIF are examples of languages that support expressions in first-order logic and, in particular, allow general predicates.
Principles for the Design of Ontologies Used for Knowledge Sharing

2. Coherence: An ontology should be coherent: it should sanction inferences that are consistent with the definitions. If a sentence that can be inferred from the axioms contradicts a definition or example given informally, then the ontology is incoherent.

3. Extendibility: An ontology should be designed to anticipate the uses of the shared vocabulary. It should offer a conceptual foundation for a range of anticipated tasks, and the representation should be crafted so that one can extend and specialize the ontology monotonically. In other words, one should be able to define new terms for special uses based on the existing vocabulary, in a way that does not require the revision of the existing definitions.

4. Minimal encoding bias: The conceptualization should be specified at the knowledge level without depending on a particular symbol-level encoding. An encoding bias results when representation choices are made purely for the convenience of notation or implementation. Encoding bias should be minimized, because knowledge-sharing agents may be implemented in different representation systems and styles of representation.

5. Minimal ontological commitment: An ontology should require the minimal ontological commitment sufficient to support the intended knowledge-sharing activities. An ontology should make as few claims as possible about the world being modeled, allowing the parties committed to the ontology freedom to specialize and instantiate the ontology as needed. Since ontological commitment is based on consistent use of vocabulary, ontological commitment can be minimized by specifying the weakest theory (allowing the most models) and defining only those terms that are essential to the communication of knowledge consistent with that theory.
Ontologies Advantages
The advantage of an ontology is that it represents real-world information in a manner that is machine-processable. This leads to a variety of interesting applications for the benefit of the target user groups. For example, using the ontology in the figure in the Ontology Example section, we can ask the following questions for information-discovery purposes:
- Give me all courses of the computer science department
- Give me all courses in year 4
- Give me all courses of a specific subject area
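A toy sketch of how such queries might be answered over a small course ontology; every course record and property name below is invented for illustration:

```python
# A tiny, invented course "ontology" as a list of attribute records.
courses = [
    {"name": "AI", "department": "computer science", "year": 4,
     "subject": "intelligent systems"},
    {"name": "Databases", "department": "computer science", "year": 3,
     "subject": "data management"},
    {"name": "Calculus", "department": "mathematics", "year": 1,
     "subject": "analysis"},
]

def query(**criteria):
    """Return the names of all courses whose attributes match the criteria."""
    return [c["name"] for c in courses
            if all(c[k] == v for k, v in criteria.items())]

print(query(department="computer science"))  # ['AI', 'Databases']
print(query(year=4))                         # ['AI']
print(query(subject="analysis"))             # ['Calculus']
```

Each of the three natural-language questions above maps onto one call of `query` with the matching attribute.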
The above queries are machine-implementable, with user-friendly results. The reason ontologies are becoming popular is largely due to what they promise: a shared and common understanding of a domain that can be communicated between people and application systems. Specifically, ontologies offer the following benefits:
- They assist in the communication between humans. Here, an unambiguous but informal ontology may be sufficient.
- They achieve interoperability among computer systems by translating between different modelling methods, paradigms, languages, and software tools. Here, the ontology is used as an interchange format.
- They improve the process and/or quality of engineering software systems.
- Re-usability: the ontology is the basis for a formal encoding of the important entities, attributes, processes, and their inter-relationships in the domain of interest. This formal representation may be a reusable and/or shared component in a software system.
- Search: an ontology may be used as metadata, serving as an index into a repository of information.
- Knowledge acquisition: using an existing ontology as the starting point and basis for guiding knowledge acquisition when building knowledge-based systems may increase speed and reliability.
The development of an ontology is an interdisciplinary research process that involves computer scientists as well as experts for the specific area that is addressed. Once an ontology has been created it may be used for information discovery in the following ways:
- To pose queries for either metadata, concepts, or properties of the ontology
- As a conceptual framework to help the user think about the information repository and formulate queries
- As a guide to understand the ontology-driven metadata
- To drive the user interface for creating and refining queries
Ontology Example
Following is an example of the use of attributes and entities for the representation of the real world: consider the address of a person. Is it an entity, a relationship, or an attribute?
Consider address for a telephone-company database, which has to keep track of how many and what type of phones are available in any one household, who lives there (there may be several phone bills going to the same address), etc. For this case, address is probably best treated as an entity.

Or consider an employee database, where for each employee you maintain personal information, such as her address. Here address is best represented as an attribute.

Or consider a police database where we want to keep track of a person's whereabouts, including her address (i.e., address from Date1 to Date2, address from Date2 to Date3, etc.). Here, address is treated best as a relationship.
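The three modelling choices can be sketched as Python dataclasses; all field names and sample values are invented for illustration:

```python
from dataclasses import dataclass

# 1. Address as an attribute (employee database): a plain field on Employee.
@dataclass
class Employee:
    name: str
    address: str  # attribute: just a value attached to the employee

# 2. Address as an entity (telephone company): it has its own identity and data.
@dataclass
class Address:
    street: str
    phone_lines: int  # the address itself carries information

# 3. Address as a relationship (police database): it links a person to a place
#    over a time interval.
@dataclass
class ResidedAt:
    person: str
    address: str
    from_date: str
    to_date: str

history = [ResidedAt("jane", "12 Elm St", "2001-01-01", "2003-06-30"),
           ResidedAt("jane", "9 Oak Ave", "2003-07-01", "2005-12-31")]
print(len(history))  # 2: the same person related to two addresses over time
```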
Ontologies are used in our everyday lives, although most of us do not realize it. An example of how this happens follows:
- the department name
- the year of academic studies in which the course is taught
- the subject area
- the course number
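As an illustration, a course code could encode all four items above. The code format "CS402" and the parsing rules below are assumptions invented for this sketch, not taken from the text:

```python
# Hypothetical course-code scheme: two department letters, then one digit each
# for year, subject area, and course number (all invented for the example).
departments = {"CS": "computer science", "MA": "mathematics"}

def parse_course_code(code):
    """Split e.g. 'CS402' into department, year, subject area, course number."""
    dept, digits = code[:2], code[2:]
    return {
        "department": departments[dept],
        "year": int(digits[0]),         # first digit: year of studies
        "subject_area": int(digits[1]), # second digit: subject area
        "number": int(digits[2]),       # third digit: course number in the area
    }

print(parse_course_code("CS402"))
# {'department': 'computer science', 'year': 4, 'subject_area': 0, 'number': 2}
```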
Use of ontologies
In AI, knowledge in computer systems is thought of as something that is explicitly represented and operated on by inference processes. However, that is an overly narrow view. All information systems traffic in knowledge. Any software that does anything useful cannot be written without a commitment to a model of the relevant world: to entities, properties, and relations in that world. Data structures and procedures implicitly or explicitly make commitments to a domain ontology. It is common to ask whether a payroll system knows about the new tax law, or whether a database system knows about employee salaries.

Information-retrieval systems, digital libraries, integration of heterogeneous information sources, and Internet search engines need domain ontologies to organize information and direct the search processes. For example, a search engine has categories and subcategories that help organize the search. The search-engine community commonly refers to these categories and subcategories as ontologies.

Object-oriented design of software systems similarly depends on an appropriate domain ontology. Objects, their attributes, and their procedures more or less mirror aspects of the domain that are relevant to the application. Object systems representing a useful analysis of a domain can often be reused for a different application program. Object systems and ontologies emphasize different aspects, but we anticipate that over time convergence between these technologies will increase. As information systems model large knowledge domains, domain ontologies will become as important in general software systems as in many areas of AI.

In AI, while knowledge representation pervades the entire field, two application areas in particular have depended on a rich body of knowledge. One of them is natural-language understanding. Ontologies are useful in NLU in two ways. First, domain knowledge often plays a crucial role in disambiguation.
A well-designed domain ontology provides the basis for domain knowledge representation. In addition, an ontology of a domain helps identify the semantic categories that are involved in understanding discourse in that domain. For this use, the ontology plays the role of a concept dictionary. In general, for NLU, we need goals of
similar types. These reasoning strategies were also characterized by their need for specific types of domain factual knowledge. It soon became clear that strategic knowledge could be abstracted and reused. With few exceptions, the domain-factual-knowledge dimension drives the focus of most of the AI investigations on ontologies. This is because applications to language understanding motivate much of the work on ontologies. Even CYC, which was originally motivated by the need for knowledge systems to have world knowledge, has been tested more in natural-language than in knowledge-systems applications.
Conclusion
Ontology is a branch of philosophy concerned with the study of what exists. Formal ontologies have been proposed since the 18th century, including recent ones such as those by Carnap (1968) and Bunge (1977). From a computational perspective, a major benefit of such formalizations has been the development of algorithms which support the generation of inferences from a given set of facts about the world, or ones that check for consistency. Such computational aids are clearly useful for knowledge management, especially when one is dealing with large amounts of knowledge.

Various methods have been devised to support knowledge organization and interchange. Controlled vocabularies provide a standardized dictionary of terms for use during, for example, indexing or retrieval. Dictionaries can be organized according to specific relations to form taxonomies. Ontologies further specify the semantics of a domain in terms of conceptual relationships and logical theories.

Ontologies may be constructed for different purposes, for example to enable information sharing and to support specification. When we want to enable sharing and reuse, we define an ontology as a specification used for making ontological commitments (Gruber, 1993). Ontological commitment is an agreement to consistently use a vocabulary with respect to a theory specified by an ontology. In order to support a specification, we define an ontology as a conceptualization, i.e., the ontology defines entities and the relationships among them. Every information base is based on either an implicit or an explicit conceptualization.

Research within artificial intelligence has formalized many interesting ontologies and has developed techniques for analyzing knowledge that has been represented in terms of these. Along a very different path, Wand (1989; 1990) studied the adequacy of information systems to describe applications based on a general ontology, such as that proposed by Bunge (1977).
References
1. D.B. Lenat and R.V. Guha, Building Large Knowledge-Based Systems: Representation and Inference in the CYC Project, Addison-Wesley, Reading, Mass., 1990.
2. B. Chandrasekaran, "AI, Knowledge, and the Quest for Smart Systems," IEEE Expert, Vol. 9, No. 6, Dec. 1994, pp. 26.
3. J. McCarthy and P.J. Hayes, "Some Philosophical Problems from the Standpoint of Artificial Intelligence," Machine Intelligence, Vol. 4, B. Meltzer and D. Michie, eds., Edinburgh University Press, Edinburgh, 1969, pp. 463–502.
4. D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, W.H. Freeman, San Francisco, 1982.
5. A. Newell, "The Knowledge Level," Artificial Intelligence, Vol. 18, 1982, pp. 87–127.