ABSTRACT:
This paper introduces one of the toughest problems the Web faces today. Search
engines today depend almost entirely on keywords. Humans are capable of using the Web to
carry out tasks such as finding the Finnish word for "monkey", reserving a library book, and
searching for the lowest price for a DVD. However, a computer cannot accomplish the same
tasks without human direction, because web pages are designed to be read by people, not
machines. The Semantic Web is a vision of information that is understandable by computers,
so that they can perform more of the tedious work involved in finding, sharing, and
combining information on the Web. The Semantic Web is machine friendly: it lets the
computer know what the user wants. Once your computer can understand a person, a place,
or an event, it can help you interact with those things. This paper introduces Semantic Web
concepts and shows how to implement them on your webpage using RDFa (Resource
Description Framework in attributes) and the FOAF (Friend of a Friend) vocabulary. The
Semantic Web thus makes our lives easier by helping computers help us get what we want.
Although Semantic Web technologies are still very much in their infancy, the future of the
project in general appears to be bright.
1. INTRODUCTION:
Currently the focus of a W3C working group, the Semantic Web vision was
conceived by Tim Berners-Lee, the inventor of the World Wide Web. The World Wide Web
changed the way we communicate, the way we do business, the way we seek information and
entertainment – the very way most of us live our daily lives. Calling it the next step in Web
evolution, Berners-Lee defines the Semantic Web as “a web of data that can be processed
directly and indirectly by machines.”
In the Semantic Web data itself becomes part of the Web and is able to be processed
independently of application, platform, or domain. This is in contrast to the World Wide Web
as we know it today, which contains virtually boundless information in the form of
documents. We can use computers to search for these documents, but they still have to be
read and interpreted by humans before any useful information can be extrapolated.
Computers can present you with information but cannot understand what the
information means well enough to display the data that is most relevant in a given
circumstance.
The Semantic Web, on the other hand, is about having data as well as documents on the Web
so that machines can process, transform, assemble, and even act on the data in useful ways.
The Semantic Web comprises a philosophy, a set of design principles,
collaborative working groups, and a variety of enabling technologies. Some elements of the
semantic web are expressed as prospective future possibilities that have yet to be
implemented or realized. Other elements of the semantic web are expressed in formal
specifications. Some of these include Resource Description Framework (RDF), a variety of
data interchange formats (e.g. RDF/XML, N3, Turtle, N-Triples), and notations such as RDF
Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to
provide a formal description of concepts, terms, and relationships within a given knowledge
domain.
The vision of the Semantic Web is a “web of data” that not only harnesses the
seemingly endless amount of data on the World Wide Web, but also connects that
information with data in relational databases and other non-interoperable information
repositories, for example, EDI systems. Considering that relational databases house the
majority of enterprise data today, the ability of Semantic Web technologies to access and
process it alongside other data from Web sites, other databases, XML documents, and other
systems increases the amount of useful data available exponentially. In addition, relational
databases already include a great deal of semantic information. Databases are organized in
tables and columns based on the relationships between the data they house, and these
relationships reveal the meaning (the semantics) of the data. Data integration applications
offer the potential for connecting disparate sources, but they require one-to-one mappings
between elements in each different data repository. The Semantic Web, however, allows a
machine to connect to any other machine and exchange and process data efficiently based on
built-in, universally available semantic information that describes each resource. In effect, the
Semantic Web will allow us to access all the information listed above as one huge database.
Semantics?
When I searched the dictionary for the meaning of the word "semantic", I discovered
that it is an adjective with the following meanings:
1. Of or relating to meaning, especially meaning in language.
2. Of, relating to, or according to the science of semantics.
So semantics is the study of meaning. Semantics is closely related to syntax: in most
languages, syntax is how you say something, while semantics is the meaning behind what
you have said. Let's take the phrase "I love technology" as an example. The syntax is the
letters, words, and punctuation marks in the sentence; the semantics is what the sentence
actually means, in this case that you enjoy learning about and using new technology. Now if
we change the sentence by using a different symbol for the word "love", we change its
syntax, but note that the semantics of the sentence stays the same. When you write
"I ♥ technology", it still means that you enjoy learning about and using new technology.
The Internet
When we talk about syntax and semantics, what we are really talking about is
communication. When you want to communicate with somebody else, you use your voice to
do so. The Internet created a standard way for computers to communicate with each other; in
other words, it gave computers a voice so that they could talk to one another and exchange
information. However, much like a parrot mimics your words and sounds without
understanding them, computers merely relay information to one another. So while the
Internet enables computers to talk to one another, it was not designed to teach them what the
information actually means.
The Web
The Web created a very quick and easy way for us to retrieve and view information.
You can think of the Web as a huge document storage and retrieval system. When you type a
website address into your browser, the browser sends a request to the website. The request
basically states that you would like the document located at the address you gave. The
website retrieves the document and sends it back to your web browser. This document is
written in a language called HTML, which defines a syntax that computers can understand: it
tells the computer how to display the document to you. So the two really important things the
Web did were to create a way to get any document on the Internet, and to create a syntax,
HTML, that is used to display those documents to you.
There are also open questions on the Web today: whom can you trust to send you
e-mail, and how can we know for sure that a transaction really occurred? So what's the big
deal? We have the Internet, which lets computers talk to each other; we have the Web, which
stores and retrieves documents on the Internet; and we have search engines, which can find
almost any website we want. If the Web is already pretty good, how are we going to make it
any better? The answer lies in semantics. Remember, computers today just blindly retrieve
and show information; that is the problem. Computers don't understand the meaning behind
the web pages they are showing us. While they may understand the syntax, the semantics is
lost on them. If we could get computers to recognize what is in a web page, they could learn
what we are interested in, and if they know that, they can help us get what we want. They
would change from passively helping us to actively helping us.
2. RDFa
RDFa is based on RDF. RDF stands for Resource Description Framework, which is a
fancy way of saying that it can describe any concept, relationship, or thing that exists in the
universe. The idea behind RDF is simple and very easy to grasp. RDF has features that
facilitate data merging even if the underlying schemas differ, and it specifically supports the
evolution of schemas over time without requiring all the data consumers to be changed. RDF
extends the linking structure of the Web to use URIs to name the relationship between things
as well as the two ends of the link (this is usually referred to as a “triple”).
Using this simple model, it allows structured and semi-structured data to be mixed,
exposed, and shared across different applications. This linking structure forms a directed,
labeled graph, where the edges represent the named link between two resources, represented
by the graph nodes. This graph view is the easiest possible mental model for RDF and is
often used in easy-to-understand visual explanations. There are three parts to an RDF triple:
the subject, the predicate, and the object. If you think back to your elementary school English
classes, this should sound familiar: the subject-predicate-object pattern is how most Western
languages build basic semantics. The subject is the thing you are describing, the predicate
usually refers to an attribute of the thing you are describing, and the object is the value you
are relating to the subject through the predicate.
Consider a basic example
“Avinash likes sweets”
In the above statement Avinash is the subject, likes is the predicate and sweets is the object.
Using this simple idea we can describe anything, and that is basically what RDF enables us
to do.
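The subject-predicate-object model maps naturally onto simple data structures. As a minimal sketch (the extra triple about cricket is illustrative, not from the example above), a set of triples can be stored and queried like this:

```python
# A triple is just a (subject, predicate, object) tuple.
triples = {
    ("Avinash", "likes", "sweets"),
    ("Avinash", "likes", "cricket"),
    ("Rahul", "likes", "sweets"),
}

def objects(subject, predicate, graph):
    """Return everything the subject relates to via the predicate."""
    return {o for (s, p, o) in graph if s == subject and p == predicate}

print(sorted(objects("Avinash", "likes", triples)))  # ['cricket', 'sweets']
```

Real RDF systems use URIs rather than bare strings for subjects and predicates, but the query model is the same: match patterns against a set of triples.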
Every resource in RDF is identified by a URI (Uniform Resource Identifier). You are
probably already familiar with one form of URI: the URL, or Uniform Resource Locator. A
URL is an address that lets you visit a webpage, such as: http://www.w3.org/Addressing/. If
you break it down, you can see that a URL tells your computer where to find a specific
resource (in this case, the W3C's Addressing website). Unlike most other forms of URIs, a
URL both identifies and locates.
Contrast this with a "mid:" URI. A "mid:" URI identifies an email message, but it isn't able to
locate a copy of the message for you.
http://www.example.org/relly/long/urls/are/hard/to/type/ect/
URIs can be very long and annoying to type out. This is why RDFa introduces a new
concept called the CURIE. CURIEs are a shorthand way of writing a long URI; CURIE is an
abbreviation of "compact URI". An example of a CURIE is foaf:name. In this CURIE, foaf
expands to a much longer URL, and name is appended to that URL. You don't need to know
the full URI right now; just remember that the part on the left side of the colon expands to a
long URL, and the part on the right side of the colon is added to the end of the expanded
URL.
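CURIE expansion amounts to a prefix lookup followed by string concatenation. A minimal sketch in Python (the foaf entry uses the real FOAF namespace URI; the lookup table itself is illustrative):

```python
# Map CURIE prefixes to their full namespace URIs.
PREFIXES = {
    "foaf": "http://xmlns.com/foaf/0.1/",
}

def expand_curie(curie):
    """Expand a CURIE like 'foaf:name' into a full URI."""
    prefix, _, reference = curie.partition(":")
    if prefix not in PREFIXES:
        raise ValueError("unknown prefix: " + prefix)
    return PREFIXES[prefix] + reference

print(expand_curie("foaf:name"))  # http://xmlns.com/foaf/0.1/name
```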
The prefix line tells us what the shorthand stands for in all the CURIEs in the document; in
other words, it defines the namespaces. Moving to the next line, we find the actual triple. The
first item is the subject, enclosed in angle brackets. The second item is the predicate, which is
also a URI. The third item, the object, is also a URI. Finally, there is a period at the end of the
triple to end the statement. There can be many triples associated with a particular subject;
the more triples there are, the more we know about the subject. If you look closely at this
example, you will also notice that the predicate points to something called a vocabulary.
Looking back over the example: we define the namespace at the top of the document;
we then set the subject using the about attribute; this is followed by stating the predicate with
a CURIE, which uses the namespace declared at the top of the document; finally, we specify
the object by wrapping it in a span element. Any browser reading this webpage will then
know Avinash's full name.
We can go further by stating that Avinash knows someone else on the page named Rahul. To
do this, we set up another person on the page, Rahul, in the same way we set up Avinash.
All that is left is to link them together. To do this we use a combination of the rel attribute
and the resource attribute. FOAF defines a vocabulary term called knows; to state that one
resource knows another, we use foaf:knows as the predicate and link the resources using rel.
In this case we use resource to set the target object for rel. Rahul could just as easily
have been mentioned on another website, and that remote website's URL could have been
used instead of the local URL.
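The triples an RDFa-aware parser would extract from such a page can be sketched as plain tuples (the `#avinash` and `#rahul` fragment identifiers are hypothetical; foaf:name and foaf:knows are real FOAF terms):

```python
FOAF = "http://xmlns.com/foaf/0.1/"

# Triples as extracted from a page describing two people.
graph = [
    ("#avinash", FOAF + "name", "Avinash"),
    ("#rahul",   FOAF + "name", "Rahul"),
    ("#avinash", FOAF + "knows", "#rahul"),  # rel="foaf:knows" resource="#rahul"
]

# Whom does Avinash know, by name?
known = [o for (s, p, o) in graph if s == "#avinash" and p == FOAF + "knows"]
names = [o for (s, p, o) in graph if s in known and p == FOAF + "name"]
print(names)  # ['Rahul']
```

Note that the object of foaf:knows is itself a resource, so a second lookup is needed to get from the resource to its name; this is exactly the "follow the links" pattern that makes RDF a graph rather than a flat table.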
3. Components
XML provides an elemental syntax for content structure within documents, yet
associates no semantics with the meaning of the content contained within. XML is not at
present a necessary component of Semantic Web technologies in most cases, as
alternative syntaxes exist, such as Turtle. Turtle is a de facto standard, but has not been
through a formal standardization process.
XML Schema is a language for providing and restricting the structure and content of
elements contained within XML documents.
RDF is a simple language for expressing data models, which refer to objects
("resources") and their relationships. An RDF-based model can be represented in XML
syntax.
RDF Schema extends RDF and is a vocabulary for describing properties and classes
of RDF-based resources, with semantics for generalized-hierarchies of such properties
and classes.
OWL adds more vocabulary for describing properties and classes: among others,
relations between classes (e.g. disjointness), cardinality (e.g. "exactly one"), equality,
richer typing of properties and characteristics of properties (e.g. symmetry), and
enumerated classes.
SPARQL is a protocol and query language for semantic web data sources.
The Rule Interchange Format (RIF) serves as the rule layer of the Semantic Web stack.
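The key contribution of the RDF Schema layer, generalization hierarchies, can be illustrated with a toy inference over rdfs:subClassOf: an instance of a class is also an instance of every superclass. The class names here are illustrative:

```python
# rdfs:subClassOf assertions: child class -> parent class
subclass_of = {
    "Novel": "Book",
    "Book": "Document",
}

def all_classes(cls):
    """Follow subClassOf links upward to find every class an instance belongs to."""
    classes = [cls]
    while cls in subclass_of:
        cls = subclass_of[cls]
        classes.append(cls)
    return classes

print(all_classes("Novel"))  # ['Novel', 'Book', 'Document']
```

A real RDFS reasoner handles multiple parents and cycles, but the principle is the same: the schema lets a machine conclude facts (a Novel is a Document) that are nowhere stated explicitly.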
The intent is to enhance the usability and usefulness of the Web and its
interconnected resources through:
Servers which expose existing data systems using the RDF and SPARQL standards.
Many converters to RDF exist from different applications. Relational databases are an
important source. The semantic web server attaches to the existing system without
affecting its operation.
Documents "marked up" with semantic information (an extension of the
HTML <meta> tags used in today's Web pages to supply information for Web search
engines).
5. Challenges
Some of the challenges for the Semantic Web include vastness, vagueness, uncertainty,
inconsistency, and deceit. Automated reasoning systems will have to deal with all of these
issues in order to deliver on the promise of the Semantic Web.
Vastness: The World Wide Web contains at least 24 billion pages as of this writing
(June 13, 2010). The SNOMED CT medical terminology ontology contains 370,000 class
names, and existing technology has not yet been able to eliminate all semantically
duplicated terms. Any automated reasoning system will have to deal with truly huge
inputs.
Vagueness: These are imprecise concepts like "young" or "tall". This arises from the
vagueness of user queries, of concepts represented by content providers, of matching
query terms to provider terms and of trying to combine different knowledge bases with
overlapping but subtly different concepts. Fuzzy logic is the most common technique for
dealing with vagueness.
Uncertainty: These are precise concepts with uncertain values. For example, a patient
might present a set of symptoms which correspond to a number of different distinct
diagnoses each with a different probability. Probabilistic reasoning techniques are
generally employed to address uncertainty.
Inconsistency: These are logical contradictions which will inevitably arise during the
development of large ontologies, and when ontologies from separate sources are
combined. Deductive reasoning fails catastrophically when faced with inconsistency,
because "anything follows from a contradiction". Defeasible reasoning and paraconsistent
reasoning are two techniques which can be employed to deal with
inconsistency.
Deceit: This is when the producer of the information is intentionally misleading the
consumer of the information. Cryptography techniques are currently utilized to alleviate
this threat.
This list of challenges is illustrative rather than exhaustive, and it focuses on the challenges to
the "unifying logic" and "proof" layers of the Semantic Web. The World Wide Web
Consortium (W3C) Incubator Group for Uncertainty Reasoning for the World Wide Web
(URW3-XG) final report lumps these problems together under the single heading of
"uncertainty". Many of the techniques mentioned here will require extensions to the Web
Ontology Language (OWL) for example to annotate conditional probabilities. This is an area
of active research.
In the following, we focus our discussion on the challenges that we are facing now: the
development of ontologies, the development of the formal semantics of Semantic Web
languages, and the development of trust and proof models.
It is well recognized within the Semantic Web community that ontologies will play an
essential role in the development of the Semantic Web. Various efforts have been devoted to
the research of different aspects of ontologies, including ontology representation languages
(Corcho, 2000), ontology development (Jones et al., 1998), ontology learning approaches
(Maedche & Staab, 2001), and ontology library systems (Ding & Fensel, 2001), which manage,
adapt, and standardize ontologies.
Management:
The main purpose of ontologies is to enable knowledge sharing and re-use, hence a typical
ontology library system supports open storage and organization, identification and
versioning. Open storage and organization address how ontologies are stored and organized
in a library system to facilitate access and management of ontologies. Identification
associates each ontology with a unique identifier. Versioning is an important feature since
ontologies evolve over time and a versioning mechanism can ensure the consistency of
different versions of ontologies.
Adaptation:
Since ontologies evolve over time, how to extend and update existing ontologies is an
important issue. This includes the searching, editing and reasoning of ontologies in an
ontology library system.
Standardization:
Integration and interoperability is always the concern of any open system. This is especially
the concern of the Semantic Web, an open system that has to be scalable at the Internet level.
Currently, a number of ontology representation languages have been proposed and various
ontology library systems have been built. The question is what would be the standardized
ontology representation language. Each of them seems to have its advantages and
disadvantages, and has its proponents and opponents. This may simply be a feature of human
society: each of us has his or her own preferences. Since the Semantic Web is still at an early
stage, it might be too early to enforce any standardization. Each representation language can
grow on its own, and the one, or the few, that win out will become the de facto standards. XML
might serve as the meta-language of these representations to facilitate future interoperation
and integration.
The functional architecture of the Semantic Web has three layers: the metadata layer, the
schema layer and the logical layer. Currently, RDF (the Resource Description Framework) is
believed to be the most popular data model for the metadata layer. Although it is believed
that the RDF data model is enough for defining and using metadata, the semantics of
reification (statement about statement) is yet to be defined. RDFS (RDF Schema) extends
RDF and is currently a popular schema layer language. It has been recognized that RDFS
lacks a formal semantics, and one proposal is to define a metamodeling architecture for RDFS
similar to the one for UML (the Unified Modeling Language), and hence define a formal
semantics. This approach, although formal, is complicated and not intuitive. Although RDFS
has been criticized for its semantic confusion and some apparent paradoxes, it has not been
shown that a formal semantics is impossible. To smooth the way of developing the Semantic
Web, we believe that the semantics of RDFS need to be resolved: either a formal semantics is
defined for it, or the problem of RDFS is pinned down so that the semantic issue can be
resolved.
As an open and distributed system, the Semantic Web bears the spirit that "anyone can say
anything about anything". People all over the world might assert statements which can
possibly conflict. Hence, one needs to make sure that the original source does make a
particular statement (proof) and that source is trustworthy (trust).
• Proof. Digital signatures will play an important role in proof. The source has to sign
the statement he makes so that agents can check if information really comes from the
source it claims to be. In addition, other security technologies like encryption and
access control can be used to ensure confidentiality of information.
• Trust. Everybody should be able to define a trust model for himself, i.e., one can
define how much trust he would put on each source on the Semantic Web. Since it is
unrealistic to define the extent of trust for each source, a mechanism is necessary to
derive the degree of trust for each new source. One solution is the notion of a "web of
trust": when one trusts a source A, one also trusts all the other sources that are trusted by
source A, but to a lower extent. In this way, a huge hierarchical network is created
which helps agents infer information based on their trusted knowledge.
Currently, the notion of proof and trust has yet to be formalized, and a theory that integrates
them into inference engines of the Semantic Web is yet to be developed. However, these
technologies are very important and are the foundation of building real commercial
applications (e.g., B2B and B2C systems).
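The "web of trust" idea above, trust decaying as it passes through intermediaries, can be modeled very simply: if you trust A with degree t1 and A trusts B with degree t2, you trust B with degree t1 × t2. A minimal sketch (the sources and degrees are illustrative):

```python
# Direct trust assertions: truster -> {trusted source: degree in [0, 1]}
direct = {
    "me": {"A": 0.9},
    "A":  {"B": 0.8, "C": 0.5},
}

def derived_trust(who, depth=3):
    """Propagate trust multiplicatively, keeping the best path to each source."""
    trust = {}
    frontier = [(who, 1.0)]
    for _ in range(depth):
        nxt = []
        for node, t in frontier:
            for other, degree in direct.get(node, {}).items():
                score = t * degree
                if score > trust.get(other, 0.0):
                    trust[other] = score
                    nxt.append((other, score))
        frontier = nxt
    return trust

print(derived_trust("me"))  # derived trust in A, B and C
```

Here trust in B is 0.9 × 0.8 = 0.72, lower than the direct trust in A, matching the intuition that second-hand sources are trusted to a lower extent.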
6. The Possibilities
Things get really exciting when we start to explore the possibilities of the Semantic
Web. Once your computer understands a person, a place, or an event, it can help you interact
with those things. For example, if a birthday party is marked up as an event with a date and a
place, you can tell your computer to save the date in your calendar. Another example is the
world of music blogs. Music blogs usually list songs and album reviews on their front pages.
If a blog marked up its songs and artists with Semantic Web technology, you could tell the
page to search the Internet for other albums by the same artist. Search engines would also
become a great deal more accurate than they are today. When you search, you could search
for a particular person, place, or song, and the search engine could refer you to a website with
far more accuracy, because it would no longer depend only on the keywords in web pages; it
could also depend on the semantics of the pages. So the Semantic Web holds a great deal of
promise for making our lives easier by helping computers help us get what we want.
6.1 Web-services
Among the most important web resources on the Semantic Web are the so-called web-
services. Here, web-services refers to "web sites that do not merely provide static information
but allow one to effect some action or change in the world". The Semantic Web will enable
users to locate, select, employ, compose, and monitor web-services automatically.
The industry has already seen the potential market enabled by web-services, and some efforts
have been put into the development of standards for electronic commerce, in particular for the
description of web-services. For example, Microsoft, IBM and Ariba proposed UDDI
(Universal Description, Discovery, and Integration, 2000) to describe a standard for an online
registry, and the publishing and dynamic discovery of web-services offered by businesses;
Microsoft and IBM proposed WSDL (Web Service Definition Language) as an XML
language to describe interfaces to web-services registered with a UDDI database; the DAML
Services Coalition proposed DAML-S (Darpa Agent Markup Language - Service, 2001) as
an ontology to describe web-services; and OASIS and the United Nations developed ebXML
(Electronic Business XML Initiative, 2000) to describe business interactions from a workflow
perspective. A number of communication protocols have been developed for the invocation
of web-services.
The Semantic Web will use ontologies to describe various web resources; hence, knowledge
on the Web will be represented in a structured, logical, and semantic way. This will change
the way that agents navigate, harvest and utilize information on the Web. On one hand, the
Semantic Web is a web of distributed knowledge bases, and agents can read and reason about
published knowledge with the guidance of ontologies. On the other hand, the Semantic Web
is a collection of web-services described by ontologies like DAML-S (DARPA Agent Markup
Language - Services), and this will facilitate dynamic matchmaking among heterogeneous
agents: service provider agents can advertise their capabilities to middle agents; middle
agents store these advertisements; a service requester agent can ask a middle agent whether it
knows of some provider agents with desired capabilities; and the middle agent matches the
request against the stored advertisements and returns the result, a subset of the stored
advertisements.
When agents are equipped with intelligence and mobility, the conventional client/server
computing paradigm might be replaced by an agent-based distributed computing paradigm, in
which agents can migrate from one site to another, carrying their codes, data, running states
(including internal beliefs), and intelligence (specified by the users), and fulfil their missions
autonomously and intelligently. Many researchers have speculated that mobile agents are
inevitable for an open and distributed environment like the Semantic Web, and have noted
the advantages of this new computing paradigm.
6.2 Search Engines
Search engines are among the most useful resources on the Web, and currently there are two
types of search engines:
• Large-scale robot-based search engines. These systems rely on robots to retrieve Web
pages and store them in a centralized database. The advantage of this mechanism is
that it increases recall (the proportion of relevant documents that are actually
retrieved), since robots can retrieve almost all web pages on the Web. The
disadvantage is that the precision (the proportion of retrieved documents that are
actually relevant) of the search result might be low.
• Small-scale reviewer-based search engines. A category hierarchy is created and each
category is described by a set of keywords. Reviewers will review each web page
(submitted by web page authors) and associate it with appropriate categories. The
advantage is that precision is increased, but the disadvantage is that recall might be
low since it is impossible to review and include every single relevant web page on the
Web.
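Recall and precision, as defined in the two bullets above, are easy to compute for a single query. A minimal sketch (the document identifiers and relevance judgments are illustrative):

```python
def precision_recall(retrieved, relevant):
    """precision = |retrieved ∩ relevant| / |retrieved|
       recall    = |retrieved ∩ relevant| / |relevant|"""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

retrieved = ["d1", "d2", "d3", "d4"]   # what the engine returned
relevant  = ["d1", "d4", "d7"]         # what actually answers the query

p, r = precision_recall(retrieved, relevant)
print(p, r)  # precision 0.5 (2 of 4 retrieved are relevant), recall 2/3
```

A robot-based engine pushes `retrieved` toward the whole Web (high recall, low precision); a reviewer-based engine restricts `retrieved` to vetted pages (high precision, lower recall).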
Both types of search engines are based on keywords, and hence are subject to the two well-
known linguistic phenomena that strongly degrade a query's precision and
recall: polysemy (one word might have several meanings) and synonymy (several terms, i.e.
words or phrases, might designate the same concept). A number of stemming algorithms
have been developed to address the synonymy issue, including suffix removal, strict
truncation of character strings, word segmentation, letter bigrams and linguistic morphology.
The idea is that different derivations of a word are similar to each other in their forms (e.g.
they have the same prefix) and can be traced back to the same root (stem) using these
stemming methods. However, these methods are subject to stemming errors. On the one
hand, words with different meanings might be reduced to the same root: for example, the
words general, generous, generation, and generic might all be reduced to the same root. On
the other hand, different words with the same meaning cannot be reduced to the same root:
for example, "automobile" and "car". The situation becomes worse for the large-scale robot-
based search engines. Only limited semantics can be derived from
the lexical or syntactic content of the web pages.
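The over-stemming error described above is easy to reproduce with a naive suffix-removal stemmer. In this sketch (the suffix list is illustrative and far cruder than a real algorithm such as Porter's), all four words collapse to the same root, while the synonyms "automobile" and "car" stay apart:

```python
SUFFIXES = ["ation", "ous", "ic", "al"]  # checked longest first

def stem(word):
    """Strip the first matching suffix; a deliberately naive stemmer."""
    for suffix in SUFFIXES:
        if word.endswith(suffix):
            return word[: -len(suffix)]
    return word

words = ["general", "generous", "generation", "generic"]
print({w: stem(w) for w in words})      # all four map to 'gener'
print(stem("automobile"), stem("car"))  # unchanged: roots stay different
```

This shows both failure modes at once: polysemous words conflated by their shared prefix, and synonymous words that no amount of suffix stripping can unify.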
Several systems have been built to overcome these problems based on the idea of annotating
Web pages with special HTML tags to represent semantics, including the SHOE (Simple
HTML Ontology Extensions) system and the GDA system. However, the limitation of these
systems is that they can only process web pages that are annotated with these HTML tags,
and so far there is no agreement upon a universally acceptable set of HTML tags.
XML is a promising technique since it keeps content, structure, and representation apart and
is a much more adequate means for knowledge representation. However, XML can represent
only some semantic properties through its syntactic structure. XML queries need to be aware
of this syntactic structure. With the advent of the Semantic Web, resources on the Web will
be represented semantically in ontologies. Semantics-based web search engines can be built
in which each query is executed within the context of some ontology. The guidance from
ontologies will increase recall and precision of the search result. For example, one might pose
a query "return all the reviewers for book 'The Semantic Web: an Introduction'" to a
semantics-based web search engine, then the engine will return only reviewers for this book
instead of returning web pages that contain keyword "reviewer" and/or term "The Semantic
Web: an Introduction". As another example, if one poses the query "return all the chairs",
then with the guidance of a furniture ontology, only chairs that are pieces of furniture are
returned; and with the guidance of a person ontology, only people who are chairs of some
organizations will be returned. In contrast, keyword-based search engines will return web
sites that contain the keyword "chairs", including chairs that refer to furniture and chairs that
refer to people. It is
worth mentioning that some systems that use ontologies to enhance web search engines have
been developed. Since ontologies are built on a domain basis, web search engines might
also be built on a domain basis; hence metasearch engines, which interface with multiple
remote search engines and select and rank remote search engines intelligently, might be very
useful.
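The "chairs" example above can be sketched as a query executed within the context of an ontology: each resource carries a class, and the chosen ontology decides which class the query term denotes. Everything here (resource names, classes, the ontology table) is illustrative:

```python
# Resources on a toy web, each typed with an ontology class.
resources = [
    ("armchair-17",  "furniture:Chair"),
    ("dining-chair", "furniture:Chair"),
    ("dr-smith",     "person:Chairperson"),
]

# Each ontology maps the query term to the class it denotes in that domain.
ONTOLOGIES = {
    "furniture": {"chair": "furniture:Chair"},
    "person":    {"chair": "person:Chairperson"},
}

def semantic_search(term, ontology):
    """Return only resources whose class matches the term in this ontology."""
    wanted = ONTOLOGIES[ontology].get(term.lower())
    return [r for r, cls in resources if cls == wanted]

print(semantic_search("chair", "furniture"))  # ['armchair-17', 'dining-chair']
print(semantic_search("chair", "person"))     # ['dr-smith']
```

A keyword engine would return all three resources for "chair"; the ontology context disambiguates the term and raises both precision and recall.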
6.3 Digital Libraries
Digital multimedia data in various formats has increased tremendously in recent years on the
Internet. With the development of digital photography, more and more people are able to
store their personal photographs on their PCs. Sharing picture albums and home videos on
the Internet has become more and more popular. Furthermore, many organizations have large
image and video collections in digital format available for online access. Film producers want
to advertise movies through interactive preview clips. Travel agencies are interested in digital
archives of photographs of holiday resorts. Hospitals would like to build medical image
databases. These emerging applications for multimedia digital libraries require
interdisciplinary research in the areas of image processing, computer vision, information
retrieval and database management. Semantics-based retrieval of multimedia digital content
is important for efficient use of the multimedia data repositories. Traditional content-based
multimedia retrieval techniques describe images/videos based on low level features (such as
colour, texture, and shape) and support retrieval based on these features. However, humans
typically do not view images and videos in terms of low-level features; a semantics-based
query capability is highly desirable. For example, one might want to formulate a query like
"return all the scenes in clip 1 in which a boy is riding a bicycle". Retrieving images/videos
based on low-level features cannot provide satisfactory results. Effective and precise
multimedia retrieval by semantics remains an open and challenging problem.
Recently, ontologies have begun to be used in the context of digital libraries. For example,
ScholOnto is an ontology-based digital library that supports scholarly interpretation and
discourse, and ARION is another ontology-based digital library that supports search and
navigation of geospatial data sets and environmental applications.
We believe that various digital libraries will become another major web resource of the
Semantic Web. The challenges here are: (1) The development of efficient and effective
classification and indexing mechanisms for each type of digital library, and (2) The semantic
interoperability between digital libraries of similar types and between a digital library and the
Semantic Web.
Practitioners of the Semantic Web have also been realizing these predictions in a variety of
ways. For instance, with the application of Semantic Web technologies it is possible to
automate operations ranging from completing everything you need for a trip to updating your
personal records. The Semantic Web can thus be defined as a web of annotated information
on the Internet and intranets that enables access to precisely the information you need.
The Semantic Web in action has been conferring advantages on an ongoing basis; in its early
stages it proved beneficial in sophisticated operations such as logistics planning for military
operations. The US military was the first to adopt it, but the same benefits have since been
extended to other applications, some of which are presented below:
Health Care and Life Sciences:
Its application is advantageous here because these disciplines have to deal with data from
multiple sources and multiple applications, and such data is rarely complete.
Engineering Analysis:
• Data could be tailored so that it is returned in a uniform manner across companies
• A variety of checklist analyses could be conducted simultaneously
Data Warehousing:
In data warehousing it offers specific functional benefits. There is no need for a database
schema, which means decisions about the structure and recording of data can be dispensed
with; in addition, data can be distributed over the Web, because the Semantic Web is neutral
with respect to data security.
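The schema-free storage described above can be sketched with a toy in-memory triple store. All product names and attributes are illustrative only, not a real warehouse interface; the point is that records with different "columns" coexist without any schema being declared:

```python
# Sketch of schema-free storage as triples: heterogeneous records coexist
# without a predefined schema. All names below are hypothetical.

triples = set()

def add(subject, predicate, obj):
    """Record one fact; no schema has to be declared beforehand."""
    triples.add((subject, predicate, obj))

# Two sources describe products with different attributes; no schema change needed.
add("product:1", "name", "DVD player")
add("product:1", "price", 49)
add("product:2", "name", "Camera")
add("product:2", "weight_g", 310)   # an attribute the first source never used

def query(predicate):
    """All (subject, object) pairs for a given predicate."""
    return sorted((s, o) for (s, p, o) in triples if p == predicate)

print(query("name"))  # [('product:1', 'DVD player'), ('product:2', 'Camera')]
```

Because every fact is just a triple, a new attribute is simply a new predicate; there is no table to alter and no migration to run.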
It also has cost advantages for business operations: businesses can precisely define the
processes suited to them, and the system will do exactly what the business specifies, nothing
more and nothing less. This makes the technology affordable for small businesses as well.
The Semantic Web community has also developed technologies that can be implemented
free of cost, which could result in huge savings in the way the Web functions. An example
is SPARQL, a query language for RDF data.
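To give a flavour of SPARQL, the sketch below queries hypothetical book data for titles priced under 20. The `ns:` namespace and `price` property are illustrative assumptions; `dc:title` is the standard Dublin Core title property:

```sparql
# Hypothetical query: titles and prices of books costing less than 20.
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX ns: <http://example.org/ns#>

SELECT ?title ?price
WHERE {
  ?book dc:title ?title .
  ?book ns:price ?price .
  FILTER (?price < 20)
}
```

The query matches triple patterns against RDF data rather than rows against a fixed table schema, which is what makes it suited to the distributed, schema-free data described above.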
It would, however, be erroneous to assume that the Semantic Web has descended from
nowhere to usher in a rethinking of everything. One may be tempted to claim that a
revolutionary mindset is needed to apply it, or that it represents a paradigm shift; such claims
are not correct and only confuse and mask the real advantages it offers. It is not a total
replacement for, nor a substitute of, all that has come before it, which will continue to exist.
No doubt there will be changes, but these changes will bridge gaps by leveraging existing
assets rather than replacing them.
8.CONCLUSION:
In this paper, we have surveyed recent research on the Semantic Web. In particular, we
presented the opportunities that this new revolution will bring to us, and the challenges that
we are facing during the development of the Semantic Web. We hope that this paper will
shed some light on the direction of future work.
The Semantic Web is still a vision. We believe that the Web will grow towards this vision
much as communities develop in the real world: Semantic Web communities will appear and
grow first, and then the interaction and interoperation among different communities will
finally interweave them into the Semantic Web.