The Ontology Inference Layer
Presentation at the NKOS Workshop, ACM DL’00, San Antonio, June 7, 2000
Sean Bechhofer
The University of Manchester
One of the major issues with the World Wide Web as it exists today is that tasks performed on it are hard to automate. So far, the web has been built mainly as a forum for human interaction; because most web documents are written for human consumption, the only available form of searching (for example) is simple matching of words or sentences contained in documents. Anyone who has used a web search service like AltaVista or HotBot knows that typing in a few keywords and receiving a couple of thousand "hits" is not necessarily very useful. A great deal of manual "weeding" of the results has to follow, and the keywords you searched for may not even be prominent in the relevant document itself.
A possible solution for the search problem - and for the general issue of letting automated "agents" roam the web performing useful tasks - is to provide a mechanism which allows a more precise description of things on the web. This, in turn, could elevate the status of the web from machine-readable to something we might call machine-understandable.
The Semantic Web is a vision articulated by Tim Berners-Lee, the inventor of the web, but held by many. The idea is to have data on the web defined and linked in such a way that machines can understand it. For example, e-commerce requires much richer data: retailers require data to flow from wholesalers, and wholesalers require data to flow from producers. Data exchange of this kind is currently very limited, consisting of tab-delimited dumps or product-specific tables. Using specific XML formats for each exchange task improves the situation, but XML is a syntax, not a means of sharing meaning.
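A minimal sketch of the point that shared XML syntax alone does not convey shared meaning. The two feeds and their element names below are entirely hypothetical: both describe the same product, but nothing in XML itself tells a program that the vocabularies correspond.

```python
# Two (hypothetical) trading partners describe the same product with
# different element names; both parse cleanly, yet a program cannot tell
# that the fields mean the same thing.
import xml.etree.ElementTree as ET

wholesaler_feed = "<item><desc>Road bicycle</desc><cost>499</cost></item>"
retailer_feed = "<product><name>Road bicycle</name><price>499</price></product>"

a = ET.fromstring(wholesaler_feed)
b = ET.fromstring(retailer_feed)

# The values happen to match, but XML says nothing about <desc> meaning
# the same as <name>, or <cost> the same as <price>.
print(a.find("desc").text == b.find("name").text)          # the values agree
print({child.tag for child in a} == {child.tag for child in b})  # the vocabularies do not
```

Reconciling the two requires an agreed model of what the elements mean, which is exactly the gap that ontologies are intended to fill.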
We know the technologies we need to realise the Semantic Web. Specifically, we need to support the exchange of data, information and knowledge, using metadata, ontologies and terminologies.
The whole vision depends on agreement on common standards - something that can be used and extended everywhere.
Together with Stanford University, The Free University of Amsterdam, and The University of Karlsruhe, Manchester is leading development of a language for describing and exchanging ontologies, and providing a reasoning mechanism for the web. The language is called OIL (Ontology Inference Layer), and effectively unifies frames with description logics. OIL's primitives, syntax and semantics are already defined, together with a mapping to RDF and RDF Schema. Because OIL maps onto a description logic, a reasoning engine - specifically FaCT - can be used to reason about ontologies built in OIL. The W3C are now looking at how they can use OIL to extend RDF Schema into a "knowledge representation system for the web".
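To make the idea of "reasoning about ontologies" concrete, here is a toy sketch - not OIL or FaCT themselves, and far simpler than a real description-logic reasoner - of the kind of inference involved: from asserted subclass axioms, the reasoner derives subsumption relationships that were never stated explicitly. The class names and axioms are invented for illustration.

```python
# Toy subsumption inference over asserted subclass-of axioms. A real DL
# reasoner such as FaCT handles far richer constructs (slot constraints,
# disjointness, etc.); this only follows named-class subclass links.
def subsumers(cls, axioms):
    """Return every class that subsumes `cls`, following axioms transitively."""
    found = set()
    frontier = [cls]
    while frontier:
        current = frontier.pop()
        for parent in axioms.get(current, []):
            if parent not in found:
                found.add(parent)
                frontier.append(parent)
    return found

# Hypothetical ontology fragment: each class maps to its asserted superclasses.
axioms = {
    "MountainBike": ["Bicycle"],
    "Bicycle": ["Vehicle"],
    "Vehicle": ["Artifact"],
}

# MountainBike is inferred to be a Vehicle and an Artifact,
# even though neither relationship was asserted directly.
print(subsumers("MountainBike", axioms))
```

Computing the complete subsumption hierarchy like this - classification - is one of the core services a description-logic reasoner offers to ontology builders, for instance to detect unintended or missing subclass relationships.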
See the OIL page for further details.