Annotation Tools

Currently, the semantic markup of Web pages is still a bottleneck hindering the proliferation of the Semantic Web. In principle, it should be possible to automate the semantic markup. This is well conceivable for new Web applications, for instance by using a content management system based on an ontology. In this case, the contents would be semantically marked up by indexing them according to that ontology, and this can be done as early as during content preparation. Things get more difficult if very complex statements have to be represented, such as statements in operating instructions or scientific discussions. In these cases, we will have to fall back on manual annotation, and the annotator will have to follow guidelines specifying how the underlying ontology should be used.

Examples of such tools include the Annotea project (http://www.w3.org/2001/Annotea/). In its current version, Annotea provides mechanisms for general annotations based on a pre-defined RDF schema. In addition, there are electronic tools for scientific group work (Kirschner et al. 2003) and a general multimedia annotation tool specialized for culture and arts (http://www.cultos.org). There has also been a large body of work on annotating scholarly discourse (http://kmi.open.ac.uk/projects/scholonto/scholontoarchive.html).

14.5 Outlook

With the integration of knowledge representation, the Web hits the limits of its original design philosophy. As we all know, the original idea was to structurally map knowledge connections between documents (or document sections) with related contents in order to overcome physical boundaries: article A refers to an experiment described in B, the data for B can be read in the log or minutes of C, and each of these documents resides in a different directory and possibly even on a different server. Consequently, the original philosophy was to transform existing knowledge into a reachable structure, and hypertext navigation was primarily intended to substitute search processes. The new form of the Web, however, promises new qualities: by explicitly describing the relationship of articles, experiments, and experiment logs, we can check and recombine statements (in articles), arrangements (in experiments), and results (in protocols). This opens up new avenues, for instance in medical research, because stronger structuring of information facilitates new evaluation methods (e.g., clinical meta-studies).

The philosophy of the Semantic Web is based on converting the contents of the nodes contained in this reachable structure so that a node can be interpreted by machines. This is the point where the conceptual difference to the traditional Web comes to light. In the traditional Web, the function of any hyperlink connection was of type GO TO, e.g., GO TO the experiment or GO TO the results log. The typing of the Web pages (e.g., experiment or results-log) was interpretable by humans only. In the Semantic Web, the article becomes a self-describing entity ("I am a scientific article based on experiment B and the associated log C"). The consequences are fundamental: first, if the referenced entities B and C have unique names, it does not matter where the documents are actually stored, as long as they are somewhere where they can be found (by machines). The effect is that the explicit linking of information becomes secondary, while the effectiveness of the calculable links (based on self-describing entities) becomes primary; a minimal sketch of such a self-describing entity follows below.
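To make the idea of a self-describing entity concrete, the following minimal sketch (not part of the original text) uses Python with the rdflib library; the namespaces and the terms ScientificArticle, basedOnExperiment, usesLog, Experiment, and ResultsLog are invented for illustration and do not refer to any existing ontology. The graph states, in machine-interpretable form, "I am a scientific article based on experiment B and the associated log C", so that the links between article, experiment, and log become typed statements rather than untyped GO TO hyperlinks.

# A hypothetical self-describing article, expressed as RDF with rdflib.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/research/")     # invented identifiers
VOC = Namespace("http://example.org/vocabulary#")  # invented ontology terms

g = Graph()
g.bind("ex", EX)
g.bind("voc", VOC)

article = EX["article-A"]
experiment = EX["experiment-B"]   # a unique name; storage location is irrelevant
log = EX["log-C"]

# "I am a scientific article based on experiment B and the associated log C."
g.add((article, RDF.type, VOC.ScientificArticle))
g.add((article, VOC.basedOnExperiment, experiment))
g.add((article, VOC.usesLog, log))
g.add((experiment, RDF.type, VOC.Experiment))
g.add((log, RDF.type, VOC.ResultsLog))

print(g.serialize(format="turtle"))

Because the relationships are explicit statements about uniquely named entities, a machine can locate experiment B and log C wherever they are stored and recombine the statements, arrangements, and results described above.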
Second, as self-description (gradually) improves, it will eventually become unnecessary, because it will be deducible from the content itself. Once this point is reached, each new piece of content