The Lazy Person's Guide to the Semantic Web
By Dale Long - April-June 2008
Communication, as we may remember, consists of four parts: sender, receiver, message and medium. The sender constructs a message that consists of specific content and transmits that content through a particular medium: voice, symbols, letters, waving flags, Morse code, whatever it takes to deliver the message. For communication to be considered effective, the receiver must both receive and understand the message.

With that in mind, I would submit that humans communicate with computers, albeit in a very limited way. Yes, we send them lots of messages: "Remember this text I'm typing," "Save this file," "Open that file," "Print this picture," all of which our computers will execute. But the average computer, beyond doing what the print command demands, is not aware of what the word "print" means outside the context of a machine-executable command.

Creating sentient, self-aware artificial intelligence (AI), however, is the Holy Grail of computing. We are intrigued by this notion and have created many fictional characters with artificial intelligence that can understand meaning well enough to hold rational conversations, including the paranoid HAL 9000 (heuristically programmed algorithmic computer) from 2001: A Space Odyssey; the intelligent car KITT (short for "Knight Industries Two Thousand") from Knight Rider; and Data, an android who serves as second officer and chief operations officer in Star Trek, to name just a few. But how to create artificial intelligence in the real world is the question of the day.

In the last issue of CHIPS we looked at how humans use computers as calculators and external memory storage, two functions that do not require true cognitive behavior on the part of the machine. In this issue, we will look at how we might develop systems capable of more than just crunching numbers or storing static bits of information, systems capable of understanding the meaning of what we tell them in addition to merely doing what we command.

To Serve Man

As sometimes happens, my choice of topic was inspired by a visit with Zippy, Zippette, and their now six-year-old twins, Paul and Cassie. They also have one other new family member: a four-foot-tall humanoid robot they call "Alfie."

"It's really cool," Zippy said. "Watch this! Alfie, bring me some tortilla chips and medium salsa."

Alfie ambled into the kitchen and returned about two minutes later with two bowls, one filled with tortilla chips and the other with the appropriate salsa. Of course, for Alfie to pull this off, all the food in the kitchen had to be properly labeled and stored in the correct locations, and all the terms had to be preprogrammed into the robot. But Zippy was correct: it was very cool to have our own robotic butler.

Zippy continued to put Alfie through his paces, and the little robot soon had us supplied with enough drinks and munchies to make it through the first half of the Super Bowl. Things were going well until Zippy spilled some salsa on his shirt and decided to have Alfie clean him up with the command, "Alfie, wipe off my shirt."

Fortunately for Zippy, Alfie was not particularly strong, though the shirt needed several new buttons after the robot tried to do what Zippy told it to do instead of what Zippy meant it to do. Computers do have a tendency to take things literally.

The Meaning of Meaning

The word "semantic" is defined as: "Of or relating to meaning, especially meaning in language." The "Semantic Web" concept is based on the idea that we can evolve our current static, separate stores of language-based information and build a system that approximates how a human mind relates information in memory based on the meaning of its contents.

For a more detailed explanation, here are some excerpts from the World Wide Web Consortium (W3C®) introductory page on the Semantic Web:

“The Semantic Web is about two things. It is about common formats for integration and combination of data drawn from diverse sources, where the original Web mainly concentrated on the interchange of documents. It is also about language for recording how the data relates to real world objects. That allows a person, or a machine, to start off in one database, and then move through an unending set of databases which are connected not by wires but by being about the same thing.”

The ability to relate pieces of information based on meaning is essentially how humans think. The trick is to represent information in a rigid, binary electronic system in a way that mimics what the chemically based biological computers inside our skulls learn to do over many years of trial and error.

Creating the Semantic Web as an evolutionary successor to the World Wide Web will require developing a way to express and access Web content in natural language and in a format that can be read and used by automated tools. This is not a trivial task.

Before we go into specific tools and methods, let’s look at how a Semantic Web would work by asking the question: “When is the next space shuttle launch?”

If you ask that question by typing “when is the next space shuttle launch” into an Internet search engine, the top result should be the NASA shuttle operations page, which will show any current missions and the date of the next shuttle launch. The search engine likely returned that page because it had the closest match to this combination of terms, not because it really understood the question.

But using the Semantic Web, you would receive a simple, direct answer. At the time I wrote this article, the response would have been simply: March 11, 2008. The difference between the two responses is that in the first, the search engine merely found the page that best matched the terms.

A semantic engine, however, would go through each term to find the meaning of the question:

– When: this is a time-based query
– is: state of occurrence
– the: singular event
– next: related to when, means the first occurrence of the singular event after the present
– space: too many meanings, need more information
– shuttle: related to space, “space shuttle” is identifiable as a specific object
– launch: takeoff, as opposed to any other activity

At this point, a semantic engine would read the NASA page, extract only the information relevant to the question and construct and present a meaningful answer to the question, not a page that happens to contain the information. The simple answer would be a date: “March 11, 2008.”
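To make the term-by-term analysis concrete, here is a toy sketch in Python. The meaning checks and the single stored “fact” are hand-built stand-ins, invented for illustration, for what a real semantic engine would draw from shared ontologies and from reading the NASA page itself:

    # A hand-built fact store: (object, event) -> date extracted from
    # the relevant page. A real engine would build this by reading.
    FACTS = {
        ("space shuttle", "launch"): "March 11, 2008",
    }

    def answer(question):
        q = question.lower()
        # "when" plus "next" marks a time-based query for the first
        # occurrence of an event after the present.
        if "when" not in q or "next" not in q:
            return "cannot parse question"
        # Resolve the compound term before its ambiguous parts:
        # "space shuttle" is identifiable as a specific object.
        obj = "space shuttle" if "space shuttle" in q else None
        event = "launch" if "launch" in q else None
        if obj and event:
            return FACTS.get((obj, event), "no data")
        return "cannot parse question"

    print(answer("When is the next space shuttle launch?"))
    # prints: March 11, 2008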

As the date draws closer, a semantic engine that knows weather affects launches could incorporate weather forecasts and offer an opinion on how likely the shuttle is to actually take off on schedule. The more nuanced answer would be the projected date with conditions: “March 11, 2008, weather permitting.”

Humans learn to do this by rote, trial and error, and eventually intuition. Computers, on the other hand, currently have no way of learning other than by human input. One of the greatest barriers to the Semantic Web, therefore, is that we must somehow teach computers what everything means in relation to everything else.

So how would we teach a computer to search the way we do?

Machine Learning

Let’s try another question: “What is today’s special at the five restaurants closest to my office?”

If I were going to answer that question myself, I would go to Google Maps, enter the location of my office, and search for restaurants. Then I would click on the five closest results, one at a time, look for today’s updated menu, search for the term “special” and note the results for each.

To teach that to a computer, I would first have it record and mimic my clicks and keystrokes. Then I would assign meanings to the terms in each step and relate them to each term in the question in much the same way the previous query related the terms “space,” “shuttle” and “launch” to develop meaning.

If I can teach the computer to execute this specific search, the next level is to replace search terms and see if it can find something different, like “soup of the day,” or to change the number or type of restaurants. If the semantic engine can successfully answer the question, “What are today’s dinner specials at the three Italian restaurants closest to my office?” without further training from me, I will have succeeded in teaching it a new skill without having to individually program every variation of every query about restaurants.
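One way to picture that generalization step is a query template whose slots can be swapped without reteaching the whole procedure. This is only a sketch; the slot names and the stubbed search function are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class RestaurantQuery:
        item: str       # e.g., "special" or "soup of the day"
        count: int      # how many restaurants to check
        cuisine: str    # e.g., "any" or "Italian"
        location: str   # anchor point for "closest"

    def run(query):
        # Stand-in for the recorded click/keystroke sequence: find the
        # N closest restaurants, open each menu, look for the item.
        print(f"Searching {query.count} {query.cuisine} restaurants "
              f"near {query.location} for '{query.item}'")

    # The query as originally taught...
    run(RestaurantQuery("special", 5, "any", "my office"))
    # ...and the new question, answered by swapping slot values only.
    run(RestaurantQuery("dinner special", 3, "Italian", "my office"))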

The problem with learning by mechanical or unthinking routine is that it is a slow, literal process that takes a long time to teach each action. Trial and error would not be much better because we would still have to instruct the computer in every new skill.

That leaves intuition. But how do you give a computer intuition?

Structuring Meaning

Human intuition is based on many things, but where language is concerned much of it comes from an extensive knowledge of the meaning of words in relation to other words. The W3C has various projects underway to provide semantic applications with a formal description of concepts, terms and relationships within given knowledge domains. These tools and methods include the Resource Description Framework (RDF), a variety of data interchange formats (RDF/XML, N3, Turtle, N-Triples), and notations such as RDF Schema (RDFS) and Web Ontology Language (OWL).

The key to the Semantic Web will be RDF, according to the W3C.

“RDF is intended for situations in which this information needs to be processed by applications, rather than being only displayed to people. RDF provides a common framework for expressing this information so it can be exchanged between applications without loss of meaning. Since it is a common framework, application designers can leverage the availability of common RDF parsers and processing tools. The ability to exchange information between different applications means that the information may be made available to applications other than those for which it was originally created.”

RDF is based on the idea of identifying things using Web identifiers (called Uniform Resource Identifiers, or URIs), and describing resources in terms of simple properties and property values. This enables RDF to represent simple statements about resources as a graph of nodes and arcs representing the resources, and their properties and values.
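To make that less abstract, here is a minimal sketch of the shuttle example as RDF triples, assuming the free, open source rdflib library for Python; the example.org URIs are invented for illustration, not published identifiers:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/space/")

    g = Graph()
    # Three simple statements: a resource, its properties, their values.
    g.add((EX.STS123, RDF.type, EX.ShuttleMission))
    g.add((EX.STS123, EX.launchDate, Literal("2008-03-11")))
    g.add((EX.STS123, EX.launchSite, EX.KennedySpaceCenter))

    # Any RDF-aware application can now read these statements back.
    print(g.serialize(format="turtle"))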

To explore how this might work in practice, it is time to ask our imaginary semantic engine another question: “Who will play in the next Super Bowl?”

First, answering this question requires a very specialized inference to turn the words “super” and “bowl” into a term that means something entirely different from what either word means literally. However, the computer does get a hint that the term is a compound object because both words are capitalized. Therefore, the term “Super Bowl” would have its own unique URI, as would each team, and football terms like touchdown, interception, field goal, and so on.

Once the semantic engine figures out that we are referring to the annual championship game between the American and National Football Conference champions, the easy part of answering this question is the time factor. There is generally a single, well-publicized date associated with the event. Based on our question, we want the next date this event occurs. From there, the system can work backwards based on the season’s schedule.

Knowing that the game is contested by only two teams at a time also helps narrow down the answer to the two most likely candidates. But which two teams will play?

To answer this question, the system will need access to schedules and results for every team and game during the season, divisional standings, and as much other data about the teams as it can incorporate into its prediction algorithm, including weighting performance at the end of the season more heavily than performance at the beginning.

Once the common information framework from all the various sources is in place, teaching a computer how to weigh the data is no different from teaching a person to weigh data, with the possible exception that the computer is more likely to calculate complex math accurately.
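Here is a minimal sketch of that weighting idea in Python; the game results and the weighting scheme are invented purely for illustration (1 is a win, 0 a loss, listed in week order):

    def weighted_strength(results):
        # Weight week n by n itself, so a week-8 win counts eight
        # times a week-1 win; normalize by the total weight.
        total = sum(week * r for week, r in enumerate(results, start=1))
        return total / sum(range(1, len(results) + 1))

    team_a = [1, 1, 0, 1, 0, 1, 1, 1]  # strong finish
    team_b = [1, 1, 1, 1, 0, 1, 0, 0]  # fades late
    print(weighted_strength(team_a))   # about 0.78
    print(weighted_strength(team_b))   # about 0.44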

However, instead of telling the system where to look, we will want it to find all these sources on its own by looking around the entire Web and selecting relevant sites based on what we have taught it about the NFL and how schedules are constructed, so it can automatically incorporate data we may not have known about when we programmed it.

If it is sophisticated enough, it could also factor in the opinions of various sports pundits, weighted of course by their percentage of correctly predicted games during the season.

The answer the system provides to our question will likely change over time as the season progresses. Before any games are played, the system will have very little pertinent information. As more games are played, the system’s opinion will change, just as a human fan’s opinion changes based on how various teams perform.

Predicting football results is still a fairly limited exercise, and one most football fans run through quickly every week during the season. Also, what we’ve described still requires explicit markup of metadata to teach our system what it needs to know about football to return a realistic result.

Explicit Versus Implicit

All of our sample questions so far have been based on the concept of explicit (unambiguous) semantics, where we determine in advance the meaning of various words and terms for the system. However, to take our semantic engines to a level approximating human thought, we need a way to develop implicit (implied, but not explicitly expressed) semantics, where our system can draw inferences based on prior experience without specific direction.

Implicit semantics are created by people simply by doing what they do. User voting and editing on Wikipedia and similar Web sites are a social version of implicit markup, while autonomic examples of this would be Google’s PageRank™ system or Facebook’s “social graph.” But for the closest thing to autonomic implicit markup, we turn to the world of international news: Reuters and its Calais Web service.

In February 2008, Reuters opened access to the application programming interfaces of its OpenCalais project. Calais allows anyone to send in text-based content (for example, this article, a blog post, a weather report or football statistics) and have Calais analyze it and generate metadata: words representing people, places, facts and events are tagged to increase the document’s search relevance and accessibility.

The short description of this process is: raw text goes in; tagged text full of meaning comes out. Dropping a document into Calais is the equivalent of immersing a person in a social community to absorb meaning and content from that community.
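For the curious, the round trip might look something like the following Python sketch. The endpoint URL and parameter names here are placeholders rather than the documented Calais interface; the real service requires a license key and its own parameters, so check the OpenCalais documentation before trying this:

    import urllib.parse
    import urllib.request

    API_URL = "https://api.example.com/enlighten"  # placeholder endpoint
    API_KEY = "your-license-key-here"              # issued by the service

    def tag_text(text):
        # POST raw text; the service returns RDF with entity tags.
        data = urllib.parse.urlencode({
            "licenseID": API_KEY,  # hypothetical parameter name
            "content": text,
        }).encode("utf-8")
        request = urllib.request.Request(API_URL, data)
        with urllib.request.urlopen(request) as response:
            return response.read().decode("utf-8")

    # Raw text goes in; tagged text full of meaning comes out.
    rdf = tag_text("The shuttle is set to launch from Florida on March 11.")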

If you want to try it yourself at home, gather a large number of documents and run them through Calais. Then explore the tagged documents with a spatial search tool like RDF-Gravity, a free, open source application that reads RDF-formatted documents and returns a graphical representation of the data. You may find the results interesting.
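If you would rather stay at the command line, a few lines of Python (again assuming rdflib, with hypothetical file names) can at least show which resources in the tagged output are the most densely connected, which is roughly what a graphical view clusters on:

    from collections import Counter
    from rdflib import Graph

    g = Graph()
    for path in ["doc1.rdf", "doc2.rdf"]:  # your tagged output files
        g.parse(path)                      # rdflib infers the format

    # Count how many statements each resource appears in; heavily
    # connected nodes are the hubs a visual tool would highlight.
    degree = Counter()
    for s, p, o in g:
        degree[s] += 1
        degree[o] += 1

    for node, n in degree.most_common(10):
        print(n, node)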

Let’s say we have a terabyte of unstructured data on the weapons, units, weather, terrain, and other factors from various battles. If we could abstract information from all this raw data and graphically display it in relation to the actual outcomes of the battles, the results might show some insights applicable to future conflicts.

In Closing

We have really just scratched the surface of the Semantic Web, particularly since we only looked at a few limited text-based applications. Every year we increase the number of devices that are potential sensors for our “global brain,” but we limit ourselves by structuring data only for use within single applications.

Our goal should be the fictional intelligence-gathering system depicted in the movie The Bourne Ultimatum, one that will alert us when someone, somewhere in the world, does the combination of things that answers the question: “Where and when will al Qaeda attack next?”

While no one human could likely survey, analyze, and summarize all the public and private information available from news, transportation, telecommunications, financial, intelligence, and other relevant sources, a truly semantic system might be able to provide an answer. However, until we fully develop semantics-based technologies capable of thinking like we do, we must still rely on humans to extract meaning from data.

Computers cannot think like we do. Yet.

Until next time, Happy Networking!

Long is a retired Air Force communications officer who has written regularly for CHIPS since 1993. He holds a Master of Science degree in Information Resource Management from the Air Force Institute of Technology. He is currently serving as a telecommunications manager in the U.S. Department of Homeland Security.

The views expressed here are solely those of the author, and do not necessarily reflect those of the Department of the Navy, Department of Defense or the United States government.
