# My Best Guesses About Wolfram Alpha

Last updated on 19 May 2009

Following the “soft launch” of Wolfram Alpha a few days ago, I’ve been wondering exactly how it works. From all the hype leading up to it, it seemed like an application specialized in answering questions, as opposed to, say, Google, which is more about finding and presenting data, leaving the interpretation to humans. Now that it’s here, we can have a play around and see what it can do.

Firstly, I tried to ask Wolfram Alpha how it worked, and it just gave me a puzzled look. So I can only guess, and guess I shall.

The Wolfram Alpha website lays out what it’s all about: its goal “is to accept completely free-form input, and to serve as a knowledge engine” to answer the queries entered. From the very limited playing I’ve done, it seems pretty cool.

Following my move from Perth to Texas, I typed [Austin to Perth](http://www.wolframalpha.com/input/?i=austin+to+perth) into Wolfram Alpha just to see what it told me. I got back a map with a line drawn between the cities. I got the distance in miles between the two and the flight time. I got a side-by-side comparison of the local times, populations and approximate elevations.

Google, however, delivered what I was actually looking for, sort of. Maybe that’s just a difference in expectations and thinking.^{1}

It’s clear that Wolfram Alpha is pretty cool, and will do some awesome things with the data before it returns it to you, but that doesn’t really help me figure out how it works. My thought is that Wolfram Alpha combines a large database of indexed knowledge, in the form of a knowledge base, with a powerful logical language — the language being Stephen Wolfram’s own Mathematica, naturally. As he describes it, Wolfram Alpha is the “killer-app” for Mathematica.

First, let’s discuss the vast store of data available to Wolfram Alpha, most likely in the form of a knowledge base. A knowledge base is an information representation scheme designed to allow information to be operated on using logic. Information is stored as “sentences” in a knowledge representation language. The knowledge base allows logical agents, in this case the Wolfram Alpha application, to receive queries, then search and analyse the knowledge base, performing logical inference and compounding information before collating and returning the results. The logical agent does the heavy lifting, but the very structure of the knowledge base helps the agent.

The heart of the logical agent is its knowledge representation language. It allows the agent to “reason” through sentences and make inferences in order to derive new facts. Propositional logic and first-order logic are both knowledge representation languages. Knowledge representation languages use syntax and semantics to define knowledge, where the syntax outlines the structure of the language, and the semantics are its meaning.

Logic is a very mathematical concept, and mathematics can be seen as a kind of logic that operates on numbers. Mathematics has a syntax, which defines the infix notation of operators acting on operands, and semantics, which are what those operators do. 1 + 1 = 2 is a sentence in such a language: the syntax defines that the + operator acts on the two 1s, and that the = operator relates that expression to a result; the semantics define that the value of 1 + 1 is 2.
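The syntax/semantics split above can be sketched in a few lines of code. This is purely illustrative — the function and the grammar are invented here, and have nothing to do with how Mathematica actually parses expressions:

```python
def evaluate(expr):
    """Evaluate a tiny infix expression like '1 + 1'.
    The *syntax* is the shape 'operand operator operand';
    the *semantics* are the meanings assigned to each operator below."""
    left, op, right = expr.split()  # syntax: three whitespace-separated tokens
    semantics = {
        "+": lambda a, b: a + b,  # '+' *means* addition
        "-": lambda a, b: a - b,  # '-' *means* subtraction
    }
    return semantics[op](int(left), int(right))

print(evaluate("1 + 1"))  # 2
```

Changing the `semantics` table would change what the same well-formed sentences mean, without touching the syntax at all.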

Mathematica, the heart of Wolfram Alpha, is a computational language, according to its Wikipedia page. It’s built on top of the Symbolic Manipulation Program, designed by Chris A. Cole and Stephen Wolfram. The Symbolic Manipulation Program is a “computer algebra system”, which facilitates symbolic computation, allowing computers to operate on symbols and manipulate expressions instead of operating on their values — which sounds a heck of a lot like something that could be used to create a logic that operates on a knowledge base.

Going back to logical agents and knowledge bases, there are a number of ways logical agents can extract and infer information from knowledge bases. The topic of knowledge-based logical agents is a pretty vast one, and I couldn’t possibly explain it completely in a single post, even if I knew more than just a fraction of it, which I don’t. But I can summarize what I do understand, and how it applies to Wolfram Alpha.

As referred to often in this post so far, inference or *entailment* plays a large part in how a logical agent can extract “new” information from a knowledge base. Inference is deeply rooted in formal logic, and techniques have been developed to allow automated software processes to apply it to a knowledge base.

Logical entailment is the idea that one sentence “logically follows” from another. If a given sentence **a** is true, and the truth of **a** guarantees the truth of sentence **b**, then **b** must be true. Note that the converse does not hold: knowing **b** is true tells us nothing about **a**. This is useful when the knowledge base doesn’t directly contain the fact that **b** is true, but does contain **a** along with the rule that **b** follows from **a**.
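For a small propositional knowledge base, entailment can be checked mechanically by enumerating every model — a minimal sketch, with sentences represented as plain Python predicates over a truth assignment (Wolfram Alpha would of course use something far more sophisticated):

```python
from itertools import product

def entails(kb, query, symbols):
    """kb entails query iff query is true in *every* model where kb is true.
    kb and query are functions from a model (dict symbol -> bool) to bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False  # found a model where kb holds but query fails
    return True

# KB: "a is true" and "a implies b".  The KB entails b...
kb = lambda m: m["a"] and ((not m["a"]) or m["b"])
print(entails(kb, lambda m: m["b"], ["a", "b"]))  # True

# ...but "b is true" alone does NOT entail a (the converse fails):
print(entails(lambda m: m["b"], lambda m: m["a"], ["a", "b"]))  # False
```

This brute-force model checking is exponential in the number of symbols, which is exactly why the search techniques discussed below matter.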

The knowledge base is designed to be a model of reality, and holds sentences intended to describe the world to some level of detail. Sentences can be as atomic as facts, such as “the sky is blue”, or can be logical sentences such as “if the sun is in the sky, it is not night”, which of course has exceptions (such as in certain seasons in the polar regions), which also need to be modeled in the knowledge base.

This is relevant because, with the finite model represented by the knowledge base, there are a finite number of entailments and inferences that can be made, which means a logical agent such as Wolfram Alpha can reduce the world to a finite search space and run a search over it, looking for information that follows from the set of facts in the knowledge base.

Techniques for searching the knowledge base are similar to those used in other AI problems, such as constraint satisfaction problems and touring problems. These algorithms include backtracking and local search methods, such as hill climbing.

When searching this knowledge base, the logical agent needs to abide by a series of rules, known as inference rules, in order to guarantee that the knowledge it entails and infers is valid.

Firstly, the logical agent needs to be aware of the equivalence of two logical sentences. If two sentences are true in exactly the same models, then they are logically equivalent, and one can be substituted for the other to restate facts in a different form and aid inference.

Secondly, the logical agent needs to be aware of the concept of validity. A sentence is valid if it is true in every model, which essentially means it’s a tautology: it is always true. Tautologies are useful because they aid in validating inferences: **a** entails **b** exactly when the sentence “**a** implies **b**” is a tautology, where the implication is false only in the case that **a** is true and **b** is false. This can get a little confusing, and if you can, I’d suggest you read Chapter 7 of AI:AMA.

Thirdly, it needs to check whether a sentence is satisfiable, which means it’s true in at least one model. In search terms, this amounts to finding at least one state in the search space under which the suggested inference holds.
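All three properties — equivalence, validity, and satisfiability — reduce to questions about which models make a sentence true, so they can be sketched with one helper that enumerates models (again a toy, brute-force illustration, not how a real system would scale):

```python
from itertools import product

def models(sentence, symbols):
    """Return every truth assignment over `symbols` that makes `sentence` true."""
    return [dict(zip(symbols, vals))
            for vals in product([True, False], repeat=len(symbols))
            if sentence(dict(zip(symbols, vals)))]

syms = ["a", "b"]
implies = lambda m: (not m["a"]) or m["b"]         # "a implies b"
contrapositive = lambda m: m["b"] or not m["a"]    # "not b implies not a"
tautology = lambda m: m["a"] or not m["a"]         # always true
contradiction = lambda m: m["a"] and not m["a"]    # never true

# Validity: true in every model (all 4 assignments over two symbols).
print(len(models(tautology, syms)))                            # 4
# Satisfiability: true in at least one model; a contradiction has none.
print(models(contradiction, syms))                             # []
# Equivalence: true in exactly the same models.
print(models(implies, syms) == models(contrapositive, syms))   # True
```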

The logical agent and inference/entailment from a knowledge base can be observed in Wolfram Alpha with a fairly simple query: [red + yellow](http://www.wolframalpha.com/input/?i=red+%2B+yellow). In this query, the logical agent would probably hit the knowledge base and tell it that both red and yellow are true, meaning that we want to find information about those. It would then search for all sentences that are entailed, or logical structures that mention red or yellow, such as the sentence “orange is a mix of red and yellow”. It would then tell the knowledge base that this fact had been entailed, marking orange as true. Then, when it finds the sentence “blue is complementary to orange”, it can add that to the record set. Obviously there will be other sentences in the knowledge base, and the agent will have to use a filtering algorithm to determine which sentences and facts are relevant to the query, otherwise it might return results the user is not interested in.
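The chain of entailments described above is essentially forward chaining, which can be sketched in a few lines. The facts and rules here are the hypothetical ones from the paragraph, not anything Wolfram Alpha actually stores:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules (premises -> conclusion) until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # entail a new fact from known ones
                changed = True
    return facts

# Hypothetical sentences for the "red + yellow" query:
rules = [({"red", "yellow"}, "orange"),
         ({"orange"}, "blue is complementary to orange")]
print(forward_chain({"red", "yellow"}, rules))
```

Note how the second rule only fires because the first one derived “orange” — each pass can unlock further entailments, exactly the cascading behaviour described above.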

To successfully parse the search query, it would also need a measure of natural language processing, so that it can convert a human-readable query into a machine-readable one. This, combined with a large, extensive knowledge base, a fast logical language such as Mathematica, and AI techniques for inference and entailment of knowledge, means that Wolfram Alpha can answer a wide range of questions that a simple indexed and categorised search engine couldn’t.
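At its crudest, that human-to-machine conversion might look like pattern matching over the query text. This sketch is entirely made up — the intents, field names, and patterns are invented for illustration, and real natural language processing goes far beyond regular expressions:

```python
import re

def parse_query(text):
    """Very rough sketch: map a free-form query to a structured form
    the logical agent could act on.  Intents here are hypothetical."""
    text = text.strip()
    m = re.match(r"(\w+)\s+to\s+(\w+)$", text, re.IGNORECASE)
    if m:  # e.g. "Austin to Perth" -> compare two places
        return {"intent": "compare_places", "from": m.group(1), "to": m.group(2)}
    m = re.match(r"(\w+)\s*\+\s*(\w+)$", text)
    if m:  # e.g. "red + yellow" -> combine two entities
        return {"intent": "combine", "operands": [m.group(1), m.group(2)]}
    return {"intent": "unknown", "text": text}

print(parse_query("Austin to Perth"))
print(parse_query("red + yellow"))
```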

However, that’s just my best guess. And keep in mind who I am: I’m just a young web developer with an interest in AI. I don’t actively participate in AI research, and I have no formal training. My guesses and musings are probably wildly off the mark, and if anyone has any other ideas, corrections, clarifications or discussions, I’d love to hear them in the comments.


- If you mistype a search query into Wolfram Alpha, curiously it won’t suggest a correction. Considering how relatively easy this task is (just compute the Levenshtein distance to known terms), I find it odd that it doesn’t support this.