The IBM project that this week led to a spectacular demonstration of machine Question Answering (yet another "QA" acronym) has a lot of serious computing science behind it. Much of that technology (although, naturally, not all of its techniques) is lucidly explained in a recent article in AI Magazine.
The overarching principles in DeepQA are massive parallelism, many experts, pervasive confidence estimation, and integration of shallow and deep knowledge.
- Massive parallelism: Exploit massive parallelism in the consideration of multiple interpretations and hypotheses.
- Many experts: Facilitate the integration, application, and contextual evaluation of a wide range of loosely coupled probabilistic question and content analytics.
- Pervasive confidence estimation: No component commits to an answer; all components produce features and associated confidences, scoring different question and content interpretations. An underlying confidence-processing substrate learns how to stack and combine the scores.
- Integrate shallow and deep knowledge: Balance the use of strict semantics and shallow semantics, leveraging many loosely formed ontologies.
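The "pervasive confidence estimation" principle above can be sketched in miniature. The following is a hypothetical illustration, not IBM's actual implementation: the feature names, weights, and candidate answers are all invented, and the real system learns to stack far more scorers than the logistic-regression-style combiner shown here.

```python
import math

def combine_confidences(features, weights, bias):
    """Merge per-scorer confidences into one overall answer score
    via a logistic-regression-style weighted sum (illustrative only)."""
    z = bias + sum(weights[name] * score for name, score in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # squash to a confidence in (0, 1)

# Each candidate answer carries features from independent scorers;
# no single component commits to an answer on its own.
candidates = {
    "Toronto": {"type_match": 0.2, "passage_support": 0.4, "popularity": 0.9},
    "Chicago": {"type_match": 0.9, "passage_support": 0.8, "popularity": 0.7},
}

# Weights a confidence-processing substrate might have learned (made up here).
weights = {"type_match": 2.0, "passage_support": 3.0, "popularity": 0.5}
bias = -3.0

ranked = sorted(
    ((combine_confidences(f, weights, bias), ans) for ans, f in candidates.items()),
    reverse=True,
)
best_conf, best_answer = ranked[0]

# "Buzz in" only when the combined confidence clears a threshold.
if best_conf > 0.5:
    print(best_answer, round(best_conf, 2))
```

The point of the sketch is the architecture, not the numbers: every scorer emits a feature with a confidence, and only the learned combination of all of them decides whether the machine is sure enough to answer.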
An intriguing aspect of IBM's technology is the central role of confidence estimation at every point in question interpretation and answer selection. This probabilistic weighting of competing interpretations of the question and candidate answers allows a machine that does not experience anything, and therefore cannot intuit (i.e., rapidly infer an answer from experience), to nevertheless mimic the intuition that probably allows the best human competitors to buzz in first with a high probability of a correct answer.