Elementary “Watson”

IBM's architecture for the 'Watson' QA machine

The IBM project that this week led to a spectacular demonstration of machine Question Answering (yet another "QA" acronym) has a lot of serious computing science behind it. Much of that technology (although none of the techniques, naturally) is lucidly explained in a recent article in AI Magazine.

The overarching principles in DeepQA are massive parallelism, many experts, pervasive confidence estimation, and integration of shallow and deep knowledge.
  • Massive parallelism: Exploit massive parallelism in the consideration of multiple interpretations and hypotheses.
  • Many experts: Facilitate the integration, application, and contextual evaluation of a wide range of loosely coupled probabilistic question and content analytics.
  • Pervasive confidence estimation: No component commits to an answer; all components produce features and associated confidences, scoring different question and content interpretations. An underlying confidence-processing substrate learns how to stack and combine the scores.
  • Integrate shallow and deep knowledge: Balance the use of strict semantics and shallow semantics, leveraging many loosely formed ontologies.
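The "many experts" and "pervasive confidence estimation" principles can be sketched together: each expert scores a candidate answer on one evidence dimension without committing to a final answer, and a learned substrate stacks the scores into a single calibrated confidence. The sketch below uses a simple logistic combination; the scorer names, weights, and candidate answers are illustrative assumptions, not IBM's actual features.

```python
import math

def combine_confidences(feature_scores, weights, bias=0.0):
    """Stack per-expert scores into one combined confidence via a
    logistic model (one simple way such a combination can be learned).
    feature_scores and weights map expert names to values in/near [0, 1]."""
    z = bias + sum(weights[name] * score for name, score in feature_scores.items())
    return 1.0 / (1.0 + math.exp(-z))

# Two competing candidate answers, each scored by several loosely
# coupled analytics (answer-type match, passage support, popularity).
# All numbers are made up for illustration.
candidates = {
    "Toronto": {"type_match": 0.2, "passage_support": 0.6, "popularity": 0.9},
    "Chicago": {"type_match": 0.9, "passage_support": 0.8, "popularity": 0.5},
}
# In DeepQA these weights would be learned offline from training questions.
weights = {"type_match": 2.0, "passage_support": 1.5, "popularity": 0.5}

ranked = sorted(
    ((combine_confidences(f, weights, bias=-2.0), ans)
     for ans, f in candidates.items()),
    reverse=True,
)
best_confidence, best_answer = ranked[0]
```

Note that no single expert decides: a candidate with strong passage support but a poor type match can still lose to one that scores well across the board, which is the point of deferring commitment to the combination layer.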

An intriguing aspect of IBM's technology is the central role of confidence estimation at every point in question interpretation and answer selection. This probabilistic weighting of competing interpretations of the question and candidates for an answer allows a machine that does not experience anything, and therefore cannot intuit (i.e., rapidly infer an answer from experience), to nevertheless mimic the intuition that probably allows the best human competitors to buzz in first with a high probability of a correct answer.
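The buzz-in behavior described above reduces to a decision rule over the final combined confidence: answer only when the expected payoff of doing so is positive. The rule and stake below are an illustrative sketch of that idea, not IBM's actual game strategy.

```python
def should_buzz(confidence, stake=1000):
    """Decide whether to buzz in, given the combined confidence in the
    top-ranked answer. A correct response wins the stake, a wrong one
    loses it, so buzzing pays off only when expected value is positive
    (here, equivalently, when confidence exceeds 0.5)."""
    expected_value = confidence * stake - (1 - confidence) * stake
    return expected_value > 0

# A confident machine buzzes; an uncertain one stays silent.
print(should_buzz(0.85))  # high confidence: buzz
print(should_buzz(0.30))  # low confidence: hold back
```

With asymmetric payoffs (or opponents who punish wrong answers differently), the break-even confidence shifts away from 0.5, which is why calibrated confidences, not just rankings, matter for this kind of decision.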
