Vulcan/SystemPrototype

Revision as of 21:40, 27 August 2013

Overview

The prototype is designed to work on three questions. We want the system to output the following:

  • Score for the input proposition.
  • New facts inferred.
  • Facts and rules used in scoring.
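
As a purely illustrative sketch, the per-proposition output could be packaged roughly as follows; the class and field names are assumptions made for this example, not an existing schema:

<pre>
# Illustrative sketch only: one possible shape for the per-proposition output.
# The class and field names are assumptions, not an existing schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PropositionResult:
    proposition: str        # the input proposition being scored
    score: float            # probability assigned to the proposition
    inferred_facts: List[Tuple[str, float]] = field(default_factory=list)  # new facts with probabilities
    rules_used: List[str] = field(default_factory=list)   # MLN clauses used in scoring
    facts_used: List[str] = field(default_factory=list)   # evidence facts used in scoring
</pre>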

Status

Ran Tuffy on three example questions. It failed on one question.

  • Hand-generated the input evidence for the propositions (one correct and one incorrect) for the three questions.
  • Hand-generated the MLN rules based on Stephen's human-readable rules. The MLN rules can be found here. (A purely illustrative sketch of this setup appears further down this page.)
  • Ran Tuffy to obtain the inference probabilities for the propositions.
  • The system also outputs:
      • All inferred facts along with their probabilities.
      • All rules that are reachable from the query fact, i.e., the clauses in the MLN that are relevant to inferring the query fact.
  • Does it work?
      • Tuffy gets it right for 2 out of 3 questions, i.e., it assigns a higher probability to the correct proposition.
      • Facts inferred through a larger number of steps have a lower score than facts inferred through a smaller number of steps.
  • Why does it fail on the one question?
      • Don't know yet.
      • Both "iron nail" and "plastic cup" get similar weights (iron nail is slightly higher).
      • Based on manual inspection, the plastic cup proposition should not get any score at all. I don't yet understand the scoring well enough to explain this. Will dig in when I come back.
  • What diagnostics do we NOT have?
      • Connections between the clauses in the MLN.
      • A reconstruction/visualization of the MLN network. Working with the Tuffy developers on this.
  • What next?
      • Fix the
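
For concreteness, here is a minimal, hypothetical sketch of the setup described above (hand-written MLN rules plus evidence, run through Tuffy). The predicates, weights, and file names are invented for illustration; the rule follows the clausal format used in Tuffy's bundled examples, and the command line mirrors Tuffy's sample invocation, so the flags should be checked against the Tuffy distribution in use.

<pre>
# Hypothetical sketch only: write a tiny Alchemy-style MLN program plus
# evidence and query files, then call Tuffy for marginal inference.
# Predicates, weights, and paths are invented for illustration; verify the
# command-line flags against the Tuffy manual before relying on them.
import subprocess

mln_program = """*MadeOf(object, material)
*Conducts(material)
CompletesCircuit(object)

2.0  !MadeOf(o, m) v !Conducts(m) v CompletesCircuit(o)
"""

evidence = """MadeOf(IronNail, Iron)
MadeOf(PlasticCup, Plastic)
Conducts(Iron)
"""

query = "CompletesCircuit(o)\n"

for path, text in [("prog.mln", mln_program),
                   ("evidence.db", evidence),
                   ("query.db", query)]:
    with open(path, "w") as f:
        f.write(text)

# Mirrors Tuffy's published sample invocation; adjust the jar path as needed.
subprocess.run(["java", "-jar", "tuffy.jar",
                "-i", "prog.mln", "-e", "evidence.db",
                "-queryFile", "query.db", "-r", "out.txt",
                "-marginal"], check=True)
</pre>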

  • What does this exercise suggest?
      • Need to figure out how the weights on the MLN rules and evidence are used. [I assigned them arbitrarily for this round.]
      • Use predicates with small arity. For example, avoid writing rules that take entire nested tuples as predicates.
      • The only reason we'd need a nested tuple is to compute its score. For now we can compute this from the scores of its components: Score(nested_tuple) = Score(top tuple) * Score(nested). (See the sketch below.)
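
A minimal sketch of that composition rule, assuming the component scores are probabilities in [0, 1]; the function and variable names are illustrative, not part of the prototype:

<pre>
# Illustrative sketch: compose a nested tuple's score from its components,
# assuming each component score is a probability in [0, 1].
def nested_tuple_score(top_tuple_score: float, nested_score: float) -> float:
    # Score(nested_tuple) = Score(top tuple) * Score(nested)
    return top_tuple_score * nested_score

# Example: a top-level tuple scored 0.9 with a nested tuple scored 0.8
# gives a combined score of 0.72.
print(nested_tuple_score(0.9, 0.8))  # 0.72
</pre>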