Vulcan/MeetingNotes/Aug09 2013
Revision as of 22:41, 9 August 2013

Update

Focusing on system development.
1. Inferencer stub using Jena. The stub takes in axioms and rules and outputs a derivation (a minimal sketch follows below).
2. Tested the stub with axioms and rules that would help us solve the iron nail example.
3. Built a Proposition extractor stub that converts the answer assertions into propositions represented as Open IE 4.0 tuples.
4. Exploring other triplestore/inference systems (OWLIM and Sesame). The Jena API doesn't readily support multiple derivations; ask the Jena community to find out if this is possible.
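A minimal sketch of the kind of Jena setup the inferencer stub could use, assuming the GenericRuleReasoner API. The namespace, resource names, and the single rule are illustrative stand-ins rather than the actual iron nail axioms, and the package names follow current Apache Jena releases (the 2013 releases used the com.hp.hpl.jena namespace).

  import org.apache.jena.rdf.model.*;
  import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
  import org.apache.jena.reasoner.rulesys.Rule;

  public class InferencerStub {
      public static void main(String[] args) {
          // Axioms: a tiny model with one illustrative fact (hypothetical URIs).
          String NS = "http://example.org/vulcan#";
          Model axioms = ModelFactory.createDefaultModel();
          Resource nail = axioms.createResource(NS + "IronNail");
          Property madeOf = axioms.createProperty(NS + "madeOf");
          Resource iron = axioms.createResource(NS + "Iron");
          axioms.add(nail, madeOf, iron);

          // One forward rule: anything made of iron conducts electricity.
          String rules =
              "[r1: (?x <" + NS + "madeOf> <" + NS + "Iron>) " +
              " -> (?x <" + NS + "conducts> <" + NS + "Electricity>)]";
          GenericRuleReasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rules));
          reasoner.setDerivationLogging(true);   // keep derivations so we can print them

          InfModel inf = ModelFactory.createInfModel(reasoner, axioms);

          // Print every derived statement together with its derivation trace.
          StmtIterator it = inf.getDeductionsModel().listStatements();
          while (it.hasNext()) {
              Statement s = it.nextStatement();
              System.out.println("Derived: " + s);
              inf.getDerivation(s).forEachRemaining(d -> System.out.println("  via " + d));
          }
      }
  }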
Resource collection
1. Gathered assertions from Peter/Phil. Each assertion corresponds to a single multiple choice answer.
2. Found RDF representations for WordNet and imported them into Jena.
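For reference, reading an RDF serialization of WordNet into Jena amounts to a couple of calls. The file name below is a placeholder for whichever WordNet RDF dump was imported.

  import org.apache.jena.rdf.model.Model;
  import org.apache.jena.rdf.model.ModelFactory;

  public class LoadWordNet {
      public static void main(String[] args) {
          // "wordnet.rdf" is a placeholder path; substitute the actual dump location.
          Model wordnet = ModelFactory.createDefaultModel();
          wordnet.read("file:wordnet.rdf");   // Jena defaults to RDF/XML
          System.out.println("Loaded " + wordnet.size() + " triples");
      }
  }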
Analysis
1. Selected 10 propositions that are single Open IE tuples as starting targets.
2. Started to write down steps involved in verifying these propositions.

Notes

1. Do we really need an inference engine (Jena)?

We need a way to scale down the search space for BLP and MLN. An inference engine is one way to do this.

Discussed the steps involved in answering 5 different questions.

What happens when deductive inference fails?

Approach A:
Identify axioms that are highly "similar" to some node in the backward-chained derivation graph. Add weak entailment rules (axiom -> derivation node) scored using edit distance.
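A rough sketch of the scoring idea in Approach A: compare an axiom against a derivation-graph node with Levenshtein edit distance and turn the distance into a weak-entailment weight. The normalization to [0,1] is our assumption; the notes only say the rules are scored using edit distance, and the class and method names are illustrative.

  public class WeakEntailmentScorer {

      // Standard Levenshtein edit distance between two strings.
      static int editDistance(String a, String b) {
          int[][] d = new int[a.length() + 1][b.length() + 1];
          for (int i = 0; i <= a.length(); i++) d[i][0] = i;
          for (int j = 0; j <= b.length(); j++) d[0][j] = j;
          for (int i = 1; i <= a.length(); i++) {
              for (int j = 1; j <= b.length(); j++) {
                  int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                  d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                     d[i - 1][j - 1] + cost);
              }
          }
          return d[a.length()][b.length()];
      }

      // Map an (axiom, derivation node) pair to a similarity in [0,1];
      // higher means the weak rule axiom -> node gets more weight.
      static double score(String axiom, String node) {
          int maxLen = Math.max(axiom.length(), node.length());
          return maxLen == 0 ? 1.0 : 1.0 - (double) editDistance(axiom, node) / maxLen;
      }

      public static void main(String[] args) {
          String axiom = "sense of smell helps animals find food";
          String node  = "sense of smell helps a fox find food";
          System.out.printf("score = %.2f%n", score(axiom, node));
      }
  }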

Approach B:
Find the answer that is most plausible.

e.g., (x, helps, a fox find food), where x is one of {sense of smell, thick fur, long tail, pointed teeth}

We don't find sentences that directly state that "sense of smell helps fox find food".
However, several sentences say "sense of smell helps animals find food".

"smell helps * find food" returns 7 million hits on Google.
"fur helps * find food" returns no hits.

This is a form of abductive reasoning using linguistically motivated templates.
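A standalone sketch of Approach B under one big assumption: a hitCount(query) function stands in for a web search API (the notes cite Google hit counts but name no specific API), and the counts are hard-coded from the numbers above. Each candidate answer is plugged into the linguistic template and the answer with the most hits is taken as most plausible. All names here are hypothetical.

  import java.util.HashMap;
  import java.util.Map;

  public class PlausibilityRanker {

      // Stand-in for a web hit-count lookup; a real version would call a search API.
      static final Map<String, Long> FAKE_HITS = new HashMap<>();
      static {
          FAKE_HITS.put("\"smell helps * find food\"", 7_000_000L);
          FAKE_HITS.put("\"fur helps * find food\"", 0L);
      }

      static long hitCount(String query) {
          return FAKE_HITS.getOrDefault(query, 0L);
      }

      public static void main(String[] args) {
          // Template from the question: (x, helps, a fox find food).
          String[] answers = {"smell", "fur"};   // abbreviated candidate answers
          String best = null;
          long bestHits = -1;
          for (String x : answers) {
              String query = "\"" + x + " helps * find food\"";
              long hits = hitCount(query);
              System.out.println(query + " -> " + hits + " hits");
              if (hits > bestHits) { bestHits = hits; best = x; }
          }
          System.out.println("Most plausible answer: " + best);
      }
  }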

Implement Approach B as a standalone method for answering questions.

We can use this as part of the larger inference-based solution.


To Do

System building
1. Create a system architecture page with a figure and overview of the main components.
2. Continue system building.
  • Create a derivation scorer stub (a placeholder interface is sketched after this list). This will be replaced with an MLN or a BLP scorer.
  • Test with iron nail example.
3. Jena API doesn't readily support multiple derivations.
  • Ask Jena community to find out if this is possible.
  • See if OWLIM can be used as a replacement.
4. Try out the Tuffy MLN implementation.
  • Use the output of the iron nail example.
  • If it is easy to use, write wrappers around Tuffy to hook into our system.
5. Write evaluation code.
  • Check with Peter.
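For the derivation scorer item above, a minimal placeholder could be an interface plus a constant-scoring implementation that is later swapped for the MLN or BLP scorer. The type names are hypothetical, not existing classes in the system.

  // Hypothetical contract for scoring a candidate derivation (names are placeholders).
  public interface DerivationScorer {
      double score(String derivation);
  }

  // Trivial stub: every derivation gets the same score until the MLN/BLP scorer exists.
  class UniformScorer implements DerivationScorer {
      @Override
      public double score(String derivation) {
          return 1.0;
      }
  }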
Resource Collection
Experiments