Vulcan/MeetingNotes/Aug16 2013
Revision as of 18:13, 16 August 2013

Update

System development (See detailed architecture and status)
1. Online inference components implemented (see the sketch after this list).
  • Proposition generator -- Extracts tuples from an input sentence and converts them into a proposition.
  • Evidence finder -- Tuple matching over Open IE ClueWeb data.
  • MLN inference -- A wrapper around Tuffy's MLN inferencer.
2. Offline components -- axiom and rule generation -- NOT implemented.
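
A minimal sketch (Python) of how the three online components chain together. Every class and function name here is a hypothetical stand-in, not the actual component or Tuffy API; the toy extraction and scoring only illustrate the data flow from sentence to proposition to evidence to confidence.

from dataclasses import dataclass

@dataclass
class Proposition:
    arg1: str
    relation: str
    arg2: str

def generate_proposition(sentence: str) -> Proposition:
    # Stand-in for the proposition generator; a real implementation
    # would run an Open IE extractor, not this toy split.
    subj, rel, obj = sentence.rstrip(".").split(" ", 2)
    return Proposition(subj, rel, obj)

def find_evidence(prop: Proposition, corpus) -> list:
    # Stand-in for the evidence finder: match the tuple against
    # Open IE ClueWeb tuples (here, a list of Propositions).
    return [t for t in corpus
            if t.relation == prop.relation and t.arg1 == prop.arg1]

def score_with_mln(prop: Proposition, evidence: list) -> float:
    # Stand-in for the Tuffy wrapper: a real wrapper would write
    # Tuffy input files and parse marginals, not count evidence.
    return min(1.0, len(evidence) / 10.0)

def answer(sentence: str, corpus) -> float:
    prop = generate_proposition(sentence)
    return score_with_mln(prop, find_evidence(prop, corpus))

# Toy usage:
corpus = [Proposition("nail", "conducts", "electricity")]
print(answer("nail conducts electricity", corpus))  # -> 0.1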
Experiments and Evaluation
1. Framework: Vulcan has a good evaluation interface set up. We will use it for starters.
2. Data: Training/Test splits set up by Vulcan.
  1. Training questions = 474
  2. Test questions = 290

The questions cover 4th-12th grade and AP exams. Training data distribution:

Grade        All questions    # Mult. choice, non-diagram
4th grade    249              108
8th grade    476              125
12th grade   446              160
AP           116               81
All          1287             474
3. Method: Take the input sentences that correspond to each assertion, score the assertions using our system, and submit the scores to Vulcan's web interface (see the sketch below).
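
A sketch of that scoring loop under assumed interfaces; score_assertion, sentences_for, and the CSV format are placeholders, since these notes don't specify Vulcan's actual submission format.

import csv

def run_evaluation(assertions, sentences_for, score_assertion,
                   out_path="scores.csv"):
    # assertions: iterable of (assertion_id, assertion_text) pairs.
    # sentences_for(text): yields the input sentences for that assertion.
    # score_assertion(text, sentence): our system's scorer (placeholder).
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["assertion_id", "score"])
        for aid, text in assertions:
            scores = [score_assertion(text, s) for s in sentences_for(text)]
            # Keep the best-supported sentence's score for the assertion.
            writer.writerow([aid, max(scores, default=0.0)])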
Design questions.
1. Why not use MLN directly? Why use a backward-chained inferencer (such as Jena) as an intermediate step?
  • Looks like a separate backward-chained inferencer won't be necessary.
  • Tuffy, an MLN implementation, does KBMC (Knowledge Base Model Construction) to scale MLN inference: it grounds only the part of the network that can affect the query rather than the full domain. Details [1]. A toy sketch of the idea follows below.
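This is not Tuffy's actual algorithm, just a toy illustration of the KBMC idea under a simplifying assumption: the clauses are already ground, so "grounding only what matters" reduces to keeping the clauses transitively connected to the query atom.

from collections import deque

def kbmc_fragment(clauses, query_atom):
    # clauses: list of sets of ground-atom names (weights/signs omitted).
    # Return only the clauses transitively connected to the query atom --
    # the fragment of the network that can influence the query.
    atom_to_clauses = {}
    for i, clause in enumerate(clauses):
        for atom in clause:
            atom_to_clauses.setdefault(atom, []).append(i)

    kept = set()
    seen = {query_atom}
    frontier = deque([query_atom])
    while frontier:
        atom = frontier.popleft()
        for i in atom_to_clauses.get(atom, []):
            if i in kept:
                continue
            kept.add(i)
            for other in clauses[i]:
                if other not in seen:
                    seen.add(other)
                    frontier.append(other)
    return [clauses[i] for i in sorted(kept)]

# The unrelated third clause is dropped; only the query's fragment remains.
clauses = [{"Conducts(nail)", "MadeOf(nail, iron)"},
           {"MadeOf(nail, iron)", "Metal(iron)"},
           {"Orbits(earth, sun)"}]
print(kbmc_fragment(clauses, "Conducts(nail)"))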
Analysis
1. Selected 10 propositions that are single Open IE tuples as starting targets.
2. Wrote down steps involved in verifying these propositions.

Agenda

To Do (Copied over from previous week)

System building
1. Implement "template matching" using the ClueWeb corpus. Pending (see the sketch after this list).
  • URL for Open IE backend is available.
  • For an assertion A, find sentences that have high overlap with A. Generate regex patterns for the proposition and score sentences by how well they match the patterns.
2. Continue system building.
  • Create a derivation scorer stub. This will be replaced with an MLN or a BLP scorer. Done.
  • Test with iron nail example.
3. Jena API doesn't readily support multiple derivations.
  • Ask Jena community to find out if this is possible. Done. Not possible.
  • OWLIM as replacement. Done. Doesn't look promising. No response from community.
4. Try out Tuffy MLN implementation. Done.
  • Use output of iron nail example.
  • If easy to use, write wrappers around Tuffy to hook into our system.
5. Write evaluation code. Vulcan has a good interface set up.
  • Check with Peter.
6. Create a system architecture page with a figure and overview of the main components. Created a System status page instead.
  • Created a figure. Added it to the system design document.
  • Need to create a wiki page for the system architecture and overview.
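
A rough sketch of what the template matching in item 1 could look like. The pattern generation and scoring below are assumptions for illustration, not the implemented matcher: one word-boundary regex per content word, with sentences ranked by the fraction of patterns they match.

import re

def make_patterns(assertion: str) -> list:
    # One word-boundary regex per content word (length > 2) of the assertion.
    words = [w for w in re.findall(r"\w+", assertion.lower()) if len(w) > 2]
    return [re.compile(r"\b" + re.escape(w) + r"\b") for w in words]

def score_sentence(patterns, sentence: str) -> float:
    # Fraction of patterns that match the sentence, in [0.0, 1.0].
    if not patterns:
        return 0.0
    hits = sum(1 for p in patterns if p.search(sentence.lower()))
    return hits / len(patterns)

def best_matches(assertion: str, sentences, k: int = 5):
    # Return the k highest-overlap sentences for the assertion.
    patterns = make_patterns(assertion)
    ranked = sorted(sentences,
                    key=lambda s: score_sentence(patterns, s),
                    reverse=True)
    return ranked[:k]

# Example with the iron nail assertion used elsewhere in these notes.
sents = ["An iron nail conducts electricity.", "The sun is a star."]
print(best_matches("iron nail conducts electricity", sents, k=1))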
Experiments Pending
1. Run template matching approach as a baseline.
2. Run inference system as a baseline.