Difference between revisions of "Vulcan/MeetingNotes/Aug16 2013"
Revision as of 17:23, 16 August 2013
Update
- Focusing on system development.
- 1. Online inference components implemented.
- Proposition generator
- Evidence finder -- Tuple matching over Open IE Clueweb data.
- MLN Inference -- A wrapper around Tuffy's MLN inferencer.
See System development status for details.
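The three online components above could be wired together as a simple pipeline. The sketch below is purely illustrative: all class and method names are invented placeholders, not the actual system's API, and the tuple matching and scoring are trivial stand-ins for the real Open IE matcher and Tuffy wrapper.

```python
# Illustrative sketch of the online inference pipeline: proposition
# generation -> evidence finding -> MLN-style scoring. All names and
# interfaces are hypothetical placeholders.

class PropositionGenerator:
    def generate(self, assertion):
        # Turn an "arg1 | rel | arg2" assertion into candidate tuples.
        return [tuple(assertion.split(" | "))]

class EvidenceFinder:
    def __init__(self, openie_tuples):
        # openie_tuples: (arg1, rel, arg2) triples from Open IE ClueWeb data.
        self.index = set(openie_tuples)

    def find(self, proposition):
        # Exact tuple match; the real matcher would be much fuzzier.
        return [t for t in self.index if t == proposition]

class MLNInference:
    def score(self, proposition, evidence):
        # Stand-in for the wrapper around Tuffy's MLN inferencer:
        # here, simply whether any supporting evidence was found.
        return 1.0 if evidence else 0.0

def run_pipeline(assertion, openie_tuples):
    props = PropositionGenerator().generate(assertion)
    finder = EvidenceFinder(openie_tuples)
    mln = MLNInference()
    return {p: mln.score(p, finder.find(p)) for p in props}

tuples = [("iron nail", "conducts", "electricity")]
scores = run_pipeline("iron nail | conducts | electricity", tuples)
```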
- 2. Offline axioms and rule generation NOT implemented.
- 3. Experiments and Evaluation.
- Framework: Vulcan has a good evaluation interface setup. We will use this for starters.
- Data: Training/Test splits set up by Vulcan. See details [[]]
- Method: Input sentences that correspond to each assertion. Score assertions using our system and submit to Vulcan's web interface.
- Design questions.
- 1. Why not use MLN directly? Why use a backward-chained inferencer (such as Jena) as an intermediate step?
- Looks like a separate backward-chained inferencer won't be necessary.
- Tuffy, an MLN implementation, does KBMC to scale MLN inference. Details [1]
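As a concrete illustration of the kind of input Tuffy's inferencer takes, a weighted clause in standard MLN clausal syntax might look like the fragment below. The predicates, rule, and weight are invented for this example and are not taken from the actual system.

```
// Illustrative MLN fragment (hypothetical predicates and weight).
// Closed-world evidence predicates are marked with *.
*MadeOf(obj, material)
*ConductsElectricity(material)
Conducts(obj)

// Weighted clause: an object made of a conductive material conducts.
2    !MadeOf(o, m) v !ConductsElectricity(m) v Conducts(o)
```

Because inference grounds only the clauses reachable from the query and evidence (KBMC), the grounded network stays small even when the rule base is large.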
- Analysis
- 1. Selected 10 propositions that are single Open IE tuples as starting targets.
- 2. Wrote down steps involved (http://homes.cs.washington.edu/~niranjan/vulcan/aug09/stepsinvolved.docx) in verifying these propositions.
Agenda
To Do (Copied over from previous week)
- System building
- 1. Implement "template matching" using the ClueWeb corpus. Pending.
- URL for Open IE backend is available.
- For an assertion A, find sentences that have high overlap. Generate regex patterns for the proposition. Score sentences by how well they match the regex patterns.
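The matching-and-scoring step above can be sketched with standard regular expressions. This is a minimal illustration of the idea, not the system's implementation: the pattern construction and scoring formula are invented for the example.

```python
import re

# Sketch of "template matching": for an assertion, build a loose regex
# template from its words and score candidate sentences by word overlap
# plus a bonus for an in-order template match. Purely illustrative.

def make_pattern(assertion):
    # Allow arbitrary text between the assertion's words, in order.
    words = [re.escape(w) for w in assertion.lower().split()]
    return re.compile(r".*?".join(words))

def score_sentence(assertion, sentence):
    words = set(assertion.lower().split())
    found = words & set(re.findall(r"\w+", sentence.lower()))
    overlap = len(found) / len(words)
    bonus = 0.5 if make_pattern(assertion).search(sentence.lower()) else 0.0
    return overlap + bonus

sentences = [
    "An iron nail conducts electricity when part of a circuit.",
    "Copper wires are common in circuits.",
]
assertion = "iron nail conducts electricity"
ranked = sorted(sentences, key=lambda s: score_sentence(assertion, s),
                reverse=True)
```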
- 2. Continue system building.
- Create a derivation scorer stub. This will be replaced with an MLN or a BLP scorer. Done.
- Test with iron nail example.
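The stub-then-replace plan above suggests a pluggable scorer interface. The sketch below is a hypothetical shape for that interface (names and the toy scorers are invented, not the system's code); the stub can later be swapped for an MLN- or BLP-backed scorer without touching callers.

```python
from abc import ABC, abstractmethod

# Hypothetical pluggable interface for derivation scoring. A derivation
# is modeled here as a list of inference steps (strings).

class DerivationScorer(ABC):
    @abstractmethod
    def score(self, derivation):
        """Return a confidence in [0, 1] for a derivation."""

class StubScorer(DerivationScorer):
    def score(self, derivation):
        # Placeholder: uniform confidence regardless of the derivation.
        return 0.5

class LengthPenaltyScorer(DerivationScorer):
    # Toy stand-in for an MLN/BLP scorer: longer derivations score lower.
    def score(self, derivation):
        return 1.0 / (1 + len(derivation))

derivation = ["iron nail is made of iron", "iron conducts electricity"]
stub_score = StubScorer().score(derivation)
```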
- 3. Jena API doesn't readily support multiple derivations.
- Ask Jena community to find out if this is possible. Done. Not possible.
- OWLIM as replacement. Done. Doesn't look promising. No response from community.
- 4. Try out Tuffy MLN implementation. Done.
- Use output of iron nail example
- If easy to use, write wrappers around Tuffy to hook into our system.
- 5. Write evaluation code. Vulcan has a good interface set up.
- Check with Peter.
- 6. Create a system architecture page with a figure and overview of the main components.
- Created a System status page instead.
- Created a figure. Added it to system design document.
- Need to create a wiki page for system architecture and overview.
- Experiments. Pending.
- 1. Run template matching approach as a baseline.
- 2. Run inference system as a baseline.