Fact Extraction from Wikipedia Text
The DBpedia Extraction Framework is mature when dealing with Wikipedia's semi-structured content, such as infoboxes, links, and categories.
However, unstructured content (typically free text) plays the most crucial role, given the amount of knowledge it can deliver, yet few efforts have been made to extract structured data from it.
For instance, given the Germany Football Team article, we want to extract a set of meaningful facts and structure them as machine-readable statements.
The following sentence:
In Euro 1992, Germany reached the final, but lost 0–2 to Denmark
would produce statements like:
<Germany, defeat, Denmark>
<defeat, score, 0–2>
<defeat, winner, Denmark>
<defeat, competition, Euro 1992>
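To make the target representation concrete, here is a minimal Python sketch of how a classified frame could be reified into such statements. The `Frame` class and `to_statements` function are illustrative assumptions, not part of this codebase:

```python
# Hypothetical sketch: reify a classified frame into statements like the
# ones above (one core triple, plus one triple per frame element).
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A frame evoked by a verb, e.g. 'lost' evoking a 'defeat' frame."""
    label: str                                    # frame label, e.g. 'defeat'
    subject: str                                  # main entity, e.g. 'Germany'
    object: str                                   # secondary entity, e.g. 'Denmark'
    elements: dict = field(default_factory=dict)  # frame elements (role -> value)

def to_statements(frame):
    """Emit the core triple plus one triple per frame element."""
    statements = [(frame.subject, frame.label, frame.object)]
    statements += [(frame.label, role, value)
                   for role, value in frame.elements.items()]
    return statements

frame = Frame('defeat', 'Germany', 'Denmark',
              {'score': '0–2', 'winner': 'Denmark', 'competition': 'Euro 1992'})
for s, p, o in to_statements(frame):
    print('<%s, %s, %s>' % (s, p, o))
```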
Have you introduced yourself on the DBpedia GSoC mailing list? If not, please do so!
INPUT = Wikipedia corpus (e.g., the latest Italian chapter)
- Verb Extraction
    - Extract raw text from the Wikipedia dump, forked from this repo
    - Extract a sub-corpus based on Wiki IDs
    - Extract the verbs
- Verb Ranking
    - Build a frequency dictionary of lemmas
    - TF/IDF-based token ranking, using the TF/IDF module forked from this repo
    - Lemma ranking based on the token-to-lemma map (see the ranking sketch after this list)
- Training Set Creation
    - Build the CrowdFlower input spreadsheet
- Frame Classifier Training
    - Translate CrowdFlower results into the training data format
    - Train the classifier (see the libsvm sketch after the dependencies list)
- Frame Extraction
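The ranking step can be pictured with a small, self-contained sketch. This is an illustrative assumption, not the forked TF/IDF module: it computes plain TF/IDF scores per token, then aggregates them per lemma through a token-to-lemma map.

```python
# Illustrative sketch of verb ranking: TF/IDF per token, aggregated per lemma.
import math
from collections import Counter, defaultdict

def tfidf_scores(corpus):
    """Compute a TF/IDF score per token over a tokenized corpus
    (a list of documents, each a list of tokens)."""
    doc_freq = Counter()
    for doc in corpus:
        doc_freq.update(set(doc))
    n_docs = len(corpus)
    scores = defaultdict(float)
    for doc in corpus:
        tf = Counter(doc)
        for token, count in tf.items():
            idf = math.log(n_docs / doc_freq[token])
            scores[token] += (count / len(doc)) * idf
    return scores

def rank_lemmas(corpus, token_to_lemma):
    """Aggregate token scores per lemma and rank lemmas by total score."""
    lemma_scores = defaultdict(float)
    for token, score in tfidf_scores(corpus).items():
        lemma_scores[token_to_lemma.get(token, token)] += score
    return sorted(lemma_scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage: 'perse' and 'vinse' map to the lemmas 'perdere' and 'vincere'
corpus = [['la', 'germania', 'perse', 'la', 'finale'],
          ['la', 'danimarca', 'vinse', 'la', 'finale']]
token_to_lemma = {'perse': 'perdere', 'vinse': 'vincere'}
print(rank_lemmas(corpus, token_to_lemma))
```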
Dependencies:
- TreeTagger
- libsvm
- csplit from GNU coreutils 8.23 or later
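Since libsvm is a dependency, the classifier training step presumably uses its Python bindings. The following is a hedged sketch under that assumption; the labels and features are toy placeholders (feature extraction from the CrowdFlower results is elided), and the import path may be `svmutil` or `libsvm.svmutil` depending on how libsvm is installed.

```python
# Hypothetical sketch of frame classifier training with libsvm's Python bindings.
from libsvm.svmutil import svm_train, svm_predict  # or: from svmutil import ...

# Toy training data: each sentence is a sparse feature vector
# (feature index -> value) with an integer frame label.
labels = [0, 0, 1, 1]                 # e.g., 0 = 'defeat', 1 = 'victory'
features = [{1: 1.0, 2: 0.0},
            {1: 0.9, 2: 0.1},
            {1: 0.1, 2: 1.0},
            {1: 0.0, 2: 0.9}]

# Linear kernel (-t 0), cost parameter C = 1 (-c 1)
model = svm_train(labels, features, '-t 0 -c 1')

# Sanity check on the training data itself
predicted, accuracy, _ = svm_predict(labels, features, model)
print(predicted, accuracy)
```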
See the open issues for tasks to work on.
Committers should follow the standard team development practices:
- Start working on a task
- Branch out of master
- Commit frequently with clear messages
- Make a pull request
Please use 4 spaces (soft tabs) for indentation. Pull requests that do not comply with this will be ignored.
- FrameNet: A Knowledge Base for Natural Language Processing
- Outsourcing FrameNet to the Crowd
- Frame Semantics Annotation Made Easy with DBpedia
The source code is released under the terms of the GNU General Public License, version 2.