This is our Bachelor's graduation thesis from Yildiz Technical University on question answering systems with deep learning. Betül Ön and I collected 610 paragraphs from Turkish Wikipedia and extracted 5,000 question-answer pairs from them. With this dataset we fine-tuned Turkish BERT, ALBERT, and ELECTRA models for the question-answering task, achieving 68% exact match and 81% F1 with BERT, 49% exact match and 68% F1 with ALBERT, and 66% exact match and 82% F1 with ELECTRA. We are thankful to our supervisor Prof. Dr. Banu Diri for her excellent mentorship. Thanks to Bayerische Staatsbibliothek and Loodos for their great pre-trained Turkish models, and to Hugging Face for the Transformers and Tokenizers libraries.
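The exact-match and F1 figures above are the standard extractive-QA metrics popularized by SQuAD. As a reference, here is a minimal sketch of how these two metrics are typically computed per question (this is our reconstruction of the standard definitions, not the repository's own evaluation script):

```python
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    return " ".join(text.split())

def exact_match(prediction: str, ground_truth: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(ground_truth))

def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between the predicted and gold answer spans."""
    pred_tokens = normalize(prediction).split()
    gt_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

Both metrics are averaged over the whole test set; F1 gives partial credit when the predicted span overlaps the gold answer without matching it exactly.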
fzehracetin/turkish-question-answering