This project consists of the following tasks:
- Fine-tune German BERT on legal data.
- Create a minimal front-end that accepts a German sentence and shows its NER analysis.
The entire process of fine-tuning German BERT on legal data is available in `german_bert_ner.ipynb`. The notebook also contains brief explanations wherever necessary.
To run this project on localhost, follow these steps:
- Create a virtual environment using:
  ```
  conda create -n german_bert_ner python=3.9
  ```
- Activate this virtual environment:
  ```
  conda activate german_bert_ner
  ```
- Clone this repo and `cd` into it:
  ```
  git clone https://github.com/harshildarji/German-NER-BERT.git
  cd German-NER-BERT
  ```
- Install the required packages using:
  ```
  pip3 install -r requirements.txt
  ```
- Next, we need three important files: `model.pt`, `tag_values.pkl`, and `tokenizer.pkl`. You can either generate these files by running `german_bert_ner.ipynb` (which takes 45-60 minutes) or download the latest versions from Dropbox using:
  ```
  wget https://www.dropbox.com/s/vos8pqwmlbqe0wf/model.pt
  wget https://www.dropbox.com/s/u2oojgmmprt0a9d/tag_values.pkl
  wget https://www.dropbox.com/s/uj15pab78emefoq/tokenizer.pkl
  ```
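To give a rough idea of what these files are for: `tag_values.pkl` stores the list of NER label strings, and the model predicts an index into that list for each token. The sketch below is purely illustrative (the label names are made up, and the actual loading and decoding code lives in `app.py` and the notebook):

```python
# Illustrative sketch of decoding model output with tag_values.
# The label names below are hypothetical; the real tag set comes from
# the legal NER dataset used in german_bert_ner.ipynb.

def decode_tags(pred_indices, tag_values):
    """Map predicted label indices to tag strings, dropping padding labels."""
    return [tag_values[i] for i in pred_indices if tag_values[i] != "PAD"]

tag_values = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "PAD"]
print(decode_tags([1, 2, 0, 5], tag_values))  # → ['B-PER', 'I-PER', 'O']
```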
- Once the above-mentioned files are generated or downloaded, run `app.py`:
  ```
  python3 app.py
  ```
- Once `app.py` is running successfully, head over to http://localhost:5000/.
- In the provided text area, enter a German legal sentence, for example:
  > 1. Das Bundesarbeitsgericht ist gemäß § 9 Abs. 2 Satz 2 ArbGG iVm. § 201 Abs. 1 Satz 2 GVG für die beabsichtigte Klage gegen den Bund zuständig .
- Final output:
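BERT's WordPiece tokenizer splits long German words into subword pieces, so piece-level predictions have to be merged back to word level before they can be displayed. A minimal sketch of that common post-processing step (the tokens and tags below are made up for illustration; the actual implementation is in the notebook and `app.py`):

```python
def merge_subwords(tokens, tags):
    """Re-join WordPiece tokens ('##'-prefixed continuation pieces) into
    whole words, keeping the tag of each word's first piece."""
    words, word_tags = [], []
    for token, tag in zip(tokens, tags):
        if token.startswith("##") and words:
            words[-1] += token[2:]  # glue continuation piece onto previous word
        else:
            words.append(token)
            word_tags.append(tag)
    return words, word_tags

tokens = ["Bundes", "##arbeits", "##gericht", "ist", "zuständig"]
tags = ["B-INN", "I-INN", "I-INN", "O", "O"]
print(merge_subwords(tokens, tags))
# → (['Bundesarbeitsgericht', 'ist', 'zuständig'], ['B-INN', 'O', 'O'])
```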
Reference:
- Leitner, Elena, Georg Rehm, and Julián Moreno-Schneider. "A Dataset of German Legal Documents for Named Entity Recognition." arXiv preprint arXiv:2003.13016 (2020).