Dear authors,
We are trying to solve a similar (although not the same) problem and would like to see if we can use your benchmarks.
However, the link to the dataset from the paper is broken: https://exascale.info/TRank
Is it possible to repair the link or access the dataset elsewhere?
Many thanks in advance,
Natalie
Thanks for your message and your interest in our work. Unfortunately, I was not able to find a copy of the data hosted on that website, but in my own backups I found the ground-truth data that we collected through crowdsourcing. If this is the data you were looking for, you can download it here: https://filesender.aarnet.edu.au/?s=download&token=9c3be2dd-802d-4728-97b9-773cf1b6f471 (link expires in 40 days). The files contain the document ID (where applicable), the entity URI, the type URI, and how many crowd workers voted for each type being relevant for that entity (in the context where it appears, where applicable). If you have any questions, please let us know.
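(For anyone else grabbing these files: below is a minimal parsing sketch based on the column description above. The tab-separated layout, column order, and filename are assumptions on my part, not something the authors confirmed, so adjust to the actual format.)

```python
import csv
from collections import defaultdict

def load_ground_truth(path):
    """Read crowdsourced type-relevance judgments.

    Assumes a tab-separated file with one row per judgment and columns:
    document ID (may be empty), entity URI, type URI, number of worker votes.
    """
    votes = defaultdict(dict)  # (doc_id, entity_uri) -> {type_uri: vote count}
    with open(path, newline="", encoding="utf-8") as f:
        for doc_id, entity_uri, type_uri, n_votes in csv.reader(f, delimiter="\t"):
            votes[(doc_id, entity_uri)][type_uri] = int(n_votes)
    return votes

# Hypothetical filename; rank each entity's candidate types by worker votes.
gt = load_ground_truth("trank_ground_truth.tsv")
for (doc_id, entity_uri), type_votes in gt.items():
    ranked = sorted(type_votes.items(), key=lambda kv: kv[1], reverse=True)
    print(entity_uri, ranked[:3])  # top-3 types for this entity
```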
In the meantime, Philippe has sent me some data, which I believe is the dataset compiled from the raw crowdsourcing results, and he is working on repairing the link. But this additional data is very interesting too!