

Questions About the Reported Numbers in Mutox #520

Open
ASMIftekhar opened this issue Oct 17, 2024 · 8 comments
Comments

@ASMIftekhar

Hello @avidale,
Thanks a lot for releasing the audio toxicity benchmark. In the paper you reported performance for MuTox and ASR-MuTox, and I am trying to understand their configurations. My understanding is that MuTox takes only audio files as input, while ASR-MuTox takes the Whisper-generated text transcripts as input; is this correct?

@avidale
Contributor

avidale commented Oct 17, 2024

Hi ASMIftekhar!

My understanding is that MuTox takes only audio files as input, while ASR-MuTox takes the Whisper-generated text transcripts as input

Yes, this understanding is correct.

@ASMIftekhar
Author

Thanks for responding quickly. Could you please point to the source code you used for evaluating performance? Specifically, how did you calculate recall at a particular precision? Did you try different threshold values, select the one that gives the best precision, and then use that threshold to calculate recall?

@avidale
Contributor

avidale commented Oct 22, 2024

We didn't release the source code for the evaluation.
But, if I remember the idea correctly (@mfcoria, please correct me if not), we computed all possible precision-recall pairs (e.g. using sklearn's precision_recall_curve) and reported the highest recall value that corresponds to a precision no lower than max(0.3, precision of ETOX on this data part).
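A minimal sketch of the procedure described above, assuming scikit-learn and binary toxicity labels with per-utterance scores (the function name and toy data are illustrative, not from the actual evaluation code):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve


def recall_at_precision(y_true, y_scores, min_precision=0.3):
    """Return the highest recall among all operating points whose
    precision is at least min_precision (0.0 if none qualify)."""
    precision, recall, _ = precision_recall_curve(y_true, y_scores)
    mask = precision >= min_precision
    if not mask.any():
        return 0.0
    return float(recall[mask].max())


# Toy example: 3 toxic (1) and 2 non-toxic (0) utterances.
y_true = [0, 0, 1, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.9]
print(recall_at_precision(y_true, y_scores, min_precision=0.3))  # → 1.0
```

Note that this reports the best recall over *all* thresholds meeting the precision floor, rather than first fixing a single threshold and then measuring recall at it.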

@mfcoria

mfcoria commented Oct 22, 2024

@avidale Yes, that's exactly how we benchmarked it.

Note that most languages use a 0.3 precision threshold, with only spa, eng, and deu using roughly 0.4.

@ASMIftekhar
Author

Thanks a lot for the clarification, it was helpful. I can see you have provided public links to the data; unfortunately, many of the links are no longer available. Would it be possible to share the extracted test/devtest data in some other way? It would be extremely valuable to the community for creating a robust benchmark.

@avidale
Contributor

avidale commented Oct 24, 2024

@ASMIftekhar unfortunately, it is very difficult for us to publish speech data.
However, if you extracted the audio files from the links that are still alive and published them elsewhere (e.g. as a Hugging Face dataset), we would be grateful for such a contribution and would include a reference to your dataset in this repo.

@ASMIftekhar
Author

Thanks, will try to do that.

@ASMIftekhar
Author

ASMIftekhar commented Oct 30, 2024

Sorry for reopening the issue, but are the transcripts provided in the TSV file human-annotated? @avidale

@ASMIftekhar ASMIftekhar reopened this Oct 30, 2024