We have designed the toolkit to perform various forms of analysis on audio.
So far, we have added the following features:
1. Sentiment Analysis - determining the sentiment of the audio, and how positive or negative it is
2. Named Entity Recognition - identifying named entities, such as the name of a person or a place, mentioned in the audio
3. Emotion Detection - finding the emotion present in the audio or in the text transcribed from the audio
<br>
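The sentiment feature above reports both a polarity label and how positive or negative the audio is. As a minimal sketch of that label-plus-intensity output, here is a toy lexicon-based scorer over a transcript; the word lists and scoring are purely illustrative stand-ins, not the toolkit's actual model:

```python
# Toy lexicon-based sentiment scorer illustrating a "label + intensity" result.
# The word lists below are made up for illustration; the real toolkit uses a
# trained model, not a lexicon.
POSITIVE = {"good", "great", "happy", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "sad", "awful", "hate"}

def sentiment(transcript: str) -> tuple[str, float]:
    words = transcript.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    if total == 0:
        return "neutral", 0.0
    score = (pos - neg) / total  # -1.0 (fully negative) .. +1.0 (fully positive)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return label, score

print(sentiment("the audio was great and i love it"))  # ('positive', 1.0)
```

The score in [-1, 1] mirrors the "how negative or positive" part of the feature description.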
Features in development:
1. Aspect-Based Sentiment Analysis - finding the sentiment of the audio with respect to a given term present in it
2. Keyword Identification - given a set of words, the model tells whether these words, or words similar to them, have been used in the audio
3. Audio Similarity - comparing the contextual similarity between audio files
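Keyword identification as described above amounts to checking each transcript word against the query set under some similarity measure. A rough sketch, using character-bigram cosine similarity as an illustrative stand-in for the toolkit's real (presumably model-based) similarity; the threshold and helper names are assumptions:

```python
from collections import Counter
from math import sqrt

def bigrams(word: str) -> Counter:
    """Character-bigram counts for a word, e.g. 'rain' -> {'ra','ai','in'}."""
    return Counter(word[i:i + 2] for i in range(len(word) - 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_hits(transcript: str, keywords: list[str], threshold: float = 0.6) -> set[str]:
    """Return the keywords that appear, or nearly appear, in the transcript."""
    words = transcript.lower().split()
    return {kw for kw in keywords
            if any(cosine(bigrams(kw), bigrams(w)) >= threshold for w in words)}

# 'rain' matches the inflected form 'raining'; 'snow' matches nothing.
print(keyword_hits("the weather forecast mentioned raining", ["rain", "snow"]))
```

A learned embedding would also catch synonyms ("words similar to the given words"), which surface-form bigrams cannot.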
## ASR Model
For Automatic Speech Recognition we use the DeepSpeech model, which is trained with the CTC loss. We trained the model on the Common Voice dataset made available by the Mozilla Foundation.
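The CTC loss mentioned above lets the model emit one character distribution per audio frame, including a special blank symbol; decoding then collapses consecutive repeats and strips the blanks. A minimal sketch of greedy (best-path) CTC decoding, with made-up frame labels for illustration:

```python
BLANK = "_"  # illustrative choice of CTC blank symbol

def ctc_greedy_decode(frame_labels: list[str]) -> str:
    """Collapse consecutive repeats, then drop blanks (best-path CTC rule)."""
    out = []
    prev = None
    for ch in frame_labels:
        if ch != prev and ch != BLANK:  # keep only the first of each run
            out.append(ch)
        prev = ch
    return "".join(out)

# Frames h,h,e,e,l,_,l,l,o,o collapse to "hello"; note the blank between
# the two l-runs is what preserves the double letter.
print(ctc_greedy_decode(list("hheel_lloo")))  # hello
```

In practice DeepSpeech pairs this with a beam search and a language model rather than pure greedy decoding.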
We are using the DistilBERT model again for this task as well.