Batch + VAD issue caused by merge_segments #1270
Comments
Hi, and thanks for the investigation. I'm aware that the VAD filter in batch mode is not actually filtering anything; it's only used for segmentation. The correct way to do it is to merge the speech segments only and later correct the timestamps to account for the removed silence, which is the approach currently used in non-batch mode. I already experimented with implementing this but found that it made no difference in WER. It should still be implemented, because my testing is not extensive, and I'm willing to review and accept any PRs that implement it.
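To make the approach described above concrete, here is a minimal sketch. It is not the actual faster-whisper code: the helper names and the segment format (dicts with sample-based `start`/`end`) are assumptions. It concatenates only the speech regions and keeps an offset table so timestamps can be shifted back afterwards.

```python
import numpy as np

def concatenate_speech(audio: np.ndarray, speech_segments):
    """Keep only the VAD speech regions and remember where each came from."""
    pieces, offsets, merged_len = [], [], 0
    for seg in speech_segments:
        pieces.append(audio[seg["start"]:seg["end"]])
        # (position in merged audio, position in original audio), in samples
        offsets.append((merged_len, seg["start"]))
        merged_len += seg["end"] - seg["start"]
    return np.concatenate(pieces), offsets

def restore_sample(pos_in_merged: int, offsets) -> int:
    """Map a sample index in the merged audio back to the original audio."""
    m_start, o_start = offsets[0]
    for m, o in offsets:
        if m <= pos_in_merged:
            m_start, o_start = m, o
        else:
            break
    return o_start + (pos_in_merged - m_start)
```

Decoder timestamps would then be converted to samples, passed through `restore_sample`, and converted back to seconds, so the removed silence no longer shifts them.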
IMO, that's expected behaviour.
It's there to cut the audio into chunks. Without it, performance can be much slower, which would negate the whole idea behind batched mode.
It's also expected that batched mode produces more errors: you sacrifice a bit of transcription quality for speed.
First of all, I appreciate you looking into this. I think I understand now: `merge_segments` is designed to merge smaller chunks after VAD into a smaller number of larger clips (closer to 30 sec) for more efficient parallel processing. So in a perfect world, two changes would improve quality and performance:
I meant it as a whole; the whole point of it is to get chunks. That's why you can't disable VAD in batched mode.
I'm sure it would decrease the quality of the timestamps.
I was investigating an issue where batch mode would miss some phrases at the beginning of the audio. The missing phrases came back in non-batch mode when the VAD filter was enabled.
It comes down to the `merge_segments` function. I'm not 100% sure what it was designed to do (probably merge segments located close to each other if the result still fits within `max_speech_duration_s`, which is set to `chunk_size` for batch mode). But there are several problems with it:

- `edge_padding` equal to `vad_options.speech_pad_ms` seems excessive, because the same padding is already applied just by running the VAD filter itself, so this `edge_padding` will basically double it.
- Segments are merged up to `max_speech_duration_s` needlessly, even when they are not close to each other. This basically invalidates and ignores the VAD results: as soon as 2 segments fit within `max_speech_duration_s` they are merged, and only segments that don't fit are left without changes.

Here are some examples (I added debug statements for batch mode to `transcribe.py`):

-- Good example, the first several segments are left as they are --

--- Bad example, the first 3 segments are merged into 1, effectively ignoring VAD results even though there is 1.5 sec between the 1st and 2nd and 5 sec between the 2nd and 3rd ---
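For illustration only, here is a rough sketch of greedy merging along the lines described above. This is hypothetical code, not the actual `merge_segments` implementation; segment dicts with sample-based `start`/`end` and 16 kHz audio are assumptions. It would produce results like the bad example, because only the combined span is checked against `max_speech_duration_s` and the gaps between segments are never considered.

```python
def greedy_merge(segments, max_speech_duration_s, edge_padding_ms, sampling_rate=16000):
    """Hypothetical greedy merging: pack segments into one clip whenever the
    combined span still fits max_speech_duration_s, regardless of the silence
    between them, then pad the edges on top of the VAD's own speech_pad_ms."""
    max_samples = int(max_speech_duration_s * sampling_rate)
    pad = int(edge_padding_ms / 1000 * sampling_rate)
    merged, current = [], None
    for seg in segments:
        if current is None:
            current = {"start": seg["start"], "end": seg["end"]}
        elif seg["end"] - current["start"] <= max_samples:
            # The gap between segments is ignored: the silence is swallowed into the clip.
            current["end"] = seg["end"]
        else:
            merged.append(current)
            current = {"start": seg["start"], "end": seg["end"]}
    if current is not None:
        merged.append(current)
    # Extra padding even though speech_pad_ms was already applied by the VAD filter.
    return [{"start": max(0, s["start"] - pad), "end": s["end"] + pad} for s in merged]
```

With segments roughly at 0-10 s, 11.5-20 s, and 25-28 s and a 30 s limit, this sketch collapses all three into a single clip, which matches the bad example above.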
Do we even need this function? I tried changing it to a dummy function that returns its input right away, and it seems like that solved my issue with the missing transcription. That's because the same effective audio is processed in this case; the only difference is that batch mode limits the segment size with `chunk_size` and non-batch mode does not.

Update: dropping the function fixed one place but broke another. In all cases, the non-batch method with VAD produced the correct transcript.
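For reference, the pass-through experiment mentioned above could look roughly like this. This is a hypothetical sketch, with the only extra step being to split anything longer than the batch `chunk_size`, since that is the remaining difference from non-batch mode:

```python
def passthrough_merge(segments, chunk_size_s: float, sampling_rate: int = 16000):
    """Return VAD segments essentially unchanged, splitting only those longer than chunk_size."""
    max_samples = int(chunk_size_s * sampling_rate)
    out = []
    for seg in segments:
        start = seg["start"]
        # Split a segment that would not fit into one batched chunk.
        while seg["end"] - start > max_samples:
            out.append({"start": start, "end": start + max_samples})
            start += max_samples
        out.append({"start": start, "end": seg["end"]})
    return out
```

Each speech region then becomes its own clip, much like in non-batch mode, at the cost of less efficient batching.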