Fine-tuning Whisper for Speech Transcription

15,636 views

Trelis Research

1 day ago

Get Life-time access to the ADVANCED Transcription Repo:
- trelis.com/advanced-transcrip...
Video Resources:
- Dataset: huggingface.co/datasets/Treli...
- Slides: docs.google.com/presentation/...
- Simple Whisper Transcription Notebook: colab.research.google.com/dri...
- Basic fine-tuning notebook: colab.research.google.com/git...
- PEFT Example: colab.research.google.com/dri...
Other links:
➡️ Trelis Resources and Support: Trelis.com/About
Chapters
0:00 Fine-tuning speech-to-text models
0:17 Video Overview
1:39 How to transcribe YouTube videos with Whisper
7:39 How do transcription models work?
20:08 Fine-tuning Whisper with LoRA
43:32 Performance evaluation of fine-tuned Whisper
48:32 Final Tips

Comments: 70
@SiD-hq2fo 5 months ago
I can't thank you enough for the quality content you are providing, please continue to upload such videos!!
@gautammandewalker8935 3 months ago
Great video! You are one of the best teachers I have ever heard.
@AbdennacerAyeb 5 months ago
easy, simple, well organized. Thank you
@anasdavoodtk3160 5 months ago
Great explanation. The drum story! Good work.
@miblish5168 5 months ago
This video really saved my @$$. I had Whisper & Colab running a few months ago, but it broke. Your video and notebooks showed me why, and taught me several new tricks! Keep it up please.
@scifithoughts3611 3 months ago
@Trelis have you considered, instead of fine-tuning, using an LLM to correct the spelling of Whisper output? (Prompt it to fix “my strell” to “mistrell”, etc.)
@scifithoughts3611 3 months ago
Or another alternative is to prompt Whisper with the context and correct spelling of its common transcript mistakes?
@heski6847 6 months ago
great, thx! I needed it.
@master2054 5 months ago
good job!!
@m_tron99 4 months ago
Great video. Can you do one on using WhisperX for diarisation and timestamping?
@dachuandu6539 1 month ago
best explanation ever
@onursarikaya1385 3 months ago
Thank you! It's a great investment :)
@TrelisResearch 3 months ago
you're welcome
@user-kr2ec9sd8u 5 months ago
This video was very instructive, thanks! For my case, I need a model that recognizes items on a list, consisting mainly of medical vocabulary, so a simple Whisper model does not get them. As for the terms and their pronunciation, I will record them at a later point, but are they inserted in the "DatasetDict()" part of the code instead of Hugging Face's "common_voice"? Also, how is the trained model saved and used in a new project? Until now I've only used a simple model = whisper.load_model("small") line in my projects
@TrelisResearch 5 months ago
Your training data will need to be prepared and included in the Hugging Face dataset (like the new dataset I created). To re-use the model, it's easiest to push it to the Hugging Face Hub as I do here, and then you can load it back down using the same loading code I used for the base model. Technically I think it's possible to convert back to the openai format as well and then load it using a code snippet like you did. See here: github.com/openai/whisper/discussions/830#discussioncomment-4652413
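(For reference, a minimal sketch of that round trip, assuming the Hugging Face transformers workflow from the video; the repo name is a placeholder:)

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# ... fine-tune the model here ...

# Push the fine-tuned weights and processor to the Hugging Face Hub
# ("your-username/whisper-small-custom" is a placeholder repo name)
model.push_to_hub("your-username/whisper-small-custom")
processor.push_to_hub("your-username/whisper-small-custom")

# In a new project, load it back exactly like the base model
model = WhisperForConditionalGeneration.from_pretrained("your-username/whisper-small-custom")
processor = WhisperProcessor.from_pretrained("your-username/whisper-small-custom")
```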
@seancarmody1506 2 months ago
Loved the video, I'm wondering if it's possible to do something similar using a vision model. Say, for example, a ResNet trained for a certain task. Do you think it would be possible to train an adapter to allow the LLM to understand the ResNet features? I watched your LLaVA training video but the concept seemed a little different than I expected
@TrelisResearch 2 months ago
I suppose the original ResNet didn't include attention, so that would probably be a disadvantage relative to the transformers used now. But yes, in principle you could attach a ResNet to the inputs of an LLM - I think it would be done something like in my LLaVA / IDEFICS video.
@RustemShaimagambetov 6 months ago
Great video! How much data (rows) do we need to train on to get acceptable results? Are 5-6 rows enough??
@TrelisResearch 6 months ago
Yes, even 5-6 rows can be enough to add knowledge of a few new words. I only had 6 rows. Probably 12 or 18 would have been better here.
@LinkSF1 5 months ago
Do you know if there’s a way to downsample the frequencies? E.g. if I have a 24 kHz sample I want to downsample to 16 kHz, what would be the preferred way of doing this?
@TrelisResearch 5 months ago
Howdy! There's a part towards the middle of this vid where I show how to downsample.
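(For reference, a minimal sketch of resampling with the Hugging Face datasets library; the data path is a placeholder:)

```python
from datasets import load_dataset, Audio

ds = load_dataset("audiofolder", data_dir="./my_audio")  # placeholder audio folder

# Re-cast the audio column so every sample is decoded at 16 kHz on access;
# Whisper's feature extractor expects 16 kHz input.
ds = ds.cast_column("audio", Audio(sampling_rate=16000))
```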
@user-yu8sp2np2x 6 months ago
Recently I faced a situation where I fine-tuned a model on a training set and it returns good results on training set or validation set examples, but when I give it an input it has never seen, it tends to produce contextually irrelevant results. Could you suggest what one should do in such a case? One thing we can do is make our training dataset more extensive, but other than that can we do something else?
@TrelisResearch 6 months ago
Create a separate held-out set using data that is not from your training or validation set (could just be wikitext) and measure the validation loss on that during training. If it is rising quickly, then you are overtraining and need to train for fewer epochs and/or use a lower learning rate.
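(A rough sketch of wiring that into the Hugging Face trainer; model and dataset variables are placeholders defined elsewhere:)

```python
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

training_args = Seq2SeqTrainingArguments(
    output_dir="./checkpoints",
    evaluation_strategy="steps",  # run evaluation periodically during training
    eval_steps=100,               # watch eval_loss: a sustained rise means overtraining
    learning_rate=1e-5,
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

trainer = Seq2SeqTrainer(
    model=model,               # your model, defined elsewhere
    args=training_args,
    train_dataset=train_ds,    # placeholder
    eval_dataset=held_out_ds,  # placeholder: data NOT drawn from train/validation
)
trainer.train()
```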
@nysoc 1 month ago
Thanks for this amazing video. I am trying to tune Whisper to understand slurred speech (e.g. cerebral palsy). Would a small data sample work for that scenario too? Thanks!
@TrelisResearch 1 month ago
Yes, I think so!
@jetpro 4 months ago
Do you know how to export it to ONNX and use it correctly in deployment? Helpful video!
@TrelisResearch 4 months ago
I haven't dug into the ONNX angle, but here's the guide for getting back from huggingface to whisper format, and probably you can go from there? github.com/openai/whisper/discussions/830#discussioncomment-4652413
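(Not covered in the video, but one possible route is Hugging Face Optimum's ONNX Runtime integration; a sketch, untested here, with the model id as a placeholder:)

```python
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

model_id = "your-username/whisper-small-custom"  # placeholder fine-tuned repo

# export=True converts the PyTorch checkpoint to ONNX on the fly
onnx_model = ORTModelForSpeechSeq2Seq.from_pretrained(model_id, export=True)
onnx_model.save_pretrained("./whisper-onnx")
```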
@javadasoodeh 1 month ago
Thank you for your explanation. Imagine I'm going to start training Whisper on a low-resource language. I don't have the entire dataset at first to feed into training. Suppose I do the same as you do (correct the transcription, pair it with the voice, and give it to the model for fine-tuning) several times over: won't the model forget its previous learning, or overfit? By and large, I would like to create a pipeline to collect pairs of voices with their manual transcriptions and then fine-tune the model each time. Could you guide me on what I need to do to work this way?
@TrelisResearch 1 month ago
To avoid forgetting and overfitting you should blend about 5% of original/English-type voice data into your new dataset.
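(A minimal sketch of that blending with the datasets library; dataset names are placeholders, and note that concatenation requires both datasets to share the same columns:)

```python
from datasets import load_dataset, concatenate_datasets

new_ds = load_dataset("your-username/new-language-asr", split="train")  # placeholder
orig_ds = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train")

# Mix in roughly 5% as much original-language data as new data
n_mix = max(1, int(0.05 * len(new_ds)))
mixed = concatenate_datasets([
    new_ds,
    orig_ds.shuffle(seed=42).select(range(n_mix)),
]).shuffle(seed=42)
```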
@Rems766 4 months ago
I'm having trouble fine-tuning the large-v3 model. When I am evaluating, the compute_metrics function does not call the tokenizer method properly and it does not work. Any idea why?
@TrelisResearch 4 months ago
Hmm, that's odd. I haven't trained the large model myself. I assume you tried posting on the GitHub repo? Any joy there? Feel free to share the link if you create an issue.
@imranullah3097 6 months ago
Kindly make a video on the following: HiFi-GAN with a transformer, and multi-modal (text+image).
@TrelisResearch 6 months ago
Thanks, I'll add it to my list. I was already planning on multi-modal some time; it will take me a bit of time before getting to it.
@mrsilver8151 1 month ago
Thanks for your great work. Is there any way to convert the fine-tuned model to run with faster-whisper, or is there another way to fine-tune for faster-whisper?
@TrelisResearch 1 month ago
Yup - see here: github.com/SYSTRAN/faster-whisper/issues/248 and opennmt.net/CTranslate2/python/ctranslate2.converters.TransformersConverter.html. If you try it, could you let me know if it really gives a 4x speed-up on a GPU?
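(A minimal sketch of that conversion via the CTranslate2 converter linked above; the model path and audio file are placeholders, and I haven't benchmarked it:)

```python
import ctranslate2
from faster_whisper import WhisperModel

# Convert the fine-tuned transformers checkpoint to CTranslate2 format
# ("your-username/whisper-small-custom" is a placeholder repo or local path)
converter = ctranslate2.converters.TransformersConverter(
    "your-username/whisper-small-custom",
    copy_files=["tokenizer.json", "preprocessor_config.json"],
)
converter.convert("whisper-small-custom-ct2", quantization="float16")

# Load the converted model with faster-whisper
model = WhisperModel("whisper-small-custom-ct2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3")  # placeholder audio file
```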
@user-xd1ic9qk8d 2 months ago
Good job!! But I'm not finding the checkpoint folders
@TrelisResearch 2 months ago
They'll be generated when you run through the training. Also, you need to set output_dir (the save directory) to somewhere you want the files to go.
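(For reference, a minimal sketch of the relevant training-arguments lines; the path and step counts are placeholders:)

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-checkpoints",  # checkpoint-500, checkpoint-1000, ... appear here
    save_steps=500,                      # write a checkpoint every 500 steps
    save_total_limit=2,                  # keep only the two most recent checkpoints
    per_device_train_batch_size=8,
    max_steps=1000,
)
```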
@PierreDELOM 5 months ago
Very instructive videos. Next one on diarization?
@TrelisResearch 5 months ago
interesting idea, I'll add to my notes
@imranullah3097 5 months ago
For a low-resource language, how do you train and add a tokenizer, and then fine-tune Whisper?
@TrelisResearch 5 months ago
Oooh, yeah, low-resource is going to be tough. The approach probably depends on the language and whether it has close languages. Ideally you want to start with a tokenizer and fine-tuned model for a close language. If you do need to train a tokenizer, you can check this out here: huggingface.co/learn/nlp-course/chapter6/2?fw=pt
@AndrewBawitlung 4 months ago
What should I do when my language is not in the Whisper tokenizer?
@TrelisResearch 4 months ago
Probably imperfect, but maybe you could choose the closest language and then fine-tune from there.
@tariqyahia9039 4 months ago
Question: does the training file have to be in vtt format, or can it be in .txt?
@TrelisResearch 4 months ago
It has to have timestamps, so vtt (or srt, which you can convert to vtt).
@estherchantalamungalaba5295 1 day ago
Hi. I’m fine-tuning Whisper for transcription on a Mac, using Hugging Face Transformers. I can’t seem to figure out how to get the model and the data both passed to the CPU or the GPU. Loved this tutorial on fine-tuning and was able to follow along well until I hit this snag. And there doesn’t seem to be wide enough support on the internet for this specific problem. Can you point me to any communities where I might be able to find help on specifically using Apple machines to fine-tune models? I’d really appreciate the help.
@TrelisResearch 1 day ago
Cheers for the comment! For data processing, you shouldn't need to do anything; it should work fine on a Mac. In principle, I think you could fine-tune the model using transformers on a Mac, but it requires some digging into how to use MPS (the Mac GPU) properly and would require an M1, M2 or M3 Mac (anything else will be very slow). Can I suggest you just run in a free Colab or Kaggle notebook?
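(A minimal sketch of the MPS device selection in PyTorch, assuming a recent PyTorch build with MPS support:)

```python
import torch
from transformers import WhisperForConditionalGeneration

# Pick the Apple-silicon GPU when available, otherwise fall back to CPU
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)

# Every input tensor must be moved to the same device before the forward pass, e.g.:
# batch = {k: v.to(device) for k, v in batch.items()}
```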
@estherchantalamungalaba5295 4 hours ago
@TrelisResearch I’m running an M2 Mac so I will look into MPS a little more. Thank you! Also, I hadn’t considered Kaggle notebooks. That might be a better alternative to Colab, as Colab Pro is not yet available in my region and the free-tier resource allocation is just not good enough. Thanks again!
@TrelisResearch 3 hours ago
@estherchantalamungalaba5295 Great, best of luck
@ASphoton_energy 1 month ago
Thanks so much, great description, but I'm a little confused. At 10'58'', when discussing the breakdown of the frequencies, you point to the blue graph and say: "here you can see it's just an amplitude graph". I'm confused; I thought the red graph in front of 'Time' would have been the amplitude graph?
@TrelisResearch 1 month ago
Yeah, sorry, it's a bit unclear. Indeed, in the 3D graph, the red is amplitude and the blue is frequency.
@ASphoton_energy 24 days ago
@TrelisResearch Thanks for that. I noticed you uploaded a single mp3 file for training/testing. I have 8 hours of mp3 files; will the repo allow for an entire folder of many mp3s for training/testing data?
@TrelisResearch 24 days ago
@ASphoton_energy You can use ChatGPT to tweak the code so that it loops through multiple mp3 files. If you have trouble, you can create a comment in the GitHub repo after buying and I'll add that capability.
@kavins8054 23 days ago
Cool video man!! But during my training the WER goes up to 100 while both training and validation loss decrease. Help me
@TrelisResearch 19 days ago
Hard to say without more details, but often it's because of small errors in the formatting of the timestamps in the training data.
@bryantgoh1888 1 month ago
May I ask why the PEFT example given is different from the one you used in the video?
@TrelisResearch 1 month ago
It’s a more generic example that I then adapted/integrated for this video
@AndrewBawitlung 6 months ago
can u compare it with XLS-R?
@TrelisResearch 6 months ago
Thanks for the tip. It will be a while before I get back to speech, but I have noted that as a topic.
@_loong9906 1 month ago
Great video! But in my checkpoints there's no 'added_tokens.json' or 'config.json' and so on. What's happening? What did I miss?
@TrelisResearch 1 month ago
You mean you are running training but not finding those files in your saved checkpoints, whereas you see them in my video when I do the same?
@simonsu-yz9vo 4 months ago
Is it possible to fine-tune for speech translation?
@TrelisResearch 4 months ago
yes, you just need to format the Q&A for that.
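(A minimal sketch, assuming the Hugging Face processor API: Whisper was pre-trained on X-to-English translation as well as transcription, and the task flag switches the decoder prompt tokens. The source language here is just an example:)

```python
from transformers import WhisperProcessor

# task="translate" makes the decoder emit English text for non-English audio;
# "french" is a placeholder for your source language
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="french", task="translate"
)
```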
@sumitjana7794 4 months ago
I have transcribed text in .srt format, can I train with it??
@TrelisResearch 4 months ago
Yes! And for this script you can just convert srt to vtt losslessly using an online tool.
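(If you'd rather not use an online tool, here is a minimal sketch of the conversion in Python; filenames are placeholders, and it ignores edge cases such as caption text lines that consist only of digits:)

```python
def srt_to_vtt(srt_text: str) -> str:
    """Convert SRT captions to WebVTT: add the WEBVTT header, swap the
    comma decimal separator in timestamps for a dot, drop bare cue numbers."""
    out = ["WEBVTT", ""]
    for line in srt_text.splitlines():
        if line.strip().isdigit():   # SRT cue counter lines
            continue
        if "-->" in line:            # timestamp lines: 00:00:01,000 --> 00:00:04,000
            line = line.replace(",", ".")
        out.append(line)
    return "\n".join(out)

# Placeholder filenames
with open("captions.srt", encoding="utf-8") as f:
    converted = srt_to_vtt(f.read())
with open("captions.vtt", "w", encoding="utf-8") as f:
    f.write(converted)
```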
@sumitjana7794 4 months ago
Thanks a lot @TrelisResearch
@ivor1113 1 month ago
Can you share the code in ADVANCED-transcription?
@TrelisResearch 1 month ago
Howdy! Yes, the code is in the ADVANCED-transcription repo, which you can buy lifetime access to (incl. future updates). If you buy and something is missing, you can create an issue in the repo.
@matbeedotcom 5 months ago
Would DPO theoretically work for more effectively fine-tuning Whisper?
@TrelisResearch 5 months ago
Yeah, DPO could be good for general performance improvement. For adding sounds/words, standard fine-tuning (SFT) is probably best.