OpenAI Whisper Speaker Diarization - Transcription with Speaker Names

51,514 views

1littlecoder

1 year ago

High-level overview of what's happening with OpenAI Whisper speaker diarization:
Using OpenAI's Whisper model to separate the audio into segments and generate transcripts.
Then generating a speaker embedding for each segment.
Then using agglomerative clustering on the embeddings to identify the speaker for each segment.
Speaker identification (speaker labelling) is very important for transcribing podcasts and other conversational audio. This code helps you do that.
Dwarkesh's Patel Tweet Announcement - / 1579672641887408129
Colab - colab.research.google.com/dri...
huggingface.co/spaces/dwarkes...
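The clustering stage described above can be sketched in isolation with random vectors standing in for the 192-dimensional speaker embeddings the notebook produces (a minimal sketch, assuming scikit-learn; in the notebook the embeddings come from pyannote over each Whisper segment):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Stand-ins for 192-dim speaker embeddings of 6 transcript segments:
# three drawn near one centre, three near another (i.e. two speakers).
rng = np.random.default_rng(0)
speaker_a = rng.normal(0.0, 0.05, size=(3, 192))
speaker_b = rng.normal(1.0, 0.05, size=(3, 192))
embeddings = np.vstack([speaker_a, speaker_b])

# Cluster into the known number of speakers and label each segment.
clustering = AgglomerativeClustering(n_clusters=2).fit(embeddings)
labels = [f"SPEAKER {i + 1}" for i in clustering.labels_]
print(labels)
```

The first three segments get one label and the last three the other; the real notebook attaches these labels back onto the Whisper segments before writing the transcript.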

Comments: 83
@DwarkeshPatel
@DwarkeshPatel 1 year ago
Thanks so much for making this video and highlighting my code! Really cool to see it's useful to other people!
@integralogic
@integralogic 1 year ago
Thank you!
@kushagragupta149
@kushagragupta149 1 month ago
Why is it down?
@estrangeiroemtodaparte
@estrangeiroemtodaparte 1 year ago
As always, delivering the goods! Thanks 1littlecoder!
@1littlecoder
@1littlecoder 1 year ago
Any time!
@stebe8271
@stebe8271 1 year ago
I was working on a model to do this exact thing as we speak. Thanks for the resource; this will save me lots of time.
@1littlecoder
@1littlecoder 1 year ago
Glad to be helpful.
@serta5727
@serta5727 1 year ago
Whisper is very useful ❤
@1littlecoder
@1littlecoder 1 year ago
It's very useful ☺️
@kmanjunath5609
@kmanjunath5609 9 months ago
I almost did it manually: 1. Created an RTTM file using pyannote. 2. Sliced the full-length audio using the RTTM in/out times for each segment. 3. Ran each slice through Whisper for transcription. It was about 5 times slower. I was thinking hard about how to do it the other way around: first generate the full transcript and then separate the segments. Somehow I found your video and I'm impressed; the AgglomerativeClustering at the end blew my mind. Thanks for sharing the knowledge.
@IWLTFT
@IWLTFT 1 year ago
Hi everyone, thanks 1littlecoder and Dwarkesh, this is fantastic. I managed to get it working, it is helping me immensely, and I am learning a lot. I am struggling with Google Colab, as I always end up with 0 compute units, which causes all sorts of issues, and I am unable to complete the transcriptions (I am processing quite large files, several 1-hour coaching sessions). Does AWS have a better option? And the next question: how easy would it be to port this to an AWS Linux environment, if that is an option?
@serychristianrenaud
@serychristianrenaud 1 year ago
Thanks
@1littlecoder
@1littlecoder 1 year ago
You're welcome
@user-ud7cq3lq4l
@user-ud7cq3lq4l 10 months ago
I started to love it when you used Bruce Wayne's clip.
@rrrila8851
@rrrila8851 1 year ago
It is interesting, although I think it would be much better to auto-detect how many speakers there are and then start the transcription.
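Auto-detecting the speaker count, as suggested above, can be done by sweeping cluster counts and scoring each clustering. A hedged sketch using the silhouette score (the helper `guess_num_speakers` is hypothetical, not from the notebook, and assumes scikit-learn):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def guess_num_speakers(embeddings, max_speakers=8):
    """Pick the cluster count whose clustering has the best silhouette score."""
    best_k, best_score = 2, -1.0
    for k in range(2, min(max_speakers, len(embeddings) - 1) + 1):
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(embeddings)
        score = silhouette_score(embeddings, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# Synthetic embeddings for three well-separated voices, five segments each.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(c, 0.05, size=(5, 192)) for c in (0.0, 1.0, 2.0)])
print(guess_num_speakers(emb))  # picks 3 for these well-separated clusters
```

On real speech the clusters are far noisier than this synthetic example, so the estimate is a starting point rather than a guarantee.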
@SustainaBIT
@SustainaBIT 7 months ago
Thank you so much. By any chance, do you think there could be a way to do all of that in real time, during a call, say? Any ideas on where I could start would be very helpful ❤❤
@MixwellSidechains
@MixwellSidechains 1 year ago
I'm running it locally in a Jupyter notebook, but I can't seem to find an offline model for PreTrainedSpeakerEmbedding.
@IgorGeraskin
@IgorGeraskin 8 months ago
Thank you for sharing your knowledge! Everything works fine, but an error started appearing:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
llmx 0.0.15a0 requires cohere, which is not installed.
llmx 0.0.15a0 requires openai, which is not installed.
How can I fix this?
@JohnHumphrey
@JohnHumphrey 1 year ago
I'm not much of a dev myself, but it seems like it might be simple to add a total speaking time for each speaker. I would love to be able to analyze podcasts to understand how much time the host speaks relative to the guest. In fact, it would be very cool if someone built an app that removed one of the speakers from a conversation and created a separate audio file consisting of only what the remaining speaker(s) said.
@Michallote
@Michallote 10 months ago
It sounds like an interesting project. Becoming a dev is mostly about trying even though you don't know what you are doing, googling things, and working out step by step what each line does, broadly. Progress is gradual! I encourage you to try it yourself 😊
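The per-speaker talk-time idea above falls out of the labelled segments almost for free. A sketch, with hypothetical segment dicts in the shape the notebook produces (start/end in seconds, speaker label from the clustering step):

```python
from collections import defaultdict

def talk_time(segments):
    """Sum (end - start) seconds for each speaker label."""
    totals = defaultdict(float)
    for seg in segments:
        totals[seg["speaker"]] += seg["end"] - seg["start"]
    return dict(totals)

# Hypothetical example segments for a two-person conversation.
segments = [
    {"speaker": "SPEAKER 1", "start": 0.0, "end": 12.5},
    {"speaker": "SPEAKER 2", "start": 12.5, "end": 20.0},
    {"speaker": "SPEAKER 1", "start": 20.0, "end": 31.0},
]
print(talk_time(segments))  # {'SPEAKER 1': 23.5, 'SPEAKER 2': 7.5}
```

Dividing each total by the overall duration gives the host-versus-guest ratio the comment asks about.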
@ChopLabalagun
@ChopLabalagun 9 months ago
The code is DEAD, it no longer works :(
@mjaym30
@mjaym30 1 year ago
Amazing videos!! Keep going! I have a request though: could you please publish a video on customizing GPT-J-6B on Colab using the 8-bit version?
@geoffphillips5293
@geoffphillips5293 9 months ago
None of the ones I've played with cope particularly well with more complicated situations, for instance where one person interrupts another, or where there are three or more people. They can all cope with two very clearly different speakers, but I figure I could do that with old-school techniques like simply averaging the frequency. It's strange, because the speech-to-text itself is enormously clever; it's just surprising that the AI can't distinguish voices well.
@klarinooo
@klarinooo 11 months ago
I'm trying to upload a WAV file of 5 MB and I'm getting "RangeError: Maximum call stack size exceeded". Does this only work for tiny file sizes?
@jesusjim
@jesusjim 7 months ago
Working with large files (1-hour recordings) seems to be a problem. I was hoping I could run this locally without Google Colab.
@vinsmokearifka
@vinsmokearifka 1 year ago
Thank you, sir. Is it only for English?
@user-zw3xh2bo2z
@user-zw3xh2bo2z 11 months ago
Hello sir, I have a small doubt: if we have more than 2 speakers, how should the num_speakers parameter vary?
@gaurav12376
@gaurav12376 10 months ago
The Colab notebook is not accessible. Can you share a new link?
@jhoanlagunapena429
@jhoanlagunapena429 4 months ago
Hi! Thanks a lot for this video! I really liked it and I appreciate what you've shared here. I have a question: what if you are not quite sure about the number of speakers? Sometimes it's not so easy to distinguish one voice from another when there are a lot of people talking (like a focus group). What can I do in that case?
@nealbagai5388
@nealbagai5388 4 months ago
Having this same issue; wondering if it's a more widespread problem or specific to this configuration of pyannote. Thinking about trying NeMo if this poses a problem.
@Sanguen666
@Sanguen666 9 months ago
smart pajeeet!
@cluttercleaners
@cluttercleaners 9 months ago
Is there an updated Hugging Face link?
@dreamypujara3384
@dreamypujara3384 1 year ago
[ERROR] I am getting an assertion error at embeddings[i] = segment_embedding(segment) in this cell:

embeddings = np.zeros(shape=(len(segments), 192))
for i, segment in enumerate(segments):
    embeddings[i] = segment_embedding(segment)
embeddings = np.nan_to_num(embeddings)

I am using a Hindi audio clip and the base model, on Colab with a T4 GPU runtime.
@JoseMorenoofi
@JoseMorenoofi 1 year ago
Change the audio to mono.
@yayaninick
@yayaninick 11 months ago
@@JoseMorenoofi Thanks, it worked
@warrior_1309
@warrior_1309 11 months ago
@@JoseMorenoofi Thanks
@gauthierbayle1508
@gauthierbayle1508 11 months ago
@@JoseMorenoofi Hi! I get the assertion error @dreamypujara3384 mentioned above, and I'd like to know where you made that change from stereo to mono... Thanks :)
@traveltastequest
@traveltastequest 10 months ago
@@JoseMorenoofi Can you please explain how to do that?
@jordandunn4731
@jordandunn4731 1 year ago
On the last cell I get an error: UnicodeEncodeError: 'ascii' codec can't encode characters in position 5-6: ordinal not in range(128). Any ideas what's going wrong and how I can fix it?
@latabletagrafica
@latabletagrafica 1 year ago
Same problem.
@humbertucho2724
@humbertucho2724 1 year ago
Same problem. In my case I am using Spanish; it looks like the problem is with accented characters, e.g. "ó".
@humbertucho2724
@humbertucho2724 1 year ago
I solved the problem by replacing the line f = open("transcript.txt", "w") with f = open("transcript.txt", "w", encoding="utf-8").
@latabletagrafica
@latabletagrafica 1 year ago
@@humbertucho2724 That worked for me, thanks.
@thijsdezeeuw8607
@thijsdezeeuw8607 1 year ago
@@humbertucho2724 Thanks mate!
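For anyone hitting the UnicodeEncodeError in this thread, the fix can be verified in isolation: open the output file with an explicit UTF-8 encoding so accented characters survive the write.

```python
# Writing the transcript with an explicit encoding avoids the
# 'ascii' codec error on accented characters such as "ó".
with open("transcript.txt", "w", encoding="utf-8") as f:
    f.write("SPEAKER 1: transcripción de prueba\n")

# Reading it back with the same encoding round-trips the accents.
with open("transcript.txt", encoding="utf-8") as f:
    print(f.read())
```

The default encoding on some platforms is ASCII or a locale-specific codec, which is why the notebook's original open() call fails on non-English text.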
@kmanjunath5609
@kmanjunath5609 9 months ago
What if a new speaker enters partway through? Does the number of speakers become old + new, or stay the same?
@DestroyaDestroya
@DestroyaDestroya 1 year ago
Has anyone tried this recently? The code no longer works. It looks to me like some dependencies have been upgraded.
@gerhardburau
@gerhardburau 1 year ago
True, the code does not work. Please fix it, I need it.
@frosti7
@frosti7 1 year ago
It doesn't work well (it detects the language as Malay, and it does not offer custom names for speakers). Does anyone have a better working solution?
@cho7official55
@cho7official55 11 days ago
This video is one year old. Is there now an open-source way to do diarization as easily and as well as Krisp's product, and locally?
@ciaran8491
@ciaran8491 11 months ago
How can I use this on my local installation?
@divyanshusingh7893
@divyanshusingh7893 4 months ago
Can we automatically detect the number of speakers in an audio file?
@rehou45
@rehou45 1 year ago
I tried to execute the code on Google Colab, but it buffers for more than an hour and still hasn't executed...
@raghvendra87
@raghvendra87 1 year ago
It does not work anymore.
@DRHASNAINS
@DRHASNAINS 8 months ago
How can I run the same thing in PyCharm? Can anyone guide me?
@yangwang9688
@yangwang9688 1 year ago
Does it still work if the speech contains overlapping speakers?
@1littlecoder
@1littlecoder 1 year ago
I'm not sure if it'd work fine then. I have not checked it.
@Labbsatr1
@Labbsatr1 10 months ago
It is not working anymore
@AyiteB
@AyiteB 6 months ago
This kept crashing at the embeddings section for me, and the Hugging Face link isn't valid anymore.
@sarfxa9974
@sarfxa9974 5 months ago
You just have to add the following right before the return in segment_embedding():

if waveform.shape[0] > 1:
    waveform = waveform.mean(axis=0, keepdims=True)
@user-jf5ru5ow8u
@user-jf5ru5ow8u 5 months ago
Brother, you should have also included the links to the MP3 sites.
@ebramsameh5349
@ebramsameh5349 1 year ago
Why does it give me an error on this line: embeddings[i] = segment_embedding(segment)?
@NielsPrzybilla
@NielsPrzybilla 1 year ago
Same error here:

AssertionError Traceback (most recent call last)
in ()
      1 embeddings = np.zeros(shape=(len(segments), 192))
      2 for i, segment in enumerate(segments):
----> 3     embeddings[i] = segment_embedding(segment)
      4
      5 embeddings = np.nan_to_num(embeddings)

/usr/local/lib/python3.9/dist-packages/pyannote/audio/pipelines/speaker_verification.py in __call__(self, waveforms, masks)
    316
    317     batch_size, num_channels, num_samples = waveforms.shape
--> 318     assert num_channels == 1
    319
    320     waveforms = waveforms.squeeze(dim=1)

AssertionError:

And yes :-) I am a totally new coder.
@geckoofglory1085
@geckoofglory1085 1 year ago
The same thing happened to me.
@Hiroyuki255
@Hiroyuki255 1 year ago
Same here. I noticed it gives this error with larger audio files; smaller ones worked fine. Not sure if that's the reason though.
@Hiroyuki255
@Hiroyuki255 1 year ago
Guess I found the answer! If you convert the initial audio file from stereo to mono, it works fine.
@guillemarmenteras8105
@guillemarmenteras8105 1 year ago
@@Hiroyuki255 How do you do it?
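To answer the stereo-to-mono question above: the assertion fails because pyannote's embedding model expects a single channel, and averaging the channels is the usual fix. A small NumPy-only sketch of that conversion (in the notebook the waveform array would come from the audio loader):

```python
import numpy as np

def to_mono(waveform):
    """Average the channels of a (channels, samples) array down to (1, samples)."""
    if waveform.shape[0] > 1:
        waveform = waveform.mean(axis=0, keepdims=True)
    return waveform

# A tiny hypothetical 2-channel (stereo) waveform, 3 samples long.
stereo = np.array([[0.2, 0.4, 0.6],
                   [0.0, 0.2, 0.4]])
mono = to_mono(stereo)
print(mono.shape)  # (1, 3)
```

Converting the source file itself with ffmpeg (`-ac 1`) achieves the same thing before the audio ever reaches the model.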
@datasciencewithanirudh5405
@datasciencewithanirudh5405 1 year ago
Did anyone else encounter this error?
@olegchernov1329
@olegchernov1329 1 year ago
Can this be done locally?
@mgomez00
@mgomez00 10 months ago
If you have enough GPU power locally, yes. I also assume you have Python and all the required libraries installed.
@JorgeLopez-gw9xc
@JorgeLopez-gw9xc 1 year ago
For this code you use the Whisper model from OpenAI? Why are you not using an API key?
@1littlecoder
@1littlecoder 1 year ago
I'm using it from the model (not the API)
@JorgeLopez-gw9xc
@JorgeLopez-gw9xc 1 year ago
@@1littlecoder So the Whisper model is not the one hosted by OpenAI, then. In my case I am looking for a method based on OpenAI; the reason is to ensure the privacy of the information, since I want to do it with company data. Do you know if that is possible?
@user-nl2ic1kb7v
@user-nl2ic1kb7v 2 months ago
@@JorgeLopez-gw9xc Hi Jorge, I have the same question as you. I would really appreciate it if you have figured out a solution and would be happy to exchange ideas.
@mramilvideo
@mramilvideo 3 months ago
There are no speaker names. How can we identify the speakers?
@abinashnayak8132
@abinashnayak8132 1 year ago
How do I change speaker1 and speaker2 to the actual people's names?
@mllife7921
@mllife7921 1 year ago
You need to store each person's voice embeddings, then compare them (using some similarity measure) and map them to the embeddings generated from the samples.
@AndreAngelantoni
@AndreAngelantoni 1 year ago
Export to text, then do a search and replace.
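The embedding-comparison approach suggested in this thread can be sketched with cosine similarity. This is a hypothetical helper, not from the notebook: the reference embeddings would come from short enrolment clips of each known voice, produced by the same embedding model as the segments (tiny 2-dim vectors stand in here for real 192-dim embeddings):

```python
import numpy as np

def name_speakers(segment_embeddings, reference_embeddings):
    """Assign each segment the name of the closest known voice (cosine similarity)."""
    names, refs = zip(*reference_embeddings.items())
    refs = np.stack(refs)
    refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)  # unit-normalize
    assigned = []
    for emb in segment_embeddings:
        sims = refs @ (emb / np.linalg.norm(emb))  # cosine similarity to each voice
        assigned.append(names[int(np.argmax(sims))])
    return assigned

# Hypothetical enrolment embeddings and two segment embeddings.
refs = {"Alice": np.array([1.0, 0.0]), "Bob": np.array([0.0, 1.0])}
segs = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]
print(name_speakers(segs, refs))  # ['Alice', 'Bob']
```

The search-and-replace approach also works when you already know which generic label (SPEAKER 1, SPEAKER 2) belongs to which person.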
@OttmarFlorez
@OttmarFlorez 1 year ago
Terrible.
@user-kn8zd2qj9x
@user-kn8zd2qj9x 8 months ago
About the embedding issue: here is a replacement for the cell that converts to WAV using ffmpeg; I have included a conversion to mono. This fixes the issue with no problems. Simply replace the cell that converted to .wav with the following:

# Convert to WAV format if not already in WAV
if path[-3:] != 'wav':
    subprocess.call(['ffmpeg', '-i', path, 'audio.wav', '-y'])
    path = 'audio.wav'

# Convert to mono
subprocess.call(['ffmpeg', '-i', path, '-ac', '1', 'audio_mono.wav', '-y'])
path = 'audio_mono.wav'