Are there any options for fine-tuning for much longer, e.g. a few hours, for better results?
@filip1998220 • 7 months ago
Is there an example of a narration script that produces the best cloning results? Perhaps one that includes all the phonemes?
@GS195 • 7 months ago
I want Prompt To Voice please. Do you know how hard it is to find a voice according to my specifications?
@theentirecircus6623 • 6 months ago
Great video! Once the model is saved, can we run inference locally using the TTS module?
@user-nb9zd2ft7m • 5 months ago
I fine-tuned the model and saved it on my device, but every time I want to use the model I have to provide the speaker_wav from the data used for fine-tuning, and this step (analyzing the recording) takes a long time. So how can I use the model with my own speaker ID and avoid providing the speaker wav?
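One possible workaround, sketched below under the assumption that the fine-tuned files load through Coqui's Xtts class (all file paths are placeholders): compute the conditioning latents from the reference wav once, cache them with torch.save, and reuse them at inference time so the reference audio is not re-analyzed on every run.

```python
# Sketch: precompute and cache XTTS conditioning latents (placeholder paths).
import torch
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("xtts_finetune/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_path="xtts_finetune/model.pth",
                      vocab_path="xtts_finetune/vocab.json", use_deepspeed=False)
model.cuda()

# Analyze the reference recording a single time and save the result.
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
    audio_path=["speaker_ref.wav"]
)
torch.save({"gpt": gpt_cond_latent, "spk": speaker_embedding}, "speaker_latents.pt")

# On later runs, load the cached latents instead of passing speaker_wav again.
cached = torch.load("speaker_latents.pt")
out = model.inference("Hello there!", "en", cached["gpt"], cached["spk"])
# out["wav"] holds the generated 24 kHz audio samples.
```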
@aigaming6310 • 5 months ago
Great notebook & video! Unfortunately, it seems the notebook's Gradio app doesn't support Thai yet. Could you please also guide me on how to do it for another language?
@user-wr2cd1wy3b • 7 months ago
Also, it doesn't seem to run locally after downloading the model.pth, vocab.json and config.json. Do you need to download the Whisper model for it to work locally, or is that just for training? Edit: No, the Whisper model didn't change it. I was desperate and figured maybe it needed to check that you had the training requirements in order to do inference or something, but that didn't do it either. Removing the quotation marks from the paths made it look like it was loading for a moment, but after 4 seconds it just says "error" when loading the fine-tuned model.
@joeyhandles • 7 months ago
prob just drop those files over the xtts model you have installed locally.
@maker_pt • 8 months ago
It seems to work really nicely. But how can I run the model in Coqui, e.g. using the Python API or the TTS server?
@YevgeniyChannel • 7 months ago
Me too.
@danemmer9686 • 6 months ago
For the API, replace the files on your computer with the files you've downloaded. @@YevgeniyChannel
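A rough sketch of that "replace the files" suggestion, assuming the default TTS model cache on Linux (~/.local/share/tts/) and a placeholder folder for the downloaded fine-tuned files; back up the originals first, and adjust the paths for your OS and TTS version:

```python
# Sketch: overwrite the cached xtts_v2 checkpoint with the fine-tuned files.
# The cache location and folder name below are assumptions; check your setup.
import shutil
from pathlib import Path

cache_dir = Path.home() / ".local/share/tts/tts_models--multilingual--multi-dataset--xtts_v2"
finetuned_dir = Path("xtts_finetune")  # placeholder: where model.pth etc. were downloaded

for name in ["model.pth", "config.json", "vocab.json"]:
    shutil.copy(finetuned_dir / name, cache_dir / name)
```

After that, loading the stock model name (e.g. through TTS.api or tts-server) should pick up the fine-tuned weights, since they now sit where the original checkpoint was cached.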
@jakobejensen6765 • 6 months ago
This might sound like a dumb question, but how would you load the fine-tuned model in other Python programs? I know we get config.json, vocab.json and model.pth files after the fine-tuning process, but would we use TTS.api?
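As far as I know, the route Coqui's XTTS docs describe for a fine-tuned checkpoint is the lower-level Xtts class rather than TTS.api; a minimal sketch, with placeholder paths for the exported files:

```python
# Sketch: load the fine-tuned XTTS checkpoint in another Python program.
import torch
import torchaudio
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("xtts_finetune/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_path="xtts_finetune/model.pth",
                      vocab_path="xtts_finetune/vocab.json", use_deepspeed=False)
model.cuda()

out = model.synthesize(
    "Loading the fine-tuned voice from another script.",
    config,
    speaker_wav="speaker_ref.wav",  # short reference clip of the target voice
    language="en",
)
# Write the generated waveform to disk (XTTS outputs 24 kHz audio).
torchaudio.save("output.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000)
```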
@corpse2222 • 4 months ago
Not sure where or how to properly submit a ticket to the Colab creators, but the Colab has been broken for weeks now. I check every couple of days and it's only getting worse, stopping with errors sooner and sooner in the process.
@drewthomasson9492 • 2 days ago
I'm currently working on a fixed Colab version; I'll give you a heads-up if I get it working. It seems we need to force-downgrade all the packages that were changed in recent Google Colab updates.
Guys, I repeated your lesson, but in the files of all the CMS models I've seen there is an index file. How do I create it, or is it not needed?
@torusx8564 • 4 months ago
? wdym
@handcraft.corner • 6 months ago
Is this Fine Tuning for XTTS v1 or v2?
@aurelianobuendia24 • 5 months ago
It would be amazing to know how to load the model in a local environment now that I've trained it.
@sayedyasser2 • 5 months ago
Any idea on this? I've got the fine-tuned model and the JSON files; where do I load them for future use?
@torusx8564 • 4 months ago
Download the model. You can transfer it to your Google Drive, lol. If you couldn't, this video would make no sense. @@sayedyasser2
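For the Google Drive route, a minimal Colab sketch (the source directory below is a placeholder for wherever the notebook wrote the fine-tuned checkpoint):

```python
# Sketch: copy the fine-tuned XTTS files out of the Colab runtime into Google Drive.
import os
import shutil
from google.colab import drive

drive.mount("/content/drive")

src = "/content/finetune_output"               # placeholder: the notebook's output folder
dst = "/content/drive/MyDrive/xtts_finetune"   # destination folder inside your Drive
os.makedirs(dst, exist_ok=True)

for name in ["model.pth", "config.json", "vocab.json"]:
    shutil.copy(os.path.join(src, name), os.path.join(dst, name))
```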
@captainlavenderVHS • 8 months ago
Very, very cool!!!! It doesn't work when the dataset language is set to ja on the 1st tab though; it doesn't seem to be able to populate metadata_eval.csv.
@Gobolinn • 8 months ago
Encountered the same issue; it looks like ja isn't supported yet.
@Otome_chan311 • 6 months ago
@@Gobolinn Disappointing. I've been looking for a good ja->en voice-clone TTS. The best I've found so far is MoeGoe, which ends up sounding a bit weird pacing-wise when doing inference in English (but the sound of the voice is spot on). Every other voice-clone tool I've tried doesn't seem to match the voice at all. I was hoping this would work, but it seems not?
@xiunianwang • 7 months ago
When I ran cell 1, I got this error message:
Building wheel for docopt (setup.py) ... done
ERROR: pip's legacy dependency resolver does not consider dependency conflicts when selecting packages. This behaviour is the source of the following dependency conflicts.
lida 0.0.10 requires fastapi, which is not installed.
lida 0.0.10 requires kaleido, which is not installed.
lida 0.0.10 requires python-multipart, which is not installed.
lida 0.0.10 requires uvicorn, which is not installed.
librosa 0.10.1 requires numpy!=1.22.0,!=1.22.1,!=1.22.2,>=1.20.3, but you'll have numpy 1.22.0 which is incompatible.
plotnine 0.12.4 requires numpy>=1.23.0, but you'll have numpy 1.22.0 which is incompatible.
pywavelets 1.5.0 requires numpy=1.22.4, but you'll have numpy 1.22.0 which is incompatible.
tensorflow 2.15.0 requires numpy=1.23.5, but you'll have numpy 1.22.0 which is incompatible.
gruut 2.2.3 requires networkx=2.5.0, but you'll have networkx 3.2.1 which is incompatible.
@user-ng4fk5hd6m • 8 months ago
I gave up on it. I tried to train it on a 2:30 audio clip that was cleaned properly, and it was still training after 20 minutes on the default settings.
@erogol • 8 months ago
fine-tuning takes time. You need to wait for a bit.
@torusx8564 • 4 months ago
It depends; it takes around 5 minutes with 10 minutes of audio for fine-tuning. Just make sure you use a T4 GPU. @@erogol
@pylotlight • 4 months ago
@@torusx8564 Got kicked out by Google due to some error about free limits, sadly.
@YevgeniyChannel • 7 months ago
I need help please
@torusx8564 • 4 months ago
lol on what
@YevgeniyChannel • 4 months ago
To make effects and AI voices @@torusx8564
@pylotlight • 4 months ago
Got kicked out by Google due to free-tier limits.
@HyperUpscale • 7 months ago
I love Coqui's performance, results, and ease of use, but is it possible to make it even easier? Like: 1, one input file (or microphone input) for training; button 2 to train; and 3, type and speak. I'm not sure why, in the year 2024, we still need to copy and paste text...
@james-hunter-carter • 7 months ago
The thing you are looking at is not meant for end-users, it's for developers.
@BlenderBeanie • 7 months ago
This is currently the peak of the technology, the top of what's available to the public, and it's the very first iteration of the UI too. So in the future it might get easier as more people want to use it. AUTOMATIC1111's UI used to be bare-bones and hard to use, for example, but now it's become a lot more user-friendly. Just a few months ago all of this was pure command line.
@HyperUpscale • 7 months ago
Maybe you just found out about it 😄 People are already making money from this same "peak technology". I paid for this type of peak technology months ago, and months in the age of AI means a long time ago :)
@BlenderBeanie • 7 months ago
@@HyperUpscale Perhaps I should have clarified myself more. I meant the peak open-source versions that are accessible to everyone for free. If you take the older models from Coqui, for example, just a few months ago it would have taken you many hours to train a proper model that works well in any respect. A year ago, even just a basic voice was considered a big step for open-source AI technology. I'm well aware that AI voices have been in use for many, many years now; however, technology of this caliber was not yet accessible to the everyday user for free, only through paid alternatives.