
Loki - Live Portrait - NEW TALKING FACES in ComfyUI !

  3,539 views

FiveBelowFiveUK


1 day ago

Comments: 34
@dadekennedy9712 · a month ago
So good!
@ArrowKnow · a month ago
Thank you for this! I was playing with the default workflow from LivePortrait but your workflow fixed all of the issues I was having with it. Perfect timing. Love it
@FiveBelowFiveUK · a month ago
Glad it helped! The credit goes to the author, as we used his nodes to fix the frame rate :) Thanks so much though - this is exactly why I make mildly custom editions for my packs. I just want to share these tools and see what everyone can do!
@GamingDaveUK · a month ago
Got all excited for this as it looked to be exactly what I was looking for: a way to create an animated avatar reading along to an mp3/wav speech file. Sadly it looks like it's video-to-video. Looks cool, but the search for a way to create a video from a TTS sound file continues lol
@FiveBelowFiveUK · a month ago
We covered that previously: you can use Hedra to do TTS, or use your own TTS with a picture, and it will generate the talking heads as well. In this video we are looking specifically at ComfyUI, where we used Hedra to animate our puppet target character. In the previous deep dive we explored 2D puppet animation with motion-tracked talking heads. I have also recorded myself mimicking the words from an audio file, which can then drive the speaking animation :) - it can work!
@DaveTheAIMad · a month ago
@@FiveBelowFiveUK Just tried Hedra and the result was really good... but it's limited to 30 seconds. Slicing the audio up could work, but I'm likely to have a lot of these to do over time. The more I look into this, the more it seems there is no local solution where you can just feed in an image and a wav/mp3 file and get a resulting video. Hedra did impress me though. I remember years ago using something called "Crazy Talk" that worked well, but you had to mask the avatar, set the face locations yourself, etc., which honestly I would be OK with doing in ComfyUI lol. Every solution either fails (dlib for the DreamTalk node, for example) or needs a video as a driver. It's actually all rather frustrating. Maybe someone will solve it down the line.
@9bo_park · a month ago
How were you able to capture your own movements and include them in the video? I’m curious about how you managed to show your captured video itself in the bottom right corner.
@FiveBelowFiveUK · a month ago
I have never shown how I create my avatar on screen; it is myself, captured using a Google Pixel 5 phone. I have also started using motion tracking with the DJI Osmo Pocket 3, which is excellent for this. The process has been refined from a multi-software Adobe method to a 100% in-ComfyUI approach. It used to be left running all night to finish a 1-minute animation, but now I can complete 600 frames in just 200 seconds. The target is 30 FPS, so we are getting closer to, but not yet reaching, live rendering speed. The process is simpler now; originally it involved large sequences of images, with depth/pose passes and a lot of manual rotoscoping. Before, I had to do a lot of editing in Adobe Photoshop, Premiere and After Effects. Now I can just load the video from my cameras into the workflow and it does all the hard work, leaving me with assets to place into the scenes.
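For a rough sense of scale, the figures quoted in the comment above (600 frames rendered in 200 seconds, against a 30 FPS playback target) work out as follows; this is just arithmetic on the numbers as stated:

```python
frames_rendered = 600
elapsed_seconds = 200
playback_fps = 30  # frame rate the finished video plays back at

# Processing throughput: frames rendered per second of wall-clock time.
processing_fps = frames_rendered / elapsed_seconds  # 3.0 frames/second

# Fraction of real-time speed: 1.0 would mean true live rendering.
realtime_fraction = processing_fps / playback_fps  # 0.1, i.e. 10% of real time

print(processing_fps, realtime_fraction)
```

So a 600-frame (20-second) clip renders in just over three minutes rather than overnight, though live 30 FPS rendering would still need another 10x speedup.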
@sejaldatta463 · a month ago
Hey, great video - you mention liquify and using dewarp stabilizers. What nodes would you recommend in ComfyUI to help resolve this?
@FiveBelowFiveUK · a month ago
Unfortunately I may have been unclear: AFAIK there aren't any nodes for that (yet, haha), but I would use Adobe Premiere/After Effects, DaVinci Resolve or some other dedicated video editing software to achieve that kind of post-processing. In previous videos we looked at using rotoscoping and motion tracking with generated 2D assets for webcam-driven puppets and things like that. Recently my efforts have gone into hunting down and building base packs that replace those actions in ComfyUI, eliminating most of the work done with paid software or online services. Short answer: we fixed that in post :)
@adamsmith-lb9zv · 24 days ago
What is this? "Prompt outputs failed validation: Return type mismatch between linked nodes: images, LP OUT != IMAGE" (on the VHS_VideoCombine node).
@FiveBelowFiveUK · 24 days ago
which workflow in the pack is giving this error?
@adamsmith-lb9zv · 23 days ago
@@FiveBelowFiveUK V12
@adamsmith-lb9zv · 23 days ago
@@FiveBelowFiveUK The V12 workflow. The error happens when the LivePortrait node converts and composites the video; updating and re-adding the models and so on still gives the same error.
@FiveBelowFiveUK · 13 days ago
There will be an update to this pack, because we switched the backend to MediaPipe (open source); the old ones used inswapper (a research model). This can happen from time to time when the authors make significant changes to the code. Thanks for letting me know.
@guillaumebieler7055 · a month ago
What kind of hardware are you running this on? It's too much for my A40 Runpod instance 😅
@FiveBelowFiveUK · a month ago
Even my 4090 can bottleneck on the CPU side with more than ~1000 frames in a single batch. This used the video input loader, and the default loads the whole source clip; if you used more than 10-20 seconds at 30 FPS, it might start to struggle even with a nice setup. I split my source clips up and use the workflow like that. Alternatively, with a longer source clip, use a 600-frame cap and set the start-frame skip to 0, 600, 1200, 1800, etc., adding 600 each time; you can then join the results later. I'll include a walkthrough in the next Loki video - it splits the job into parts which are more manageable :)
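The batching scheme described above (a fixed frame cap plus a stepped start-frame skip) can be sketched in a few lines. `batch_ranges` is a hypothetical helper for planning the runs, not a node from the pack:

```python
def batch_ranges(total_frames, cap=600):
    """Split a clip into (skip_first_frames, frame_load_cap) pairs.

    Each pair mirrors the two values you would enter into a ComfyUI
    video-loader node for one run: skip ahead to the batch start, then
    load at most `cap` frames. The rendered batches are joined back
    together afterwards.
    """
    return [(start, min(cap, total_frames - start))
            for start in range(0, total_frames, cap)]

# A 1500-frame clip becomes three runs: 600 + 600 + 300 frames.
print(batch_ranges(1500))  # [(0, 600), (600, 600), (1200, 300)]
```

Keeping each run at or under the cap avoids the CPU-side bottleneck mentioned above, at the cost of a concatenation step at the end.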
@sprinteroptions9490 · a month ago
Great stuff... works well... but the workflow's a lot slower than the standalone when just trying out different photos to sync. It's like it's processing the video again every time? With the demo, animating a new image takes roughly 10 seconds once a video has been processed the first time, whereas the Comfy workflow takes over a minute every time no matter what. Maybe I tripped something? I dunno.
@FiveBelowFiveUK · a month ago
If you used my demo video head, it's quite long. You can set a frame limit and then batch the runs by moving the start frame; I used the default of the whole source clip, which might be hundreds of frames. If you see slowness in general, there is a note about ONNX support and a link to a fix in the LivePortrait GitHub; I believe this relates to the ReActor backend stack, which is similar. With Loki Face Swap you should see almost instant face swapping when using a presaved face model that you loaded.
@adamsmith-lb9zv · a month ago
Hi, can this node only be used on Apple (macOS) devices? The workflow's nodes all connect, but there is an error message about MPS.
@Avalon19511 · a month ago
How did you get one image in the results, mine is split between the source and target?
@FiveBelowFiveUK · a month ago
If you are using the workflow provided (links in description), I have made the changes shown in this video. Those changes were: 1. removed the split view (we want the best resolution for use later); 2. added FPS sync with the source video; 3. connected the audio, so the final video uses the input speech.
@Avalon19511 · a month ago
@@FiveBelowFiveUK All good, just copied yours. Definitely not as smooth as Hedra, but it's a start :)
@Avalon19511 · a month ago
Also, your Video Combine node is different from mine; mine says image, audio, meta_batch, vae. Is it possible to change the connections?
@veltonhix8342 · a month ago
Yes, right click the node and select convert widget to input.
@Avalon19511 · a month ago
@@veltonhix8342 thank you, any thoughts about getting one image in the results?
@FiveBelowFiveUK · a month ago
Download my modified workflow from the description :) it's on Civitai.
@alirezafarahmandnejad6613 · a month ago
Why is the face in my final video covered with a black box?
@FiveBelowFiveUK · a month ago
This would indicate that something did not install correctly in your backend. Check the GitHub page for the node you are using and see if there are any reports from other people; two people have reported this since I launched the video. github.com/Gourieff/comfyui-reactor-node contains good advice if you have problems with insightface (required).
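If you want to sanity-check that backend locally, a minimal sketch is below. It assumes the swap stack rides on insightface and onnxruntime (as the ReActor node does); `check_swap_backend` is a made-up helper name, not part of any node pack:

```python
def check_swap_backend():
    """Report whether the insightface/onnxruntime stack is importable,
    and which ONNX execution providers are available (CPU vs CUDA).
    A CPU-only provider list is a common cause of slow or broken swaps.
    """
    report = []
    try:
        import onnxruntime as ort
        report.append("onnxruntime providers: "
                      + ", ".join(ort.get_available_providers()))
    except ImportError:
        report.append("onnxruntime not installed")
    try:
        import insightface  # noqa: F401  (import check only)
        report.append("insightface import OK")
    except ImportError:
        report.append("insightface not installed")
    return report

for line in check_swap_backend():
    print(line)
```

Run this in the same Python environment ComfyUI uses; a missing package or a providers list without `CUDAExecutionProvider` narrows down where the install went wrong.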
@alirezafarahmandnejad6613 · a month ago
@@FiveBelowFiveUK I don't think it's an insightface issue, because I fixed that beforehand; I don't have issues with results coming out of other flows or nodes that include insightface, only this one. That's weird. I even tried the main flow and user-made ones - same issue.
@alirezafarahmandnejad6613 · a month ago
@@FiveBelowFiveUK Never mind bro, fixed it :) The issue was that I was using the CPU for rendering; changed it to CUDA and now it works fine.
@bugsycline3798 · a month ago
Huh?
@angloland4539 · a month ago
@FiveBelowFiveUK · a month ago
Don't forget to check the latest video! An alternative for talking with motion.