
ComfyUI - Live Portrait | Animate Character Face (Video to Video)

  7,167 views

CG TOP TIPS

A day ago

Comments: 55
@WiLDeveD · a month ago
Very useful tutorial, thanks.
@CgTopTips · a month ago
I am glad it was useful.
@gregosfr · a month ago
Great video and workflows. Please don't forget YouTube subtitles.
@CgTopTips · a month ago
Sure, soon the videos will have voiceovers to make them easier to understand :)
@yuzhang9052 · 4 days ago
Thanks, great tutorial. I've got it now!
@thehanspoon · a month ago
Thank you
@davimak4671 · a month ago
Thanks, bro
@ingeniosoleonalmeida5877 · a month ago
Very nice tutorials. Real Time Camera to AI in ComfyUI.
@DeMaddin81 · a month ago
Hi. I had a look at the temporary files and noticed something: with your method, it seems that for EVERY frame of the source video, a complete pass over the facial expressions of the target video is made. That means at 30 fps, for 1 second of result, 30 x 30 = 900 images are generated (at 30 fps of the source video), but only 30 images are needed for the result; the other 870 are discarded. I understand that comparison images are needed for consistent movement, but wouldn't 5 comparison images be enough instead of the full second? If I render 10 seconds of video at 30 fps, the result for 300 output frames is 10 * 30 * 30 = 9,000 generated images. That's crazy and takes a lot of time. Do you have a solution for this?
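The commenter's estimate can be sketched as a quick calculation. Note the "one full driving pass per source frame" behavior is their observation from the temporary files, not something the video confirms:

```python
# Sketch of the frame-count estimate above. The assumption (from the
# commenter's observation) is that every source frame triggers a full
# pass over the driving video.

def frames_generated(duration_s: int, source_fps: int, driving_fps: int) -> int:
    """Images produced if each source frame triggers a full driving-video pass."""
    return duration_s * source_fps * driving_fps

def frames_kept(duration_s: int, fps: int) -> int:
    """Images actually needed for the final result."""
    return duration_s * fps

# 10 seconds, both videos at 30 fps:
print(frames_generated(10, 30, 30))  # 9000 generated
print(frames_kept(10, 30))           # only 300 kept
```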
@DarienLingstuyl · 21 days ago
I am trying LivePortrait Video in ComfyUI on Pinokio. There is one thing I don't know how to fix: if the original video has expressions like smiling, and the driving video smiles too, I get a really weird "joker" smile, as if the expressions from the original and the driving video are doubled. Is there a way to fix that?
@CgTopTips · 20 days ago
You need to pay attention to two things: 1. The facial expression in the first frame of both videos should be the same; for example, if the mouth is closed in one video, it should also be closed in the other, and so on. 2. Live Portrait works better with source videos where the face doesn't change much.
@DarienLingstuyl · 20 days ago
@@CgTopTips Thanks! I knew the first frame was important and I read that it has to be a standard expression, but the target video started smiling, so I started the driving video smiling and it got better. Only when the target smiles do I still get a slightly exaggerated smile. But thanks for your suggestion!
@daja74 · a month ago
Great work. One difference I see in my ComfyUI setup is that each frame requires the whole Live Portrait step to complete, so for a 50-frame video, the workflow will run the Live Portrait step 50 times. Is that correct? In the YouTube video it looks like only one Live Portrait step completes the whole video render, and I want to make sure I haven't messed something up. I did need to install your version of Live Portrait and MixLab directly from your Git repos rather than through the ComfyUI Manager.
@selenegarcia6321 · 13 days ago
Error occurred when executing LivePortraitProcess: LivePortraitProcess.process() missing 1 required positional argument: 'crop_info'. Can anybody help, please?
@LeePreston-t1d · a month ago
Anyone know if this can work on a Mac M3? I am currently receiving this error, if anyone knows how to help:

Error occurred when executing LivePortraitVideoNode: Torch not compiled with CUDA enabled
File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 65, in map_node_over_list
  results.append(getattr(obj, func)(**input_data_all))
File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/live_portrait.py", line 468, in run
  live_portrait_pipeline = LivePortraitPipeline(
File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/live_portrait_pipeline.py", line 67, in __init__
  self.live_portrait_wrapper: LivePortraitWrapper = LivePortraitWrapper(cfg=inference_cfg)
File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/live_portrait_wrapper.py", line 29, in __init__
  self.appearance_feature_extractor = load_model(cfg.checkpoint_F, model_config, cfg.device_id, 'appearance_feature_extractor')
File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/utils/helper.py", line 99, in load_model
  model = AppearanceFeatureExtractor(**model_params).cuda(device)
File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 915, in cuda
  return self._apply(lambda t: t.cuda(device))
File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 779, in _apply
  module._apply(fn)
File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 804, in _apply
  param_applied = fn(param)
File "/opt/homebrew/lib/python3.11/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
  raise AssertionError("Torch not compiled with CUDA enabled")
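The "Torch not compiled with CUDA enabled" assertion means the node calls `.cuda()` unconditionally (visible in the `helper.py` frame above), which can never work on Apple Silicon because Macs have no CUDA. A patched node would route tensors to Apple's `mps` backend or the CPU instead. A minimal device-selection sketch; the two availability flags stand in for `torch.cuda.is_available()` and `torch.backends.mps.is_available()` so the sketch runs without torch installed:

```python
# Sketch of a device fallback for Apple Silicon. In a real patch, the two
# flags would come from torch.cuda.is_available() and
# torch.backends.mps.is_available(); they are plain booleans here so this
# stands alone without torch.

def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA, then Apple's Metal (mps) backend, then CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# On an M3 Mac: no CUDA, mps available.
print(pick_device(False, True))  # mps
```

The model would then be moved with `model.to(device)` instead of the hard-coded `model.cuda(device)` call from the traceback. Whether the Live Portrait node's kernels actually run correctly on `mps` is untested here.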
@SupermanRLSF · 14 days ago
Is there a way to get a live webcam feed into the first video section?
@CgTopTips · 14 days ago
Yes. For instructions, see the "Webcam Portrait Live" tutorial videos on YouTube.
@user-jh8zy7oy5b · a month ago
Tell me what the problem is: rendering takes a very long time (more than an hour), there are no errors, but the resulting video has a black square instead of a head.
@CgTopTips · a month ago
A prolonged render time could be due to one of the following: 1. You are using the CPU instead of the GPU. 2. The program is downloading its required files for the first time (though the render should not be lengthy on the second run). 3. Your settings, such as video duration, size, or the number of steps, are too high.
@timemirror_ · a month ago
Thanks!! I have an issue, btw: the "insightface" folder didn't appear in my "models" folder. I am sure I downloaded the nodes you mentioned at the beginning of the video. Maybe I'm doing something wrong. What do you think?
@CgTopTips · a month ago
Manually create that folder and put the models in it.
@DeMaddin81 · a month ago
Hi. Great video. Where do I get the source video? I mean the dancing woman.
@CgTopTips · a month ago
www.pexels.com/search/videos/dancing/
@DeMaddin81 · a month ago
@@CgTopTips Thank you! 😀👍
@user-pn6ey5dn4y · a month ago
Do you know of a way to crop/resize the video to a square shape that will work in this workflow without distorting the original image? Usually I'd use Image Resize or Prepare Images for Clip Vision, but they don't work here because of the connectors.
@CgTopTips · a month ago
Use the ImageCrop node.
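For reference, cropping (unlike a non-uniform resize) avoids distortion because it takes the largest centered square out of the frame rather than squashing it. The crop box an ImageCrop-style node needs can be computed like this; the frame dimensions are illustrative:

```python
# Sketch: largest centered square crop box for a video frame, the kind of
# region an ImageCrop-style node would extract. Cropping preserves the
# aspect ratio, so the face is not distorted.

def square_crop_box(width: int, height: int) -> tuple[int, int, int]:
    """Return (x, y, side): top-left corner and side length of the centered square."""
    side = min(width, height)
    x = (width - side) // 2
    y = (height - side) // 2
    return x, y, side

print(square_crop_box(1920, 1080))  # (420, 0, 1080) for a 1080p frame
```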
@user-pn6ey5dn4y · a month ago
@CgTopTips Thanks for the suggestion. Could you send a picture, please? I tried to connect "Load Video and Segment" to two different ImageCrop nodes, but they don't connect. I can crop the video if I use a different load-video node, but then I can't connect it to the 'drive video' connector on the Live Portrait node. Thank you.
@DeMaddin81 · a month ago
I noticed something: when I tried another dancer, her face was covered by her hand for a few frames. I immediately got an error message like "Face not recognized"; the process aborted and the entire result was discarded, so the whole rendering time was wasted. Do you have any idea how to make the tool continue rendering despite the error message? Perhaps an additional node in ComfyUI that catches the error?
@CgTopTips · a month ago
Unfortunately, with this method the face must be visible in all frames, and there is currently no solution for this issue!
@DeMaddin81 · a month ago
@@CgTopTips THX!
@fatiheke · a month ago
Error: "LivePortraitVideoNode" is not installed.
@CgTopTips · a month ago
Always check the following points:
- Ensure you install the requirements file for each custom node (pip install -r requirements.txt).
- Download the necessary models for each custom node.
- Verify that all custom nodes needed for the workflow are installed without issues (check through the Manager panel).
- Ensure there are no version conflicts between models; for example, if the checkpoint is SD1.5, the ControlNet should also be SD1.5.
- Always follow the installation steps for each custom node precisely through its GitHub page.
- The best way to understand an issue when you see an error message is the ComfyUI terminal panel. For example, sizes might not match, or you might not have set a node's settings correctly, and so on.
- You can copy the error message and search for a solution on Google.
Note: If the problem is still unresolved, please share a screenshot of the error along with your workflow via email so I can check it.
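As a companion to the first point, a quick way to spot missing requirements is to check whether each package imports. A sketch; the package names are placeholders, and note that pip names and import names can differ, so read yours from the node's requirements.txt:

```python
# Sketch: report which of a custom node's required packages are missing.
# The names below are placeholders; in practice, read them from the node's
# requirements.txt (beware: pip package names and import names can differ).
from importlib.util import find_spec

def missing_packages(names: list[str]) -> list[str]:
    """Return the top-level packages from `names` that are not importable."""
    return [n for n in names if find_spec(n) is None]

print(missing_packages(["json", "some_package_that_is_not_installed"]))
# → ['some_package_that_is_not_installed']
```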
@user-ng3bi6xn2u · a month ago
I am getting the "No face detected in the source image" error while trying the video-to-video method.
@CgTopTips · a month ago
@@user-ng3bi6xn2u The face must be clearly visible: the video quality should not be too low, and the character's head should be facing the camera.
@user-tn3nc1mz3q · a month ago
I get an error: (IMPORT FAILED) comfyui-liveportrait, so the Live Portrait for-video node can't work. It didn't get fixed automatically. What can I do?
@CGITanous · 18 days ago
I have the same issue. Were you able to fix it?
@user-tn3nc1mz3q · 17 days ago
@@CGITanous I later found another workflow that worked for me without using this node, but I can't remember where it was... the file is called liveportrait_video_example_02.json
@inteligenciafutura · a month ago
I have never used the tool; can it be installed with Pinokio?
@CgTopTips · a month ago
You can install A1111 with Pinokio. ComfyUI is portable, and Live Portrait is a custom node that should be installed through ComfyUI.
@inteligenciafutura · a month ago
@@CgTopTips (IMPORT FAILED) comfyui-liveportrait :(
@user-go5vl9rv4p · a month ago
Set notebook specs?
@Alehantro · a month ago
Probably a stupid question, but I just downloaded ComfyUI and I can't see the MixLab, Manager, and Share tabs. Any ideas on that?
@CgTopTips · a month ago
The easiest way is to download the Manager (github.com/ltdrdata/ComfyUI-Manager/archive/refs/heads/main.zip) and extract it into the custom_nodes folder. Then search for and install MixLab through the Manager panel, or download it and extract it into the custom_nodes folder as well.
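The download-and-extract step can also be scripted. A sketch; the destination path is an assumption, so point it at your own ComfyUI install:

```python
# Sketch: download a GitHub branch zip and extract it into custom_nodes.
# The destination path below is an assumption; adjust it to your install.
import io
import urllib.request
import zipfile
from pathlib import Path

def install_zip(url: str, dest: Path) -> None:
    """Download a zip archive and extract its contents into `dest`."""
    dest.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    zipfile.ZipFile(io.BytesIO(data)).extractall(dest)

# install_zip(
#     "https://github.com/ltdrdata/ComfyUI-Manager/archive/refs/heads/main.zip",
#     Path("ComfyUI/custom_nodes"),
# )
```

After extracting, restart ComfyUI so the new custom node is picked up.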
@Alehantro · a month ago
@@CgTopTips Thank you so much for that!
@Alehantro · a month ago
@@CgTopTips I'm trying to add the Live Portrait models, but I can't see the "insightface" folder that you show.
@CgTopTips · a month ago
@@Alehantro You can create it manually.
@Alehantro · a month ago
@@CgTopTips Once again, thank you so much for both your answers and your tutorial! It worked!!!
@mylittleheartscar · a month ago
What's the song, though?
@CgTopTips · a month ago
Which song? I used 3 songs in the video :)
@kiya573 · a month ago
Can I use it in Colab?
@CgTopTips · a month ago
Sorry, I have no information on that.
@kneel.downnn · a month ago
What are your PC specs, btw?
@CgTopTips · a month ago
RTX 4060, 8GB VRAM