Thx for this video. Btw, I don't have the "Zoom In effect" function in the dropdown menu (I see double-straight-line, straight-line, and circle). Any tips?
@tomanforever • 3 days ago
Straight to the point! Many thanks.
@kantube07 • 4 days ago
Hello and thank you for your video. I'm trying to figure out Variation Seed right now. I blended the dogs from your example, but then I tried to blend a more complex prompt and it did not work. I used the same model as you; the first image was "Cyborg, night, Tokyo", the second one "Samurai, bamboo forest". No blend. What am I doing wrong?
@HAJJ101 • 5 days ago
I am so glad I found you because you are consistent and keep things simple. For ComfyUI, that combo is ALWAYS needed as much as possible!! Love you for these tutorials!! I will say the only issue I am having is that inpainting isn't replacing a black spot I want removed from an image. How could I fix that?
@musty5551 • 7 days ago
Straight to the point, thanks. Finally I can use those!
@graphguy • 7 days ago
Just dipped my toe into ComfyUI today and found your great channel - thanks! A question if you have time: can you set up a workflow so that a batch renders out a sequence of images such that the image moves and/or zooms? The idea is to end up with a bunch of images to create a video.
@hyperdude144 • 8 days ago
Shout out to "tommygun-alcapone" for generating my cyborg pics!
@TheBestgoku • 9 days ago
So amazing, bro. Can you show a workflow for "removing a subject"? For example, how Apple/Android removes unwanted people from the background of an image. This video was so useful; that would be useful too.
@sudabadri7051 • 10 days ago
Nice, I was doing double passes, but this is much better.
@Jorik-su1uc • 11 days ago
What's the problem? Help please.
@Jorik-su1uc • 11 days ago
Computing output(s) done. No module named 'inpaint', some issue with generating the inpainted mesh. All done.
@PromptingPixels • 11 days ago
There may be a current issue with the extension. Here's an open issue on GitHub where others are experiencing the same problem: github.com/thygate/stable-diffusion-webui-depthmap-script/issues/453 I recommend following that for updates.
@Jorik-su1uc • 11 days ago
@@PromptingPixels Thank you, but the problem was not solved there.
@Jorik-su1uc • 10 days ago
The developer has updated to version DepthMap v0.4.7 (76a758c5). Everything works. Thank you.
@JustAI-fe9hh • 11 days ago
Straight to the point, as always!
@RakibHassanAntu • 11 days ago
My Comfy auto-disconnected after I hit run... :(
@PromptingPixels • 11 days ago
I'd recommend making sure ComfyUI is up to date, and also check the logs for any errors as to why it's getting disconnected. When preparing for this video, I found this custom node was causing a similar issue: github.com/Acly/comfyui-inpaint-nodes Hope this helps!
@ar.amarnath • 12 days ago
Very good, simple, straight to the point. Thank you, sir. I have a doubt: I want to use a reference image, but I also want to change the aspect ratio, size, and resolution of the output. As we cannot connect an empty latent node directly to the KSampler now, how do I do it?
@JustAI-fe9hh • 13 days ago
Simple and beautiful 👍
@aaronwang4641 • 13 days ago
Thanks so much for the tutorial, really helpful and easy to follow step by step!
@andrewholdun7910 • 13 days ago
Getting my feet wet and trying not to drown with all this well-presented material. Are you using a Mac M1 as well as an Nvidia setup? It seems some of your tutorials are Mac and some Nvidia. Could you go through what setup would be best, or should one just use RunDiffusion's online setups?
@PromptingPixels • 13 days ago
Hey there - happy to hear the videos are helpful. My personal setup is a local PC with an RTX 3060 on the local network. When I boot ComfyUI or Automatic1111, I add the --listen flag to open up LAN access. This allows me to generate via my MBP, or any other device. In some earlier tutorials I was exclusively using a MBP. As far as RunDiffusion, Think Diffusion, etc., I need to do some videos on them. They are very beginner-friendly, but they can become a bit pricey and have some downsides that aren't really apparent at first (i.e. you can't access a file manager without booting up an instance). I think generally the best route is to scope out a project locally and offload the processing to them, as that will help reduce total time used. Hope this answers some of your questions; if you have any others, feel free to ask!
@Lveddie • 14 days ago
Can img2img turn an image into Pop Art or Abstract Art?
@PromptingPixels • 13 days ago
Yeah, definitely. It's easy to do with landscapes, scenes, etc., as you can raise the denoise (probably around 0.8-0.9) and change your prompt accordingly. However, if doing this to characters, people, etc., you'll lose some of their likeness in the process. I think using an IPAdapter or LoRA may help to counteract this.
@divye.ruhela • 15 days ago
I thought you were AI-generated, dude lol.
@PromptingPixels • 13 days ago
Prompt: average looking guy, glasses, middle age, short beard, sedentary lifestyle, photo grain, poor camera quality, single light source, grey shirt, plain white room Negative prompt: hair
@Lalotaotongpeanutbutter • 15 days ago
Thanks for your amazing tutorials! Very straightforward and easy to follow! I've spent days looking for a tutorial that was easy to understand and actually worked! You're definitely the best! I'm a newbie; do you have more tutorials on what each node represents and how to download/set up models, LoRAs, ControlNet, etc.? And can we run ComfyUI in Google Colab Pro? Thanks again!
@Beauty.and.FashionPhotographer • 17 days ago
What you are showing in your test with the X/Y/Z plot are sequential seeds, not random ones. What would be of amazing value is figuring out how to generate grids with actual random variation seeds, not sequential ones. The reason: the pose of the dog or person would change dramatically if there were a way to create totally randomized seed numbers, for example 345, then 873487567, then 21, then 10034354355, etc. Only that would show shooting angles different enough to be really usable. I am still searching for someone who has done this. Until then, the "Var. Seed" option is actually a sequential seed, not a "variation" seed at all, and should be renamed "Seq. Seed".
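One possible workaround, sketched in plain Python outside the WebUI (the helper name is made up for illustration): generate genuinely random, non-sequential seeds yourself, then paste the comma-separated list into the X/Y/Z plot's Seed axis instead of relying on Var. Seed.

```python
import random

def random_seeds(n, seed_max=2**32 - 1):
    """Return n independent, non-sequential seeds in a typical sampler range."""
    return [random.randint(0, seed_max) for _ in range(n)]

# Comma-separated list, ready to paste into an X/Y/Z plot "Seed" axis
print(",".join(str(s) for s in random_seeds(6)))
```

Each run gives a completely different set of seeds, so poses and angles vary far more than with the sequential seed+1, seed+2 grid.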
@Mranshumansinghr • 18 days ago
Differential Diffusion. It works with all SDXL models.
@user-nd7hk6vp6q • 18 days ago
Is it possible to do a background change that blends well with the subject using IPAdapter? Can you make a tutorial on that, please?
@PromptingPixels • 18 days ago
Interesting challenge. Let me play around with this idea. One option would be to inverse-select the subject and then apply changes. As for re-lighting the scene, perhaps IC-Light could then be added: github.com/kijai/ComfyUI-IC-Light-Wrapper I'm just kicking around an idea, but it might be worth trying.
@tc8557 • 16 days ago
I think Latent Vision himself did that in a video; I can't remember which one.
@user-nd7hk6vp6q • 16 days ago
I tried it, but it changes the subject too. Maybe I'm not doing it well 🤔
@RoopeBb • 18 days ago
Fantastic video as always! Thanks!
@user-ek3mt7rm3n • 18 days ago
Please, can you make a tutorial on how to upscale regular realistic videos? I tried your workflow, uploaded a short clip from a movie, and the final result was very different from the original picture. I want to learn how to upscale a video while maintaining the look of the original.
@skycladsquirrel • 18 days ago
Love it. Thanks for the great post.
@JustAI-fe9hh • 22 days ago
Straight to the point! Really enjoyed the video 😊
@Saroranch • 23 days ago
It runs slowly. Are there any optimizations?
@Beauty.and.FashionPhotographer • 24 days ago
Off-topic question: for those of us who own a Mac, there's no way to right-click the preview image and get a popup window with all these options to save the image and do other things with it, right?
@alzamonart • 24 days ago
Just stumbled across this video; it was short and to the point. Many thanks :) Two things though: 1) I tried installing dependencies (torch, torchaudio, etc.) via pip, and the Python script gave errors. Uninstalled and reinstalled via conda, which solved it. 2) Forcing a floating-point flag, as at 5:23, actually gave me errors. A plain main.py worked fine.
@anokhatv7829 • 25 days ago
Loved the video, but can someone explain the "Cloning the ComfyUI Repo" part? Still stuck there. I'm new to the MacBook.
@PromptingPixels • 25 days ago
Cloning is the same as copying it to your hard drive. There are a couple of ways you could do this: either with the `git clone` command presented in the video, or, under that same box, you could download a zip of the repo and extract it to your preferred location on the hard drive. The downside to the zip approach is that you can't run `git pull` to pull in updates when the author publishes them. Alternatively, if you are having a hard time getting ComfyUI to work on your machine, Pinokio (pinokio.computer/) is super easy and offers a 1-click installer.
@GES1985 • 26 days ago
How do you add a face swap to this, to control the identity of the character?
@PromptingPixels • 25 days ago
Before the VHS Video Combine node at the end, you can do a face swap to modify the frames before they are stitched together.
@GES1985 • 26 days ago
How do we connect an image input to use as the first frame where you added the empty latent image (big batch) @ 3:43?
@PromptingPixels • 25 days ago
That would be an img2vid workflow rather than txt2vid (as is outlined in this video). I was playing around with an img2vid workflow a few weeks ago and will try to get a video about it posted to the channel, as the process is different from what was covered here. Method 1: you could use IPAdapter + AnimateDiff to convert an image into a short-form video. This workflow goes through the steps: civitai.com/models/372584 Another option is to use Stable Video Diffusion (SVD), which takes an image as input and outputs a video. The problem, though, is that you can't apply textual prompts (as far as I'm aware).
@GES1985 • 25 days ago
@@PromptingPixels Can you do textual prompts inside the img2vid workflow you mentioned wanting to make a video on?
@PromptingPixels • 24 days ago
@@GES1985 Yes, textual prompts should be supported to help inform the output.
@GES1985 • 26 days ago
Regarding "Load Upscale Model": what do I need to know? E.g., do I need to go to Civitai/HuggingFace and download upscale models? Where do I put them, in the checkpoints folder?
@PromptingPixels • 25 days ago
Hey GES1985 - you can place upscale models in the ComfyUI\models\upscale_models directory. As you mentioned, models are available from multiple places: Civitai, HuggingFace, openmodeldb.info.
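As a small sketch of that step (the paths and model filename here are just examples; adjust them to your own install and whatever model you downloaded):

```python
from pathlib import Path
import shutil

# Example paths -- adjust to your own ComfyUI install and download location.
dest = Path("ComfyUI") / "models" / "upscale_models"
dest.mkdir(parents=True, exist_ok=True)  # create the folder if it's missing

downloaded = Path("4x-UltraSharp.pth")  # e.g. a model fetched from openmodeldb.info
if downloaded.exists():
    shutil.move(str(downloaded), dest / downloaded.name)
```

After restarting ComfyUI (or refreshing the node), the model should appear in the Load Upscale Model dropdown.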
@mibesto8039 • 29 days ago
Thanks for this concise and very helpful guide. I'm a total dweeb when it comes to working in the terminal, and your instructions made everything simple and straightforward. I still have so much to learn, but your video was the most helpful one I found for quickly and correctly loading ComfyUI onto my MacBook Pro with an M1 processor.
@PromptingPixels • 25 days ago
Thanks so much for the kind words! If you have any other questions, don't be a stranger - drop them in the comments/Discord/website, etc., and I'll try my best to get back to you.
@Beauty.and.FashionPhotographer • 29 days ago
Question: if I wanted to create my own checkpoint or even LLM, and its content would be 300,000 professional high-resolution images I shot throughout my three-decade career as a fashion photographer, what solutions are there to make such a checkpoint or LLM? I am aware that it would never need 300k images, but how many would it need for a checkpoint or an LLM to end up with results identical to the originals in resolution and detail? I never seem to get precise answers out there. Would you know anything about this?
@PromptingPixels • 25 days ago
I haven't trained a checkpoint, only LoRAs. From my understanding, the question would largely come down to the labeling of the images, the minimum resolution depending on the model, and the types/diversity of the images (whether it's a general checkpoint that has all your images, or ones separated by topic, i.e. wedding, wildlife, fashion, etc.). This article on HuggingFace (huggingface.co/docs/diffusers/en/tutorials/basic_training) demonstrates training a diffusion model on 1k images from the Smithsonian Butterflies dataset. Sorry this is kind of a non-answer, but I thought the points above might be helpful to share. Best of luck!
@Beauty.and.FashionPhotographer • 24 days ago
@@PromptingPixels Thank you so much. It's a first step in the right direction, I think. Much appreciated.
@user-dn8ml4uk6o • A month ago
I get an error when trying to install conda at 2:04:
Could not solve for environment specs
The following packages are incompatible
├─ pin-1 is installable and it requires
│  └─ python 3.12.*, which can be installed;
└─ torchvision is not installable because there are no viable options
@Film21Productions • A month ago
Where do we download the inpainting checkpoint?
@PromptingPixels • A month ago
This video in particular uses the Dreamshaper 8 Inpainting model, which you can find on either HuggingFace or Civitai: huggingface.co/Lykon/dreamshaper-8-inpainting/tree/main civitai.com/models/4384?modelVersionId=131004
@alejandrogarcia9472 • A month ago
Thanks, you helped me a lot.
@FusionDraw9527 • A month ago
I use the same prompt words and the face_yolov8n model to fix the face, but why is the result in ComfyUI not as good as in Stable Diffusion WebUI? Thank you for your workflow and instructional videos. They are great.
@meadow-maker • A month ago
Do you realise you use the meaningless 'go ahead' almost once per sentence? It's so distracting; even at 4:46 you said it when it made no sense whatsoever. You're saying it instead of 'errrr'. Just say 'errrr'. 29 times in the tutorial!! Good job I wasn't doing a drinking game along with it! 🥴🥴 So, go ahead and go ahead and get totally under the table! Go ahead and, Cheers!
@PromptingPixels • A month ago
Hey @meadow-maker, wow, I never noticed how much I say 'go ahead' until you pointed it out! Thanks for the heads-up. I'll definitely work on that. And yeah, a drinking game would be dangerous with my tutorials! 😂 Cheers! 🍻
@user-qu6eg3mb1b • A month ago
What if you have an 832x1216 image (SDXL 1.0)? How do you upscale it to 1080x2800? I'm getting kind of bad quality.
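For what it's worth, 832x1216 (aspect ratio ~0.68) and 1080x2800 (~0.39) are very different shapes, so a straight resize has to stretch the image, which alone degrades quality. One common approach is to upscale until the target frame is fully covered, then crop the excess. A quick sketch of that math (`cover_and_crop` is a hypothetical helper, not a ComfyUI node):

```python
def cover_and_crop(src_w, src_h, dst_w, dst_h):
    """Scale factor that fully covers the target, plus the excess (px) to crop."""
    scale = max(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    return scale, (new_w - dst_w, new_h - dst_h)

scale, (crop_x, crop_y) = cover_and_crop(832, 1216, 1080, 2800)
# roughly a 2.3x upscale, then crop ~836 px of width to land on 1080x2800
```

So you'd need about a 2.3x upscale (a 2x upscale model plus a slight resize, or a 4x model downscaled) and then a width crop; if you instead want to keep the whole frame, outpainting the missing areas is the alternative.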
@Lovidar • A month ago
Thanks a lot for the hint. Earlier I was getting an image that had nothing to do with the sample; it turned out I had forgotten to reduce the denoise from 1 to 0.5. Now everything is going as it should.
@STaSHZILLA420 • A month ago
@1:14 How did you get the autocomplete?
@PromptingPixels • A month ago
I believe this is native to ComfyUI; I don't recall a custom node adding this feature.
@STaSHZILLA420 • A month ago
@@PromptingPixels Appreciate the response. I found that ComfyUI Custom Scripts was what did the autocomplete.
@slingerduskx3370 • A month ago
Is it possible to run these so you would have FaceDetailer followed by the hand and person versions as well, all in one workflow? Thanks for the video: quick information instead of loads of confusing stuff!
@kj-marslander • A month ago
I've watched dozens of ComfyUI tutorials. You're the best at explaining every step without rambling or pretending we already know the basics. Gonna watch everything you've got on this channel :)
@PromptingPixels • A month ago
Thank you for the nice feedback; happy to see the tutorial was useful for ya!