Comments
@MontoyaFidel (2 hours ago)
I don't have the Gaussian Blur Mask node.
@hanqianggeng7940 (4 hours ago)
I made it! Thank you very much!
@mrtukk (8 hours ago)
Thank you so much!
@nunomota9838 (a day ago)
Conda not found 😢
@cinghialeever (2 days ago)
Thanks for this video. By the way, I don't have the "Zoom In effect" option in the dropdown menu (I only see double-straight-line, straight-line, and circle). Any tips?
@tomanforever (3 days ago)
Straight to the point! Many thanks.
@kantube07 (4 days ago)
Hello, and thank you for your video. I'm trying to figure out the Variation Seed right now. I blended the dogs from your example, but when I tried to blend a more complex prompt, it did not work. I used the same model as you; the first image was "Cyborg, night, Tokyo" and the second was "Samurai, bamboo forest". No blend. What am I doing wrong?
@HAJJ101 (5 days ago)
I am so glad I found you because you are consistent and keep things simple. For ComfyUI, that combo is ALWAYS needed as much as possible!! Love you for these tutorials!! The only issue I am having is that inpainting isn't replacing a black spot I want removed from an image. How could I fix that?
@musty5551 (7 days ago)
Straight to the point, thanks. Finally I can use those!
@graphguy (7 days ago)
Just dipped my toe into ComfyUI today and found your great channel - thanks! A question if you have time: can you set up a workflow so that a batch renders out a sequence of images in which the image moves and/or zooms? The idea is to end up with a bunch of images to create a video.
@hyperdude144 (8 days ago)
Shout out to "tommygun-alcapone" for generating my cyborg pics!
@TheBestgoku (9 days ago)
So amazing! Bro, can you show a workflow for "removing a subject"? For example, how Apple/Android removes unwanted people from the background of an image. This video was so useful; that would also be useful.
@sudabadri7051 (10 days ago)
Nice, I was doing double passes, but this is much better.
@Jorik-su1uc (11 days ago)
What's the problem? Help please.
@Jorik-su1uc (11 days ago)
Computing output(s) done. "No module named 'inpaint'", some issue with generating the inpainted mesh. All done.
@PromptingPixels (11 days ago)
There may be a current issue with the extension. Here's an open issue on GitHub where others are experiencing the same problem: github.com/thygate/stable-diffusion-webui-depthmap-script/issues/453 - I recommend following that for updates.
@Jorik-su1uc (11 days ago)
@@PromptingPixels Thank you, but the problem was not solved there.
@Jorik-su1uc (10 days ago)
The developer has updated to version DepthMap v0.4.7 (76a758c5). Everything works now. Thank you.
@JustAI-fe9hh (11 days ago)
Straight to the point, as always!
@RakibHassanAntu (11 days ago)
My Comfy auto-disconnected after I hit run... :(
@PromptingPixels (11 days ago)
I'd recommend making sure ComfyUI is up to date, and also checking the logs for any errors explaining why it's getting disconnected. When preparing for this video, I found this custom node was causing a similar issue: github.com/Acly/comfyui-inpaint-nodes - hope this helps!
@ar.amarnath (12 days ago)
Very good, simple, straight to the point. Thank you, sir. I have a question: I want to use a reference image, but I also want to change the aspect ratio, size, and resolution of the output. Since we cannot connect an Empty Latent node directly to the KSampler now, how do I do it?
@JustAI-fe9hh (13 days ago)
Simple and beautiful 👍
@aaronwang4641 (13 days ago)
Thanks so much for the tutorial - really helpful and easy to follow step by step!
@andrewholdun7910 (13 days ago)
Getting my feet wet and trying not to drown in all this well-presented material. Are you using a Mac M1 as well as an Nvidia setup? It seems some of your tutorials are Mac and some Nvidia. Could you go through which setup would be best, or whether one should just use RunDiffusion's online setups?
@PromptingPixels (13 days ago)
Hey there - happy to hear the videos are helpful. My personal setup is a local PC with an RTX 3060 on the local network. When I boot Comfy or Automatic1111, I add the --listen flag to open up LAN access. This lets me generate from my MBP - or any other device. Some earlier tutorials were done exclusively on an MBP. As for RunDiffusion, Think Diffusion, etc., I need to do some videos on those. They are very beginner-friendly, but can become a bit pricey and have some downsides that aren't apparent at first (e.g. you can't access a file manager without booting up an instance). Generally, I think the best route is to scope out a project locally and offload the processing to them, as that helps reduce total time used. Hope this answers some of your questions - if anything else comes up, feel free to ask!
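For anyone wanting to replicate the LAN setup described above, the launch looks roughly like this (a sketch based on ComfyUI's standard CLI flags; the IP address below is illustrative, not from the video):

```shell
# On the desktop with the GPU, run from the ComfyUI folder.
# --listen binds 0.0.0.0 instead of 127.0.0.1, exposing the server on your LAN:
python main.py --listen

# Then, from a MacBook or any other device on the same network,
# open the server in a browser (8188 is ComfyUI's default port):
#   http://192.168.1.50:8188
```

Automatic1111's webui accepts a similar `--listen` flag via its launch arguments.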
@Lveddie (14 days ago)
Can img2img turn an image into Pop Art or Abstract Art?
@PromptingPixels (13 days ago)
Yeah, definitely. It's easy with landscapes, scenes, etc., since you can raise the denoise (probably around 0.8-0.9) and change your prompt accordingly. However, if you do this to characters, people, etc., you'll lose some of their likeness in the process. To counteract that, I think using an IPAdapter or a LoRA may help.
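To make those denoise numbers concrete: samplers typically implement img2img denoise by skipping the early part of the schedule, so a denoise of 0.85 means roughly 85% of the steps actually run on top of the source image. A tiny illustrative sketch (simplified, not actual sampler code):

```python
def effective_steps(total_steps: int, denoise: float) -> int:
    """Approximate number of sampling steps that actually execute
    for a given img2img denoise value (simplified illustration)."""
    return max(1, round(total_steps * denoise))

# High denoise (~0.85): most steps run, so the style change dominates
# and subject likeness drifts - which is why an IPAdapter/LoRA helps.
print(effective_steps(20, 0.85))  # -> 17
# Low denoise (~0.3): only a few steps run; output stays close to the source.
print(effective_steps(20, 0.3))   # -> 6
```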
@divye.ruhela (15 days ago)
I thought you were AI-generated, dude lol.
@PromptingPixels (13 days ago)
Prompt: average looking guy, glasses, middle age, short beard, sedentary lifestyle, photo grain, poor camera quality, single light source, grey shirt, plain white room
Negative prompt: hair
@Lalotaotongpeanutbutter (15 days ago)
Thanks for your amazing tutorials! Very straightforward and easy to follow! I've spent days looking for a tutorial that was easy to understand and actually worked - you're definitely the best! I'm a newbie; do you have more tutorials on what each node represents and how to download/set up models, LoRAs, ControlNet, etc.? Also, can we run ComfyUI in Google Colab Pro? Thanks again!
@Beauty.and.FashionPhotographer (17 days ago)
What you are showing in your X/Y/Z plot test are sequential seeds, not random ones. What would be of amazing value is figuring out how to generate grids with actual random variation seeds, not sequential ones. The reason: the pose of the dog or person would change dramatically if the seeds were totally random numbers - for example 345, then 873487567, then 21, then 10034354355, etc. Only that would produce shooting angles different enough to be really useful. I am still searching for someone who has done this. Until then, the "Var. Seed" option is actually a sequential seed, not a "variation" seed at all, and should be renamed "Seq. Seed".
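For what it's worth, a truly random seed list like the commenter describes is easy to generate outside the UI and paste into a plot's seed axis (a hedged sketch: the 32-bit seed range is an assumption, and the plot wiring itself isn't shown):

```python
import random

# Sequential seeds, as a "Var. Seed" axis produces: seed, seed+1, seed+2, ...
sequential_seeds = [1000 + i for i in range(6)]

# Truly random seeds, which give genuinely different compositions/poses:
random_seeds = [random.randrange(2**32) for _ in range(6)]

print(sequential_seeds)  # -> [1000, 1001, 1002, 1003, 1004, 1005]
print(random_seeds)      # e.g. 345, 873487567, 21, ... - different every run
```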
@Mranshumansinghr (18 days ago)
Differential Diffusion. It works with all SDXL models.
@user-nd7hk6vp6q (18 days ago)
Is it possible to do a background change that blends well with the subject using IPAdapter? Can you make a tutorial on that, please?
@PromptingPixels (18 days ago)
Interesting challenge - let me play around with this idea. One option would be to inverse-select the subject and then apply the changes. For re-lighting the scene, IC-Light could perhaps be added: github.com/kijai/ComfyUI-IC-Light-Wrapper - I'm just kicking around an idea, but it might be worth trying.
@tc8557 (16 days ago)
I think latentvision himself did that in a video; I can't remember which one.
@user-nd7hk6vp6q (16 days ago)
I tried it, but it changes the subject too. Maybe I'm not doing it well 🤔
@RoopeBb (18 days ago)
Fantastic video as always! Thanks!
@user-ek3mt7rm3n (18 days ago)
Please can you make a tutorial on how to upscale regular realistic videos? I tried your workflow, uploaded a short clip from a movie, and the final result was very different from the original picture. I want to learn how to upscale a video while preserving the look of the original.
@skycladsquirrel (18 days ago)
Love it. Thanks for the great post.
@JustAI-fe9hh (22 days ago)
Straight to the point! Really enjoyed the video 😊
@Saroranch (23 days ago)
It runs slowly - are there any optimizations?
@Beauty.and.FashionPhotographer (24 days ago)
Off-topic question: there's no way for us guys who own a Mac to right-click the preview image and get a popup window with all these options to save the image and do other things with it? Right?
@alzamonart (24 days ago)
Just stumbled across this video - it was short and to the point. Many thanks :) Two things though: 1) I tried installing the dependencies (torch, torchaudio, etc.) via pip - the Python script gave errors. Uninstalled and reinstalled via conda - solved. 2) Forcing a floating-point flag as at 5:23 actually gave me errors. A plain main.py worked fine.
@anokhatv7829 (25 days ago)
Loved the video, but can someone explain the "Cloning the ComfyUI Repo" part? Still stuck there. I'm new to MacBooks.
@PromptingPixels (25 days ago)
Cloning just means copying it to your hard drive. There are a couple of ways to do this: either with the `git clone` command presented in the video, or, under that same box, you can download the zip of the repo and extract it to your preferred location on your hard drive. The downside of the zip approach is that you can't run `git pull` to update the repo when the author publishes updates. Alternatively, if you're having a hard time getting ComfyUI to work on your machine, Pinokio (pinokio.computer/) is super easy and offers a 1-click installer.
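In a terminal, the two commands mentioned above look like this (the URL is the well-known upstream ComfyUI repository; double-check it against the video before running):

```shell
# Clone (copy) the repo into the current folder:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Later, to pull in the author's updates:
git pull
```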
@GES1985 (26 days ago)
How do you add a face swap into this, to control the identity of the character?
@PromptingPixels (25 days ago)
Before the VHS Video Combine node at the end, you can do a face swap to modify the frames before they are stitched together.
@GES1985 (26 days ago)
How do we connect an image input to use as the first frame, where you added the Empty Latent Image (big batch) at 3:43?
@PromptingPixels (25 days ago)
That would be an img2vid workflow rather than txt2img (as outlined in this video). I was playing around with an img2vid workflow a few weeks ago and will try to get a video about it posted to the channel, as the process differs from what was covered here. Method 1: use IPAdapter + AnimateDiff to convert an image into a short video; this workflow goes through the steps: civitai.com/models/372584. Method 2: use Stable Video Diffusion (SVD), which takes an image as input and outputs a video. The problem, though, is that you can't apply textual prompts (as far as I'm aware).
@GES1985 (25 days ago)
@@PromptingPixels Can you use textual prompts inside the img2vid workflow you mentioned wanting to make a video on?
@PromptingPixels (24 days ago)
@@GES1985 Yes, textual prompts should be supported to help inform the output.
@GES1985 (26 days ago)
Regarding "Load Upscale Model": what do I need to know? E.g., do I need to go to Civitai/HuggingFace and download upscale models? Where do I put them - in the checkpoints folder?
@PromptingPixels (25 days ago)
Hey GES1985 - you can place upscale models in the `ComfyUI/models/upscale_models` directory. Models are available from multiple places, as you mentioned: Civitai, HuggingFace, and openmodeldb.info.
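As a concrete sketch (run from the folder containing your ComfyUI checkout; the model filename below is just an example, not one recommended in the video):

```shell
# Create the upscaler folder if it doesn't exist yet:
mkdir -p ComfyUI/models/upscale_models

# Move a downloaded upscaler into it (example filename - adjust to yours):
# mv ~/Downloads/RealESRGAN_x4plus.pth ComfyUI/models/upscale_models/

# Confirm the folder is there:
ls ComfyUI/models
```

ComfyUI picks the models up on its next restart, or after hitting Refresh in the UI.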
@mibesto8039 (29 days ago)
Thanks for this concise and very helpful guide. I'm a total dweeb when it comes to working in the terminal, and your instructions made everything simple and straightforward. I still have so much to learn, but your video was the most helpful one I found for quickly and correctly loading ComfyUI onto my MacBook Pro with an M1 processor.
@PromptingPixels (25 days ago)
Thanks so much for the kind words! If you have any other questions, don't be a stranger - drop them in the comments/Discord/website, etc., and I'll try my best to get back to you.
@Beauty.and.FashionPhotographer (29 days ago)
Question: if I wanted to create my own checkpoint, or even an LLM, whose content would be the 300,000 professional high-resolution images I shot throughout my three-decade career as a fashion photographer, what solutions are there for making such a checkpoint or LLM? I'm aware it would never need 300k images, but how many would it need for the results to match the originals in resolution and detail? I never seem to get precise answers out there. Would you know anything about this?
@PromptingPixels (25 days ago)
I haven't trained a checkpoint, only LoRAs. From my understanding, the question is largely about the labeling of the images, the minimum resolution the model requires, and the type/diversity of images (whether it's one general checkpoint with all your images, or several separated by topic - e.g. wedding, wildlife, fashion, etc.). This article on HuggingFace (huggingface.co/docs/diffusers/en/tutorials/basic_training) demonstrates training a diffusion model on 1k images from the Smithsonian Butterflies dataset. Sorry this is kind of a non-answer, but I thought the notes above might be helpful. Best of luck!
@Beauty.and.FashionPhotographer (24 days ago)
@@PromptingPixels Thank you so much - it's a first step in the right direction, I think. Much appreciated.
@user-dn8ml4uk6o (a month ago)
I get an error when trying to install conda at 2:04:
Could not solve for environment specs
The following packages are incompatible
├─ pin-1 is installable and it requires
│  └─ python 3.12.*, which can be installed;
└─ torchvision is not installable because there are no viable options
@Film21Productions (a month ago)
Where do we download the inpainting checkpoint?
@PromptingPixels (a month ago)
This video uses the Dreamshaper 8 Inpainting model, which you can find on either HuggingFace or Civitai: huggingface.co/Lykon/dreamshaper-8-inpainting/tree/main civitai.com/models/4384?modelVersionId=131004
@alejandrogarcia9472 (a month ago)
Thanks, you helped me a lot.
@FusionDraw9527 (a month ago)
I use the same prompt words and the face_yolov8n model to fix the face, but why is the result in ComfyUI not as good as in Stable Diffusion? Thank you for your workflow and instructional videos - they are great.
@meadow-maker (a month ago)
Do you realise you say the meaningless "go ahead" almost once per sentence? It's so distracting - even at 4:46 you said it when it made no sense whatsoever. You're saying it instead of "errrr". Just say "errrr". 29 times in the tutorial!! Good job I wasn't doing a drinking game along with it! 🥴🥴 So, go ahead and go ahead and get totally under the table! Go ahead and, cheers!
@PromptingPixels (a month ago)
Hey @meadow-maker - wow, I never noticed how much I say "go ahead" until you pointed it out! Thanks for the heads-up; I'll definitely work on that. And yeah, a drinking game would be dangerous with my tutorials! 😂 Cheers! 🍻
@user-qu6eg3mb1b (a month ago)
What if you have an 832x1216 image (SDXL 1.0)? How do you upscale it to 1080x2800? I'm getting kind of bad quality.
@Lovidar (a month ago)
Thanks a lot for the hint. Earlier I was getting an image that had nothing to do with the sample; it turned out I had forgotten to reduce the denoise from 1 to 0.5. Now everything works as it should.
@STaSHZILLA420 (a month ago)
@1:14 how did you get the autocomplete?
@PromptingPixels (a month ago)
I believe this is native to ComfyUI - I don't recall a custom node adding this feature.
@STaSHZILLA420 (a month ago)
@@PromptingPixels Appreciate the response. I found that the ComfyUI Custom Scripts extension provides the autocomplete.
@slingerduskx3370 (a month ago)
Is it possible to chain these, so you'd have a face detailer followed by the hand & person versions in one workflow? Thanks for the video - quick information instead of loads of confusing stuff!
@kj-marslander (a month ago)
I've watched dozens of ComfyUI tutorials. You're the best at explaining every step without rambling or pretending we already know the basics. Gonna watch everything you've got on this channel :)
@PromptingPixels (a month ago)
Thank you for the nice feedback - happy to hear the tutorial was useful for ya!