
1000% FASTER Stable Diffusion in ONE STEP!

94,426 views

Sebastian Kamph

1 day ago

Up to 10x faster Stable Diffusion in Automatic1111 and ComfyUI after just downloading this LCM LoRA.
Download LCM Lora huggingface.co...
Blog post huggingface.co...
Prompt styles for Stable diffusion a1111 & Vlad/SD.Next: / sebs-hilis-79649068
ComfyUI workflow for 1.5 models: / comfyui-1-5-86145057
ComfyUI Workflow for SDXL: / comfyui-workflow-86104919
Get early access to videos and help me, support me on Patreon / sebastiankamph
Chat with me in our community discord: / discord
My Weekly AI Art Challenges • Let's AI Paint - Weekl...
My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
ControlNet tutorial and install guide • NEW ControlNet for Sta...
Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
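For anyone who wants to try the same trick outside the UIs, the sketch below shows the LCM-LoRA route via the diffusers library described in the linked blog post. The checkpoint and LoRA repo ids ("Lykon/dreamshaper-7", "latent-consistency/lcm-lora-sdv1-5") are assumptions; swap in whatever SD 1.5 model you actually use.

```python
# Hedged sketch: the LCM-LoRA route via diffusers, per the linked blog post.
# Repo ids below are assumptions; any SD 1.5 checkpoint should behave similarly.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")

# The two changes that matter: the LCM scheduler and the LCM-LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# LCM wants very few steps and a low guidance scale (roughly 4-8 steps, CFG 1-2).
image = pipe(
    "portrait photo of an old warrior chief, detailed, soft light",
    num_inference_steps=8,
    guidance_scale=1.5,
).images[0]
image.save("lcm_test.png")
```

The scheduler swap is what the "LCM sampler" option in ComfyUI corresponds to; with the LoRA alone and a conventional sampler the gains are much smaller, which matches several of the comments below.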

Comments: 263
@leecoghlan1674
@leecoghlan1674 9 ай бұрын
You've made my day: no more waiting 30 minutes on my potato PC for a generation. Thank you so much!
@CoconutPete
@CoconutPete 6 ай бұрын
I installed it but must have done something wrong, as the quality seems poorer... back to the drawing board lol
@tungstentaco495
@tungstentaco495 9 ай бұрын
As others have mentioned, not using this LCM LoRA at full strength helps if you are having issues with messy/distorted images. I'm getting pretty good results setting the LCM at 0.5 with 16 steps. Still really fast, but with better-looking generations. I also recommend trying this if you are having issues with the LCM while using models and LoRAs that are trained on a particular subject.
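In diffusers terms, running the LoRA below full strength (the 0.5 weight this comment describes) looks roughly like the sketch below. The repo ids are assumptions, and the 0.5 / 16-step values simply mirror the comment above rather than any canonical setting.

```python
# Hedged sketch: LCM-LoRA at roughly half strength with 16 steps, mirroring the
# comment above. Repo ids are assumptions; the 0.5 / 16 values are not canonical.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "a cozy cabin in a snowy forest, golden hour",
    num_inference_steps=16,                 # more steps than the usual 4-8
    guidance_scale=1.5,
    cross_attention_kwargs={"scale": 0.5},  # LoRA weight ~0.5 instead of 1.0
).images[0]
image.save("lcm_half_strength.png")
```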
@haggler40
@haggler40 9 ай бұрын
One issue is that it makes AnimateDiff not work as well, since AnimateDiff usually needs more steps, like 25-30, to get good motion. Just wanted to put that out there; it does work with AnimateDiff, though.
@alsoeris
@alsoeris 7 ай бұрын
How do you change the strength if it's not in the prompt?
@tungstentaco495
@tungstentaco495 7 ай бұрын
@@alsoeris In automatic1111, when the LCM LoRA is used in the prompt, it would look something like <lora:your-lcm-lora-name:0.5> for half strength, <lora:your-lcm-lora-name:1> for full strength, <lora:your-lcm-lora-name:0.2> for 20% strength, etc.
@ovworkshop3105
@ovworkshop3105 9 ай бұрын
It actually works very well to create small samples and then upscale them with img2img; even SDXL is quick.
@sebastiankamph
@sebastiankamph 9 ай бұрын
Interesting approach!
@Dzynerr
@Dzynerr 9 ай бұрын
Sometimes you give us real gems from the industry. Your research and knowledge sharing are highly appreciated.
@sebastiankamph
@sebastiankamph 9 ай бұрын
Thank you kindly! 🌟😊
@marlysilva2816
@marlysilva2816 9 ай бұрын
Sebastian, I really like your videos and your simple way of explaining things. Could you create a tutorial, or recommend a video, on how to insert a generated object into other scenes in Stable Diffusion or ComfyUI? That is, generating the same element in different scenes. For example, I generated the design of a new bottle and the prompt gave me a perfect result; after that, I want to create an image of this same bottle in a scene from different angles or in different poses (like a new photo of someone holding the bottle of juice, for example). It would be very interesting to have this type of video.
@davewxc
@davewxc 9 ай бұрын
Tip for experimentation: use it like a regular LoRA and play with the weight. Some custom models that give horrible colors at 1 will actually work better at 0.7.
@sebastiankamph
@sebastiankamph 9 ай бұрын
Great tip!
@KrakenCMT
@KrakenCMT 9 ай бұрын
I've discovered the same. Also, increasing the steps helps hone in on the right quality. Maybe not a 1000% increase, but 500% is still pretty good :) Even going all the way down to 0.1 will allow some models to work much better and still get the speed increase.
@cyberprompt
@cyberprompt 9 ай бұрын
Yes, I'd feel more comfortable using the standard LoRA syntax instead of this black-box method from the dropdown. Same with my saved styles. Does anyone know how to see them again, and not just the tabs to add them? (Please don't mention styles.csv; that's where I edit them.)
@jonathaningram8157
@jonathaningram8157 9 ай бұрын
It doesn't appear under the regular lora network for me. I can just choose it from the dropdown menu
@wilsonicsnet
@wilsonicsnet 7 ай бұрын
Thanks for the tip, I've seen my Anime models get really dim after applying LCM.
@joppemontezinos2092
@joppemontezinos2092 8 ай бұрын
I am also using an RTX 4090 setup, and I've got to say I don't see much of a speed difference. However, finding out about the comparison capabilities made it much easier to choose which model to use based on what I wanted to create. Thank you for the info.
@joppemontezinos2092
@joppemontezinos2092 8 ай бұрын
It may also be noted that I was doing about 80 sampling steps at an upscale value of 2.3.
@memb.
@memb. 8 ай бұрын
@@joppemontezinos2092 You're supposed to use 4 to 10 sampling steps AND a CFG of 1 to 3. It's very fast and yields good results, but it's honestly a godsend for mass-producing images. You can make 100+ images SO FAST that you can just pick the best one and hi-res that with a better config to get the absolute best of the best results.
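A rough sketch of that "mass-produce previews, then redo the keeper" workflow, assuming the same hypothetical diffusers setup as above: seeds are recorded so the chosen preview can be re-run with more steps (an img2img upscale pass, discussed further down the thread, could follow).

```python
# Hedged sketch: cheap LCM previews with recorded seeds, then one higher-effort
# rerun of the keeper. Setup (repo ids, step counts) is assumed, not canonical.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

prompt = "studio portrait of a robot florist, film grain"

# Pass 1: fast previews, one image per seed, 6 steps each.
for seed in range(24):
    g = torch.Generator("cuda").manual_seed(seed)
    img = pipe(prompt, num_inference_steps=6, guidance_scale=1.5, generator=g).images[0]
    img.save(f"preview_{seed:03d}.png")

# Pass 2: rerun the seed you liked with more steps and slightly higher guidance.
best_seed = 7  # picked by eye from the previews
g = torch.Generator("cuda").manual_seed(best_seed)
final = pipe(prompt, num_inference_steps=12, guidance_scale=2.0, generator=g).images[0]
final.save("final.png")
```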
@user-cute371
@user-cute371 Ай бұрын
SAME
@bankenichi
@bankenichi 9 ай бұрын
Duuuude, I've been using the SDXL one for a few days and it is a game changer. Didn't know there was one for 1.5, awesome!
@sebastiankamph
@sebastiankamph 9 ай бұрын
Sweet! How have you been liking it for SDXL?
@bankenichi
@bankenichi 9 ай бұрын
@@sebastiankamph It's been amazing honestly, an order of magnitude faster on my 1080, going from 20+ mins with hires fix to about 1.5-3 mins using lcm. I was trying it out with 1.5 yesterday and it's great too, went from about 3 mins to just 30 secs. It honestly makes the experience much more enjoyable for me, being able to see this kind of improvement.
@irotom13
@irotom13 9 ай бұрын
I made the same grid as in the video with 8 sampling steps for 2 cases: 1) with this LoRA and 2) without it / None. The time to generate is basically the same (actually, without this LoRA it is 10 seconds faster), so the speed depends on the sampling steps rather than the LoRA. Quality depends on the sampler, but there are some VERY good results without this LoRA at all for the same number of sampling steps. I can't see much difference in either speed or quality if the right sampler is used.
@sebastiankamph
@sebastiankamph 9 ай бұрын
The point of using this LoRA and sampler is that you can achieve results in 8 steps that would otherwise need 25 or more steps with other samplers. For the best quality, I'd recommend the Comfy route, using the LCM sampler together with the LoRA, as A1111 with another sampler is more of a half-measure atm.
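A quick way to sanity-check that claim is to time the same checkpoint both ways: a conventional sampler at 25 steps versus the LCM-LoRA at 8 steps. The sketch below is an assumption-laden example (repo ids, DPM++ as the baseline scheduler), not the exact setup from the video; absolute numbers will vary by GPU.

```python
# Hedged sketch: timing the same checkpoint with a conventional 25-step sampler
# versus the LCM-LoRA at 8 steps. Repo ids and the DPM++ baseline are assumptions.
import time
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")
prompt = "ancient library lit by candles, detailed oil painting"

def timed(label, **kwargs):
    torch.cuda.synchronize()
    start = time.perf_counter()
    image = pipe(prompt, **kwargs).images[0]
    torch.cuda.synchronize()
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return image

# Baseline: DPM++ style scheduler, 25 steps, normal CFG.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
timed("25 steps, no LCM", num_inference_steps=25, guidance_scale=7.0)

# LCM route: swap the scheduler, attach the LoRA, drop to 8 steps and CFG ~1.5.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
timed("8 steps, LCM-LoRA", num_inference_steps=8, guidance_scale=1.5)
```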
@petec737
@petec737 9 ай бұрын
@@sebastiankamph Let's be honest: nobody uses LCM if they are looking for the best quality. The only people using LCM are the ones with old PCs who want to have some fun poking at a couple of still-unusable 512x512 images. On any high-end graphics card, 8 steps vs 25 steps is only a 1-second difference, no matter the model or sampler used, so something like LCM makes no sense for professional users.
@UHDking
@UHDking Ай бұрын
I am a big fan of yours. Thanks for sharing knowledge in easy-to-follow language, with everything explained in detail, not like others just repeating information that sometimes isn't fully useful. Your stuff is good. You got my like and sub, and a long-time follower. I am one of you as an AI researcher. Thanks very much.
@sebastiankamph
@sebastiankamph Ай бұрын
So nice of you!
@UHDking
@UHDking Ай бұрын
@@sebastiankamph Thanks man. I meant it from the heart, and I've benefited a couple of times from your videos. Good job sharing info like a champ.
@sinisterin5832
@sinisterin5832 7 ай бұрын
My not-so-"potato PC" and my impatience thank you very much; I am your fan. I already passed the information on to my brother, and I'm sure he will be happy too.
@sebastiankamph
@sebastiankamph 7 ай бұрын
Thanks for sharing!
@rycrex7986
@rycrex7986 3 ай бұрын
Just started a week ago and I've been loving it. Switching to Comfy.
@pavi013
@pavi013 7 ай бұрын
This helped a lot; I don't want to wait 1 hour to generate one image 😅
@VooDooEf
@VooDooEf 9 ай бұрын
Damn, this is the best SD video this year. I can't believe how fast you can work with it now! Nvidia can throw their TensorRT extension in the bin!
@duskairable
@duskairable 9 ай бұрын
I've tried this with my ancient GPU, a GTX 970 😂. Generating a 512x768, CFG 7, 30-step image usually takes 42 seconds. With LCM it takes only 7 seconds, and the result is comparatively good 👍
@jibcot8541
@jibcot8541 9 ай бұрын
You should be able to do it in 4-8 steps with LCM; my 3090 can make a 512x512 image in 0.25 seconds.
@eukaryote-prime
@eukaryote-prime 9 ай бұрын
980ti user here. I feel your pain.
@TheMaxvin
@TheMaxvin 9 ай бұрын
I have tried a GTX 1080 Ti generating 768x768, CFG 8, 30 steps: with or without LCM, the same result, 30 sec. ((((
@petec737
@petec737 9 ай бұрын
@@jibcot8541 Which 100% looks like trash and is totally unusable. Not sure what's up with people wanting to brag about being able to generate tiny (512x512 px) low-quality images in a second.
@mehmetonurlu
@mehmetonurlu 8 ай бұрын
I'm wondering what would happen if I used this with a Vega 8. Hope it helps.
@marhensa
@marhensa 9 ай бұрын
I found that the picture quality is worse ONLY when applied to custom SDXL models; when applied to vanilla SDXL or SDXL SSD-1B, it's roughly on par in quality yet SUPER FAST!!! (Tested in ComfyUI, LCM SSD-1B, LCM sampler, 8 steps.)
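For reference, the SSD-1B plus LCM-LoRA combination mentioned here looks something like this in diffusers; both repo ids ("segmind/SSD-1B" and "latent-consistency/lcm-lora-ssd-1b") are assumptions, so check the Hugging Face hub for the exact names.

```python
# Hedged sketch: SSD-1B with its LCM-LoRA at 8 steps, as in the comment above.
# Both repo ids are assumptions; check the Hugging Face hub for the exact names.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "segmind/SSD-1B", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-ssd-1b")

image = pipe(
    "a lighthouse on a cliff at dawn, wide angle",
    num_inference_steps=8,
    guidance_scale=1.5,
    height=1024, width=1024,
).images[0]
image.save("ssd1b_lcm.png")
```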
@taiconan8857
@taiconan8857 9 ай бұрын
Useful info, thanks! Unfortunately, in my case, I'm often on custom checkpoints, but the methodology could be instrumental in making future iterations faster. 👏🤩
@marhensa
@marhensa 9 ай бұрын
@@taiconan8857 Yeah, it's surely useful for helping AnimateDiff, which needs many frames to generate.
@taiconan8857
@taiconan8857 9 ай бұрын
@@marhensa OH! I HADN'T EVEN CONSIDERED THAT YET! You're totally right! I'ma definitely need to revisit this when I'm at that stage. 👌😲
@CoconutPete
@CoconutPete 6 ай бұрын
Update: I wasn't able to get it to work, then found a post on Reddit which suggested deleting the "cache.json" file in the webui directory. I renamed mine to cache2.json (just in case) and sure enough, the Lora tab was showing ssd-1b in it, and I noticed speed improvements. It must be a bug of some sort, as the cache.json file showed up again and everything seems to be working.
@sebastiankamph
@sebastiankamph 6 ай бұрын
Happy you got it working!
@ArchangelAries
@ArchangelAries 9 ай бұрын
Dunno if it's because I'm on an AMD Windows system and on the DirectML branch of A1111, but I don't seem to get any improvement in speed with this LoRA, and even with the weight reduced to 0.5 it still seems like all it does is reduce generation quality. Oh well. Thanks for sharing, Seb, still love your content! Edit: Finally got it to work; my generations went from 38 sec/image with hires fix and ADetailer inpainting all the way down to 12 sec/image. The only downside is that the quality is worse than I'd prefer, most likely because the required low CFG scale basically ignores negative prompts and embeddings.
@Mowgi
@Mowgi 9 ай бұрын
LCM's are what we call Rice Crispy Treats in Australia. Used to love when Mum put them in my lunch box for school 🤣
@2008spoonman
@2008spoonman 7 ай бұрын
FYI: install the animatediff extension in A1111, this will automatically install the LCM sampler.
@timhagen1426
@timhagen1426 9 ай бұрын
Doesn't work
@ScorgeRudess
@ScorgeRudess 9 ай бұрын
This is amazing!!! Thanks!
@sebastiankamph
@sebastiankamph 9 ай бұрын
Glad you like it! 😊🌟
@ulamss5
@ulamss5 9 ай бұрын
Thanks for the mega grid comparison. Most of the comparisons so far probably use DPM 2M Karras, the long-time best performer, which is seemingly terrible with LCM. I'll let the community do a few more evaluations with samplers and CFG before switching over.
@alderdean6112
@alderdean6112 9 ай бұрын
The SDXL LoRA does not seem to work for me. My RTX 3060 with 12 GB VRAM gets 100% loaded and freezes the whole system for several seconds on each iteration. The resulting images are usually a jumble of pixels. The SD 1.5 LoRA, however, does seem to somewhat accelerate things for SD 1.5-trained models.
@aegisgfx
@aegisgfx 9 ай бұрын
Wow so instead of creating a hundred images every day that nobody cares about I can create 10,000 images a day that nobody cares about, fantastic!!!
@politicalpatterns
@politicalpatterns 9 ай бұрын
Why are you so salty over this? It's a tool that some people use in their workflow. 😂
@daan3898
@daan3898 9 ай бұрын
Thanks for the research, will try it out !! :)
@sebastiankamph
@sebastiankamph 9 ай бұрын
Hope you like it!
@gorge.p96
@gorge.p96 9 ай бұрын
Cool video. Thank you
@DJVibeDubstep
@DJVibeDubstep 7 ай бұрын
I'm using the DirectML version because I have an AMD card, so I have to use my CPU, and it's PAINFULLY slow. Will this help with that, or is it only for those using GPUs? I actually have a really decent GPU (RX 5700 XT), but I sadly can't use it, since SD hardly supports AMD.
@LinkL337
@LinkL337 7 ай бұрын
Did you try it? I have an RX 7800 XT and have the same problem. Looking for options to improve rendering performance. AMD released a video with a tutorial, but I haven't tried that yet.
@DJVibeDubstep
@DJVibeDubstep 7 ай бұрын
@@LinkL337 I have not; I just sucked it up and am using the painfully slow CPU way lol. I spent 7+ hours trying all types of things, though, and nothing worked. It seems I literally have to use my CPU.
@april11729_
@april11729_ 6 ай бұрын
My god! It works!!!! Thank you!!
@sebastiankamph
@sebastiankamph 6 ай бұрын
Enjoy!
@hjjubnh
@hjjubnh 9 ай бұрын
In A1111 I don't see any difference in speed; the results are just worse.
@markusblandus
@markusblandus 9 ай бұрын
Any chance you can show how the live webcam setup can be done? Thanks!
@sebastiankamph
@sebastiankamph 9 ай бұрын
For the quickest answer, I'd guide you towards my Discord and ask kiksu himself.
@micbab-vg2mu
@micbab-vg2mu 9 ай бұрын
Amazing!!! Thank you :)
@sebastiankamph
@sebastiankamph 9 ай бұрын
You're very welcome! 🌟😊
@intelligenceservices
@intelligenceservices 8 ай бұрын
I have a 3060 12 GB GPU and was getting VRAM errors with this workflow on XL; the process was rerouted to the CPU, taking 50-70 seconds. I suspected my VRAM was being squatted on by orphan processes. I rebooted and it's now working the way you describe. Thanks.
@CoconutPete
@CoconutPete 6 ай бұрын
I'm confused about getting this working with SSD-1B. I downloaded it, put it in the correct folder, renamed it, and it shows in the add-to-network prompt dropdown, but so far I notice no improvement and the quality seems poor. I keep seeing something about diffusers but am not sure what that is all about. Going back to the drawing board lol
@maikelkat1726
@maikelkat1726 6 ай бұрын
Thanks, but it doesn't make it faster... it's the same speed, 3-4 secs for SDXL with or without the LoRA... any ideas why? I have an old RTX 3090, 8 GB.
@athenalong
@athenalong 9 ай бұрын
HAHAHA 😅 I ::: honestly ::: look forward to the Dad jokes 🤣 Even if I don't have time to watch the entire video when I initially see it, I will watch until the joke and then come back later 😆👏🏾
@sebastiankamph
@sebastiankamph 9 ай бұрын
Hah, glad to hear it! And great that you're coming back too 😅😁
@palax73
@palax73 9 ай бұрын
Thanks bro!
@sebastiankamph
@sebastiankamph 9 ай бұрын
You bet!
@DerXavia
@DerXavia 9 ай бұрын
It's even slower for me, and it looks much, much worse using XL.
@sidejike438
@sidejike438 4 ай бұрын
I already did the --xformers edit. Can I still use this LoRA, or would the quality of the images be affected?
@matthallett4126
@matthallett4126 9 ай бұрын
I've got a 4090 as well, and I cannot reproduce your results in A1111. Will keep trying.
@sebastiankamph
@sebastiankamph 9 ай бұрын
I am running with sdp memory optimization. Similar speed increase as xformers.
@dastpaster
@dastpaster 9 ай бұрын
Strange, I did everything you said, but it took 7 seconds longer to generate.
Without the LoRA: cinematic, techwear car. Steps: 30, Sampler: DPM++ 3M SDE Exponential, CFG scale: 7, Seed: 4128880464, Size: 1024x1024, Model hash: 74dda471cc, Model: realvisxlV20_v20Bakedvae, Version: v1.6.0-400-gf0f100e6. Time taken: 17.2 sec.
With the LoRA: cinematic, techwear car. Steps: 30, Sampler: DPM++ 3M SDE Exponential, CFG scale: 7, Seed: 4128880464, Size: 1024x1024, Model hash: 74dda471cc, Model: realvisxlV20_v20Bakedvae, Lora hashes: "lcm-lora-sdxl: 2fa7e8e56b09", Version: v1.6.0-400-gf0f100e6. Time taken: 24.1 sec.
Tried it with another sampler and got a 2-second gain. Apparently it doesn't work well enough on all samplers.
@sebastiankamph
@sebastiankamph 9 ай бұрын
You need to use 8 steps and preferably the LCM sampler.
@cyberprompt
@cyberprompt 9 ай бұрын
Still have to experiment with this more, but wow... zoom! A 960x640 usually takes at least 1.5 minutes (RTX 1080); this is done in seconds. Not quite happy with the detail yet, however. But great for a quick try of a prompt, I guess, until I do more tweaking.
@Christian-iu3lo
@Christian-iu3lo 9 ай бұрын
Lmao, this crashes the crap out of my AMD card. I have a 7800 XT, and it steals all of my VRAM immediately, which forces me to restart.
@user-or2zf9rv6n
@user-or2zf9rv6n 7 ай бұрын
After my first generation, the following generations are much slower. Any idea why this happens and how to avoid it?
@Steamrick
@Steamrick 9 ай бұрын
At a cfg scale of 2, how well does it adhere to complicated prompts? I get that it's amazing for AnimateDiff or real-time applications, but is the quality good enough to replace workflows for image generation?
@sebastiankamph
@sebastiankamph 9 ай бұрын
Probably less than usual. But try shorter prompts and weight them more.
@andreassteinbrecher458
@andreassteinbrecher458 9 ай бұрын
Hey :) Did the KSampler change with the last update? I get errors on all my AnimateDiff workflows since I updated all of ComfyUI. Error occurred when executing KSampler: local variable 'motion_module' referenced before assignment
@sebastiankamph
@sebastiankamph 9 ай бұрын
Hmmmmm, good question 🤔
@keymaker.3d
@keymaker.3d 9 ай бұрын
me, too!
@andreassteinbrecher458
@andreassteinbrecher458 9 ай бұрын
Today I did another UPDATE ALL in ComfyUI, and now AnimateDiff is working fine again :)
@keymaker.3d
@keymaker.3d 9 ай бұрын
@@andreassteinbrecher458 yes,'UPDATE ALL' is the key
@claudiox2183
@claudiox2183 7 ай бұрын
Thank you! It works nicely, both in A1111 and Comfy. But I have a rookie question: I can't save the Comfy workflow explained in the video with the LoRA loader node installed. If I save it as a .JSON file or PNG image, it does not reload...
@ComplexTagret
@ComplexTagret 9 ай бұрын
And how do you manage the weight of the LoRA in that upper menu? If you add the LoRA to the prompt field, it is possible to manage it as <lora:name:weight>.
@N-DOP
@N-DOP 9 ай бұрын
Is there also a way to enhance performance for img2img generations? I selected the LoRA and adjusted the steps and the CFG scale, but the render time is still the same, if not worse. Please help :'D
@TheSparkoi
@TheSparkoi 3 ай бұрын
Hey, do you think we can get more than 0.7 frames per second if you render only 500x500 with a 4090 as hardware?
@cyberprompt
@cyberprompt 9 ай бұрын
Oh, and @sebastiankamph... I almost always laugh at your jokes, even if my wife hates it when I tell them to her. I told her the facial hair one yesterday because I DON'T like facial hair, and she knows that! :)
@sebastiankamph
@sebastiankamph 9 ай бұрын
Hah, I love it! Keep spreading the dad jokes for everyone to enjoy 😊🌟
@unowenwasholo
@unowenwasholo 9 ай бұрын
This is WILD! This ecosystem continues to boggle the mind. There's certainly some amount of "too good to be true" in here, such as the LoRA not playing nice with a lot of samplers, but it's cool nonetheless. Btw, a couple of things I would have liked to see discussed are how this performs with common current settings (i.e. higher steps ~20 / CFG ~5), and on other models, even if just SD 1.5 / SDXL-based ones. Even 15-30 seconds showing a good model vs a bad model that you've found would help. Of course, there's always the whole "try it in your workflow and see how it is for you", but it would be nice to know if I can expect this to work outside of vanilla SD.
@spiritsplice
@spiritsplice 7 ай бұрын
Vladmandic can't even see the files. They won't show up in the list after dropping them in the folder and restarting.
@biggestmattfan28
@biggestmattfan28 2 ай бұрын
Do you know how to make it faster for Pony Diffusion? I don't think this works for Pony models.
@ortizgab
@ortizgab 9 ай бұрын
Hi! Thanks for the lessons, they are great!!! I can't set the sampling steps below 20... Am I missing something?
@stableArtAI
@stableArtAI 3 ай бұрын
OK, first run through the video and I'm very confused: what is the one step that makes it 1000% faster??? Download "1" file?? You started downloading several files, and now I'm so lost...
@ferluisch
@ferluisch 8 ай бұрын
How much faster is it really? A comparison would be nice. Also, could this be used with the new TensorRT?
@user-gq2bq3zf1f
@user-gq2bq3zf1f 8 ай бұрын
Thanks as always! I have an off-topic question: is there any way to make Stable Diffusion not show people but only clothes? I put "no human", "no girl", etc. in the negative prompt and it still shows people.
@_trashcode
@_trashcode 9 ай бұрын
You mentioned AnimateDiff? How can you use LCM with AnimateDiff? Great video, btw.
@ADZIOO
@ADZIOO 9 ай бұрын
Not working for SDXL. Always bad quality. Should it also be 8 steps / CFG scale 1 for SDXL?
@sebastiankamph
@sebastiankamph 9 ай бұрын
Works great for me with the LCM sampler. Not well without it.
@ADZIOO
@ADZIOO 9 ай бұрын
@@sebastiankamph Okay then, now I know. I am on A1111; there is still no patch with the LCM sampler, but at least 1.5 is working with Euler A.
@2008spoonman
@2008spoonman 7 ай бұрын
​@@ADZIOOinstall the animatediff extension, this will automatically install the LCM sampler.
@AndyHTu
@AndyHTu 9 ай бұрын
Does this trick only work with the Dreamshaper model, or would it work on any model?
@jordanbrock4142
@jordanbrock4142 5 ай бұрын
I'm kinda new, but isn't it a problem if I have to use this LoRA? I mean, I can only use one LoRA at a time, right? And if I'm using this one it means I can't use another, which sort of defeats the purpose...
@zuriel4783
@zuriel4783 4 ай бұрын
You can use as many LoRAs at a time as you like. There could possibly be a limit that I'm not aware of, but I know for sure you can use at least 4 or 5 at a time.
@SupremacyGamesYT
@SupremacyGamesYT 9 ай бұрын
I assumed this video would be about the RT in A1111; what's going on with that, is it out yet? I've been on a break from AI since March.
@flareonspotify
@flareonspotify 9 ай бұрын
I have an M1 MacBook Air with 16 GB unified memory; I wonder how it would run on it.
@user-ch8ku5bk1w
@user-ch8ku5bk1w 9 ай бұрын
Hello Sebastian, love your videos. Can you also make a video on how to use two character LoRAs in image-to-image generation without inpainting? Thank you.
@povang
@povang 9 ай бұрын
Not optimized for A1111 yet. I'm using a custom checkpoint, A1111, 1.5, same settings as in the video. I'm on a 1080 Ti; the generation speeds are faster, but the image quality is worse.
@stanTrX
@stanTrX 8 ай бұрын
Thanks, but mine is still very, very slow... what else can I do?
@thegreatujo
@thegreatujo 9 ай бұрын
How do I make the interface look like yours? At the top, where you select the model/checkpoint, you have two more dropdowns to the right, called SD_VAE and Add Network to Prompt. If somebody other than the video creator has the answer, feel free to reply.
@drabodows
@drabodows 9 ай бұрын
Watch the video, he shows you how...
@xyzxyz324
@xyzxyz324 9 ай бұрын
01:38 - 01:57
@juschu85
@juschu85 9 ай бұрын
The video title is wrong. 10 times faster is 900% faster. The percentage is always 100 points lower than you would intuitively expect from the factor, just like 50% more is 1.5 times as much and 100% more is 2 times as much.
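The arithmetic behind the nitpick, written out:

```latex
% "k times as fast", expressed as "percent faster":
\text{percent faster} = (k - 1) \times 100\%, \qquad
k = 10 \;\Rightarrow\; 900\%\ \text{faster} = 1000\%\ \text{as fast}.
```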
@Jammy1up
@Jammy1up 9 ай бұрын
Well, 900% doesn't sound as cool. Probably not worth nitpicking here; it's not like it's misleading or anything lol.
@LewGiDi
@LewGiDi 9 ай бұрын
@@Jammy1up In a Krita tutorial I saw "edit 900x faster" in the thumbnail.
@amafuji
@amafuji 9 ай бұрын
900% faster = 1000% as fast
@ronbere
@ronbere 9 ай бұрын
As always.. 😂
@arnavkumar7970
@arnavkumar7970 9 ай бұрын
He is probably asian
@henrischomacker6097
@henrischomacker6097 9 ай бұрын
Hmm... why is it working for you and not for a lot of us in automatic1111?
* Downloaded and renamed both LoRAs and put them into their Lora directory
* Enabled sd_lora in the User Interface options in the main UI
* Reloaded the UI
* Updated automatic1111 completely, with all extensions
* Restarted automatic1111 (ORIGINAL)
* The LCM LoRAs do NOT appear in the Lora tab gallery, only in the unusable dropdown list if you have a lot of LoRAs
* Tried all my models AND samplers for 1.5 and XL, all with really bad results at 8 sampling steps
My options in the main UI (like the "Add network to prompt" dropdown) are shown in the left column under CFG scale, seed, etc. Are you using a different version of automatic1111, or is there something else that has to be enabled that a lot of us maybe don't have?
@jonathaningram8157
@jonathaningram8157 9 ай бұрын
I also get very bad results.
@MatichekYoutube
@MatichekYoutube 9 ай бұрын
Testing LCM on Stable Diffusion: it seems that img2img LCM and vid2vid have an error: TypeError: slice indices must be integers or None or have an __index__ method
@marcus_ohreallyus
@marcus_ohreallyus 9 ай бұрын
Is this LoRA affecting the look or style of the artwork, other than speeding it up? If it changes quality for the worse, I would not see the point of using it, because SD is pretty fast as it is.
@olvaddeepfake
@olvaddeepfake 9 ай бұрын
I don't have the option to add the LoRA setting to the UI.
@ragnarmarnikulasson3626
@ragnarmarnikulasson3626 9 ай бұрын
Tried this with SDXL with no good results. SD v1.5 worked great, though. Any ideas? I was using sd_xl_base_1.0.safetensors [31e35c80fc] with the lcm-lora-sdxl on a Mac M1, if that makes any difference.
@ragnarmarnikulasson3626
@ragnarmarnikulasson3626 9 ай бұрын
figured it out. I forgot to turn up the resolution :D lol
@victorvaltchev42
@victorvaltchev42 9 ай бұрын
Great video. What I don't get is why the CFG needs to be so low?
@NamikMamedov
@NamikMamedov 9 ай бұрын
How do you make a combined image like yours, with all the generation results in one grid with methods and samplers?
@sebastiankamph
@sebastiankamph 9 ай бұрын
Use the XYZ plot under Script at the bottom. You can see my settings in the video.
@PerChristianFrankplads
@PerChristianFrankplads 9 ай бұрын
Will this work on Apple silicon like M1?
@sebastiankamph
@sebastiankamph 9 ай бұрын
Actually, Apple M1 reached the biggest speed improvements (10x). I haven't tested it myself, but the claims seem to be solid.
@Ekkivok
@Ekkivok 9 ай бұрын
Hmmm, for SDXL the result is a total mess :D It's like the CFG is at 30 and the steps are at 1 xD
@sebastiankamph
@sebastiankamph 9 ай бұрын
Did you use the LCM sampler? Without it, it's not great.
@jibcot8541
@jibcot8541 9 ай бұрын
It does work for SDXL; use the "Euler a" sampler, a CFG of 1-2, and 4-8 steps.
@Ekkivok
@Ekkivok 9 ай бұрын
@@sebastiankamph Yes, I activated sd_lora in A1111, because I use SDXL in A1111, and I tried... and... it was a massacre. But I use 1.5 with Vlad (SD.Next), and the problem there is that sd_lora is not appearing :/
@Ekkivok
@Ekkivok 9 ай бұрын
@@jibcot8541 Already used those settings, same problem...
@clay6440
@clay6440 3 ай бұрын
your link for civitai is no longer working
@metanulski
@metanulski 9 ай бұрын
I am confused. my pictures look worse using this :-(
@sebastiankamph
@sebastiankamph 9 ай бұрын
Make sure to use the LCM sampler in Comfy for best results.
@metanulski
@metanulski 9 ай бұрын
@@sebastiankamph I used Auto1111. I put the 1.5 LoRA in the Lora folder, loaded a 1.5 model, added the LoRA to the prompt, and set the steps to 8 with Euler. The result looks worse than without the LoRA.
@metanulski
@metanulski 9 ай бұрын
I did not use the LoRA dropdown like you did. Is that a must?
@sebastiankamph
@sebastiankamph 9 ай бұрын
Not at all. Just an easy way of using it. But it limits the use of weights.@@metanulski
@metanulski
@metanulski 9 ай бұрын
@@sebastiankamph thanks. will try again today. :-)
@fjccommish
@fjccommish 6 ай бұрын
I used the LCM sampler in A1111; the results were awful.
@consig1iere294
@consig1iere294 9 ай бұрын
I am super confused. When I go to download the LCM model for SDXL, are we downloading the "pytorch_lora_weights.safetensors" file? I did that and used it as a LoRA, and it is stuck! I am using an RTX 4090.
@sebastiankamph
@sebastiankamph 9 ай бұрын
Yes! One for 1.5 and one for SDXL. Rename them so you know which is which, and put them in the Loras folder.
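If you prefer to script the download-and-rename step, something like the sketch below works; the repo ids and the local webui folder path are assumptions, so adjust them to your install.

```python
# Hedged sketch: fetch both LCM-LoRA files and rename them into an A1111 Lora
# folder. Repo ids and the webui path are assumptions; adjust to your install.
from pathlib import Path
from shutil import copyfile
from huggingface_hub import hf_hub_download

lora_dir = Path("stable-diffusion-webui/models/Lora")  # assumed A1111 layout
lora_dir.mkdir(parents=True, exist_ok=True)

repos = {
    "latent-consistency/lcm-lora-sdv1-5": "lcm-lora-sd15.safetensors",
    "latent-consistency/lcm-lora-sdxl": "lcm-lora-sdxl.safetensors",
}
for repo_id, new_name in repos.items():
    # Both repos appear to ship the weights as pytorch_lora_weights.safetensors,
    # hence the rename so the two files stay distinguishable.
    src = hf_hub_download(repo_id, filename="pytorch_lora_weights.safetensors")
    copyfile(src, lora_dir / new_name)
    print("saved", lora_dir / new_name)
```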
@_gr1nchh
@_gr1nchh 9 ай бұрын
I'm using LCM in ComfyUI, but any time I go above 512x768 in resolution, it just craps out and starts adding all types of different things like double heads, three arms, etc., but it doesn't do that at all at 512x768. I don't change any other settings besides resolution, so I have no idea why it's doing it. LCM gives good results in 4 steps, which is great for my 4 GB 1050 Ti. I can generate an 850x1200 image in 40 seconds with it, so there's no need to even upscale (upscaling tacks on almost another 2 minutes, since I prefer to use USD Upscale). I may have to try Euler with this instead.
@jonathaningram8157
@jonathaningram8157 9 ай бұрын
It's usual for SD 1.5 to do the double heads and such if you go above 512x512; that's why you have to use the hires fix.
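The hires-fix idea in the reply above can be reproduced outside the webui as a two-pass generate-then-refine loop. The sketch below is an assumption-heavy diffusers version (repo ids, sizes, and the 0.45 strength are illustrative), not the webui's exact implementation.

```python
# Hedged sketch: a "hires fix" style two-pass with the LCM-LoRA. Generate at a
# native SD 1.5 size, then upscale and refine with a low-strength img2img pass.
# Repo ids, sizes and the 0.45 strength are illustrative assumptions.
import torch
from diffusers import AutoPipelineForImage2Image, DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

prompt = "knight in ornate armor, forest clearing, volumetric light"
base = pipe(prompt, num_inference_steps=8, guidance_scale=1.5,
            height=768, width=512).images[0]

# Reuse the same weights for an img2img refinement at ~1.6x the size.
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
upscaled = base.resize((832, 1248))  # PIL resize to the target resolution
final = img2img(prompt, image=upscaled, strength=0.45,
                num_inference_steps=8, guidance_scale=1.5).images[0]
final.save("hires_fix_lcm.png")
```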
@_gr1nchh
@_gr1nchh 9 ай бұрын
@@jonathaningram8157 I'm not sure how to use hires fix in comfyui. I'll look it up, thanks.
@_trashcode
@_trashcode 9 ай бұрын
I would like to find a way to use this with Deforum and ControlNet. Does anybody have an idea how to make it work in automatic1111?
@Rasukix
@Rasukix 9 ай бұрын
So is this for SDXL only, or will the 1.5 LoRA do the same thing?
@sebastiankamph
@sebastiankamph 9 ай бұрын
Both! ☺️
@Wunderpuuuus
@Wunderpuuuus 9 ай бұрын
I am seeing a lot of ComfyUI and Automatic1111. Is there an advantage to using one over the other? Is one better at "A" and another at "B"?
@jonathaningram8157
@jonathaningram8157 9 ай бұрын
It's a very different philosophy. I would recommend automatic1111 for beginners and also for flexibility. ComfyUI, in my opinion, is more specialized, but you don't have as much creative power (inpainting, for instance, is quite annoying to set up). I tried ComfyUI and I'm back to automatic1111; it gives me the best results (also, I kind of lost my node setup for ComfyUI and it's a pain to redo).
@Wunderpuuuus
@Wunderpuuuus 9 ай бұрын
@@jonathaningram8157 Thank you! I have also been using Automatic1111, but I saw so many videos for ComfyUI that I thought I'd ask. Thanks for the response!
@jonathaningram8157
@jonathaningram8157 9 ай бұрын
It gives me trash results with a lot of noise no matter what sampler I choose, even with a low CFG.
@zahrajp2223
@zahrajp2223 8 ай бұрын
How can I use it with Fooocus?
@the_smad
@the_smad 9 ай бұрын
Need to try this on my GTX 1060. Yesterday, with xformers and medvram, it took 30 minutes to do a single image with SDXL and no refiner.
@sebastiankamph
@sebastiankamph 9 ай бұрын
Let me know what speed improvements you get 😊
@alexvovsu675
@alexvovsu675 9 ай бұрын
Is it possible to do this on Apple Silicon (M2)? I tried, but have some issues.
@davoodice
@davoodice 3 ай бұрын
Unfortunately, nothing changed for me.
@bladechild2449
@bladechild2449 9 ай бұрын
I played with it for a while and decided the quality was vastly subpar compared to what you'd get using better samplers and schedulers.
@sebastiankamph
@sebastiankamph 9 ай бұрын
I would indeed say it's a trade-off. I wouldn't call it vastly subpar with the LCM sampler and some fine-tuned settings. This is a good step in the right direction. If we had bashed on Stable Diffusion on day 1, we wouldn't be where we are today. This is a fantastic step forward, and these ideas can be developed further!
@BabylonBaller
@BabylonBaller 9 ай бұрын
That was my hunch as well. No point in being able to generate a ton of garbage images just for bragging rights.
@sebastiankamph
@sebastiankamph 9 ай бұрын
The images you can get with the LCM LoRA and sampler are in no way garbage. Run it in Comfy today and you'll probably be amazed by the results at that speed @@BabylonBaller
@BabylonBaller
@BabylonBaller 9 ай бұрын
@@sebastiankamph cool. Will check it out.
@wholeness
@wholeness 9 ай бұрын
It can only get better from here; even an idiot could see that. Haven't you learned anything?
@FearfulEntertainment
@FearfulEntertainment 6 ай бұрын
Does having A1111 installed on an HDD vs. an SSD matter?
@scarekrow1264
@scarekrow1264 5 ай бұрын
Absolutely, an SSD is way faster.
@sinanisler1
@sinanisler1 9 ай бұрын
SDXL doesn't work, not sure why. I probably need the latest pip packages. Will test again later.
@sebastiankamph
@sebastiankamph 9 ай бұрын
You need LCM sampler for that.
@ArcaneRealities
@ArcaneRealities 9 ай бұрын
Can this be done with animation? AnimateDiff or video-to-video? Not sure I am setting it up right in Comfy.
@sebastiankamph
@sebastiankamph 9 ай бұрын
Yes!
@elowine
@elowine 9 ай бұрын
@@sebastiankamph I tried it too, but I only get "weight" errors and noise. The creator of AnimateDiff seems to be working on a fix; not sure why some people claim it works for them?
@sebastiankamph
@sebastiankamph 9 ай бұрын
@@elowine I used it just a few hours ago and it worked OK. Not amazing, but OK.
@elowine
@elowine 9 ай бұрын
@@sebastiankamph Ah nice, thanks for checking. Maybe an issue with certain GPU's
@peacetoall1858
@peacetoall1858 9 ай бұрын
Newbie question - would this speed up image generation on my Nvidia GTX 1060 gaming laptop with 4GB VRAM?
@user-jw9kg5rt4d
@user-jw9kg5rt4d 9 ай бұрын
Yup, it should speed up image generation on any platform, so long as you make use of the lower step "requirement" for a decent image.
@peacetoall1858
@peacetoall1858 9 ай бұрын
@@user-jw9kg5rt4d That's awesome. Thanks!
@luxecutor
@luxecutor 9 ай бұрын
Does the LCM model work only with SDXL, and not SD 1.5 based models?
@sebastiankamph
@sebastiankamph 9 ай бұрын
This is available for both. Works best in Comfy atm.
@luxecutor
@luxecutor 9 ай бұрын
@@sebastiankamph Thank you. I look forward to trying it out. Still haven't taken the plunge on comfy yet. I really need to take some time and get it set up.
@khalifarmili1256
@khalifarmili1256 Ай бұрын
can this work on SD 3 ?
@tuhinbiswas98
@tuhinbiswas98 8 ай бұрын
Will this work with Intel Arc???
@donschannel9310
@donschannel9310 8 ай бұрын
Mine is not even generating any pictures.