AMD GPUs are screaming fast at Stable Diffusion! How to install Automatic1111 on Windows with AMD

40,074 views

FE-Engineer

9 months ago

Update March 2024 -- better way to do this
• March 2024 - Stable Di...
Alternatives for windows
Shark - • Install Stable Diffusi...
ComfyUI - • AMD GPU + Windows + Co...
Getting Stable Diffusion running on AMD GPUs used to be pretty complicated. It is so much easier now, and you can get amazing performance out of your AMD GPU!
Download the latest AMD drivers!
Follow this guide:
community.amd.com/t5/ai/updat...
Install Git for Windows
Install Miniconda for Windows (add the directory to your PATH!)
Open the Miniconda command prompt
conda create --name Automatic1111_olive python=3.10.6
conda activate Automatic1111_olive
git clone github.com/lshqqytiger/stable...
cd stable-diffusion-webui-directml
git submodule update --init --recursive
webui.bat --onnx --backend directml
If you get an error about "socket_options"
venv\Scripts\activate
pip install httpx==0.24.1
Great models to use:
prompthero/openjourney
Lykon/DreamShaper
If looking for models on Hugging Face...
they need to support text-to-image
under Libraries, check ONNX
Download the model from the ONNX tab
Then go to the Olive tab; inside Olive, use "Optimize ONNX model"
When optimizing, the ONNX model ID is the same one you used to download
Change the input and output folder names to match the location the model downloaded to.
Optimization takes a while!
Come back and I will have some other videos about tips and tricks for getting good results!

Comments: 562
@FE-Engineer 7 months ago
Converting civitai models to ONNX -> kzfaq.info/get/bejne/maqinNV22dOpoY0.html
@nomanqureshi1357 7 months ago
Thank you, I was just looking for it 😍
@_JustCallMeRex_ 7 months ago
Hello. I would like to ask something regarding the installation process, at the point where it begins creating the venv folder in Stable Diffusion. I have an AMD graphics card, specifically an RX 580. I accidentally updated Stable Diffusion by adding the git pull command to the webui text file, and it broke Stable Diffusion because apparently it had installed torch version 2.0.1. Now I have tried deleting everything and starting fresh by following your guide, but for some reason it keeps installing torch version 2.0.1. How do I prevent this from happening? Is there any way to specify that it install torch 2.0.0 again? Thank you.
@OneTimePainter 7 months ago
Finally a tutorial that makes sense and doesn't reference 3 other unnamed videos. Thank you!
@FE-Engineer 7 months ago
Glad you liked it. I try to boil things down and go start to finish completing a task.
@user-km5to9np3r 6 months ago
I know which YouTuber you're referring to HAHAHA
@CahabaCryptid 8 months ago
This new process is significantly easier to get SD running on AMD GPUs than it was even 6 months ago. Thanks for the video!
@FE-Engineer 8 months ago
You are welcome! And I agree. It is a lot easier than before. And with ROCm on Linux you get to do everything. Hopefully they will finish getting ROCm onto windows.
@dumiicris2694 3 months ago
@@FE-Engineer Is the VRAM requirement on AMD as high as before, or comparable with Nvidia now?
@bhaveshsonar7558 13 days ago
@@dumiicris2694 VRAM doesn't work like that
@dumiicris2694 13 days ago
@@bhaveshsonar7558 Are you saying it occupies fewer bytes? You're talking about speed, that's why you're saying that. And yeah, VRAM has a 192-, 256-, or 512-bit bus, but it still works the same: imagine that instead of 64 bits, of which you need 2 bytes, the video card needs the whole line. It works the same, but because of the clock technology it needs more bytes, since that's how VRAM works. RAM needs a different driver to be used as VRAM, so it has to be bigger so it does not split the line due to the slower speed.
@LadyIno 7 months ago
I'm so gonna try this when I'm home. Just recently I tried running stable diffusion on my xtx (took me half an evening to set it up) and was immediately frustrated how slow everything was. It took around 10 minutes to create 4 batches. I'm a total beginner when it comes to ai art, but your guide is very well explained. I think I can copy your homework 😅 ty for the video!
@LadyIno 7 months ago
Quick update: This worked perfectly! I can create 4 batches in less than half a minute. Sir, you are a genius. Thanks so much ❤
@FE-Engineer 7 months ago
🙃 I’m glad it helped and worked without issue. As I state in the video. There are a lot of things like inpainting that do not in fact work appropriately. Right now unfortunately to get “everything” you really need to run it in Linux. But full windows support with ROCm should be coming soon ish. So hopefully when you get to the point of wanting the other pieces hopefully ROCm will work on windows and switching over should be easy! Have fun! And thank you for watching and the kind words!
@NicoPlayGames96 8 months ago
Dude, thank you so much, you helped me a lot. I'm from Germany and this video was still very understandable, and thanks to you I can now have fun with Stable Diffusion :)
@FE-Engineer 8 months ago
You are very welcome! I’m glad it helped! Next tutorial is for running with Rocm on Ubuntu.
@Lumpsack 9 months ago
Thanks so much for this. I followed the guide, got the same error, and fixed it with the text in the description - top man, this has saved me from having (yet another) fight with Linux :) Also, top tip on being patient, not my strong suit; thankfully for her, my wife's at work, so I had to just pester the kids instead! Now, I too am on the 7900 XTX and not getting quite the same speeds, around 17 it/s, but still a big jump up, so thank you and I look forward to more of your vids. Incidentally, the nice thing here too is not seeing GPU RAM perma-maxed!
@FE-Engineer 9 months ago
Yea, with previous runs a few months or a year ago, the RAM was always maxed out and would just randomly go "out of VRAM", which drove me crazy, having to constantly kill and restart if I made one mistake with a button. Glad it helped! And 17 it/s is still really fast overall. That's still 100 steps in under 10 seconds easily, and probably only about 7 seconds.
@FE-Engineer 9 months ago
For getting up to 20 iterations per second: just a thought, you might consider undervolting your GPU slightly, like -10 or something. I think mine is at -10.
@Lumpsack 9 months ago
@@FE-Engineer That's cool, I'll take the slightly slower speed, but thanks - I get the difference now.
@2ndGear 4 months ago
All these other tutorials had me installing Python from GitHub for my AMD GPU. I did not realize there was a tutorial on the AMD site itself for A1111! Well, time to start over and do it your way. Radeon 6600 XT, and all I get for speed is 2 it/s while you're getting 20+. I have to start over; thanks for your tutorials!
@LeLeader00 9 months ago
Very good video. I was having trouble installing SD on my AMD PC, thank you
@FE-Engineer 9 months ago
I honestly got tired of trying to get it up and running, coming back to it being entirely broken, and figuring out how to get it back up. I figured others might appreciate skipping the junk (hopefully) and having a straightforward guide to just get it up and running! I'm super glad it helped, and hopefully it was easy and got you up and running quickly!
@leandrovargas615 9 months ago
Thank you bro for the fix!!! A big hug!!!
@FE-Engineer 9 months ago
Glad it helped!
@xIndustrialShadoWx 8 months ago
Thanks man!! Questions: How do you update the repository safely, keeping all your models and extensions? Also, how do you reset the entire environment if things go tits up?
@FE-Engineer 8 months ago
Just git pull to update; the folders for models and stuff should not have any problems. To reset the entire environment, deleting the venv folder will blow away the virtual environment stuff. Then you just need to reinstall all of those tools to get back to a hopefully working environment. In really bad cases you can of course move or copy your models and delete everything, but that would be if you were really having problems that you could not get working properly.
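A minimal sketch of that reset, assuming the default repo layout (only the venv folder is removed; models and outputs are untouched):

```python
import shutil
from pathlib import Path

def reset_venv(repo_dir: str) -> bool:
    """Delete the venv folder so webui.bat rebuilds the environment on the
    next run. Models, outputs, and extensions folders are left alone.
    Returns True once no venv folder remains."""
    venv = Path(repo_dir) / "venv"
    if venv.exists():
        shutil.rmtree(venv)
    return not venv.exists()
```

After this, re-running `webui.bat --onnx --backend directml` recreates the environment from scratch.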
@jinxPad 8 months ago
Great guide, I've been looking at getting back into some SD fun. One question regarding downloading the model from the ONNX tab: does it have to be from Hugging Face, or can you download from other sources like Civitai?
@FE-Engineer 8 months ago
Yep you can optimize civitai models. I have a video about that.
@shakewait7612 8 months ago
Excited for new content! Well done! How do I relaunch the Stable Diffusion URL without reinstalling all over again? Also, about the sampling methods - where are the other samplers like DPM++ 2M Karras?
@FE-Engineer 8 months ago
Unfortunately right now, I don't know how many improvements will be made with this repo. It is still actively worked on, but as you can see, some of the samplers are simply missing. I also saw the person who built and maintains this repository is also helping out with SD.Next. I tried SD.Next...and did not find it working as well as I would like, but it is a bit simpler in some respects.
@shakewait7612 8 months ago
Like you I have a 7900 XTX and I REALLY enjoy the speed boost, thanks! Just wish there were more working features inside the UI. You had commented somewhere about the best of both worlds. Excited to see what's in store @FE-Engineer
@FE-Engineer 8 months ago
Same here. As a quick update. I tried installing Rocm on windows subsystem for Linux. Cool. It worked. But you essentially can’t get the graphics card passed through or at least…not really. So then I was like…well what about Ubuntu desktop. Then we at least have a working gui basically. Ran into a lot of problems there. Blew away dual boot Ubuntu desktop. Accidentally wiped an entire hard drive that I was using…and now I’m doing it straight through Ubuntu server. That’s why I have mostly been quiet for a few days. Working through some issues so that hopefully I can get a good clean tutorial up showing something worth seeing!
@el_khanman 8 months ago
@@FE-Engineer please let us know if you have any success with dual booting. I got it working nicely on my first try, then accidentally broke it, and have not been able to get it working again (even after countless fresh reinstalls).
@petrpospisil9193 8 months ago
Thanks a lot for this tutorial, works like a charm! I would like to ask you: do you suggest running command arguments like --medvram? I am using an RX 6600 XT with 8 GB, and although I haven't had enough time to test it, I was not able to generate above 512x512 without any cmd args. Also, do you happen to know if we are able to convert local checkpoint models instead of relying on uploaded models on Hugging Face? Thanks a lot bro!
@FE-Engineer 8 months ago
So about command line arguments. A year ago. Memory was very inefficient. After generating like a single image you would run out of memory so they were basically 100% required. Memory optimizations have improved significantly. So you might be able to get away with not running them at all. But if you find yourself getting out of memory errors I would suggest turning them on and trying it out for a while. Performance will take a bit of a hit but that might be preferable to stopping and restarting the ui if it happens a lot. Also. Working on a video for ckpt and safetensor conversions.
@Kierak 7 months ago
@@FE-Engineer Did you finish your "ckpt and safetensor conversions". I really need it :3
@mgwach 7 months ago
Hey FE-Engineer!! Thank you so much for this tutorial. Glad to see people helping out the AMD crowd. :) I do have a question though.... how come when you run the webui.bat initial setup command you don't get the "Torch is not able to use GPU; add --skip-torch-cuda-test" error? I get that every time I try to install it.
@FE-Engineer 7 months ago
Because before I even get that error, I add directml to the requirements file for pip to install -- I do this specifically because I know this error is coming.
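A sketch of that edit, with assumptions labeled: the file name (requirements_versions.txt in the repo root) and the package name (torch-directml, the DirectML backend for PyTorch) may differ in your fork, so check its README:

```python
from pathlib import Path

def ensure_directml_requirement(req_file: str, package: str = "torch-directml") -> bool:
    """Append the DirectML package to the requirements file if it is not
    already listed, so pip installs it before the CUDA check runs.
    Returns True if the file was modified."""
    path = Path(req_file)
    lines = path.read_text().splitlines()
    if any(line.strip().startswith(package) for line in lines):
        return False  # already present, nothing to do
    path.write_text("\n".join(lines + [package]) + "\n")
    return True
```

You can of course make the same one-line edit by hand in a text editor before running webui.bat.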
@Jyoumon 7 months ago
@@FE-Engineer Mind telling how you do that? Extremely new to this stuff.
@xt_raven8842 7 months ago
I'm getting the same torch error, how can we fix it? @@FE-Engineer
@user-cf1od8jy1q 7 months ago
@@FE-Engineer Really, why don't you talk about it? And how can we do this?
@hobgob321 7 months ago
Hey, did you figure it out? I have the same issue. I tried editing webui-user.bat/sh but I still get the same error @@user-cf1od8jy1q
@uffegeorgsen372 8 months ago
Thanks, friend. I was about to throw away my AMD card, but then I came across your video. Everything worked, I sincerely thank you!
@FE-Engineer 8 months ago
That is fantastic! I am glad it worked! :)
@duladrop4252 8 months ago
Thanks for sharing this Tutorial...
@FE-Engineer 8 months ago
:) you are welcome!
@Andee... 9 months ago
Works so far! However hires fix isn't working at all. Just does nothing. Any idea what that could be? I've made sure to put an upscale model in the correct folder.
@FE-Engineer 9 months ago
Honestly, with this DirectML ONNX + Olive setup, a lot of things don't seem to work appropriately. I'm currently looking at a bunch of alternatives, like using normal A1111 with ROCm, and also using SD.Next, still with DirectML and ONNX. So far I don't see many things that are nearly as fast, though. Still working on it.
@scumcookie 8 months ago
Really great video! I'm wondering though: I upgraded from a 12 GB 3060 to a 7900 XT. Would I need to delete/uninstall SD, Python, and Git and then follow the steps in your video? Or can I just install Mini/Anaconda and go from there?
@FE-Engineer 8 months ago
Python and Git you can keep. Install Miniconda and go from there. Your old version of Stable Diffusion was the Automatic1111 version, not the DirectML version, and with an AMD GPU on Windows you will need the one running DirectML and using ONNX. Alternatively, you can install Shark, or you can run in Linux with ROCm.
@scumcookie 8 months ago
Ok so I need to delete SD and install miniconda and go from there. Thanks for the reply @@FE-Engineer I appreciate it.
@zankares 8 months ago
Tyvm, so helpful
@FE-Engineer 8 months ago
Happy that it helped!
@pnaluigi6344 8 months ago
You are a hero.
@FE-Engineer 8 months ago
Hahaha thank you very much. I hope it helped!
@Kudoxh 7 months ago
So do I always need to download models from Hugging Face AND into the ONNX folder? Does it also work if I simply download a model and place it into models/stable-diffusion? I'm kinda new to this, sorry if it seems like a dumb question. Edit: another question. To start the webui, is it required to start it via Anaconda using "...\webui.bat --onnx --backend directml", or can I simply start it by clicking the webui-user batch file? And if so, I probably need to add --onnx --backend directml to the arguments section...?
@FE-Engineer 7 months ago
Anaconda is required if that is how you set it up (that is how I did it in my video because it significantly reduces problems and provides consistency, plus if you do not use it, and make any type of mistake, it is time consuming to try and fix any of these mistakes) You can download models from just about anywhere. Not every model works 100% of the time due to different ways that people configure and encode some models. See the pinned message on this video to get a better idea of how to convert models from civitai for example.
@sa2bin909 7 months ago
This was the best tutorial I could find at this time for AMD. One question: did you manage to get Stable Diffusion XL models to work? If I put them in the stable diffusion folder inside the models folder, the WebUI does not show them.
@FE-Engineer 7 months ago
Only when using ROCm. I have not tried with shark. But on windows I have not gotten SDXL to work. :-/
@sa2bin909 7 months ago
@@FE-Engineer Me neither; I ended up installing Ubuntu 22.04, it works even better and can use SDXL models
@Code_String 9 months ago
How does this compare to A1111 with ROCm on Linux? I tried to run the Olive optimization on my G15AE's RX6800m but it never picked that up. Was wondering if it's worth going through after getting a simple Ubuntu setup going.
@FE-Engineer 9 months ago
7900xtx just recently got Rocm support. I’m going to try it out and see how they compare. I’m trying to get to that today if I can get enough time.
@bodyswapai 8 months ago
How was it? I am thinking of buying the card but there aren't fair comparisons on the internet. All of them test the XTX on DirectML, but I am looking for comparisons with Linux ROCm. @@FE-Engineer
@athrunsblade846 8 months ago
What is the reason for having far fewer sampling methods on the AMD version? Or is there a way to install more? Thanks for the help :)
@FE-Engineer 8 months ago
I have new videos coming out that should help with this. On the specific question you asked, I'm not sure how much support this DirectML fork of Automatic1111 is receiving these days. I know the person who built it is also helping out with the SD.Next project, hence why I said I'm not entirely sure how much more support this fork is really getting. I hope this information helps. Also, I have new videos coming out about running native with ROCm for AMD cards.
@humansvd3269 8 months ago
Subbed. I had SD installed in the webui without Olive and it was slow. I'm using a 6700 XT with a 7900 XTX on the way, and this made a huge difference in making the best of what I have for now. Is there a way I can launch the SD webui without having to activate SD Olive manually in Miniconda3? As in, create my own BAT file?
@FE-Engineer 8 months ago
Yes, here is a video showing exactly that. Like 2 minutes! kzfaq.info/get/bejne/rLF5pMdmq6qwnmQ.html
@Mr.Kat3 7 months ago
So from my understanding, unless I'm missing something, so far I can't get any of my old Embeddings (Textual Inversions) to work? I am assuming they don't work in this version, which is a huge downside for me. Any info you have on this?
@FE-Engineer 7 months ago
Run ROCm in Linux if you want to be able to do everything. No optimizations or anything in Linux. Just pure regular automatic1111 and everything works.
@Blue_Razor_ 8 months ago
Downloading models using the ONNX tab is super slow, and stops about halfway through. Is there a way I can download the file off of huggingface and just copy and paste it into the ONNX-Olive folder? I tried it with a dreamshaper model I already had downloaded but it didn't recognize it.
@FE-Engineer 8 months ago
I have not had those problems. I’ve found those tabs to be really finicky and easily get messed up. Sorry I can’t be much more help. Keep trying though.
@jakeblargh 7 months ago
How do I optimize safetensors models I've downloaded from CivitAI using this new WebUI?
@FE-Engineer 7 months ago
How to convert civitai models to ONNX! AMD GPU's on windows can use tons of SD models! kzfaq.info/get/bejne/maqinNV22dOpoY0.html
@edzedzify 8 months ago
Awesome, thank you, I have it running. Silly question of the day: if I exit out or reboot my PC, is there a simple way to get it running again?
@FE-Engineer 8 months ago
Several people asked this basically. Here is the simplest way I could find. kzfaq.info/get/bejne/rLF5pMdmq6qwnmQ.html Super short video, check it out, create a batch file that does it all for you automatically.
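The batch file boils down to something like this. A hedged sketch: the env name matches this guide's setup, the repo path is a placeholder you must change, and conda needs to be on your PATH (as the guide's Miniconda install step suggests):

```bat
@echo off
rem One-click launcher: activate the conda env, then start the DirectML webui.
call conda activate Automatic1111_olive
cd /d C:\path\to\stable-diffusion-webui-directml
call webui.bat --onnx --backend directml
pause
```

Save it as something like launch-sd.bat and double-click it after a reboot instead of retyping the commands.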
@SkronkJappleson 8 months ago
Thanks, I got it going a lot faster on my RX 6600 because of this
@FE-Engineer 8 months ago
That’s awesome! Glad to hear it helped!
@thaido1750 8 months ago
How many it/s does your RX 6600 get?
@SkronkJappleson 8 months ago
@@thaido1750 After using it a bit I decided to just use my other machine with an RTX 3060. I could get 2.5 it/s with the 6600 (a little more if I overclocked), and then you have to use their crappier sampling method as well. For comparison, the RTX 3060 gets around 7 it/s without trying to overclock, with xformers installed.
@evetevet7874 7 months ago
@@FE-Engineer Help, I get a "Torch is not able to use GPU" error please
@TrippyRiddimKid 7 months ago
Trying to get this running on a 5600 XT, but no matter what I do I get "Torch is not able to use GPU". I could skip the torch test, but from what I can tell that will just end up using my CPU. I know the 5xxx series can do it, as I've seen others mention it working. Any help?
@FE-Engineer 7 months ago
Yep. Read the video description at the top. It will provide the help you need….
@astraleren 8 months ago
Thank you a lot! Question though: can I also install models from Civitai? Will they also be optimized by ONNX? I will still try it out, but yeah, just in case!
@FE-Engineer 8 months ago
You can definitely try. I had a lot of troubles with optimizing models without errors etc. ultimately I found that generally if I just very carefully downloaded from hugging face and optimized in an exact way it seemed to mostly work. But you can definitely try!
@mehmetonurlu 8 months ago
I am kinda new to all of this. When I close it and reopen the webui-user file, standard Stable Diffusion opens up, not this one. I opened the Anaconda prompt, manually changed all the directories, and pasted the "webui.bat --onnx --backend directml" command, and it works, but is there an easier way?
@FE-Engineer 8 months ago
Yep. I have a video showing how to open and run anaconda all from a single prompt.
@phelix88 8 months ago
Anyone know if training isn't supposed to work on this version? I get an error immediately after it finishes preparing the dataset...
@FE-Engineer 8 months ago
I don't know for sure. On the DirectML version there are a lot of things that do not work appropriately. You can also swap over to Linux, where AMD has the ROCm drivers that work. Or you can wait another 2-3 months or so until AMD hopefully finishes getting ROCm ported over to Windows.
@Sod1es 2 months ago
The ONNX and Olive tabs aren't showing
@diamondlion47 8 months ago
Good vid man, gotta show support for open source non ngreedia ai. Nice punk btw.
@FE-Engineer 8 months ago
Haha thank you. I worked for a crypto company and the designers made punks for everyone who worked there. It’s on some chain, I don’t remember which one though to be honest. And yea. Nvidia cards are good. No doubts there. Their prices are just too high for me to stomach personally. :-/
@LeLeader00 8 months ago
What does it mean 😢 OSError: Cannot load model DreamShaper: model is not cached locally and an error occured while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace above.
@FE-Engineer 8 months ago
It seems to not be able to load the model. Did you download it, then optimize it?
@petrinafilip96 9 months ago
What's considered fast? I do inpainting with batches of 4 pics (so I pick the best one) and it usually takes 3-4 minutes for one batch with an RX 6800.
@FE-Engineer 9 months ago
I would say getting 8-10+ iterations per second is quite fast. Are you using olive optimized models? Are you increasing resolution when you do this? How many steps are you doing? I would expect 6800xt to perform a bit better to be honest.
@shanold7681 8 months ago
Thanks for the video, got it working for me. A little sad I'm getting 15 it/s with my 5800X and 7900 XTX :( But still, 15 is way better than 2!!
@FE-Engineer 7 months ago
Yea. My 7900 XTX can get up to 18 or 19 or so. But most of the time, depending on what else is running on my computer, I tend to get somewhere around 15-17, quickly degrading if I am generating images bigger than 512x512. But even if you compare this to Nvidia GPUs, this is still very fast. Running in Linux will give a lot more features.
@shanold7681 7 months ago
@@FE-Engineer Interesting, I'll have to spin up Linux on my system and give it a shot. Sadly there is a lack of ONNX models, it seems. Or maybe it's a limitation of the Windows version.
@ponyplower5963 8 months ago
Appreciate the video! Is there any way to use models not on Hugging Face? Maybe use a model that was installed via a MEGA link or Civitai?
@FE-Engineer 8 months ago
I’ve tested and run civitai models that I know work.
@FE-Engineer 8 months ago
Look here. How to convert civitai models to ONNX! AMD GPU's on windows can use tons of SD models! kzfaq.info/get/bejne/maqinNV22dOpoY0.html
@mrmorephun 8 months ago
I have a (noobish) question... when I close the program and want to restart Stable Diffusion later, what command should I use?
@FE-Engineer 8 months ago
Check my other videos. I have one specifically about this. I think it is about 3 minutes total. It’s very short.
@_gr1nchh 3 months ago
Any update on this? I just got a 6600 last night (as a test card, I was planning on going with a 2070 super instead for a cheaper price) but I like this card and all of AMD's tools more than nVidia's. If I can get decent results out of this card I'll just keep it. Wondering if there's been any major updates regarding SD on AMD.
@richkell1653 7 months ago
Hi, followed everything, got it running, and downloaded the model you use in the vid. However, I am getting this error: models\ONNX-Olive\DreamShaper\text_encoder\model.onnx failed. File doesn't exist. My text_encoder folder is on my Z: drive and in it are a config.json and a model.safetensors file. Any ideas? Btw, thanks for your work helping us poor AMDers out :)
@richkell1653 7 months ago
Managed to optimize another model and it works perfectly! Jumped from 2-3it/s to 12.36it/s!!! You SIR do ROCK!!!
@FE-Engineer 7 months ago
If it is saying file not found, it means it is looking for a specific file that is not there. Why it thinks there should be a file there is harder to figure out. You might just try optimizing again; during optimization it should put a file there.
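To see which pieces an optimized model folder is actually missing before relaunching, a quick hedged check (the folder layout is assumed from the error message above: each submodule folder such as text_encoder should contain a model.onnx; SDXL and other layouts may differ):

```python
from pathlib import Path

# Submodule folders an Olive-optimized SD 1.x model is assumed to contain.
EXPECTED = ["text_encoder", "unet", "vae_decoder"]

def missing_onnx_parts(model_dir: str) -> list[str]:
    """Return the expected submodules that have no model.onnx file."""
    root = Path(model_dir)
    return [name for name in EXPECTED if not (root / name / "model.onnx").exists()]
```

If a submodule only contains a model.safetensors (as in the comment above), the optimization step never produced the ONNX file for it, so re-running the Olive optimize step is the thing to try.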
@Korodarn 6 months ago
I have a 3080 with 10 GB of RAM, and I've been wanting a 24 GB card. In the perspective of those here, is this whole process worth it as an upgrade from a 3080 to a 7900 XTX? I'm considering saving and going for the 4090, but it's over twice the cost, especially if I go with real costs of available units, including used parts off marketplace, etc. I am a Linux user by default, but I do play VR games in Windows 11, and with UEVR making a lot of Unreal Engine games open on Windows in VR, I'm thinking I'll be spending some more time that way. So I'd ideally want to be able to use ComfyUI, Automatic1111, text-generation-webui, and SillyTavern in both places equally well, like I can today, where I dual boot EndeavourOS and Windows 11.
@FE-Engineer 6 months ago
Is EndeavourOS Linux-based? Sorry, I'm not familiar with it. Having a 3080, I would say upgrading to a 7900 XTX will not be worth it for you unless you plan on doing AI stuff on Linux, or you wait for ROCm to be finished on Windows. Once we get ROCm on Windows, yes, it will be worth it if your goal is to have 24 GB of VRAM. But at the moment, no, changing to AMD to run AI on Windows is not going to be worth it. Not yet at least.
@4MERSAT 8 months ago
Why can't I change the image size in the Optimize tab? I can only select 512.
@FE-Engineer 8 months ago
You do not need to change it in that tab. The models you are optimizing are most likely trained on 512x512 anyway.
@iskiiwizz536 2 months ago
I get the 'launch.py: error: unrecognized arguments: --onnx --backend directml' error at 9:23, even if I put in the two lines of code
@FE-Engineer 2 months ago
Code has been updated. Lots of changes
@hhkl3bhhksm466 5 months ago
Hey, do extensions like ControlNet and AnimateDiff work? Also, can you train models using Kohya at a reasonable speed? And is it possible to convert XL models using Olive to make them compatible with AMD? I'm thinking of buying a 7900 XT because of its 20 GB of VRAM, but I'm concerned that not everything works out of the box. Thanks and have a good one
@FE-Engineer 5 months ago
No they do not. At least I have never gotten them to work except when using rocm in Linux.
@deviwaazaa 7 months ago
Hey there, what if I already have some models downloaded? How would I "optimize" these?
@FE-Engineer 7 months ago
Check out my video about converting civitai models. It explains exactly what you are looking for.
@Namelles_One 8 months ago
Any chance you could list models that work with this? I tried Stable Diffusion XL and always get an "assertion error", so a list of usable models would be very helpful; with a slower connection it's just a waste of time to download and try blindly. Thank you!
@FE-Engineer 8 months ago
I’ll have a new video coming out basically outlining which programs can do what with AMD cards because honestly it is all over the board.
@FE-Engineer 8 months ago
As a quick note. I have not been able to get stable diffusion xl working on this one in windows.
@destructiveideation3784 8 months ago
Any idea how to make this work with Img2Img/Controlnet? I've followed this guide and image generation is indeed wickedly fast, but it seems things like controlnet and adetailer don't work at all; the command line actually doesn't even show a call for either extension during runs.
@FE-Engineer 8 months ago
Right now, as far as I know, to get inpainting and ControlNet you have to run it in Linux. I have guides for making a dual-boot PC and for installing it with ROCm on Linux.
@FE-Engineer 8 months ago
The other option besides a Linux install is basically waiting, probably 2-3 months, until AMD is able to get MIOpen running in Windows so that ROCm works in Windows. At least that is my understanding of the problem as of right now.
@destructiveideation3784 8 months ago
@@FE-Engineer Thanks for responding so quickly! I'll give Linux a try in the interim and see how that goes.
@Tbehr1250 8 months ago
So I was able to get this going... but now how would I run it again without having to redo the steps? New to all of this - I would like a shortcut I could just click on my desktop to run all this.
@FE-Engineer 8 months ago
Check my other videos for the one talking about activating conda and running sd in one script.
@andregamaliel 8 months ago
Hello sir, I notice that the sampling methods list does not include all samplers, like DPM++ 2M Karras for example. How can I enable or add them? Thank you
@FE-Engineer 8 months ago
No way to add it with this fork of Automatic1111. For that sampler you will either need to use Shark on Windows (I have a video) or use ROCm in Linux (I also have a video). Once ROCm is fully supported on Windows (soon?), AMD users can use normal Automatic1111.
@andregamaliel 8 months ago
@@FE-Engineer So pretty much I need to use Linux? Not a problem, to be honest. Can I use any Linux distro or do I need to use Ubuntu? If only Ubuntu, can I use Zorin OS since it's based on Ubuntu?
@16thSD 7 months ago
i got the error "FileNotFoundError: [Errno 2] No such file or directory: 'footprints\\safety_checker_gpu-dml_footprints.json' Time taken: 1 min. 56.9 sec." not sure what i did wrong here....
@FE-Engineer
@FE-Engineer 7 ай бұрын
I have no idea either. The safety checker is usually used when optimizing if I remember correctly. But I have not seen this error.
@CreepyManiacs
@CreepyManiacs 7 ай бұрын
I didn't have the ONNX and Olive tabs, so I just added --onnx to the command-line args and it seems to work XD
@FE-Engineer
@FE-Engineer 7 ай бұрын
Strange. Well if it is working correctly then that is all you can ask for.
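If you go that route, the flags can also live in webui-user.bat so they apply on every launch (a sketch; `--onnx` and `--backend directml` are the flags this fork used at the time of the video):

```shell
@echo off
REM webui-user.bat sketch for the lshqqytiger DirectML fork.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--onnx --backend directml
call webui.bat
```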
@JahonCross
@JahonCross 2 ай бұрын
Is this like a beginner guide to SD? I have an amd gpu and cpu
@void-qy4ov
@void-qy4ov 8 ай бұрын
great tutorial !! thanks Finally it is working. ! Can you please show how to use models from civitai ? and is it possible to use SDXL with this method ?
@FE-Engineer
@FE-Engineer 8 ай бұрын
I tried and was unable to get sdxl working despite trying several different things. Also the refiner simply did not even seem to run at all.
@FE-Engineer
@FE-Engineer 8 ай бұрын
And absolutely. I’ll try to have a video up about using a model from civitai in the next few days
@wwk279
@wwk279 8 ай бұрын
Do ControlNet and extensions such as ADetailer, Roop, and ReActor work well on this AMD SD webui version? Do you get crashes sometimes when generating pictures?
@FE-Engineer
@FE-Engineer 8 ай бұрын
Wow, thanks for the tip. I'll have to try them out. I only get crashes when I put in silly values: if I try to resize by 4x, it will crash on me. But if I am generating at 512x512 I can run endlessly for hours without crashes or out-of-memory errors. So generally no crashes except when I do something kind of silly.
@wwk279
@wwk279 8 ай бұрын
I think it's not the right time for me to upgrade to the RX 7900 XT yet. Your video was great! Keep up the good work. I'm looking forward to your next videos.
@FE-Engineer
@FE-Engineer 8 ай бұрын
Oof. I spent like 5 hours fiddling with ControlNet. I got the right extension, downloaded the models, put them in the right places, and got previews to work. But at the end of the day it just ignores them for me. Are you using ControlNet 1.1? I tried open pose, canny, and numerous others, but it never actually applied them when generating images. And I played with virtually every slider and setting: weight, preference, inpainting, txt2img, img2img, etc.
@Drunkslav_Yugoslavich
@Drunkslav_Yugoslavich 6 ай бұрын
Is there any way to make the main folder not on C:? conda create --prefix /path/to/directory makes a directory in the needed path, but when I do git clone it just downloads everything to my user folder on C: :/
@FE-Engineer
@FE-Engineer 6 ай бұрын
Go to the directory that you want to clone the repo in. Then do git clone there.
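Concretely, from the Miniconda prompt that might look like this (the drive letter and folder are assumptions; the repo name matches the fork used in this video):

```shell
REM Clone onto D: instead of the C: user folder (sketch).
D:
mkdir D:\sd
cd /d D:\sd
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml
cd stable-diffusion-webui-directml
```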
@Drunkslav_Yugoslavich
@Drunkslav_Yugoslavich 6 ай бұрын
@@FE-Engineer I can do that only through cmd, not conda. Sorry, I'm not really into any kind of programming, so it's kinda hard to me. I cloned it through cmd and done everything you showed in the vid next, but it just gives me "Torch is not able to use GPU" and gives me the command to ignore CUDA
@TokkSickk
@TokkSickk 6 ай бұрын
@@Drunkslav_Yugoslavich do --use-directml not --backend directml
@Drunkslav_Yugoslavich
@Drunkslav_Yugoslavich 6 ай бұрын
@@TokkSickk Do not work in command line for webui-user.bat, "launch.py: error: unrecognized arguments: not --backend-directml"
@TokkSickk
@TokkSickk 6 ай бұрын
Huh? the current working directml is --use-directml not the backend one. @@Drunkslav_Yugoslavich
@rwarren58
@rwarren58 4 ай бұрын
Is it possible to use SDXL on my AMD Stable Diffusion? If so can you point to the right way?
@FE-Engineer
@FE-Engineer 4 ай бұрын
Yes. Read the video description. In windows you can use zluda to do it. In Linux rocm has no issues with running sdxl
@Istock5
@Istock5 7 ай бұрын
Nice guide! How or what file do I run to open stable diffusion back up?
@FE-Engineer
@FE-Engineer 7 ай бұрын
So you can run through the same mini conda stuff as before, or you can watch my video about making a quick script that will launch it for you. Then you can just run that single batch file from mini conda and it will do it all.
@Istock5
@Istock5 7 ай бұрын
Thanks!
@PCproffesorx
@PCproffesorx 7 ай бұрын
I have an Nvidia GPU but have still looked into ONNX. My main problem with it is that it doesn't have LoRA support yet; you have to merge the LoRAs into your model first. If LoRAs are ever properly supported in the ONNX format I would switch immediately.
@FE-Engineer
@FE-Engineer 7 ай бұрын
Interesting take. My understanding of the underlying differences between the formats is pretty limited. So it is definitely curious to me to find out that ONNX while lacking some almost rudimentary functionality is that appealing. I’ll have to find time to dig in a bit more when I have some time.
@zekkzachary
@zekkzachary 9 ай бұрын
When trying to Olive optimize, it always crash on "ERROR:onnxruntime.transformers.optimizer:There is no gpu for onnxruntime to do optimization. Click here to continue". Which version of torch do you use? I always get "You are running torch 1.13.1+cpu. The program is tested to work with torch 2.0.0.". I manually update it to 2.0.0, but SD automatically downgrade to 1.13.1. Do you have any lead?
@FE-Engineer
@FE-Engineer 9 ай бұрын
Yes I have seen all of these errors. If you simply do nothing. It should continue on and work. Be patient. Optimizing the model will take a while.
@zekkzachary
@zekkzachary 9 ай бұрын
@@FE-Engineer That's the problem: it doesn't continue. It closes Stable Diffusion, preventing the optimization from finishing.
@FE-Engineer
@FE-Engineer 8 ай бұрын
For torch I'm using the same version as you. I'm surprised; a lot of people have followed this and have not had any real problems. The "press here to continue" part is a bit unusual. Is that the last error you see before the "click here to continue" prompt?
@zekkzachary
@zekkzachary 8 ай бұрын
@@FE-Engineer Yes. Here's a screenshot: drive.google.com/file/d/1DdUJl_B_5N6ahJtjRpvJi4d7uPgxWO2U/view I have 32Gb of RAM and a Radeon RX 6800 XT, it should be enough.
@zekkzachary
@zekkzachary 8 ай бұрын
As a follow-up, in case anyone has the same problem I had: I made sure no memory-heavy applications were running (even in the tray), then tried again and it worked. However, I found that DirectML renders a lot faster, but the output is lower quality than PyTorch. I made much more stunning images with my old Nvidia than with my brand new Radeon. Let's hope AMD fixes this compatibility issue quickly...
@thelaughingmanofficial
@thelaughingmanofficial 8 ай бұрын
I have to launch the WebUI from Explorer, otherwise it doesn't install. If I try from Miniconda I get a "couldn't launch python" error when using the --onnx --backend directml options. Rather frustrating.
@FE-Engineer
@FE-Engineer 8 ай бұрын
Sounds like python problems. Like multiple versions of python potentially or not checking the add to path option when installing. Hard to say for sure though. :-/ sorry that is irritating.
@thelaughingmanofficial
@thelaughingmanofficial 8 ай бұрын
@@FE-Engineer I only have one version of python installed and it's 3.10.6 because that's the only version it seems to work with
@ELIASVV
@ELIASVV 8 ай бұрын
Hi, how can I start SD again after leaving the Anaconda terminal?
@FE-Engineer
@FE-Engineer 8 ай бұрын
Automatically activate conda and run your SD from one bat file! Super easy! kzfaq.info/get/bejne/rLF5pMdmq6qwnmQ.html Like this :)
@zackbum9159
@zackbum9159 8 ай бұрын
This worked for me. However, I've got some problems with extensions and models. Extensions like "ReActor", "FaceSwapLab", and "Agent Scheduler" are not working correctly. And I have downloaded a new model from Civitai; it's a safetensors file. Where do I need to copy this file, and how can I optimize it? Do I get these problems because of this special version of SD, or because of the AMD card? Do you have experience with this?
@FE-Engineer
@FE-Engineer 8 ай бұрын
Yes. Unfortunately ONNX models have a lot of known issues. Many of the parts do not work appropriately on directML. If you want the full set of features at the moment the only real way to do it is running on rocm in Linux. AMD is working to get Rocm on windows but that could easily be a few months away still.
@zackbum9159
@zackbum9159 8 ай бұрын
@@FE-Engineer Ok, thanks. So I could install Linux as a second OS and run ROCm there? And then I can use any model without the optimizing process, and the mentioned extensions will also work? The same compatibility as an Nvidia card? I have an AMD 7800 XT; what would you do: run ROCm on Linux, or buy an Nvidia card and sell the AMD? My main concern is being able to do everything you can do with a GeForce, at similar speed.
@kkryptokayden4653
@kkryptokayden4653 8 ай бұрын
@@zackbum9159 I run SD on Linux for generating images, zero issues. I ran ComfyUI on Linux for animations, also zero issues. Give ComfyUI a try; it is amazing.
@optimisery
@optimisery 5 ай бұрын
Great tutorial, thank you very much! One thing worth mentioning is that a conda virtual env (like any Python venv) is not really a "virtual machine", but rather a bunch of environment variables that are set/activated for the current shell, so that when you run anything in that context, binaries and libraries are searched for within it. Nothing is really "virtualized".
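A minimal POSIX-shell sketch of what activation amounts to (the env path is an assumption for illustration):

```shell
#!/bin/sh
# "Activating" a conda env is essentially: prepend the env's binary
# directory to PATH and set a marker variable. Nothing is virtualized.
ENV_DIR="$HOME/miniconda3/envs/Automatic1111_olive"   # assumed location
OLD_PATH="$PATH"
PATH="$ENV_DIR/bin:$PATH"; export PATH
CONDA_DEFAULT_ENV="Automatic1111_olive"
echo "first PATH entry: ${PATH%%:*}"
# "Deactivating" is just restoring the previous value:
PATH="$OLD_PATH"; export PATH
```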
@rwarren58
@rwarren58 4 ай бұрын
I am a rank beginner. I would appreciate an explanation of what you mean by "virtualized". Thanks if you reply. It's a month old thread.
@evetevet7874
@evetevet7874 7 ай бұрын
Can I do this on my 6000-series card (6900 XT) and get the same speed you get?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Yes, it works on the 6900 XT. Speeds will not be as fast as mine, but it will be faster than any other way on Windows, other than maybe Shark.
@Niko87art
@Niko87art 8 ай бұрын
Thank you so much for the tutorial, such great content! I'm very new to Stable Diffusion, and it's going to sound silly, but how do I run Automatic1111 again? I got through the installation, but when I launch it again it doesn't open with the ONNX tab.
@Niko87art
@Niko87art 8 ай бұрын
could you please leave a link to the video where you explain it? again thank you
@FE-Engineer
@FE-Engineer 8 ай бұрын
Automatically activate conda and run your SD from one bat file! Super easy! kzfaq.info/get/bejne/rLF5pMdmq6qwnmQ.html This one.
@tednardo
@tednardo 8 ай бұрын
Nice video! I have a problem though; maybe you or someone in the comments can help me fix it. Whenever I try to inpaint in the img2img section, it doesn't work as it should: instead of replacing the masked part of the image, it changes the masked part but doesn't merge it back into the main photo, so as a result I only get the changed part as a file. I wanted to know if maybe I have to change some settings in the webui!
@FE-Engineer
@FE-Engineer 8 ай бұрын
This is a known limitation for ONNX right now. In painting will generally not work as intended. People are working on it. Or you can go full Linux and run AMD ROCm in Linux and get the full feature set of automatic1111
@tednardo
@tednardo 8 ай бұрын
thank you very much for the answer!
@thepope2412
@thepope2412 7 ай бұрын
I feel like this is a dumb question but can models from civitai be optimized?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Yes they can.
@kenbismarck4999
@kenbismarck4999 6 ай бұрын
Hello, great comprehensive tutorial video, nicely done man :) Timestamping the vid would be awesome. Say, i.e.: 2:00 mins after the intro, beginning of the main part of the vid :) Best regards
@FE-Engineer
@FE-Engineer 6 ай бұрын
Oh, you mean timestamping in the video description? YouTube pretty automagically separates the videos fairly well into sections, which is crazy convenient.
@yodashi5
@yodashi5 8 ай бұрын
It worked, thanks. Now there is another problem: I don't know how to open it again. Every time I use webui-user.bat I don't get the Olive tab. Could you help?
@FE-Engineer
@FE-Engineer 8 ай бұрын
Take a look at this. Automatically activate conda and run your SD from one bat file! Super easy! kzfaq.info/get/bejne/rLF5pMdmq6qwnmQ.html
@KasperJuul87
@KasperJuul87 7 ай бұрын
First of all, thanks for a great video. I'm stuck on the model optimization; it's been about an hour now, with my computer freezing completely. Is this to be expected?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Sure. I think if it has been going an hour with no changes I would think something is likely wrong. In your UI are those little orange boxes still spinning and rotating?
@KasperJuul87
@KasperJuul87 7 ай бұрын
@@FE-Engineer When it unfreezes they are spinning. So something must be going on.
@Mr.Every1
@Mr.Every1 8 ай бұрын
i have the following error message when i try to optimize .. what can i do ? AssertionError: No valid accelerator specified for target system. Please specify the accelerators in the target system or provide valid execution providers. Given execution providers: ['DmlExecutionProvider']. Current accelerators: ['gpu'].Supported execution providers: {'cpu': ['CPUExecutionProvider', 'OpenVINOExecutionProvider'], 'gpu': ['DmlExecutionProvider', 'CUDAExecutionProvider', 'ROCMExecutionProvider', 'TensorrtExecutionProvider', 'CPUExecutionProvider', 'OpenVINOExecutionProvider'], 'npu': ['QNNExecutionProvider', 'CPUExecutionProvider']}.
@FE-Engineer
@FE-Engineer 8 ай бұрын
I have never seen that error. Try reinstalling. Not really sure because that is not an error anyone else has mentioned.
@mgbspeedy
@mgbspeedy 3 ай бұрын
Had to add the skip-CUDA-test flag and it worked. But when I try to create an image, it still fails and says there is no NVIDIA GPU. Doesn't seem to recognize my AMD. Is an AMD RX 580 8GB too old to be recognized by Stable Diffusion?
@FE-Engineer
@FE-Engineer 3 ай бұрын
The code has changed pretty significantly since I made this video. ZLUDA does not work for the RX 580 because it is not supported by the HIP SDK. But I believe the DirectML fork should work; you might need the argument --use-directml.
@FE-Engineer
@FE-Engineer 3 ай бұрын
Remove anything about onnx.
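Putting those two replies together, a webui-user.bat sketch for the newer code on older cards like the RX 580 (the flag names should be double-checked against your checkout):

```shell
@echo off
REM webui-user.bat sketch: DirectML backend, no ONNX flags (newer fork code).
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml --skip-torch-cuda-test
call webui.bat
```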
@mgbspeedy
@mgbspeedy 3 ай бұрын
Thanks for the reply. I’ll give it a shot.
@duckybcky7732
@duckybcky7732 6 ай бұрын
Every time I get to "Collecting torch==2.0.1" my computer freezes: I can't move my cursor and the clock is stuck. Is that normal?
@FE-Engineer
@FE-Engineer 6 ай бұрын
No. That is not normal or at least does not happen to me. Sounds like resource constraints maybe?
@DarkShadow686
@DarkShadow686 7 ай бұрын
For me it still doesn't show the Olive/ONNX tabs in the bar. Do you have any idea how to fix it? I did the check in Miniconda.
@FE-Engineer
@FE-Engineer 7 ай бұрын
Something is wrong on automatic1111 right now. I’m looking into what the fix is, or what broke.
@DarkShadow686
@DarkShadow686 7 ай бұрын
@@FE-Engineer Okay, thanks a lot. At the moment it does render, but it takes ages; I did it before with another installation and it was like 100 times faster.
@FE-Engineer
@FE-Engineer 7 ай бұрын
New video showing how to get around all the current problems is up. :)
@TAS2OO6
@TAS2OO6 7 ай бұрын
Could someone please help me? I keep getting the error 'File "C:\Users\Admin\sd-test\stable-diffusion-webui-directml\modules\sd_olive_ui.py", line 363, in optimize assert conversion_footprint and optimizer_footprint AssertionError' every time I try to optimize a model in the Optimize ONNX model tab. I have an RX 6600 8GB, by the way. Update: I found out that it eats all my RAM, which causes this error. So now I have another question: how can I reduce RAM usage while optimizing the model?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Try using the --medvram setting. Is it using up all of your RAM or VRAM?
@TAS2OO6
@TAS2OO6 7 ай бұрын
It's using up all of my RAM, not VRAM. How can I reduce RAM usage while optimizing the model? @@FE-Engineer
@FE-Engineer
@FE-Engineer 7 ай бұрын
Oh, I see. You can look through the flags on the GitHub page for that repo. I think there is a --lowram flag you can add.
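For reference, the memory-related flags can be passed at launch (a sketch; try one at a time, since which flag helps depends on whether RAM or VRAM is the bottleneck):

```shell
REM If VRAM is the bottleneck:
webui.bat --onnx --backend directml --medvram
REM If system RAM is the bottleneck (loads checkpoint weights to VRAM instead):
webui.bat --onnx --backend directml --lowram
```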
@user-kj5ux9ms6q
@user-kj5ux9ms6q 7 ай бұрын
Nice video. But I keep getting the error "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check", and then it uses CPU only. I have an RX 6800. I found a lot of people with the same issue, but no one has a solution. Do you have any idea how to fix it?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Not currently. This just happened within the last few days. I’m working on figuring out what’s happening and how to fix it.
@htenoh5386
@htenoh5386 6 ай бұрын
Any luck with finding a fix? Getting this too...@@FE-Engineer
@vl7823
@vl7823 8 ай бұрын
Hi, love the video. I was able to get SD working. However, I've noticed that LoRAs don't work. Is there a fix for this, or is this a known problem/limitation? Thanks.
@FE-Engineer
@FE-Engineer 8 ай бұрын
This version is very limited. Unfortunately, only some things work; many do not. LoRAs, to my knowledge, do not work. :-/ If you want absolutely full functionality right now, you really need to go ROCm on Linux.
@renkun8090
@renkun8090 7 ай бұрын
i get AssertionError in the optimization progress. any ideas?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Hard to say. Double check that you did it the way I did it in the video and don’t flip over to other tabs as that sometimes causes problems.
@cmdr_talikarni
@cmdr_talikarni 20 күн бұрын
Only if you have an RX 7900 XT; ROCm support is limited for the rest of the line, and those cards run at half the speed or less. For example, my RTX 3050 produces 800x1200 images in 20-30 seconds per image (native CUDA), versus my RX 7600 doing the same in 220-250 seconds per image (via DirectML). In gaming the RX 7600 has almost twice the performance of the 3050, but Stable Diffusion and many AI tools still rely on CUDA.
@Robert306gti
@Robert306gti 6 ай бұрын
Swede here with a problem. I just installed the latest Miniconda because I thought that was what I was supposed to do, but at the end I got an error that it wanted 3.10.6, and I can't find an installer for this. The only one I can find is for Python. Am I doing something wrong?
@FE-Engineer
@FE-Engineer 6 ай бұрын
I run it on python 3.10.6. I’m fairly sure that the 3.10.6 is regarding python version.
@FE-Engineer
@FE-Engineer 6 ай бұрын
And for the record. I stopped using mini conda mostly.
@toketokepass
@toketokepass 5 ай бұрын
I get "runtime error: found no nvidia driver on your system" in the console and gui. I also dont have the ONNX tab. *Sigh*
@FE-Engineer
@FE-Engineer 5 ай бұрын
Check out new video. Just finished recording. Should be up in less than 24 hours.
@leeyong414
@leeyong414 7 ай бұрын
Hi, so after the "socket_options" error, I ran venv\Scripts\activate and pasted the command, but I still get the same error. What am I doing wrong?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Wrong version of httpx is installed.
@FE-Engineer
@FE-Engineer 7 ай бұрын
Change the version to, I think, 0.24.1; you can change it in the requirements.txt file. But the directions in the video do work, so you must have something else going on: you are not in the conda environment correctly, or have the wrong version of Python. There are lots of ways for things to go sideways; you have to follow the directions closely.
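For completeness, the full "socket_options" fix from the Miniconda prompt looks roughly like this (the repo path is an assumption; run it from wherever you cloned the fork):

```shell
cd /d %UserProfile%\stable-diffusion-webui-directml
REM Activate the web UI's own venv, then pin the working httpx version:
call venv\Scripts\activate
pip install httpx==0.24.1
```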
@mmeade9402
@mmeade9402 6 ай бұрын
I get a different error message. when running the webui.bat --onnx --backend directml command it runs through and I end up with RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@mmeade9402
@mmeade9402 6 ай бұрын
This is all just too much. I'm vaguely computer literate, but I'm certainly no programmer. Until somebody makes this stuff more user-friendly for somebody who just wants to download the software on Windows, click install, and start messing with it, I'm going to throw in the towel. I've gone through 4 different A1111 forks today, and they all toss errors at me while I follow the instructions. I'm sure my 7900 XTX is probably making things more complicated, but that's ridiculous.
@FE-Engineer
@FE-Engineer 6 ай бұрын
You can use Shark from nod.ai. It is pretty much a one-button install, and it just kind of works. It's just super slow overall: it compiles shaders to use, so it's quite fast to generate an image, but if you change models it recompiles the shaders, and if you change the image size it recompiles them again.
@Knox420
@Knox420 5 ай бұрын
stderr: ERROR: Could not install packages due to an OSError: [WinError 5]
@yozari4
@yozari4 7 ай бұрын
Is there a way to use a model that I have already downloaded?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Most models will work. I converted safetensor models from civitai. I even made a video on how to do it. kzfaq.info/get/bejne/maqinNV22dOpoY0.htmlsi=h2nK_Ez4-TwXKR6m
@djust270
@djust270 9 ай бұрын
Ive followed this guide to a T, but I keep getting this error when trying to optimize a model "ERROR:onnxruntime.transformers.optimizer:There is no gpu for onnxruntime to do optimization." Ive tried searching this error but havent found a solution yet. Have you encountered that before? Im using a Radeon 6950XT with the latest driver 23.11.1
@FE-Engineer
@FE-Engineer 9 ай бұрын
Yep. It says it for me as well. Just let it continue it will optimize anyway.
@djust270
@djust270 9 ай бұрын
@@FE-Engineer ok thanks I'll try again. I was only getting 4-5 iterations per second and the images generated were distorted, particularly faces. I'm going to start from scratch and try again.
@FE-Engineer
@FE-Engineer 9 ай бұрын
I understand; I have had that happen. Double-check your settings and make sure you are in the text-to-image tab; image-to-image can get really wonky if you click the wrong boxes. When optimizing a model, some will basically just work and others will not (from my testing and playing with it). Definitely start with some of the ones I suggested, just to make sure everything is working properly. During optimization you will see that specific error you mentioned, that ONNX does not have a GPU to use. That is OK; let it finish optimizing and it should work even with that error. It definitely takes several minutes though.
@FE-Engineer
@FE-Engineer 9 ай бұрын
If faces look really weird, try changing the sampler to Euler or Euler ancestral and generating once or twice. In my experience, some of the other sampling methods sometimes make horrible faces…
@djust270
@djust270 8 ай бұрын
@@FE-Engineer Thank you for the tip. Changing the sampler did indeed fix my issue! Also, these videos are great. Thank you.
@ncironhorse8367
@ncironhorse8367 7 ай бұрын
Will this work with a 6900 Black Edition? I tried in vain using the standard install of A1111 optimized for an NVIDIA GPU but was not successful
@FE-Engineer
@FE-Engineer 7 ай бұрын
Yes this will work. Or you can do rocm on Linux. Or you can use nod.ai shark on windows. The regular automatic1111 on windows with an AMD card will 100% not work until ROCm on windows is out and working with MIOpen and MIGraph. Soon maybe…hopefully
@ncironhorse8367
@ncironhorse8367 7 ай бұрын
@@FE-Engineer I could not get Shark to work or the install guides I found weren't very clear. I did notice that the git repository you are using is different from the "standard" repo.
@ncironhorse8367
@ncironhorse8367 7 ай бұрын
Once you make the changes to overcome the socket errors, it loads right away. Now let's see about getting a model loaded.
@MikkoRantalainen
@MikkoRantalainen 8 ай бұрын
How's the runtime performance in practice? Will this allow similarly priced AMD GPUs to get the same or better performance compared to Nvidia GPUs?
@FE-Engineer
@FE-Engineer 8 ай бұрын
That is a difficult question to answer. I saw something recently that said Nvidia cards could get new AI software that gives them a 5x bump in performance; if that happens, then no, they will not ultimately be very comparable (probably). Since this is a moving target that changes somewhat often with driver updates, look at the most recent benchmarks you can find for any specific card you are considering. I will say that previously, yes, AMD cards were good value for the money even compared to Nvidia cards, and you generally get a lot more VRAM with AMD, which is very important. I find the performance entirely satisfactory, especially running ROCm on Linux with AMD: it does everything and the performance is great! The big downside of the AMD cards is just that it is quite a bit harder to get them working properly sometimes.
@FE-Engineer
@FE-Engineer 8 ай бұрын
And on windows AMD cards are quite limited overall until ROCm finally is fully on windows.
@MikkoRantalainen
@MikkoRantalainen 8 ай бұрын
@@FE-Engineer Thanks for the information! I'm currently running an old Nvidia card on Linux and I'm hoping that more software supported something else but proprieatary CUDA so I could switch to AMD. I'm pretty sure the tweaking you need to do with AMD cards is not worse than what you need to do with Nvidia cards on Linux.
@FE-Engineer
@FE-Engineer 8 ай бұрын
AMD cards are not difficult to get running if you are using a good tutorial or guide. It can be very easy with the AMD cards to just have things not work and have a difficult time figuring out why exactly. Also with AMD there are occasionally some features that may not work. If you are on Linux with ROCm so far I have not seen anything that will not work, windows is a very different story though. Again. With a good guide AMD cards are pretty smooth sailing on Linux in my opinion. And as long as someone is ok with a few extra commands and an occasional tweak I think the AMD cards are great for AI, I also expect performance, and support overall to get better with time as AI is receiving a ton of support. I have absolutely no regrets going from team green to team red for my graphics card on my computer build. :)
@OriolLlv
@OriolLlv 4 ай бұрын
I'm getting an error after executing webui.bat --onnx --backend directml: "fatal: No names found, cannot describe anything." Any idea how to fix it?
@FE-Engineer
@FE-Engineer 4 ай бұрын
Read the video description
@OriolLlv
@OriolLlv 4 ай бұрын
Which part? I followed all the steps.@@FE-Engineer
@Kii230
@Kii230 3 ай бұрын
@@OriolLlv I'm having the same issue. It's because lshqqytiger refactored the ONNX support, so --onnx no longer works. I don't know how to fix it.
@ferluisch
@ferluisch 7 ай бұрын
How much of an improvement was this? How many it/s were you getting before?
@FE-Engineer
@FE-Engineer 7 ай бұрын
I don’t remember off hand. But it’s a huge improvement. Almost 10x if I remember correctly.
@FE-Engineer
@FE-Engineer 7 ай бұрын
Although if you run it on Linux with ROCm it will be roughly comparable to using directml.
@lapplander890
@lapplander890 8 ай бұрын
Thanks man. Black Friday ends soon and I'm opting to buy the 7900 card, but I really want to do AI, and I don't want to be screwed over by buying the green company's card. Sadly it looks like the 4070 Ti, with half the VRAM of the 7900, is a lot better at AI (I checked the tests at Tom's Hardware). Do you know if there is any development at AMD to fix it?
@FE-Engineer
@FE-Engineer 8 ай бұрын
I believe AMD have stated that ai is their #1 priority. They are trying to get ROCm working in windows. Currently it is stuck at MIOpen not working in windows right now.
@FE-Engineer
@FE-Engineer 8 ай бұрын
I bought my 7900xtx specifically for the vram knowing that while behind its support and AI abilities would get better over time. I also just could not stomach the prices for team green. If you can. You could always wait for team green super variants to come around here in the next 2 months and then determine what makes the most sense for you.
@PineyJustice
@PineyJustice 8 ай бұрын
Unfortunately the 4070 Ti is quite a bit slower for AI; the guide you looked at is probably dated, and the space has been moving fast. A 7900 XTX is now faster than a 4080 for AI using either the method in this video or running on Linux with ROCm.
@danp2306
@danp2306 14 күн бұрын
@@PineyJustice Bull.
@ultralaggerREV1
@ultralaggerREV1 8 ай бұрын
Hey man, I got Stable Diffusion installed on my PC, but here's a massive problem… AUTOMATIC1111 states that on AMD GPUs, the "-medvram" argument is for GPUs between 4GB and 6GB and "-highvram" is for GPUs with 8GB and above. I own an RX 6600 with 8GB of video memory, but whenever I test the AI with the "-medvram" argument, I end up with a not-enough-memory error. How do I solve this?
@FE-Engineer
@FE-Engineer 8 ай бұрын
Try no arguments, or try it with the --lowvram argument. The version you are using here is based on DirectML and ONNX, so it is a bit different from the normal Automatic1111 as far as memory is concerned. You will have to test the flags to see which ones work for you.
@ultralaggerREV1
@ultralaggerREV1 8 ай бұрын
@@FE-Engineer I've been using the --lowvram argument and it does work, but the problem is that even with 35 sampling steps, a CFG of 8, and a Karras sampler, I don't generate images as good as the ones my buddies do. :( I tried without arguments and it gives me an error.
@mareck6946
@mareck6946 6 ай бұрын
@@ultralaggerREV1 Optimizing your models to FP16 helps, but depending on the model you use, 8GB is cutting it awfully close.
@lucian6172
@lucian6172 Ай бұрын
I'm getting about 1 iteration per second using the RX580. The latest updates made it a bit slower for some unknown reason. ComfyUI is also broken now, after the latest updates. It worked fine before.
@HeinleinShinobu
@HeinleinShinobu 8 ай бұрын
Your tutorial just broke my Stable Diffusion.
@FE-Engineer
@FE-Engineer 8 ай бұрын
I am sorry to hear that.
@HeinleinShinobu
@HeinleinShinobu 8 ай бұрын
@@FE-Engineer I did a fresh git clone and got "there is no gpu for onnxruntime to do optimization" during the Olive optimization part. Using an RX 6600 XT GPU.
@FE-Engineer
@FE-Engineer 8 ай бұрын
Sounds like it can’t find your GPU. Maybe try installing or reinstalling GPU drivers?
@KamiMountainMan
@KamiMountainMan 6 ай бұрын
I installed on a laptop that has both AMD integrated and AMD dedicated GPUs. It automatically picks the integrated one which is much slower. Do you know by any chance how to set the right GPU for it?
@FE-Engineer
@FE-Engineer 6 ай бұрын
On my desktop I have both integrated and discrete. For me it always uses the right one. I don’t know if I have seen any way to try and alter or change its functionality in how it picks. I assume it just chooses the one with the most vram. But off the top of my head. No I do not know. Sorry. :-/
@Sujal-ow7cj
@Sujal-ow7cj Ай бұрын
Will it work on 6000 series
@jaimeflores4683
@jaimeflores4683 7 ай бұрын
Is there a way to convert any model from civitai to onnx?
@FE-Engineer
@FE-Engineer 7 ай бұрын
Read either the pinned comment, or my answer to any number of other comments on this video where people have asked the exact same question please. Tldr; yes