March 28, 2025


Running SD XL with 4 GB VRAM, can’t use LoRAs (Automatic1111)



So I have a quite old laptop: 16 GB RAM and a GeForce 960M with 4 GB VRAM. I was able to load an XL model into Automatic1111 and create images. As you can guess, it's pretty slow, but it works, and I really don't mind how long it takes. The problem arises when I try to use a LoRA: if I load even a single one, the PC freezes, or the CMD prompt freezes, or weird things start to happen. I was able to use LoRAs a couple of times, but I don't know how or why. I'm running SD with the following start arguments:

--xformers --opt-split-attention --opt-sub-quad-attention --lowvram

I tried using --medvram-sdxl and I was able to use LoRAs, but generation takes way too long. Is there any solution to this, or am I cooked?
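For reference, start arguments like these are normally set through the COMMANDLINE_ARGS variable in the webui launcher script rather than typed each time. A minimal low-VRAM setup might look like the sketch below (the exact flag combination is a suggestion, not necessarily the OP's working configuration):

```shell
# webui-user.sh (Linux/macOS); on Windows the same COMMANDLINE_ARGS line
# goes in webui-user.bat with "set" instead of "export".
#
# --xformers       memory-efficient attention, lowers VRAM use during generation
# --medvram-sdxl   applies --medvram offloading only when an SDXL model is loaded,
#                  so SD 1.5 models still run at full speed
export COMMANDLINE_ARGS="--xformers --medvram-sdxl"
```

Note that --lowvram and --medvram-sdxl trade speed for memory in different ways: --lowvram offloads aggressively (slowest, smallest footprint), while --medvram-sdxl keeps more of the model resident and is usually faster when it fits.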

I'd like to clarify that even when using SD XL normally, without LoRAs, it sometimes freezes or gets stuck, but I'm able to continue after restarting SD, and eventually it works.

submitted by /u/RauloSuper