1f6a467e92 | comfyanonymous | 2023-02-09 13:47:36 -05:00 | Update ldm dir with latest upstream stable diffusion changes.
773cdabfce | comfyanonymous | 2023-02-09 12:43:29 -05:00 | Same thing but for the other places where it's used.
df40d4f3bf | comfyanonymous | 2023-02-09 12:33:27 -05:00 | torch.cuda.OutOfMemoryError is not present on older pytorch versions.
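The compatibility problem that commit names can be handled with a small guard: `torch.cuda.OutOfMemoryError` only exists on newer PyTorch releases, while older ones raise a plain `RuntimeError` on CUDA OOM. A minimal sketch, not the repository's actual code; the `OOM_EXCEPTION` name and the `RuntimeError` fallback are assumptions:

```python
# Hedged sketch: resolve an OOM exception type that works on any
# PyTorch version (older releases lack torch.cuda.OutOfMemoryError).
try:
    import torch
    OOM_EXCEPTION = getattr(torch.cuda, "OutOfMemoryError", RuntimeError)
except ImportError:
    # Keep the sketch importable even without torch installed.
    OOM_EXCEPTION = RuntimeError
```

Call sites can then write `except OOM_EXCEPTION:` instead of referencing the class directly, so the same code runs on old and new PyTorch.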
e8c499ddd4 | comfyanonymous | 2023-02-08 22:04:20 -05:00 | Split optimization for VAE attention block.
5b4e312749 | comfyanonymous | 2023-02-08 22:04:13 -05:00 | Use inplace operations for less OOM issues.
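The in-place-operation idea behind that commit can be illustrated with NumPy standing in for PyTorch (a hedged sketch; the function name is hypothetical): writing results back into the input buffer avoids allocating a second array of the same size, which lowers peak memory.

```python
import numpy as np

def scale_and_shift_inplace(x, scale, shift):
    # The out= argument makes each ufunc write into x's own buffer
    # instead of allocating a fresh array for the result.
    np.multiply(x, scale, out=x)
    np.add(x, shift, out=x)
    return x
```

In PyTorch the same pattern uses the underscore-suffixed methods (`mul_`, `add_`), which mutate the tensor rather than returning a copy.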
047775615b | comfyanonymous | 2023-02-08 14:24:27 -05:00 | Lower the chances of an OOM.
1daccf3678 | comfyanonymous | 2023-01-30 19:55:01 -05:00 | Run softmax in place if it OOMs.
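The fallback pattern named in that commit, try the normal softmax and redo it in place if allocation fails, might look like this. A NumPy sketch under stated assumptions: `MemoryError` stands in for the GPU OOM exception, and the function name is hypothetical.

```python
import numpy as np

def softmax_last_dim(x):
    # Fast path allocates temporaries; on allocation failure, fall back
    # to an in-place variant that reuses x's own buffer.
    try:
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    except MemoryError:
        x -= x.max(axis=-1, keepdims=True)  # mutates x, no copy
        np.exp(x, out=x)
        x /= x.sum(axis=-1, keepdims=True)
        return x
```

The in-place branch destroys the input, so it is only a last resort: correctness of the caller's view of `x` is traded for staying under the memory limit.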
50db297cf6 | comfyanonymous | 2023-01-29 00:50:46 -05:00 | Try to fix OOM issues with cards that have less vram than mine.
051f472e8f | comfyanonymous | 2023-01-25 01:22:43 -05:00 | Fix sub quadratic attention for SD2 and make it the default optimization.
220afe3310 | comfyanonymous | 2023-01-16 22:37:14 -05:00 | Initial commit.