261bcbb0d9  comfyanonymous  2023-12-22 04:05:42 -05:00
    A few missing comfy ops in the VAE.

77755ab8db  comfyanonymous  2023-12-11 23:27:13 -05:00
    Refactor comfy.ops
    comfy.ops -> comfy.ops.disable_weight_init
    This should make it more clear what they actually do.
    Some unused code has also been removed.

d44a2de49f  comfyanonymous  2023-10-17 15:18:51 -04:00
    Make VAE code closer to sgm.

23680a9155  comfyanonymous  2023-10-17 03:19:29 -04:00
    Refactor the attention stuff in the VAE.

88733c997f  comfyanonymous  2023-10-11 21:30:57 -04:00
    pytorch_attention_enabled can now return True when xformers is enabled.

1a4bd9e9a6  comfyanonymous  2023-10-11 20:38:48 -04:00
    Refactor the attention functions.
    There's no reason for the whole CrossAttention object to be repeated when
    only the operation in the middle changes.
1938f5c5fe  comfyanonymous  2023-09-04 00:58:18 -04:00
    Add a force argument to soft_empty_cache to force a cache empty.
bed116a1f9  comfyanonymous  2023-08-29 11:21:36 -04:00
    Remove optimization that caused border.

1c794a2161  comfyanonymous  2023-08-27 22:24:42 -04:00
    Fallback to slice attention if xformers doesn't support the operation.

d935ba50c4  comfyanonymous  2023-08-27 21:33:53 -04:00
    Make --bf16-vae work on torch 2.0

95d796fc85  comfyanonymous  2023-07-29 16:28:30 -04:00
    Faster VAE loading.

fa28d7334b  comfyanonymous  2023-06-23 12:35:26 -04:00
    Remove useless code.
b8636a44aa  comfyanonymous  2023-05-20 16:01:02 -04:00
    Make scaled_dot_product switch to sliced attention on OOM.
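The fallback pattern this commit describes can be sketched like so. Everything here is an illustrative stand-in, not ComfyUI's or PyTorch's real API: the exception class mimics `torch.cuda.OutOfMemoryError`, and both attention functions are fakes (the fused one always raises, so the sketch exercises the fallback path).

```python
# Hypothetical sketch of the OOM fallback: try the fast fused kernel first,
# and on out-of-memory redo the work with a sliced implementation that
# processes one query slice at a time to keep peak memory low.

class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.cuda.OutOfMemoryError."""

def scaled_dot_product(q, k, v):
    # Pretend the fused kernel always OOMs so the sketch hits the fallback.
    raise OutOfMemoryError("not enough free memory for the fused kernel")

def sliced_attention(q, k, v):
    # Fake sliced path: handle one query slice at a time.
    return [("attn", qi, k, v) for qi in q]

def attention(q, k, v):
    try:
        return scaled_dot_product(q, k, v)
    except OutOfMemoryError:
        # Fused path failed: fall back to the memory-frugal sliced path.
        return sliced_attention(q, k, v)

result = attention([1, 2], "k", "v")
```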
797c4e8d3b  comfyanonymous  2023-05-20 15:07:21 -04:00
    Simplify and improve some vae attention code.

bae4fb4a9d  comfyanonymous  2023-05-04 18:10:29 -04:00
    Fix imports.

73c3e11e83  comfyanonymous  2023-04-15 19:04:33 -04:00
    Fix model_management import so it doesn't get executed twice.

e46b1c3034  comfyanonymous  2023-04-04 22:22:02 -04:00
    Disable xformers in VAE when xformers == 0.0.18
3ed4a4e4e6  comfyanonymous  2023-03-22 14:49:00 -04:00
    Try again with vae tiled decoding if regular fails because of OOM.
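The retry this commit describes follows the same try/fallback shape as the attention change above, but at the decode level: attempt a full-frame decode, and on OOM redo it tile by tile so only one tile's activations are live at a time. All names below (the exception class, both decode functions, the tile size) are hypothetical stand-ins, not ComfyUI's real code.

```python
# Hypothetical sketch of the tiled-decode retry on OOM.

class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.cuda.OutOfMemoryError."""

def decode_full(latent_rows):
    # Pretend the full-frame decode never fits, to exercise the retry.
    raise OutOfMemoryError("full-frame decode did not fit in memory")

def decode_tiled(latent_rows, tile_size=2):
    # Decode tile_size rows at a time; peak memory is one tile's worth.
    out = []
    for i in range(0, len(latent_rows), tile_size):
        out.extend(("decoded", r) for r in latent_rows[i:i + tile_size])
    return out

def vae_decode(latent_rows):
    try:
        return decode_full(latent_rows)
    except OutOfMemoryError:
        # Regular decode OOMed: try again with tiled decoding.
        return decode_tiled(latent_rows)

decoded = vae_decode([0, 1, 2])
```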
c692509c2b  comfyanonymous  2023-03-22 02:45:18 -04:00
    Try to improve VAEEncode memory usage a bit.

83f23f82b8  comfyanonymous  2023-03-13 12:45:54 -04:00
    Add pytorch attention support to VAE.

a256a2abde  comfyanonymous  2023-03-13 11:36:48 -04:00
    --disable-xformers should not even try to import xformers.
0f3ba7482f  comfyanonymous  2023-03-12 15:44:16 -04:00
    Xformers is now properly disabled when --cpu is used.
    Added --windows-standalone-build option; currently it only makes the
    code open ComfyUI in the browser.
1de86851b1  comfyanonymous  2023-03-11 15:15:13 -05:00
    Try to fix memory issue.

cc8baf1080  comfyanonymous  2023-03-05 14:20:07 -05:00
    Make VAE use common function to get free memory.

509c7dfc6d  comfyanonymous  2023-02-10 03:13:49 -05:00
    Use real softmax in split op to fix issue with some images.

773cdabfce  comfyanonymous  2023-02-09 12:43:29 -05:00
    Same thing but for the other places where it's used.

e8c499ddd4  comfyanonymous  2023-02-08 22:04:20 -05:00
    Split optimization for VAE attention block.

5b4e312749  comfyanonymous  2023-02-08 22:04:13 -05:00
    Use inplace operations for less OOM issues.
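The idea behind the inplace-operations commit can be illustrated with plain Python lists standing in for tensors: an out-of-place op like `x = x + y` materializes a second buffer before the old one can be freed, while an inplace op like `x += y` mutates the existing storage, keeping peak memory lower. (In PyTorch terms, this is the difference between `x + y` and methods such as `x.add_(y)`.)

```python
# Illustrative only: Python lists as stand-ins for tensor buffers.

x = [1.0] * 4
same = x
x += [2.0] * 4        # inplace: the existing list is extended in place
assert same is x      # still the same object, no second buffer

y = [1.0] * 4
alias = y
y = y + [2.0] * 4     # out-of-place: a brand-new list is allocated
assert alias is not y # the old buffer lingers until `alias` is dropped
```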
220afe3310  comfyanonymous  2023-01-16 22:37:14 -05:00
    Initial commit.