comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
2024-01-07 13:52:08 -05:00
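The dispatcher itself isn't shown in the log, so here is a minimal sketch of the shape such a function can take once the underlying attention implementations accept a mask. The selection logic and signatures are assumptions for illustration, not ComfyUI's actual code.

```python
import torch
import torch.nn.functional as F

def attention_basic(q, k, v, mask=None):
    # q, k, v: (batch, heads, tokens, dim_head); mask is additive (-inf = blocked)
    scores = (q @ k.transpose(-2, -1)) * (q.shape[-1] ** -0.5)
    if mask is not None:
        scores = scores + mask
    return scores.softmax(dim=-1) @ v

def attention_pytorch(q, k, v, mask=None):
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

def optimized_attention_for_device(device, mask=False):
    # Once every backend accepts a mask, the dispatcher only has to choose
    # by device; the choice here is deliberately trivial.
    if device.type == "cpu":
        return attention_basic
    return attention_pytorch
```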
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
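For context, a much simplified sketch of how an additive mask threads through chunked attention. The real sub-quadratic implementation also chunks the key/value dimension with a streaming softmax, which is omitted here; only query chunking and the per-chunk mask slice are shown.

```python
import torch

def chunked_masked_attention(q, k, v, mask=None, q_chunk=1024):
    # q, k, v: (batch, heads, tokens, dim_head)
    # mask: additive bias of shape (batch, heads, q_tokens, k_tokens); -inf blocks
    scale = q.shape[-1] ** -0.5
    out = []
    for start in range(0, q.shape[2], q_chunk):
        q_blk = q[:, :, start:start + q_chunk]
        scores = (q_blk @ k.transpose(-2, -1)) * scale
        if mask is not None:
            scores = scores + mask[:, :, start:start + q_chunk]
        out.append(scores.softmax(dim=-1) @ v)
    return torch.cat(out, dim=2)
```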
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
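xformers' memory_efficient_attention takes an optional attn_bias argument; a rough sketch of passing an additive mask through it. Tensor layout is noted in the comments; the library's alignment and padding constraints on the bias tensor are glossed over here.

```python
import xformers.ops

def attention_xformers(q, k, v, mask=None):
    # xformers layout: (batch, tokens, heads, dim_head)
    # mask: additive bias broadcastable to (batch, heads, q_tokens, k_tokens)
    return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
```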
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
2024-01-03 14:27:11 -05:00
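For readers unfamiliar with the technique: the SD 4x upscale model was trained with noise augmentation on its low-resolution conditioning image and expects the chosen noise level as extra conditioning. A hedged sketch of the general idea follows; the level-to-sigma mapping and the names are placeholders, not ComfyUI's actual schedule or API.

```python
import torch

def augment_low_res(low_res, noise_augmentation=0.0, max_level=350):
    # noise_augmentation in [0, 1] picks a discrete noise level; the image is
    # noised at that level and the level is returned as conditioning.
    noise_level = int(noise_augmentation * max_level)
    sigma = noise_level / max_level            # placeholder schedule
    noised = low_res + sigma * torch.randn_like(low_res)
    return noised, torch.tensor([noise_level])
```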
comfyanonymous
79f73a4b33
Remove useless code.
2024-01-02 01:50:29 -05:00
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
2023-12-24 07:13:18 -05:00
comfyanonymous
261bcbb0d9
Add a few missing comfy ops in the VAE.
2023-12-22 04:05:42 -05:00
comfyanonymous
a5056cfb1f
Remove useless code.
2023-12-15 01:28:16 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
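A minimal sketch of the idea behind an ops namespace named this way: layer subclasses whose reset_parameters is a no-op, so building a model skips the random weight init that the checkpoint load would overwrite anyway. Simplified for illustration; not the actual comfy.ops code.

```python
import torch.nn as nn

class disable_weight_init:
    class Linear(nn.Linear):
        def reset_parameters(self):
            return None  # skip random init; weights come from the checkpoint

    class Conv2d(nn.Conv2d):
        def reset_parameters(self):
            return None

    class GroupNorm(nn.GroupNorm):
        def reset_parameters(self):
            return None

# Model code can then take the ops container as an argument, e.g.
#   self.proj = operations.Linear(in_channels, out_channels)
```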
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
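To illustrate what a simple, self-contained CLIP text model entails, a very rough sketch: token and position embeddings, causally masked transformer blocks, and a final layer norm. Hyperparameters and block internals are illustrative only; the real implementation mirrors CLIP's architecture much more closely.

```python
import torch
import torch.nn as nn

class MiniCLIPTextModel(nn.Module):
    def __init__(self, vocab_size=49408, max_len=77, dim=768, heads=12, layers=12):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, dim)
        self.pos_embed = nn.Parameter(torch.zeros(max_len, dim))
        block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4,
            activation="gelu", batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=layers)
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, tokens):
        t = tokens.shape[1]
        x = self.token_embed(tokens) + self.pos_embed[:t]
        # additive causal mask: -inf strictly above the diagonal
        causal = torch.full((t, t), float("-inf"), device=x.device).triu(1)
        x = self.blocks(x, mask=causal)
        return self.final_norm(x)  # per-token hidden states for conditioning
```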
comfyanonymous
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
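A sketch of the general storage-dtype vs. compute-dtype pattern this enables, assuming a PyTorch build with float8 support (2.1+); not ComfyUI's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Fp8StoredLinear(nn.Linear):
    """Stores weights in fp8 to halve memory vs fp16; upcasts per forward."""
    def forward(self, x):
        weight = self.weight.to(x.dtype)   # upcast for the matmul
        bias = self.bias.to(x.dtype) if self.bias is not None else None
        return F.linear(x, weight, bias)

layer = Fp8StoredLinear(1024, 1024)
layer.weight.data = layer.weight.data.to(torch.float8_e4m3fn)  # or float8_e5m2
out = layer(torch.randn(2, 1024, dtype=torch.float16))
```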
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
comfyanonymous
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
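For illustration, a hedged sketch of a patch consuming the merged dict; the exact keys available in extra_options are assumptions here, not a documented list.

```python
def attn1_output_patch(n, extra_options):
    # n: tensor produced by the patched layer
    # extra_options: per-call info, now including everything that was put
    # into transformer_options
    block = extra_options.get("block")  # key name assumed for illustration
    scale = 1.05 if block is not None else 1.0
    return n * scale
```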
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
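One common way to realize such a fallback, sketched as a runtime try/except (the actual change may instead detect the broken xformers version up front and select PyTorch attention when building the model):

```python
import torch.nn.functional as F
import xformers.ops

def attention_xformers_with_fallback(q, k, v, mask=None):
    try:
        # xformers layout: (batch, tokens, heads, dim_head)
        return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
    except Exception:
        # PyTorch SDP attention expects (batch, heads, tokens, dim_head)
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
        return out.transpose(1, 2)
```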
comfyanonymous
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
72741105a6
Remove useless code.
2023-11-21 17:27:28 -05:00
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
2023-11-16 15:26:28 -05:00
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
2023-11-16 12:57:12 -05:00
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
2023-11-14 00:08:12 -05:00
comfyanonymous
794dd2064d
Fix typo.
2023-11-07 23:41:55 -05:00
comfyanonymous
a527d0c795
Code refactor.
2023-11-07 19:33:40 -05:00
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
2023-11-07 04:30:37 -05:00
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
2023-10-27 14:45:15 -04:00
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
2023-10-22 03:59:53 -04:00
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
2023-10-22 03:51:29 -04:00
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
2023-10-21 13:23:03 -04:00
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
2023-10-16 02:31:24 -04:00
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
2023-10-13 14:41:17 -04:00
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
2023-10-11 20:38:48 -04:00
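As a reminder of the two mask flavors PyTorch's scaled_dot_product_attention accepts (boolean, where True means "attend", or additive float, where -inf blocks), a small self-contained example:

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 8, 16, 64)               # (batch, heads, tokens, dim)
keep = torch.tril(torch.ones(16, 16)).bool()         # causal: True = attend
bias = torch.zeros(16, 16).masked_fill(~keep, float("-inf"))

out_bool = F.scaled_dot_product_attention(q, k, v, attn_mask=keep)
out_bias = F.scaled_dot_product_attention(q, k, v, attn_mask=bias)
print(torch.allclose(out_bool, out_bias, atol=1e-5))  # equivalent up to noise
```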
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
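In that spirit, a simplified sketch of the resulting structure: one CrossAttention module whose projections stay fixed while the operation in the middle is a swappable function (pytorch / xformers / split / sub-quad), selected once elsewhere. Simplified for illustration; not the actual module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_pytorch(q, k, v, heads):
    b, t, _ = q.shape
    q, k, v = (x.view(b, -1, heads, x.shape[-1] // heads).transpose(1, 2)
               for x in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, t, -1)

optimized_attention = attention_pytorch  # picked once, per backend/device

class CrossAttention(nn.Module):
    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64):
        super().__init__()
        inner = heads * dim_head
        context_dim = context_dim or query_dim
        self.heads = heads
        self.to_q = nn.Linear(query_dim, inner, bias=False)
        self.to_k = nn.Linear(context_dim, inner, bias=False)
        self.to_v = nn.Linear(context_dim, inner, bias=False)
        self.to_out = nn.Linear(inner, query_dim)

    def forward(self, x, context=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        return self.to_out(optimized_attention(q, k, v, self.heads))
```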
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
2023-09-27 12:04:07 -04:00
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
2023-09-22 20:26:47 -04:00
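A hedged sketch of an output-block patch in the spirit of this change: a callable that receives the block's hidden states h and the skip tensor hsp and returns modified versions. The exact signature and the patcher method name are assumptions for illustration.

```python
def scale_skip_patch(h, hsp, transformer_options):
    # e.g. a FreeU-style tweak: boost the backbone, damp the skip connection
    return h * 1.1, hsp * 0.9

# m = model.clone()
# m.set_model_output_block_patch(scale_skip_patch)   # method name assumed
```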
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
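An illustrative sketch of the signature change; the surrounding heuristic is simplified and handling for non-CUDA backends is omitted.

```python
import torch

def soft_empty_cache(force=False):
    if not torch.cuda.is_available():
        return
    # Heuristic shown for illustration only: skip the (expensive) cache empty
    # unless forced or there is reserved-but-unallocated memory to release.
    if force or torch.cuda.memory_reserved() > torch.cuda.memory_allocated():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
```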
Simon Lui
2da73b7073
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
2023-09-02 20:07:52 -07:00
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA-specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
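The kind of generalization involved, sketched: route device queries through helpers instead of assuming CUDA everywhere. Intel GPUs expose a torch.xpu namespace via intel_extension_for_pytorch; that availability is an assumption of this sketch.

```python
import torch

def get_torch_device():
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

def empty_device_cache(device):
    if device.type == "xpu":
        torch.xpu.empty_cache()
    elif device.type == "cuda":
        torch.cuda.empty_cache()
```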
comfyanonymous
0e3b641172
Remove xformers related print.
2023-09-01 02:12:03 -04:00
comfyanonymous
bed116a1f9
Remove optimization that caused border.
2023-08-29 11:21:36 -04:00
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
2023-08-22 01:06:26 -04:00