comfyanonymous
cb7c3a2921
Allow image_only_indicator to be None.
2024-02-29 13:11:30 -05:00
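A minimal sketch of what tolerating None might look like in the temporal model's forward path; the helper name and the zeros default ("every frame is video") are illustrative assumptions, not the repo's literal code:

```python
import torch

def get_image_only_indicator(indicator, batch_size, num_frames, device):
    # Hypothetical helper: a missing indicator defaults to zeros
    # instead of raising on None downstream.
    if indicator is None:
        indicator = torch.zeros(batch_size, num_frames, device=device)
    return indicator
```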
comfyanonymous
b3e97fc714
Koala 700M and 1B support.
To use them, load the unet file with the UNET Loader node.
2024-02-28 12:10:11 -05:00
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
2024-02-17 16:15:18 -05:00
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
2024-02-17 12:13:13 -05:00
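The common failure mode behind these three mask fixes is a mask whose leading dimension doesn't match the folded batch * heads dimension of q/k/v. A minimal sketch of the repair (the helper name is mine, not the repo's):

```python
import torch

def expand_mask_to_batch(mask: torch.Tensor, batch_heads: int) -> torch.Tensor:
    # If the mask was built per-sample (or for the batch alone), repeat it
    # so its leading dimension matches batch * heads, which is what the
    # attention kernels see after the heads are folded into the batch.
    if mask.ndim == 2:
        mask = mask.unsqueeze(0)
    if mask.shape[0] != batch_heads:
        # assumes batch_heads is a multiple of the mask's batch size
        mask = mask.repeat_interleave(batch_heads // mask.shape[0], dim=0)
    return mask
```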
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
2024-02-07 18:52:51 -05:00
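For illustration, here is a typical noise schedule computed entirely in torch, avoiding the numpy round trip; this is the standard Karras formula, not necessarily the exact function this commit touched:

```python
import torch

def get_sigmas_karras(n, sigma_min, sigma_max, rho=7.0):
    # Karras et al. schedule in pure torch: interpolate in 1/rho space,
    # then raise back; append a trailing zero sigma.
    ramp = torch.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
    return torch.cat([sigmas, sigmas.new_zeros([1])])
```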
comfyanonymous
89507f8adf
Remove some unused imports.
2024-01-25 23:42:37 -05:00
comfyanonymous
2395ae740a
Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
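A sketch of the idea: draw the augmentation noise from a dedicated, seeded generator instead of the global RNG, so results reproduce (which is also why previously generated unclip images can come out different). Function name and call shape are assumptions:

```python
import torch

def noise_augment(image_embeds, strength, seed):
    # Seeded CPU generator keeps the augmentation deterministic
    # regardless of what else has touched the global RNG.
    gen = torch.Generator().manual_seed(seed)
    noise = torch.randn(image_embeds.shape, generator=gen,
                        dtype=image_embeds.dtype)
    return image_embeds + strength * noise.to(image_embeds.device)
```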
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for the new functions that support masked attention.
2024-01-07 13:52:08 -05:00
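A hedged sketch of the dispatch this function performs, covering both this commit and the small-input path added just after it; the function names and exact conditions are assumptions, not the repo's literal logic:

```python
import torch
import torch.nn.functional as F

def attention_basic(q, k, v, mask=None):
    # Plain matmul/softmax attention; cheapest for very small inputs.
    scores = q @ k.transpose(-2, -1) * q.shape[-1] ** -0.5
    if mask is not None:
        scores = scores + mask
    return scores.softmax(dim=-1) @ v

def attention_pytorch(q, k, v, mask=None):
    # Fused kernel path (pytorch >= 2.0) with mask support.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

def optimized_attention_for_device(device, mask=False, small_input=False):
    # Pick an implementation up front instead of branching per call.
    if small_input:
        return attention_basic  # beats the fused kernels on tiny tensors
    if mask and not hasattr(F, "scaled_dot_product_attention"):
        return attention_basic  # old pytorch: only this path takes a mask
    return attention_pytorch
```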
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
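Both chunked implementations (sub quad and split) have to slice the mask along the same query dimension as the query chunk. A minimal sketch of the split variant; shapes and the chunk size are illustrative:

```python
import torch

def attention_split(q, k, v, mask=None, chunk=1024):
    # Process queries in slices to bound peak memory.
    # q, k, v: (batch, seq, dim); mask: (batch, seq_q, seq_k) or None.
    out = torch.empty_like(q)
    scale = q.shape[-1] ** -0.5
    for i in range(0, q.shape[1], chunk):
        scores = q[:, i:i + chunk] @ k.transpose(-2, -1) * scale
        if mask is not None:
            scores = scores + mask[:, i:i + chunk]  # slice mask with the query
        out[:, i:i + chunk] = scores.softmax(dim=-1) @ v
    return out
```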
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
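xformers exposes masking through the attn_bias argument of memory_efficient_attention; a minimal sketch, with the shape conventions stated as assumptions:

```python
import torch
import xformers.ops  # assumes xformers is installed

def attention_xformers(q, k, v, mask=None):
    # q, k, v: (batch, seq, heads, dim_head); mask must be None or
    # broadcastable to (batch, heads, seq_q, seq_k).
    return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
```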
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
2024-01-03 14:27:11 -05:00
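A hedged sketch of the noise-augmentation trick the 4x upscaler uses: noise the low-res conditioning image, then hand the model the level that was used so it can compensate. The linear scaling below is illustrative; the real model applies its own noise schedule:

```python
import torch

def augment_low_res(image, noise_level, generator=None):
    # Returns the noised conditioning image plus the level, which is
    # fed to the model as extra conditioning.
    noise = torch.randn(image.shape, generator=generator, dtype=image.dtype)
    return image + noise_level * noise, noise_level
```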
comfyanonymous
79f73a4b33
Remove useless code.
2024-01-02 01:50:29 -05:00
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
2023-12-24 07:13:18 -05:00
comfyanonymous
261bcbb0d9
Add a few missing comfy ops in the VAE.
2023-12-22 04:05:42 -05:00
comfyanonymous
a5056cfb1f
Remove useless code.
2023-12-15 01:28:16 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
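A minimal sketch of the pattern the new name describes: layer subclasses that skip pytorch's default weight initialization, which is wasted work when the weights are immediately overwritten by a checkpoint. Details are trimmed; this is the shape of the idea, not the full module:

```python
import torch.nn as nn

class disable_weight_init:
    # Namespace of drop-in layer replacements.
    class Linear(nn.Linear):
        def reset_parameters(self):
            return None  # leave weights uninitialized on purpose

    class Conv2d(nn.Conv2d):
        def reset_parameters(self):
            return None
```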
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from transformers.
This will allow some interesting things that would be too hackish to implement using the transformers implementation.
2023-12-06 23:50:03 -05:00
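A rough sketch of what a self-contained CLIP text tower looks like, built here on stock pytorch layers (the repo's version uses its own blocks, which is exactly what makes intermediate-layer tricks easy). Sizes assume the ViT-L text encoder; activation and init details are glossed over:

```python
import torch
import torch.nn as nn

class SimpleCLIPTextModel(nn.Module):
    # Token + position embeddings, pre-norm transformer blocks with a
    # causal mask, and a final layer norm.
    def __init__(self, vocab=49408, width=768, layers=12, heads=12, ctx=77):
        super().__init__()
        self.token_embedding = nn.Embedding(vocab, width)
        self.position_embedding = nn.Parameter(torch.zeros(ctx, width))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(width, heads, width * 4,
                                       batch_first=True, norm_first=True)
            for _ in range(layers))
        self.final_layer_norm = nn.LayerNorm(width)

    def forward(self, tokens):
        x = self.token_embedding(tokens) + self.position_embedding
        causal = torch.full((x.shape[1], x.shape[1]), float("-inf")).triu_(1)
        for block in self.blocks:
            x = block(x, src_mask=causal)
        return self.final_layer_norm(x)
```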
comfyanonymous
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats supported by pytorch.
2023-12-04 11:10:00 -05:00
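A hedged sketch of the storage trick (needs pytorch >= 2.1 for the fp8 dtypes): keep weights in fp8 to halve memory versus fp16, and upcast just before the matmul, since fp8 matmuls aren't generally supported:

```python
import torch

w = torch.randn(4096, 4096, dtype=torch.float16)
w_fp8 = w.to(torch.float8_e4m3fn)        # storage dtype (or float8_e5m2)
x = torch.randn(1, 4096, dtype=torch.float16)
y = x @ w_fp8.to(torch.float16).t()      # upcast to the compute dtype
```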
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
comfyanonymous
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
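What a patch sees after this change, sketched; "block" is a key ComfyUI really sets during the forward pass, but treat any other key you rely on as something to verify against the source:

```python
def attn1_output_patch(out, extra_options):
    # Everything placed in transformer_options now shows up in
    # extra_options alongside the built-in keys.
    block = extra_options.get("block")  # e.g. which unet block is running
    return out
```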
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
72741105a6
Remove useless code.
2023-11-21 17:27:28 -05:00
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
2023-11-16 15:26:28 -05:00
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
2023-11-16 12:57:12 -05:00
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
2023-11-14 00:08:12 -05:00
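A hedged sketch of the mechanism: patches live in nested dicts under transformer_options, keyed by where they hook in. The exact key name for input-block patches is an assumption here:

```python
def set_input_block_patch(model_options, patch):
    # Hypothetical registration helper mirroring how other patch
    # types are stored.
    to = model_options.setdefault("transformer_options", {})
    to.setdefault("patches", {}).setdefault("input_block_patch", []).append(patch)
    return model_options
```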
comfyanonymous
794dd2064d
Fix typo.
2023-11-07 23:41:55 -05:00
comfyanonymous
a527d0c795
Code refactor.
2023-11-07 19:33:40 -05:00
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
2023-11-07 04:30:37 -05:00
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
2023-10-27 14:45:15 -04:00
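The enabler for asymmetric unets is expressing the architecture as explicit per-block lists, so the decoder can differ from the encoder, which is what SSD-1B-style pruned models need. The numbers below are placeholders, not SSD-1B's actual config:

```python
# Illustrative unet config with per-block transformer depths.
unet_config = {
    "channel_mult": [1, 2, 4],
    "transformer_depth": [0, 0, 2, 2, 4, 4],                   # encoder side
    "transformer_depth_output": [0, 0, 0, 2, 2, 2, 4, 4, 4],   # decoder side
}
```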
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
2023-10-22 03:59:53 -04:00
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
2023-10-22 03:51:29 -04:00
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
2023-10-21 13:23:03 -04:00
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
2023-10-16 02:31:24 -04:00
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
2023-10-13 14:41:17 -04:00
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
2023-10-11 20:38:48 -04:00
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
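The refactor's point, sketched: keep one module owning the projections and swap only the inner attention function, instead of duplicating the whole class per backend. Head reshaping is omitted to keep the sketch short; names mirror the repo's to_q/to_k/to_v convention but the rest is illustrative:

```python
import torch
import torch.nn as nn

def attention_basic(q, k, v, mask=None):
    scores = q @ k.transpose(-2, -1) * q.shape[-1] ** -0.5
    if mask is not None:
        scores = scores + mask
    return scores.softmax(dim=-1) @ v

class CrossAttention(nn.Module):
    # One set of projections; the backend is just a function argument.
    def __init__(self, query_dim, context_dim, inner_dim, attn_fn=attention_basic):
        super().__init__()
        self.attn_fn = attn_fn  # e.g. basic / split / sub quad / xformers
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim)

    def forward(self, x, context=None, mask=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        return self.to_out(self.attn_fn(q, k, v, mask))
```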
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
2023-09-27 12:04:07 -04:00
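Sketch of how a patch reads this: ComfyUI exposes the batch layout under a "cond_or_uncond" key in the options dict, so a patch can treat the positive and negative halves differently. The exact encoding and patch signature are assumptions to verify against the source:

```python
def output_block_patch(h, hsp, transformer_options):
    # Which entries of the current batch are conditional vs
    # unconditional, e.g. a list of per-chunk flags.
    cond_or_uncond = transformer_options.get("cond_or_uncond", [])
    return h, hsp
```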