Commit Graph

84 Commits

Author SHA1 Message Date
comfyanonymous 07f6eeaa13 Fix mask issue with attention_xformers. 2024-11-20 17:07:46 -05:00
comfyanonymous fabf449feb Mochi VAE encoder. 2024-11-01 17:33:09 -04:00
comfyanonymous 33fb282d5c Fix issue. 2024-08-14 02:51:47 -04:00
comfyanonymous bb1969cab7 Initial support for the stable audio open model. 2024-06-15 12:14:56 -04:00
comfyanonymous 0920e0e5fe Remove some unused imports. 2024-05-27 19:08:27 -04:00
comfyanonymous 8508df2569 Work around black image bug on Mac 14.5 by forcing attention upcasting. 2024-05-21 16:56:33 -04:00
comfyanonymous 83d969e397 Disable xformers when tracing model. 2024-05-21 13:55:49 -04:00
comfyanonymous 1900e5119f Fix potential issue. 2024-05-20 08:19:54 -04:00
comfyanonymous 0bdc2b15c7 Cleanup. 2024-05-18 10:11:44 -04:00
comfyanonymous 98f828fad9 Remove unnecessary code. 2024-05-18 09:36:44 -04:00
comfyanonymous 46daf0a9a7 Add debug options to force on and off attention upcasting. 2024-05-16 04:09:41 -04:00
comfyanonymous ec6f16adb6 Fix SAG. 2024-05-14 18:02:27 -04:00
comfyanonymous bb4940d837 Only enable attention upcasting on models that actually need it. 2024-05-14 17:00:50 -04:00
comfyanonymous b0ab31d06c Refactor attention upcasting code part 1. 2024-05-14 12:47:31 -04:00
comfyanonymous 2aed53c4ac Workaround xformers bug. 2024-04-30 21:23:40 -04:00
comfyanonymous 2a813c3b09 Switch some more prints to logging. 2024-03-11 16:34:58 -04:00
comfyanonymous 6bcf57ff10 Fix attention masks properly for multiple batches. 2024-02-17 16:15:18 -05:00
comfyanonymous f8706546f3 Fix attention mask batch size in some attention functions. 2024-02-17 15:22:21 -05:00
comfyanonymous 3b9969c1c5 Properly fix attention masks in CLIP with batches. 2024-02-17 12:13:13 -05:00
comfyanonymous 89507f8adf Remove some unused imports. 2024-01-25 23:42:37 -05:00
comfyanonymous 6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 2024-01-09 13:46:52 -05:00
comfyanonymous c6951548cf Update optimized_attention_for_device function for new functions that
support masked attention.
2024-01-07 13:52:08 -05:00
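A rough, hypothetical sketch of the kind of dispatcher the commit above describes: optimized_attention_for_device picks an attention backend for a device, and once more backends support masks, asking for masked attention no longer has to force a fallback. Everything except the function name optimized_attention_for_device is illustrative.

    import torch
    import torch.nn.functional as F

    def attention_masked(q, k, v, mask=None):
        # Illustrative backend that accepts an attention mask.
        return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

    def attention_unmasked(q, k, v, mask=None):
        # Illustrative faster backend that ignores masks.
        return F.scaled_dot_product_attention(q, k, v)

    def optimized_attention_for_device(device, mask=False):
        # With mask support in more backends, the mask flag just selects
        # a backend instead of always falling back to the basic one.
        return attention_masked if mask else attention_unmasked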
comfyanonymous aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
comfyanonymous 0c2c9fbdfa Support attention mask in split attention. 2024-01-06 13:16:48 -05:00
comfyanonymous 3ad0191bfb Implement attention mask on xformers. 2024-01-06 04:33:03 -05:00
comfyanonymous a5056cfb1f Remove useless code. 2023-12-15 01:28:16 -05:00
comfyanonymous 77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it clearer what they actually do.

Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
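A minimal sketch of the idea behind the new name, assuming the module simply provides layer classes whose initialization is skipped (the real comfy.ops may differ): the weights are about to be overwritten by a loaded state dict, so the default random init is wasted work.

    import torch.nn as nn

    class disable_weight_init:
        # Drop-in replacements for torch.nn layers whose reset_parameters
        # is a no-op, so building a model does not spend time on random
        # initialization that a checkpoint load will overwrite anyway.
        class Linear(nn.Linear):
            def reset_parameters(self):
                return None

        class Conv2d(nn.Conv2d):
            def reset_parameters(self):
                return None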
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
comfyanonymous 1bbd65ab30 Missed this one. 2023-12-05 12:48:41 -05:00
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 2023-12-04 03:12:18 -05:00
comfyanonymous 39e75862b2 Fix regression from last commit. 2023-11-26 03:43:02 -05:00
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
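Illustratively (the keys here are hypothetical), a transformer patch now sees one merged dict, so anything placed in transformer_options can be read straight from extra_options:

    transformer_options = {"cond_or_uncond": [0, 1]}   # set by the sampler
    extra_options = {**transformer_options, "n_heads": 8, "dim_head": 64}

    def example_patch(q, k, v, extra_options):
        # Per-layer info and sampler-level options come from the same place.
        heads = extra_options["n_heads"]
        batch_roles = extra_options["cond_or_uncond"]
        return q, k, v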
comfyanonymous 3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 2023-11-24 03:55:35 -05:00
comfyanonymous 871cc20e13 Support SVD img2vid model. 2023-11-23 19:41:33 -05:00
comfyanonymous c837a173fa Fix some memory issues in sub quad attention. 2023-10-30 15:30:49 -04:00
comfyanonymous 125b03eead Fix some OOM issues with split attention. 2023-10-30 13:14:11 -04:00
comfyanonymous a373367b0c Fix some OOM issues with split and sub quad attention. 2023-10-25 20:17:28 -04:00
comfyanonymous 8b65f5de54 attention_basic now works with hypertile. 2023-10-22 03:59:53 -04:00
comfyanonymous e6bc42df46 Make sub_quad and split work with hypertile. 2023-10-22 03:51:29 -04:00
comfyanonymous 9906e3efe3 Make xformers work with hypertile. 2023-10-21 13:23:03 -04:00
comfyanonymous bb064c9796 Add a separate optimized_attention_masked function. 2023-10-16 02:31:24 -04:00
comfyanonymous ac7d8cfa87 Allow attn_mask in attention_pytorch. 2023-10-11 20:38:48 -04:00
comfyanonymous 1a4bd9e9a6 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
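As an illustration of the refactor this message describes (a sketch, not the repository's actual code): keep a single CrossAttention module and swap only the attention operation in the middle.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def default_attention(q, k, v, heads):
        # Placeholder attention op; xformers / split / sub-quad variants
        # would share this same signature.
        b, n, _ = q.shape
        q, k, v = (t.view(b, -1, heads, t.shape[-1] // heads).transpose(1, 2)
                   for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)
        return out.transpose(1, 2).reshape(b, n, -1)

    class CrossAttention(nn.Module):
        def __init__(self, query_dim, context_dim, heads, dim_head,
                     attn_op=default_attention):
            super().__init__()
            inner_dim = heads * dim_head
            self.heads = heads
            self.attn_op = attn_op  # only this part differs per backend
            self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
            self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
            self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
            self.to_out = nn.Linear(inner_dim, query_dim)

        def forward(self, x, context=None):
            context = x if context is None else context
            q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
            return self.to_out(self.attn_op(q, k, v, self.heads))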
comfyanonymous fff491b032 Model patches can now know which batch is positive and negative. 2023-09-27 12:04:07 -04:00
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 2023-09-04 00:58:18 -04:00
Simon Lui 4a0c4ce4ef Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 2023-09-02 18:22:10 -07:00
comfyanonymous 0e3b641172 Remove xformers related print. 2023-09-01 02:12:03 -04:00
comfyanonymous b80c3276dc Fix issue with gligen. 2023-08-18 16:32:23 -04:00
comfyanonymous d6e4b342e6 Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low rank matrice that when multiplied
together and added to the unet weight give the controlnet weight.

This allows a much smaller memory footprint depending on the rank of the
matrices.

These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
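A minimal sketch of the weight reconstruction the commit above describes (shapes and rank are made up for illustration):

    import torch

    out_features, in_features, rank = 320, 320, 64

    unet_weight = torch.randn(out_features, in_features)
    lora_up = torch.randn(out_features, rank)    # stored in the control lora
    lora_down = torch.randn(rank, in_features)   # stored in the control lora

    # Multiplying the low rank matrices together and adding the result to
    # the unet weight gives the controlnet weight for this layer.
    controlnet_weight = unet_weight + lora_up @ lora_down

    # Storage: out*rank + rank*in values instead of out*in, which is where
    # the smaller memory footprint comes from.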
comfyanonymous 4b957a0010 Initialize the unet directly on the target device. 2023-07-29 14:51:56 -04:00