Commit Graph

68 Commits

Author SHA1 Message Date
comfyanonymous 61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 2023-12-26 05:02:02 -05:00
comfyanonymous d0165d819a Fix SVD lowvram mode. 2023-12-24 07:13:18 -05:00
comfyanonymous 261bcbb0d9 A few missing comfy ops in the VAE. 2023-12-22 04:05:42 -05:00
comfyanonymous 77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it clearer what they actually do.

Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
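
The new name spells out what these op classes do: they skip PyTorch's default weight initialization, since a checkpoint's state_dict will overwrite the values anyway. A minimal sketch of the pattern, assuming subclasses along these lines (illustrative, not the exact comfy.ops contents):

```python
import torch.nn as nn

class Linear(nn.Linear):
    def reset_parameters(self):
        # Skip nn.Linear's default (Kaiming) init: the weights will be
        # replaced by the checkpoint's state_dict, so initializing them
        # here is wasted work at load time.
        return None

class Conv2d(nn.Conv2d):
    def reset_parameters(self):
        return None

# Layers construct without spending time on initialization:
layer = Linear(320, 320)
```
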
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet select between the two fp8
formats supported by pytorch.
2023-12-04 11:10:00 -05:00
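
Both flags correspond to PyTorch's fp8 tensor dtypes (torch.float8_e4m3fn and torch.float8_e5m2, available in recent PyTorch releases). A hedged sketch of the storage idea: weights are held in fp8 and upcast to the compute dtype when used:

```python
import torch

# fp32 "master" weight, as it would come from a checkpoint
weight = torch.randn(320, 320)

# Store in fp8: roughly half the memory of fp16, a quarter of fp32.
weight_fp8 = weight.to(torch.float8_e4m3fn)  # or torch.float8_e5m2

x = torch.randn(1, 320)
# Upcast back to the compute dtype for the matmul; no fp8 matmul
# kernel is assumed here.
y = x @ weight_fp8.to(x.dtype).t()
```
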
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 2023-12-04 03:12:18 -05:00
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous 871cc20e13 Support SVD img2vid model. 2023-11-23 19:41:33 -05:00
comfyanonymous 72741105a6 Remove useless code. 2023-11-21 17:27:28 -05:00
comfyanonymous 7e3fe3ad28 Make deep shrink behave like it should. 2023-11-16 15:26:28 -05:00
comfyanonymous 7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 2023-11-16 12:57:12 -05:00
comfyanonymous 94cc718e9c Add a way to add patches to the input block. 2023-11-14 00:08:12 -05:00
comfyanonymous 794dd2064d Fix typo. 2023-11-07 23:41:55 -05:00
comfyanonymous a527d0c795 Code refactor. 2023-11-07 19:33:40 -05:00
comfyanonymous 2a23ba0b8c Fix unet ops not entirely on GPU. 2023-11-07 04:30:37 -05:00
comfyanonymous 6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 2023-10-27 14:45:15 -04:00
comfyanonymous d44a2de49f Make VAE code closer to sgm. 2023-10-17 15:18:51 -04:00
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 2023-10-17 03:19:29 -04:00
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 2023-10-13 14:41:17 -04:00
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 2023-10-11 21:30:57 -04:00
comfyanonymous 1a4bd9e9a6 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
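
The shape of that refactor, sketched under assumed names (attention_pytorch and attn_fn are illustrative): the q/k/v projections live in one CrossAttention module, and only the inner attention operation is swapped per backend:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_pytorch(q, k, v, heads):
    b, n, dim = q.shape
    # (b, n, heads*d) -> (b, heads, n, d)
    q, k, v = (t.view(b, -1, heads, dim // heads).transpose(1, 2)
               for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, n, dim)

class CrossAttention(nn.Module):
    def __init__(self, dim, heads, attn_fn=attention_pytorch):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.attn_fn = attn_fn  # the only part that varies per backend

    def forward(self, x, context=None):
        context = x if context is None else context
        return self.attn_fn(self.to_q(x), self.to_k(context),
                            self.to_v(context), self.heads)

attn = CrossAttention(dim=320, heads=8)
out = attn(torch.randn(2, 64, 320))  # pass context=... for cross-attention
```
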
comfyanonymous afa2399f79 Add a way to set output block patches to modify the h and hsp. 2023-09-22 20:26:47 -04:00
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 2023-09-04 00:58:18 -04:00
comfyanonymous bed116a1f9 Remove optimization that caused border. 2023-08-29 11:21:36 -04:00
comfyanonymous 1c794a2161 Fallback to slice attention if xformers doesn't support the operation. 2023-08-27 22:24:42 -04:00
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0 2023-08-27 21:33:53 -04:00
comfyanonymous cf5ae46928 Controlnet/t2iadapter cleanup. 2023-08-22 01:06:26 -04:00
comfyanonymous b80c3276dc Fix issue with gligen. 2023-08-18 16:32:23 -04:00
comfyanonymous d6e4b342e6 Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.

This allows a much smaller memory footprint depending on the rank of the
matrices.

These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
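
The reconstruction described in that message is a one-line matrix expression. A minimal sketch, with hypothetical names:

```python
import torch

def controlnet_weight(unet_weight, lora_up, lora_down):
    # unet_weight: (out, in); lora_up: (out, rank); lora_down: (rank, in).
    # Only the two rank-r factors are stored on disk, so the size of the
    # stored controlnet scales with the rank, not with out * in.
    return unet_weight + lora_up @ lora_down

out_ch, in_ch, rank = 320, 320, 64
w = torch.randn(out_ch, in_ch)
up = torch.randn(out_ch, rank)
down = torch.randn(rank, in_ch)
cn_w = controlnet_weight(w, up, down)
```
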
comfyanonymous 2b13939044 Remove some useless code. 2023-07-30 14:13:33 -04:00
comfyanonymous 95d796fc85 Faster VAE loading. 2023-07-29 16:28:30 -04:00
comfyanonymous 4b957a0010 Initialize the unet directly on the target device. 2023-07-29 14:51:56 -04:00
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 2023-07-05 21:58:29 -04:00
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
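
For the merge decision, a similarity score computed over q might look like the following minimal sketch (merge_scores is hypothetical; the real ToMe pairs tokens via bipartite soft matching, and this shows only the metric step):

```python
import torch
import torch.nn.functional as F

def merge_scores(q):
    # q: (batch, tokens, dim). Cosine similarity between tokens;
    # the highest-scoring pairs are the merge candidates.
    q = F.normalize(q, dim=-1)
    return q @ q.transpose(-2, -1)
```
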
comfyanonymous fa28d7334b Remove useless code. 2023-06-23 12:35:26 -04:00
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous ae43f09ef7 All the unet weights should now be initialized with the right dtype. 2023-06-15 18:42:30 -04:00
comfyanonymous 7bf89ba923 Initialize more unet weights as the right dtype. 2023-06-15 15:00:10 -04:00
comfyanonymous e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 2023-06-15 14:29:26 -04:00
comfyanonymous 21f04fe632 Disable default weight values in unet conv2d for faster loading. 2023-06-14 19:46:08 -04:00
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2023-05-20 16:01:02 -04:00
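
The pattern here is to try the fused kernel first and retry with query slicing on OOM. A simplified sketch of that control flow, not the actual comfy implementation (the 1024 chunk size is an assumption):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    try:
        return F.scaled_dot_product_attention(q, k, v)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        # Slice along the query dimension so only a chunk of the
        # (q_len x k_len) attention matrix is live at once.
        chunks = [F.scaled_dot_product_attention(q_chunk, k, v)
                  for q_chunk in q.split(1024, dim=-2)]
        return torch.cat(chunks, dim=-2)
```
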
comfyanonymous 797c4e8d3b Simplify and improve some vae attention code. 2023-05-20 15:07:21 -04:00
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2023-05-05 18:11:41 -04:00
comfyanonymous bae4fb4a9d Fix imports. 2023-05-04 18:10:29 -04:00
comfyanonymous ba8a4c3667 Change latent resolution step to 8. 2023-05-02 14:17:51 -04:00
comfyanonymous 66c8aa5c3e Make unet work with any input shape. 2023-05-02 13:31:43 -04:00
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2023-04-04 22:22:02 -04:00
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image conditioning; it takes as
input the output of CLIPVisionEncode, which has been moved to the
conditioning section.
2023-04-01 23:19:15 -04:00