Commit Graph

153 Commits

Author SHA1 Message Date
comfyanonymous b0ab31d06c Refactor attention upcasting code part 1. 2024-05-14 12:47:31 -04:00
comfyanonymous 2aed53c4ac Workaround xformers bug. 2024-04-30 21:23:40 -04:00
comfyanonymous d7897fff2c Move cascade scale factor from stage_a to latent_formats.py 2024-03-16 14:49:35 -04:00
comfyanonymous 2a813c3b09 Switch some more prints to logging. 2024-03-11 16:34:58 -04:00
comfyanonymous 5f60ee246e Support loading the sr cascade controlnet. 2024-03-07 01:22:48 -05:00
comfyanonymous 03e6e81629 Set upscale algorithm to bilinear for stable cascade controlnet. 2024-03-06 02:59:40 -05:00
comfyanonymous 03e83bb5d0 Support stable cascade canny controlnet. 2024-03-06 02:25:42 -05:00
comfyanonymous cb7c3a2921 Allow image_only_indicator to be None. 2024-02-29 13:11:30 -05:00
comfyanonymous b3e97fc714 Koala 700M and 1B support. 2024-02-28 12:10:11 -05:00
    Use the UNET Loader node to load the unet file to use them.
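
A minimal sketch of what the UNET Loader node does under the hood, assuming ComfyUI's comfy.sd.load_unet helper; the path and filename here are hypothetical:

```python
# Sketch only: mirrors what the UNET Loader node does internally.
import comfy.sd

# Hypothetical filename; place standalone unet files under models/unet/.
model = comfy.sd.load_unet("models/unet/koala_700m.safetensors")
# The returned model can then be wired into a sampler in place of a
# full checkpoint's diffusion model.
```
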
comfyanonymous e93cdd0ad0 Remove print. 2024-02-19 11:47:26 -05:00
comfyanonymous a7b5eaa7e3 Forgot to commit this. 2024-02-19 04:25:46 -05:00
comfyanonymous 6bcf57ff10 Fix attention masks properly for multiple batches. 2024-02-17 16:15:18 -05:00
comfyanonymous 11e3221f1f fp8 weight support for Stable Cascade. 2024-02-17 15:27:31 -05:00
comfyanonymous f8706546f3 Fix attention mask batch size in some attention functions. 2024-02-17 15:22:21 -05:00
comfyanonymous 3b9969c1c5 Properly fix attention masks in CLIP with batches. 2024-02-17 12:13:13 -05:00
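
The batch-size fixes above revolve around one recurring detail: a single attention mask has to be broadcast to match the batch * heads rows of the flattened attention call. A generic sketch of that expansion (not the repository's exact code):

```python
import torch

def expand_attn_mask(mask: torch.Tensor, batch: int, heads: int) -> torch.Tensor:
    # (seq_q, seq_k) or (b, seq_q, seq_k) -> (batch * heads, seq_q, seq_k)
    if mask.ndim == 2:
        mask = mask.unsqueeze(0)
    if mask.shape[0] != batch * heads:
        # e.g. one mask per batch item, repeated across the head dimension
        mask = mask.repeat_interleave(batch * heads // mask.shape[0], dim=0)
    return mask

mask = torch.zeros(2, 77, 77)                          # one mask per batch item
print(expand_attn_mask(mask, batch=2, heads=8).shape)  # torch.Size([16, 77, 77])
```
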
comfyanonymous 805c36ac9c Make Stable Cascade work on old pytorch 2.0 2024-02-17 00:42:30 -05:00
comfyanonymous 667c92814e Stable Cascade Stage B. 2024-02-16 13:02:03 -05:00
comfyanonymous f83109f09b Stable Cascade Stage C. 2024-02-16 10:55:08 -05:00
comfyanonymous 5e06baf112 Stable Cascade Stage A. 2024-02-16 06:30:39 -05:00
comfyanonymous c661a8b118 Don't use numpy for calculating sigmas. 2024-02-07 18:52:51 -05:00
comfyanonymous 89507f8adf Remove some unused imports. 2024-01-25 23:42:37 -05:00
comfyanonymous 2395ae740a Make unclip more deterministic. 2024-01-14 17:28:31 -05:00
    Pass a seed argument; note that this might make old unclip images different.
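
The determinism fix boils down to deriving the noise from an explicit seed instead of the global RNG. A generic sketch of the technique:

```python
import torch

def seeded_noise_like(x: torch.Tensor, seed: int) -> torch.Tensor:
    # An explicit generator makes the noise reproducible across runs,
    # unlike torch.randn_like, which consumes global RNG state.
    g = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(x.size(), generator=g, dtype=x.dtype).to(x.device)

x = torch.zeros(1, 4, 64, 64)
assert torch.equal(seeded_noise_like(x, 42), seeded_noise_like(x, 42))
```
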
comfyanonymous 6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 2024-01-09 13:46:52 -05:00
comfyanonymous c6951548cf Update optimized_attention_for_device function for new functions that support masked attention. 2024-01-07 13:52:08 -05:00
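
Roughly, the function returns an attention implementation chosen per device, and after this change masked calls can be routed to implementations that actually support masks. An illustrative sketch; the real selection logic in comfy/ldm/modules/attention.py checks more conditions:

```python
import torch
import torch.nn.functional as F

def attention_basic(q, k, v, mask=None):
    # Plain softmax(q @ k^T / sqrt(d)) @ v; works everywhere, supports masks.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    if mask is not None:
        scores = scores + mask
    return scores.softmax(dim=-1) @ v

def attention_pytorch(q, k, v, mask=None):
    # PyTorch 2.x fused kernel; also accepts an additive attn_mask.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

def optimized_attention_for_device(device, mask=False):
    if device.type == "cpu" or not hasattr(F, "scaled_dot_product_attention"):
        return attention_basic
    return attention_pytorch
```
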
comfyanonymous aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
comfyanonymous 0c2c9fbdfa Support attention mask in split attention. 2024-01-06 13:16:48 -05:00
comfyanonymous 3ad0191bfb Implement attention mask on xformers. 2024-01-06 04:33:03 -05:00
comfyanonymous 8c6493578b Implement noise augmentation for SD 4X upscale model. 2024-01-03 14:27:11 -05:00
comfyanonymous 79f73a4b33 Remove useless code. 2024-01-02 01:50:29 -05:00
comfyanonymous 61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 2023-12-26 05:02:02 -05:00
comfyanonymous d0165d819a Fix SVD lowvram mode. 2023-12-24 07:13:18 -05:00
comfyanonymous 261bcbb0d9 A few missing comfy ops in the VAE. 2023-12-22 04:05:42 -05:00
comfyanonymous a5056cfb1f Remove useless code. 2023-12-15 01:28:16 -05:00
comfyanonymous 77755ab8db Refactor comfy.ops 2023-12-11 23:27:13 -05:00
    comfy.ops -> comfy.ops.disable_weight_init

    This should make it clearer what they actually do.

    Some unused code has also been removed.
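
The idea the new name spells out, in sketch form: subclass the torch layers and make reset_parameters a no-op, since randomly initializing weights that a checkpoint load immediately overwrites is wasted work. A minimal sketch of the pattern, not the module's full contents:

```python
import torch

class disable_weight_init:
    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            # Skip random init; the checkpoint load supplies the weights.
            return None

    class Conv2d(torch.nn.Conv2d):
        def reset_parameters(self):
            return None

layer = disable_weight_init.Linear(320, 320)  # constructed without init cost
```
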
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation. 2023-12-06 23:50:03 -05:00
    Use a simple CLIP model implementation instead of the one from transformers.

    This will allow some interesting things that would be too hackish to implement using the transformers implementation.
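
For orientation, a "simple" CLIP text model is just token plus position embeddings feeding a stack of pre-LayerNorm transformer blocks under a causal mask. A compact sketch of one block; the CLIP-L dimensions are assumptions, and real CLIP uses quick-gelu rather than plain GELU:

```python
import torch
import torch.nn as nn

class CLIPLayer(nn.Module):
    def __init__(self, d=768, heads=12):
        super().__init__()
        self.ln1 = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d)
        self.mlp = nn.Sequential(nn.Linear(d, d * 4), nn.GELU(), nn.Linear(d * 4, d))

    def forward(self, x, causal_mask):
        h = self.ln1(x)
        x = x + self.attn(h, h, h, attn_mask=causal_mask, need_weights=False)[0]
        return x + self.mlp(self.ln2(x))

tokens = torch.randint(0, 49408, (1, 77))  # 49408 = CLIP vocab size
embed = nn.Embedding(49408, 768)(tokens) + nn.Embedding(77, 768)(torch.arange(77))
mask = torch.full((77, 77), float("-inf")).triu(1)  # causal: attend left only
out = CLIPLayer()(embed, mask)
```
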
comfyanonymous 1bbd65ab30 Missed this one. 2023-12-05 12:48:41 -05:00
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8. 2023-12-04 11:10:00 -05:00
    --fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats supported by pytorch.
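
A sketch of the storage trick, assuming PyTorch >= 2.1 (which defines the two fp8 dtypes the flags name): keep weights in fp8 to halve their memory and upcast per use, since fp8 tensors can't be matmul'd directly:

```python
import torch

weight = torch.randn(320, 320, dtype=torch.float16)
weight_fp8 = weight.to(torch.float8_e4m3fn)   # 1 byte/element instead of 2

def forward(x: torch.Tensor) -> torch.Tensor:
    # Upcast to the activation dtype at matmul time.
    return x @ weight_fp8.to(x.dtype).t()

print(forward(torch.randn(2, 320, dtype=torch.float16)).shape)
```
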
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 2023-12-04 03:12:18 -05:00
comfyanonymous 39e75862b2 Fix regression from last commit. 2023-11-26 03:43:02 -05:00
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches. 2023-11-26 03:13:56 -05:00
    Now everything in transformer_options gets put in extra_options.
comfyanonymous 3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 2023-11-24 03:55:35 -05:00
comfyanonymous 871cc20e13 Support SVD img2vid model. 2023-11-23 19:41:33 -05:00
comfyanonymous 72741105a6 Remove useless code. 2023-11-21 17:27:28 -05:00
comfyanonymous 7e3fe3ad28 Make deep shrink behave like it should. 2023-11-16 15:26:28 -05:00
comfyanonymous 7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 2023-11-16 12:57:12 -05:00
comfyanonymous 94cc718e9c Add a way to add patches to the input block. 2023-11-14 00:08:12 -05:00
comfyanonymous 794dd2064d Fix typo. 2023-11-07 23:41:55 -05:00
comfyanonymous a527d0c795 Code refactor. 2023-11-07 19:33:40 -05:00
comfyanonymous 2a23ba0b8c Fix unet ops not entirely on GPU. 2023-11-07 04:30:37 -05:00
comfyanonymous a268a574fa Remove a bunch of useless code. 2023-10-31 18:11:29 -04:00
    DDIM is the same as Euler with a small difference in the inpaint code. DDIM uses randn_like, but I set a fixed seed instead.

    I'm keeping it in because I'm sure that if I remove it people are going to complain.
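
For context on the claim that DDIM collapses into Euler: in the sigma parameterization these samplers use, one Euler step is just the update below, and DDIM with eta = 0 performs the same step in a different variable. A sketch, not the removed code:

```python
import torch

def euler_step(x, denoised, sigma, sigma_next):
    # Derivative of the probability-flow ODE at the current sigma...
    d = (x - denoised) / sigma
    # ...followed by one explicit Euler step toward sigma_next.
    return x + d * (sigma_next - sigma)

x = torch.randn(1, 4, 64, 64)
x = euler_step(x, denoised=torch.zeros_like(x), sigma=14.6, sigma_next=10.0)
```
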