Commit Graph

269 Commits

Author SHA1 Message Date
comfyanonymous 391c1046cf More flexibility with text encoder return values.
Text encoders can now return values other than the cond and pooled
output to the CONDITIONING.
2024-07-10 20:06:50 -04:00
comfyanonymous 4040491149 Better T5xxl detection. 2024-07-06 00:53:33 -04:00
comfyanonymous d7484ef30c Support loading checkpoints with the UNETLoader node. 2024-07-03 11:34:32 -04:00
comfyanonymous 537f35c7bc Don't update dict if contiguous. 2024-07-02 20:21:51 -04:00
Alex "mcmonkey" Goodwin 3f46362d22 fix non-contiguous tensor saving (from channels-last) (#3932) 2024-07-02 20:16:33 -04:00
comfyanonymous 8ceb5a02a3 Support saving stable audio checkpoint that can be loaded back. 2024-06-27 11:06:52 -04:00
comfyanonymous 4ef1479dcd Multi dimension tiled scale function and tiled VAE audio encoding fallback. 2024-06-22 11:57:49 -04:00
comfyanonymous 1e2839f4d9 More proper tiled audio decoding. 2024-06-20 16:50:31 -04:00
comfyanonymous 0d6a57938e Support loading diffusers SD3 model format with UNETLoader node. 2024-06-19 22:21:18 -04:00
comfyanonymous a45df69570 Basic tiled decoding for audio VAE. 2024-06-17 22:48:23 -04:00
comfyanonymous 6425252c4f Use fp16 as the default vae dtype for the audio VAE. 2024-06-16 13:12:54 -04:00
comfyanonymous ca9d300a80 Better estimation for memory usage during audio VAE encoding/decoding. 2024-06-16 11:47:32 -04:00
comfyanonymous 746a0410d4 Fix VAEEncode with taesd3. 2024-06-16 03:10:04 -04:00
comfyanonymous 04e8798c37 Improvements to the TAESD3 implementation. 2024-06-16 02:04:24 -04:00
Dr.Lt.Data df7db0e027 support TAESD3 (#3738) 2024-06-16 02:03:53 -04:00
comfyanonymous bb1969cab7 Initial support for the stable audio open model. 2024-06-15 12:14:56 -04:00
comfyanonymous 69c8d6d8a6 Single and dual clip loader nodes support SD3.
For SD3, you can use the CLIPLoader to load the t5xxl only, or the
DualCLIPLoader to load only CLIP-L and CLIP-G.
2024-06-11 23:27:39 -04:00
comfyanonymous 0e49211a11 Load the SD3 T5xxl model in the same dtype stored in the checkpoint. 2024-06-11 17:03:26 -04:00
comfyanonymous 5889b7ca0a Support multiple text encoder configurations on SD3. 2024-06-11 13:14:43 -04:00
comfyanonymous 8c4a9befa7 SD3 Support. 2024-06-10 14:06:23 -04:00
comfyanonymous 0920e0e5fe Remove some unused imports. 2024-05-27 19:08:27 -04:00
comfyanonymous e1489ad257 Fix issue with lowvram mode breaking model saving. 2024-05-11 21:55:20 -04:00
comfyanonymous 93e876a3be Remove warnings that confuse people. 2024-05-09 05:29:42 -04:00
comfyanonymous c61eadf69a Make the load checkpoint with config function call the regular one.
I was going to remove this function entirely because it is unmaintainable,
but I think this is the best compromise.

The clip skip and v_prediction parts of the configs should still work,
but not the fp16 vs fp32 part.
2024-05-06 20:04:39 -04:00
comfyanonymous 8dc19e40d1 Don't init a VAE model when there are no VAE weights. 2024-04-24 09:20:31 -04:00
comfyanonymous c59fe9f254 Support VAE without quant_conv. 2024-04-18 21:05:33 -04:00
comfyanonymous 30abc324c2 Support properly saving CosXL checkpoints. 2024-04-08 00:36:22 -04:00
comfyanonymous 0ed72befe1 Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
comfyanonymous 65397ce601 Replace prints with logging and add --verbose argument. 2024-03-10 12:14:23 -04:00
comfyanonymous ca7c310a0e Support loading old CLIP models saved with CLIPSave. 2024-02-25 08:29:12 -05:00
comfyanonymous c2cb8e889b Always return unprojected pooled output for gligen. 2024-02-25 07:33:13 -05:00
comfyanonymous 1cb3f6a83b Move text projection into the CLIP model code.
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
comfyanonymous d91f45ef28 Some cleanups to how the text encoders are loaded. 2024-02-19 10:46:30 -05:00
comfyanonymous 3b2e579926 Support loading the Stable Cascade effnet and previewer as a VAE.
The effnet can be used to encode images for img2img with Stage C.
2024-02-19 04:10:01 -05:00
comfyanonymous 97d03ae04a StableCascade CLIP model support. 2024-02-16 13:29:04 -05:00
comfyanonymous f83109f09b Stable Cascade Stage C. 2024-02-16 10:55:08 -05:00
comfyanonymous 5e06baf112 Stable Cascade Stage A. 2024-02-16 06:30:39 -05:00
comfyanonymous 38b7ac6e26 Don't init the CLIP model when the checkpoint has no CLIP weights. 2024-02-13 00:01:08 -05:00
comfyanonymous da7a8df0d2 Put VAE key name in model config. 2024-01-30 02:24:38 -05:00
comfyanonymous 4871a36458 Cleanup some unused imports. 2024-01-21 21:51:22 -05:00
comfyanonymous d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
This node is unfinished, SVD checkpoints saved with this node will
work with ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous 5eddfdd80c Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
comfyanonymous 824e4935f5 Add dtype parameter to VAE object. 2023-12-12 12:03:29 -05:00
comfyanonymous ba07cb748e Use faster manual cast for fp8 in unet. 2023-12-11 18:24:44 -05:00
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 2023-12-08 02:35:45 -05:00
comfyanonymous 983ebc5792 Use smart model management for VAE to decrease latency. 2023-11-28 04:58:51 -05:00
comfyanonymous c45d1b9b67 Add a function to load a unet from a state dict. 2023-11-27 17:41:29 -05:00
comfyanonymous 871cc20e13 Support SVD img2vid model. 2023-11-23 19:41:33 -05:00
comfyanonymous 410bf07771 Make VAE memory estimation take dtype into account. 2023-11-22 18:17:19 -05:00
comfyanonymous 6a491ebe27 Allow model config to preprocess the vae state dict on load. 2023-11-21 16:29:18 -05:00
comfyanonymous cd4fc77d5f Add taesd and taesdxl to VAELoader node.
They will show up if the corresponding taesd_encoder and taesd_decoder
(or taesdxl) model files are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous 0cf4e86939 Add some command line arguments to store text encoder weights in fp8.
PyTorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
comfyanonymous 0a6fd49a3e Print leftover keys when using the UNETLoader. 2023-11-07 22:15:55 -05:00
comfyanonymous 2455aaed8a Allow model or clip to be None in load_lora_for_models. 2023-11-01 20:27:20 -04:00
comfyanonymous 2a134bfab9 Fix checkpoint loader with config. 2023-10-27 22:13:55 -04:00
comfyanonymous 6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 2023-10-27 14:45:15 -04:00
comfyanonymous 430a8334c5 Fix some potential issues. 2023-10-18 19:48:36 -04:00
comfyanonymous d44a2de49f Make VAE code closer to sgm. 2023-10-17 15:18:51 -04:00
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 2023-10-13 14:41:17 -04:00
comfyanonymous 72188dffc3 load_checkpoint_guess_config can now optionally output the model. 2023-10-06 13:48:18 -04:00
City 9bfec2bdbf Fix quality loss due to low precision 2023-10-04 15:40:59 +02:00
comfyanonymous 71713888c4 Print missing VAE keys. 2023-09-28 00:54:57 -04:00
comfyanonymous 7d401ed1d0 Add ldm format support to UNETLoader. 2023-09-11 16:36:50 -04:00
comfyanonymous 7931ff0fd9 Support SDXL inpaint models. 2023-09-01 15:22:52 -04:00
comfyanonymous fe4c07400c Fix "Load Checkpoint with config" node. 2023-08-29 23:58:32 -04:00
comfyanonymous 1300a1bb4c Text encoder should initially load on the offload_device not the regular. 2023-08-28 15:08:45 -04:00
comfyanonymous f92074b84f Move ModelPatcher to model_patcher.py 2023-08-28 14:51:31 -04:00
comfyanonymous c77f02e1c6 Move controlnet code to comfy/controlnet.py 2023-08-25 17:33:04 -04:00
comfyanonymous 15a7716fa6 Move lora code to comfy/lora.py 2023-08-25 17:11:51 -04:00
comfyanonymous ec96f6d03a Move text_projection to base clip model. 2023-08-24 23:43:48 -04:00
comfyanonymous 51dde87e97 Try to free enough vram for control lora inference. 2023-08-24 17:20:54 -04:00
comfyanonymous cc44ade79e Always shift text encoder to GPU when the device supports fp16. 2023-08-23 21:45:00 -04:00
comfyanonymous 00c0b2c507 Initialize text encoder to target dtype. 2023-08-23 21:01:15 -04:00
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
comfyanonymous afcb9cb1df All resolutions now work with t2i adapter for SDXL. 2023-08-22 16:23:54 -04:00
comfyanonymous 85fde89d7f T2I adapter SDXL. 2023-08-22 14:40:43 -04:00
comfyanonymous cf5ae46928 Controlnet/t2iadapter cleanup. 2023-08-22 01:06:26 -04:00
comfyanonymous 763b0cf024 Fix control lora not working in fp32. 2023-08-21 20:38:31 -04:00
comfyanonymous 199d73364a Fix ControlLora on lowvram. 2023-08-21 00:54:04 -04:00
comfyanonymous d08e53de2e Remove autocast from controlnet code. 2023-08-20 21:47:32 -04:00
comfyanonymous c9b562aed1 Free more memory before VAE encode/decode. 2023-08-19 12:13:13 -04:00
comfyanonymous d6e4b342e6 Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.

This allows a much smaller memory footprint depending on the rank of the
matrices.

These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
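The control lora commit above describes the weight reconstruction: controlnet weight = unet weight + (up @ down). A minimal sketch of that idea, with illustrative names and plain-Python matrices (not ComfyUI's actual API or tensor types):

```python
def matmul(a, b):
    # Naive matrix multiply, for illustration only.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def reconstruct(w_unet, up, down):
    # controlnet weight = unet weight + low-rank product (up @ down).
    delta = matmul(up, down)
    return [[w + d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(w_unet, delta)]

# 2x2 unet weight with rank-1 factors: storing up (n x r) and
# down (r x n) instead of the full n x n delta is what shrinks
# the memory footprint, depending on the rank r.
w_unet = [[1.0, 1.0], [1.0, 1.0]]
up = [[2.0], [3.0]]      # n x r
down = [[4.0, 5.0]]      # r x n
print(reconstruct(w_unet, up, down))  # [[9.0, 11.0], [13.0, 16.0]]
```

Since the full controlnet weight is recovered before use, these controlnets behave just like regular ones at inference time.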
comfyanonymous c28db1f315 Fix potential issues with patching models when saving checkpoints. 2023-08-17 11:07:08 -04:00
comfyanonymous 89a0767abf Smarter memory management.
Try to keep models in vram when possible.

Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
comfyanonymous 53f326a3d8 Support diffusers mini controlnets. 2023-08-16 12:28:01 -04:00
comfyanonymous c8a23ce9e8 Support for yet another lora type based on diffusers. 2023-08-11 13:04:21 -04:00
comfyanonymous c20583286f Support diffuser text encoder loras. 2023-08-10 20:28:28 -04:00
comfyanonymous d8e58f0a7e Detect hint_channels from controlnet. 2023-08-06 14:08:59 -04:00
comfyanonymous c5d7593ccf Support loras in diffusers format. 2023-08-05 01:40:24 -04:00
comfyanonymous 4b957a0010 Initialize the unet directly on the target device. 2023-07-29 14:51:56 -04:00
comfyanonymous 727588d076 Fix some new loras. 2023-07-25 16:39:15 -04:00
comfyanonymous 5f75d784a1 Start is now 0.0 and end is now 1.0 for the timestep ranges. 2023-07-24 18:38:17 -04:00
comfyanonymous 7ff14b62f8 ControlNetApplyAdvanced can now define when controlnet gets applied. 2023-07-24 17:50:49 -04:00
comfyanonymous 22f29d66ca Try to fix memory issue with lora. 2023-07-22 21:38:56 -04:00
comfyanonymous 12a6e93171 Del the right object when applying lora. 2023-07-22 11:25:49 -04:00
comfyanonymous 78e7958d17 Support controlnet in diffusers format. 2023-07-21 22:58:16 -04:00
comfyanonymous 09386a3697 Fix issue with lora in some cases when combined with model merging. 2023-07-21 21:27:27 -04:00
comfyanonymous 58b2364f58 Properly support SDXL diffusers unet with UNETLoader node. 2023-07-21 14:38:56 -04:00
comfyanonymous 0115018695 Print errors and continue when lora weights are not compatible. 2023-07-20 19:56:22 -04:00