Commit Graph

661 Commits

Author SHA1 Message Date
comfyanonymous 248aa3e563 Fix bug. 2023-11-11 12:20:16 -05:00
comfyanonymous 4a8a839b40 Add option to use in place weight updating in ModelPatcher. 2023-11-11 01:11:12 -05:00
comfyanonymous 412d3ff57d Refactor. 2023-11-11 01:11:06 -05:00
comfyanonymous 58d5d71a93 Working RescaleCFG node.
This was broken because of recent changes so I fixed it and moved it from
the experiments repo.
2023-11-10 20:52:10 -05:00
comfyanonymous 3e0033ef30 Fix model merge bug.
Unload models before getting weights for model patching.
2023-11-10 03:19:05 -05:00
comfyanonymous 002aefa382 Support lcm models.
Use the "lcm" sampler to sample them; you also have to use the
ModelSamplingDiscrete node to set them as lcm models for them to work properly.
2023-11-09 18:30:22 -05:00
comfyanonymous ec12000136 Add support for full diff lora keys. 2023-11-08 22:05:31 -05:00
comfyanonymous 064d7583eb Add a CONDConstant for passing non tensor conds to unet. 2023-11-08 01:59:09 -05:00
comfyanonymous 794dd2064d Fix typo. 2023-11-07 23:41:55 -05:00
comfyanonymous 0a6fd49a3e Print leftover keys when using the UNETLoader. 2023-11-07 22:15:55 -05:00
comfyanonymous fe40109b57 Fix issue with object patches not being copied with patcher. 2023-11-07 22:15:15 -05:00
comfyanonymous a527d0c795 Code refactor. 2023-11-07 19:33:40 -05:00
comfyanonymous 2a23ba0b8c Fix unet ops not entirely on GPU. 2023-11-07 04:30:37 -05:00
comfyanonymous 844dbf97a7 Add: advanced->model->ModelSamplingDiscrete node.
This allows changing the sampling parameters of the model (eps or vpred)
or setting the model to use zsnr.
2023-11-07 03:28:53 -05:00
comfyanonymous 656c0b5d90 CLIP code refactor and improvements.
More generic clip model class that can be used on more types of text
encoders.

Don't apply weighting algorithm when weight is 1.0

Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
comfyanonymous b3fcd64c6c Make SDTokenizer class work with more types of tokenizers. 2023-11-06 01:09:18 -05:00
gameltb 7e455adc07 fix unet_wrapper_function name in ModelPatcher 2023-11-05 17:11:44 +08:00
comfyanonymous 1ffa8858e7 Move model sampling code to comfy/model_sampling.py 2023-11-04 01:32:23 -04:00
comfyanonymous ae2acfc21b Don't convert NaN to zero.
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous d2e27b48f1 sampler_cfg_function now gets the noisy output as argument again.
This should make things that use sampler_cfg_function behave like before.

Added an input argument for those that want the denoised output.

This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
2023-11-01 21:24:08 -04:00
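For readers using this hook, here is a minimal sketch of a custom CFG callback under the convention described above. The argument names ("cond", "uncond", "cond_scale", "input") and the set_model_sampler_cfg_function registration helper are assumptions about the API at this point in time; verify against comfy/samplers.py.

```python
# Hypothetical sketch of a sampler_cfg_function after this change.
# Assumed: the callback receives a dict with "cond", "uncond", "cond_scale"
# and the new "input" (the noisy latent) keys, and returns the combined
# result in the same "noisy output" convention as "cond"/"uncond".
def my_cfg_function(args):
    cond = args["cond"]              # conditional model output (noisy convention)
    uncond = args["uncond"]          # unconditional model output
    cond_scale = args["cond_scale"]
    x = args["input"]                # noisy input to the model

    x0_pred = x - cond               # x0 (denoised) prediction, per the commit message
    # x0_pred could feed tricks like RescaleCFG; plain CFG keeps old behavior:
    return uncond + (cond - uncond) * cond_scale

# Assumed registration point on a ModelPatcher clone:
# m = model.clone()
# m.set_model_sampler_cfg_function(my_cfg_function)
```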
comfyanonymous 2455aaed8a Allow model or clip to be None in load_lora_for_models. 2023-11-01 20:27:20 -04:00
comfyanonymous ecb80abb58 Allow ModelSamplingDiscrete to be instantiated without a model config. 2023-11-01 19:13:03 -04:00
comfyanonymous e73ec8c4da Not used anymore. 2023-11-01 00:01:30 -04:00
comfyanonymous 111f1b5255 Fix some issues with sampling precision. 2023-10-31 23:49:29 -04:00
comfyanonymous 7c0f255de1 Clean up percent start/end and make controlnets work with sigmas. 2023-10-31 22:14:32 -04:00
comfyanonymous a268a574fa Remove a bunch of useless code.
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
comfyanonymous 1777b54d02 Sampling code changes.
apply_model in model_base now returns the denoised output.

This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
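As a point of reference, for an eps-prediction model the relationship between the raw model output and the denoised output that apply_model now returns is the standard one; the snippet below is a generic illustration, not the actual ComfyUI code.

```python
import torch

def denoised_from_eps(x: torch.Tensor, eps: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    # x:     noisy latent passed to the model
    # eps:   raw noise prediction from the UNet
    # sigma: current noise level
    # Under the usual eps parameterization, the denoised (x0) estimate is:
    return x - eps * sigma
```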
comfyanonymous c837a173fa Fix some memory issues in sub quad attention. 2023-10-30 15:30:49 -04:00
comfyanonymous 125b03eead Fix some OOM issues with split attention. 2023-10-30 13:14:11 -04:00
comfyanonymous a12cc05323 Add --max-upload-size argument, the default is 100MB. 2023-10-29 03:55:46 -04:00
comfyanonymous 2a134bfab9 Fix checkpoint loader with config. 2023-10-27 22:13:55 -04:00
comfyanonymous e60ca6929a SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. 2023-10-27 15:54:04 -04:00
comfyanonymous 6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 2023-10-27 14:45:15 -04:00
comfyanonymous 434ce25ec0 Restrict loading embeddings from embedding folders. 2023-10-27 02:54:13 -04:00
comfyanonymous 723847f6b3 Faster clip image processing. 2023-10-26 01:53:01 -04:00
comfyanonymous a373367b0c Fix some OOM issues with split and sub quad attention. 2023-10-25 20:17:28 -04:00
comfyanonymous 7fbb217d3a Fix uni_pc returning a noisy image when steps <= 3 2023-10-25 16:08:30 -04:00
Jedrzej Kosinski 3783cb8bfd change 'c_adm' to 'y' in ControlNet.get_control 2023-10-25 08:24:32 -05:00
comfyanonymous d1d2fea806 Pass extra conds directly to unet. 2023-10-25 00:07:53 -04:00
comfyanonymous 036f88c621 Refactor to make it easier to add custom conds to models. 2023-10-24 23:31:12 -04:00
comfyanonymous 3fce8881ca Sampling code refactor to make it easier to add more conds. 2023-10-24 03:38:41 -04:00
comfyanonymous 8594c8be4d Empty the cache when the torch cache uses more than 25% of free mem. 2023-10-22 13:58:12 -04:00
comfyanonymous 8b65f5de54 attention_basic now works with hypertile. 2023-10-22 03:59:53 -04:00
comfyanonymous e6bc42df46 Make sub_quad and split work with hypertile. 2023-10-22 03:51:29 -04:00
comfyanonymous a0690f9df9 Fix t2i adapter issue. 2023-10-21 20:31:24 -04:00
comfyanonymous 9906e3efe3 Make xformers work with hypertile. 2023-10-21 13:23:03 -04:00
comfyanonymous 4185324a1d Fix uni_pc sampler math. This changes the images this sampler produces. 2023-10-20 04:16:53 -04:00
comfyanonymous e6962120c6 Make sure cond_concat is on the right device. 2023-10-19 01:14:25 -04:00
comfyanonymous 45c972aba8 Refactor cond_concat into conditioning. 2023-10-18 20:36:58 -04:00
comfyanonymous 430a8334c5 Fix some potential issues. 2023-10-18 19:48:36 -04:00
comfyanonymous 782a24fce6 Refactor cond_concat into model object. 2023-10-18 16:48:37 -04:00
comfyanonymous 0d45a565da Fix memory issue related to control loras.
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
comfyanonymous d44a2de49f Make VAE code closer to sgm. 2023-10-17 15:18:51 -04:00
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 2023-10-17 03:19:29 -04:00
comfyanonymous c8013f73e5 Add some Quadro cards to the list of cards with broken fp16. 2023-10-16 16:48:46 -04:00
comfyanonymous bb064c9796 Add a separate optimized_attention_masked function. 2023-10-16 02:31:24 -04:00
comfyanonymous fd4c5f07e7 Add a --bf16-unet to test running the unet in bf16. 2023-10-13 14:51:10 -04:00
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 2023-10-13 14:41:17 -04:00
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 2023-10-11 21:30:57 -04:00
comfyanonymous 20d3852aa1 Pull some small changes from the other repo. 2023-10-11 20:38:48 -04:00
comfyanonymous ac7d8cfa87 Allow attn_mask in attention_pytorch. 2023-10-11 20:38:48 -04:00
comfyanonymous 1a4bd9e9a6 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
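The shape of the refactor described here, roughly: keep one CrossAttention module for the projections and swap only the inner attention operation. This is an illustrative sketch with made-up names, not the code in comfy/ldm/modules/attention.py (which provides basic, split, sub-quad, xformers and pytorch variants of the inner op).

```python
import torch
import torch.nn as nn

def simple_attention(q, k, v, heads):
    # Naive reference implementation of the swappable inner attention op.
    b, n, _ = q.shape
    q, k, v = (t.reshape(b, -1, heads, t.shape[-1] // heads).transpose(1, 2) for t in (q, k, v))
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, n, -1)

class CrossAttention(nn.Module):
    # One module holds the q/k/v/out projections; only attn_op differs
    # between the basic/split/sub-quad/xformers/pytorch code paths.
    def __init__(self, query_dim, context_dim, heads, dim_head, attn_op=simple_attention):
        super().__init__()
        inner_dim = heads * dim_head
        self.heads = heads
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim)
        self.attn_op = attn_op

    def forward(self, x, context=None):
        context = x if context is None else context
        out = self.attn_op(self.to_q(x), self.to_k(context), self.to_v(context), self.heads)
        return self.to_out(out)
```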
comfyanonymous 8cc75c64ff Let unet wrapper functions have .to attributes. 2023-10-11 01:34:38 -04:00
comfyanonymous 5e885bd9c8 Cleanup. 2023-10-10 21:46:53 -04:00
comfyanonymous 851bb87ca9 Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI 2023-10-10 21:42:35 -04:00
Yukimasa Funaoka 9eb621c95a Supports TAESD models in safetensors format 2023-10-10 13:21:44 +09:00
comfyanonymous d1a0abd40b Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI 2023-10-09 01:53:29 -04:00
comfyanonymous 72188dffc3 load_checkpoint_guess_config can now optionally output the model. 2023-10-06 13:48:18 -04:00
Jairo Correa 63e5fd1790 Option to input directory 2023-10-04 19:45:15 -03:00
City 9bfec2bdbf Fix quality loss due to low precision 2023-10-04 15:40:59 +02:00
badayvedat 0f17993d05 fix: typo in extra sampler 2023-09-29 06:09:59 +03:00
comfyanonymous 66756de100 Add SamplerDPMPP_2M_SDE node. 2023-09-28 21:56:23 -04:00
comfyanonymous 71713888c4 Print missing VAE keys. 2023-09-28 00:54:57 -04:00
comfyanonymous d234ca558a Add missing samplers to KSamplerSelect. 2023-09-28 00:17:03 -04:00
comfyanonymous 1adcc4c3a2 Add a SamplerCustom Node.
This node takes a list of sigmas and a sampler object as input.

This lets people easily implement custom schedulers and samplers as nodes.

More nodes will be added to it in the future.
2023-09-27 22:21:18 -04:00
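To give a sense of what "custom schedulers as nodes" means, here is a hedged sketch of a node that emits a sigma schedule for SamplerCustom, following the usual custom-node conventions (INPUT_TYPES/RETURN_TYPES/FUNCTION). The node name and defaults are made up; the Karras-style formula is the standard one.

```python
import torch

# Hypothetical custom-node sketch: produce a sigma schedule that can be fed
# into SamplerCustom. Treat the socket type names and layout as illustrative.
class SimpleKarrasLikeScheduler:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "steps": ("INT", {"default": 20, "min": 1, "max": 1000}),
            "sigma_max": ("FLOAT", {"default": 14.6, "min": 0.0, "max": 1000.0}),
            "sigma_min": ("FLOAT", {"default": 0.03, "min": 0.0, "max": 1000.0}),
        }}

    RETURN_TYPES = ("SIGMAS",)
    FUNCTION = "get_sigmas"
    CATEGORY = "sampling/custom_sampling"

    def get_sigmas(self, steps, sigma_max, sigma_min, rho=7.0):
        # Karras-style schedule: interpolate sigma^(1/rho) linearly, then append 0.
        ramp = torch.linspace(0, 1, steps)
        sigmas = (sigma_max ** (1 / rho) + ramp * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
        return (torch.cat([sigmas, torch.zeros(1)]),)

NODE_CLASS_MAPPINGS = {"SimpleKarrasLikeScheduler": SimpleKarrasLikeScheduler}
```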
comfyanonymous bf3fc2f1b7 Refactor sampling related code. 2023-09-27 16:45:22 -04:00
comfyanonymous fff491b032 Model patches can now know which batch is positive and negative. 2023-09-27 12:04:07 -04:00
comfyanonymous 1d6dd83184 Scheduler code refactor. 2023-09-26 17:07:07 -04:00
comfyanonymous 446caf711c Sampling code refactor. 2023-09-26 13:45:15 -04:00
comfyanonymous 76cdc809bf Support more controlnet models. 2023-09-23 18:47:46 -04:00
comfyanonymous ae87543653 Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI 2023-09-23 00:57:17 -04:00
Simon Lui eec449ca8e Allow Intel GPUs to LoRA cast on GPU since it supports BF16 natively. 2023-09-22 21:11:27 -07:00
comfyanonymous afa2399f79 Add a way to set output block patches to modify the h and hsp. 2023-09-22 20:26:47 -04:00
comfyanonymous 492db2de8d Allow having a different pooled output for each image in a batch. 2023-09-21 01:14:42 -04:00
comfyanonymous 1cdfb3dba4 Only do the cast on the device if the device supports it. 2023-09-20 17:52:41 -04:00
comfyanonymous 7c9a92f552 Don't depend on torchvision. 2023-09-19 13:12:47 -04:00
MoonRide303 2b6b178173 Added support for lanczos scaling 2023-09-19 10:40:38 +02:00
comfyanonymous b92bf8196e Do lora cast on GPU instead of CPU for higher performance. 2023-09-18 23:04:49 -04:00
comfyanonymous 321c5fa295 Enable pytorch attention by default on xpu. 2023-09-17 04:09:19 -04:00
comfyanonymous 61b1f67734 Support models without previews. 2023-09-16 12:59:54 -04:00
comfyanonymous 43d4935a1d Add cond_or_uncond array to transformer_options so hooks can check what is
cond and what is uncond.
2023-09-15 22:21:14 -04:00
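A hedged sketch of how a hook might use this: the convention assumed here is that cond entries are 0 and uncond entries are 1, and that patches see transformer_options as their extra_options dict; check the actual samplers.py constants before relying on it.

```python
# Illustrative patch; argument names and the 0 = cond / 1 = uncond convention
# are assumptions to be checked against comfy/samplers.py.
def attn1_output_patch(out, extra_options):
    cond_or_uncond = extra_options.get("cond_or_uncond", [])
    if cond_or_uncond and all(i == 1 for i in cond_or_uncond):
        # this batch chunk is purely the unconditional pass
        pass  # e.g. skip an effect here
    return out

# Assumed registration on a ModelPatcher clone:
# m = model.clone()
# m.set_model_attn1_output_patch(attn1_output_patch)
```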
comfyanonymous 415abb275f Add DDPM sampler. 2023-09-15 19:22:47 -04:00
comfyanonymous 94e4fe39d8 This isn't used anywhere. 2023-09-15 12:03:03 -04:00
comfyanonymous 44361f6344 Support for text encoder models that need attention_mask. 2023-09-15 02:02:05 -04:00
comfyanonymous 0d8f376446 Setting the last layer on SD2.x models now uses the proper indexes.
Before, I had made the last layer the penultimate layer because some
checkpoints don't have it, but that's not consistent with the other models.

TLDR: for SD2.x models only: CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
comfyanonymous 0966d3ce82 Don't run text encoders on xpu because there are issues. 2023-09-14 12:16:07 -04:00
comfyanonymous 3039b08eb1 Only parse command line args when main.py is called. 2023-09-13 11:38:20 -04:00
comfyanonymous ed58730658 Don't leave very large hidden states in the clip vision output. 2023-09-12 15:09:10 -04:00
comfyanonymous fb3b728203 Fix issue where autocast fp32 CLIP gave different results from regular. 2023-09-11 21:49:56 -04:00
comfyanonymous 7d401ed1d0 Add ldm format support to UNETLoader. 2023-09-11 16:36:50 -04:00
comfyanonymous e85be36bd2 Add a penultimate_hidden_states to the clip vision output. 2023-09-08 14:06:58 -04:00
comfyanonymous 1e6b67101c Support diffusers format t2i adapters. 2023-09-08 11:36:51 -04:00
comfyanonymous 326577d04c Allow cancelling of everything with a progress bar. 2023-09-07 23:37:03 -04:00
comfyanonymous f88f7f413a Add a ConditioningSetAreaPercentage node. 2023-09-06 03:28:27 -04:00
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 2023-09-04 00:58:18 -04:00
comfyanonymous 7746bdf7b0 Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI 2023-09-04 00:43:11 -04:00
Simon Lui 2da73b7073 Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. 2023-09-02 20:07:52 -07:00
comfyanonymous a74c5dbf37 Move some functions to utils.py 2023-09-02 22:33:37 -04:00
Simon Lui 4a0c4ce4ef Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 2023-09-02 18:22:10 -07:00
comfyanonymous 77a176f9e0 Use a common function to reshape the batch. 2023-09-02 03:42:49 -04:00
comfyanonymous 7931ff0fd9 Support SDXL inpaint models. 2023-09-01 15:22:52 -04:00
comfyanonymous 0e3b641172 Remove xformers related print. 2023-09-01 02:12:03 -04:00
comfyanonymous 5c363a9d86 Fix controlnet bug. 2023-09-01 02:01:08 -04:00
comfyanonymous cfe1c54de8 Fix controlnet issue. 2023-08-31 15:16:58 -04:00
comfyanonymous 1c012d69af It doesn't make sense for c_crossattn and c_concat to be lists. 2023-08-31 13:25:00 -04:00
comfyanonymous 7e941f9f24 Clean up DiffusersLoader node. 2023-08-30 12:57:07 -04:00
Simon Lui 18617967e5 Fix error message in model_patcher.py
Found while tinkering.
2023-08-30 00:25:04 -07:00
comfyanonymous fe4c07400c Fix "Load Checkpoint with config" node. 2023-08-29 23:58:32 -04:00
comfyanonymous f2f5e5dcbb Support SDXL t2i adapters with 3 channel input. 2023-08-29 16:44:57 -04:00
comfyanonymous 15adc3699f Move beta_schedule to model_config and allow disabling unet creation. 2023-08-29 14:22:53 -04:00
comfyanonymous bed116a1f9 Remove optimization that caused border. 2023-08-29 11:21:36 -04:00
comfyanonymous 65cae62c71 No need to check filename extensions to detect shuffle controlnet. 2023-08-28 16:49:06 -04:00
comfyanonymous 4e89b2c25a Put clip vision outputs on the CPU. 2023-08-28 16:26:11 -04:00
comfyanonymous a094b45c93 Load clipvision model to GPU for faster performance. 2023-08-28 15:29:27 -04:00
comfyanonymous 1300a1bb4c Text encoder should initially load on the offload_device, not the regular device. 2023-08-28 15:08:45 -04:00
comfyanonymous f92074b84f Move ModelPatcher to model_patcher.py 2023-08-28 14:51:31 -04:00
comfyanonymous 4798cf5a62 Implement loras with norm keys. 2023-08-28 11:20:06 -04:00
comfyanonymous b8c7c770d3 Enable bf16-vae by default on ampere and up. 2023-08-27 23:06:19 -04:00
comfyanonymous 1c794a2161 Fallback to slice attention if xformers doesn't support the operation. 2023-08-27 22:24:42 -04:00
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0 2023-08-27 21:33:53 -04:00
comfyanonymous a57b0c797b Fix lowvram model merging. 2023-08-26 11:52:07 -04:00
comfyanonymous f72780a7e3 The new smart memory management makes this unnecessary. 2023-08-25 18:02:15 -04:00
comfyanonymous c77f02e1c6 Move controlnet code to comfy/controlnet.py 2023-08-25 17:33:04 -04:00
comfyanonymous 15a7716fa6 Move lora code to comfy/lora.py 2023-08-25 17:11:51 -04:00
comfyanonymous ec96f6d03a Move text_projection to base clip model. 2023-08-24 23:43:48 -04:00
comfyanonymous 30eb92c3cb Code cleanups. 2023-08-24 19:39:18 -04:00
comfyanonymous 51dde87e97 Try to free enough vram for control lora inference. 2023-08-24 17:20:54 -04:00
comfyanonymous e3d0a9a490 Fix potential issue with text projection matrix multiplication. 2023-08-24 00:54:16 -04:00
comfyanonymous cc44ade79e Always shift text encoder to GPU when the device supports fp16. 2023-08-23 21:45:00 -04:00
comfyanonymous a6ef08a46a Even with forced fp16 the cpu device should never use it. 2023-08-23 21:38:28 -04:00
comfyanonymous 00c0b2c507 Initialize text encoder to target dtype. 2023-08-23 21:01:15 -04:00
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
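The general pattern (fp16 storage, fp32 compute) looks like the following generic PyTorch sketch; it is not the ComfyUI implementation, just an illustration of the memory/quality trade-off described above.

```python
import torch
import torch.nn as nn

class FP16StorageLinear(nn.Module):
    # Weights live in fp16 (half the memory); the matmul itself runs in fp32.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(out_features, in_features, dtype=torch.float16))
        self.bias = nn.Parameter(torch.zeros(out_features, dtype=torch.float16))

    def forward(self, x):
        # Upcast only for the computation; storage stays fp16.
        return nn.functional.linear(x.float(), self.weight.float(), self.bias.float())
```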
comfyanonymous afcb9cb1df All resolutions now work with t2i adapter for SDXL. 2023-08-22 16:23:54 -04:00
comfyanonymous 85fde89d7f T2I adapter SDXL. 2023-08-22 14:40:43 -04:00
comfyanonymous cf5ae46928 Controlnet/t2iadapter cleanup. 2023-08-22 01:06:26 -04:00
comfyanonymous 763b0cf024 Fix control lora not working in fp32. 2023-08-21 20:38:31 -04:00
comfyanonymous 199d73364a Fix ControlLora on lowvram. 2023-08-21 00:54:04 -04:00
comfyanonymous d08e53de2e Remove autocast from controlnet code. 2023-08-20 21:47:32 -04:00
comfyanonymous 0d7b0a4dc7 Small cleanups. 2023-08-20 14:56:47 -04:00
Simon Lui 9225465975 Further tuning and fix mem_free_total. 2023-08-20 14:19:53 -04:00