Commit Graph

885 Commits

Author SHA1 Message Date
comfyanonymous 1281f933c1 Small optimization. 2024-06-15 02:44:38 -04:00
comfyanonymous f2e844e054 Optimize some unneeded if conditions in the sampling code. 2024-06-15 02:26:19 -04:00
comfyanonymous 0ec513d877 Add a --force-channels-last to inference models in channel last mode. 2024-06-15 01:08:12 -04:00
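For context on the channels-last commit above: channels-last (NHWC) changes the element strides in memory while the logical tensor shape stays (N, C, H, W); the real code would use PyTorch's `memory_format=torch.channels_last`. A minimal stdlib sketch of the two stride orders (function names are illustrative, not ComfyUI code):

```python
# Illustrative only: flat-memory strides for a 4D tensor in contiguous
# (NCHW) vs channels-last (NHWC) order. PyTorch keeps the logical shape
# (N, C, H, W) in both cases; only the strides differ.

def nchw_strides(n, c, h, w):
    # Contiguous layout: W is the fastest-varying dimension.
    return (c * h * w, h * w, w, 1)

def nhwc_strides(n, c, h, w):
    # Channels-last layout: C is the fastest-varying dimension.
    return (h * w * c, 1, w * c, c)

print(nchw_strides(1, 3, 4, 4))  # (48, 16, 4, 1)
print(nhwc_strides(1, 3, 4, 4))  # (48, 1, 12, 3)
```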
comfyanonymous 0e06b370db Print key names for easier debugging. 2024-06-14 18:18:53 -04:00
Simon Lui 5eb98f0092 Exempt IPEX from non_blocking previews fixing segmentation faults. (#3708) 2024-06-13 18:51:14 -04:00
comfyanonymous ac151ac169 Support SD3 diffusers lora. 2024-06-13 18:26:10 -04:00
comfyanonymous 37a08a41b3 Support setting weight offsets in weight patcher. 2024-06-13 17:21:26 -04:00
comfyanonymous 605e64f6d3 Fix lowvram issue. 2024-06-12 10:39:33 -04:00
comfyanonymous 1ddf512fdc Don't auto convert clip and vae weights to fp16 when saving checkpoint. 2024-06-12 01:07:58 -04:00
comfyanonymous 694e0b48e0 SD3 better memory usage estimation. 2024-06-12 00:49:00 -04:00
comfyanonymous 69c8d6d8a6 Single and dual clip loader nodes support SD3.
You can use the CLIPLoader to use the t5xxl only or the DualCLIPLoader to
use CLIP-L and CLIP-G only for sd3.
2024-06-11 23:27:39 -04:00
comfyanonymous 0e49211a11 Load the SD3 T5xxl model in the same dtype stored in the checkpoint. 2024-06-11 17:03:26 -04:00
comfyanonymous 5889b7ca0a Support multiple text encoder configurations on SD3. 2024-06-11 13:14:43 -04:00
comfyanonymous 9424522ead Reuse code. 2024-06-11 07:20:26 -04:00
Dango233 73ce178021 Remove redundancy in mmdit.py (#3685) 2024-06-11 06:30:25 -04:00
comfyanonymous a82fae2375 Fix bug with cosxl edit model. 2024-06-10 16:00:03 -04:00
comfyanonymous 8c4a9befa7 SD3 Support. 2024-06-10 14:06:23 -04:00
comfyanonymous a5e6a632f9 Support sampling non 2D latents. 2024-06-10 01:31:09 -04:00
comfyanonymous 742d5720d1 Support zeroing out text embeddings with the attention mask. 2024-06-09 16:51:58 -04:00
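The "zeroing out with the attention mask" idea above amounts to multiplying each token embedding by its 0/1 mask bit; a hedged stdlib sketch (names and list-based representation are illustrative):

```python
def zero_masked_embeddings(embeddings, attention_mask):
    """Zero the embedding vector of every token whose mask bit is 0.

    embeddings: list of token vectors; attention_mask: list of 0/1 ints.
    """
    return [
        [x * m for x in vec]
        for vec, m in zip(embeddings, attention_mask)
    ]

print(zero_masked_embeddings([[1.0, 2.0], [3.0, 4.0]], [1, 0]))
# [[1.0, 2.0], [0.0, 0.0]]
```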
comfyanonymous 6cd8ffc465 Reshape the empty latent image to the right amount of channels if needed. 2024-06-08 02:35:08 -04:00
comfyanonymous 56333d4850 Use the end token for the text encoder attention mask. 2024-06-07 03:05:23 -04:00
comfyanonymous 104fcea0c8 Add function to get the list of currently loaded models. 2024-06-05 23:25:16 -04:00
comfyanonymous b1fd26fe9e pytorch xpu should be flash or mem efficient attention? 2024-06-04 17:44:14 -04:00
comfyanonymous 809cc85a8e Remove useless code. 2024-06-02 19:23:37 -04:00
comfyanonymous b249862080 Add an annoying print to a function I want to remove. 2024-06-01 12:47:31 -04:00
comfyanonymous bf3e334d46 Disable non_blocking when --deterministic or directml. 2024-05-30 11:07:38 -04:00
JettHu b26da2245f Fix UnetParams annotation typo (#3589) 2024-05-27 19:30:35 -04:00
comfyanonymous 0920e0e5fe Remove some unused imports. 2024-05-27 19:08:27 -04:00
comfyanonymous ffc4b7c30e Fix DORA strength.
This is a different version of #3298 with more correct behavior.
2024-05-25 02:50:11 -04:00
comfyanonymous efa5a711b2 Reduce memory usage when applying DORA: #3557 2024-05-24 23:36:48 -04:00
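For context on the two DORA commits above: as described in the DoRA paper, the patched weight takes the direction of (W + ΔW) rescaled per column by a learned magnitude (the `dora_scale`), i.e. W' = m · (W + ΔW)/‖W + ΔW‖. A minimal stdlib sketch of that normalization on a single weight column (illustrative, not ComfyUI's implementation):

```python
import math

def apply_dora_column(w_col, delta_col, dora_scale):
    """Rescale the updated column (w + delta) to the learned magnitude.

    W' = dora_scale * (w + delta) / ||w + delta||  (per-column norm)
    """
    merged = [w + d for w, d in zip(w_col, delta_col)]
    norm = math.sqrt(sum(x * x for x in merged))
    return [dora_scale * x / norm for x in merged]

# Column (3, 4) has norm 5; rescaled to magnitude 2 -> (1.2, 1.6).
print(apply_dora_column([3.0, 0.0], [0.0, 4.0], 2.0))
```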
comfyanonymous 6c23854f54 Fix OSX latent2rgb previews. 2024-05-22 13:56:28 -04:00
Chenlei Hu 7718ada4ed Add type annotation UnetWrapperFunction (#3531)
* Add type annotation UnetWrapperFunction

* nit

* Add types.py
2024-05-22 02:07:27 -04:00
comfyanonymous 8508df2569 Work around black image bug on Mac 14.5 by forcing attention upcasting. 2024-05-21 16:56:33 -04:00
comfyanonymous 83d969e397 Disable xformers when tracing model. 2024-05-21 13:55:49 -04:00
comfyanonymous 1900e5119f Fix potential issue. 2024-05-20 08:19:54 -04:00
comfyanonymous 09e069ae6c Log the pytorch version. 2024-05-20 06:22:29 -04:00
comfyanonymous 11a2ad5110 Fix controlnet not upcasting on models that have it enabled. 2024-05-19 17:58:03 -04:00
comfyanonymous 0bdc2b15c7 Cleanup. 2024-05-18 10:11:44 -04:00
comfyanonymous 98f828fad9 Remove unnecessary code. 2024-05-18 09:36:44 -04:00
comfyanonymous 19300655dd Don't automatically switch to lowvram mode on GPUs with low memory. 2024-05-17 00:31:32 -04:00
comfyanonymous 46daf0a9a7 Add debug options to force on and off attention upcasting. 2024-05-16 04:09:41 -04:00
comfyanonymous 2d41642716 Fix lowvram dora issue. 2024-05-15 02:47:40 -04:00
comfyanonymous ec6f16adb6 Fix SAG. 2024-05-14 18:02:27 -04:00
comfyanonymous bb4940d837 Only enable attention upcasting on models that actually need it. 2024-05-14 17:00:50 -04:00
comfyanonymous b0ab31d06c Refactor attention upcasting code part 1. 2024-05-14 12:47:31 -04:00
Simon Lui f509c6fe21 Fix Intel GPU memory allocation accuracy and documentation update. (#3459)
* Change calculation of memory total to be more accurate, allocated is actually smaller than reserved.

* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
comfyanonymous fa6dd7e5bb Fix lowvram issue with saving checkpoints.
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous 49c20cdc70 No longer necessary. 2024-05-12 05:34:43 -04:00
comfyanonymous e1489ad257 Fix issue with lowvram mode breaking model saving. 2024-05-11 21:55:20 -04:00
comfyanonymous 93e876a3be Remove warnings that confuse people. 2024-05-09 05:29:42 -04:00
comfyanonymous cd07340d96 Typo fix. 2024-05-08 18:36:56 -04:00
comfyanonymous c61eadf69a Make the load checkpoint with config function call the regular one.
I was going to completely remove this function because it is unmaintainable
but I think this is the best compromise.

The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
2024-05-06 20:04:39 -04:00
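The compromise described in the commit above — keeping an unmaintainable legacy entry point alive by translating only the still-supported config options and delegating to the regular loader — is a common pattern. A generic hedged sketch with hypothetical names (not the actual ComfyUI signatures):

```python
def load_checkpoint(path, **kwargs):
    """Stand-in for the maintained loader."""
    return {"path": path, **kwargs}

def load_checkpoint_with_config(path, config):
    """Legacy entry point: forward only the config parts that still work
    (e.g. clip skip, v-prediction) and delegate to the regular loader."""
    supported = {k: v for k, v in config.items()
                 if k in ("clip_skip", "v_prediction")}
    # fp16-vs-fp32 settings from the config are intentionally dropped.
    return load_checkpoint(path, **supported)

print(load_checkpoint_with_config("model.safetensors",
                                  {"clip_skip": 2, "fp16": True}))
# {'path': 'model.safetensors', 'clip_skip': 2}
```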
Simon Lui a56d02efc7 Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. (#3388) 2024-05-02 03:26:50 -04:00
comfyanonymous f81a6fade8 Fix some edge cases with samplers and arrays with a single sigma. 2024-05-01 17:05:30 -04:00
comfyanonymous 2aed53c4ac Workaround xformers bug. 2024-04-30 21:23:40 -04:00
Garrett Sutula bacce529fb Add TLS Support (#3312)
* Add TLS Support

* Add to readme

* Add guidance for windows users on generating certificates

* Add guidance for windows users on generating certificates

* Fix typo
2024-04-30 20:17:02 -04:00
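For reference on the TLS support added above: serving over HTTPS generally means loading a certificate/key pair into a server-side `SSLContext`. A stdlib sketch (function and file names are placeholders, not ComfyUI's actual flags):

```python
import ssl

def make_server_ssl_context(certfile=None, keyfile=None):
    """Build a server-side TLS context; load a cert/key pair if given.

    A self-signed pair for local testing can be generated with e.g.
    `openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem`.
    """
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```

The resulting context would then be passed to whatever web server hosts the app (ComfyUI uses aiohttp, whose `ssl_context` parameter accepts exactly this kind of object).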
Jedrzej Kosinski 7990ae18c1 Fix error when more cond masks passed in than batch size (#3353) 2024-04-26 12:51:12 -04:00
comfyanonymous 8dc19e40d1 Don't init a VAE model when there are no VAE weights. 2024-04-24 09:20:31 -04:00
comfyanonymous c59fe9f254 Support VAE without quant_conv. 2024-04-18 21:05:33 -04:00
comfyanonymous 719fb2c81d Add basic PAG node. 2024-04-14 23:49:50 -04:00
comfyanonymous 258dbc06c3 Fix some memory related issues. 2024-04-14 12:08:58 -04:00
comfyanonymous 58812ab8ca Support SDXS 512 model. 2024-04-12 22:12:35 -04:00
comfyanonymous 831511a1ee Fix issue with sampling_settings persisting across models. 2024-04-09 23:20:43 -04:00
comfyanonymous 30abc324c2 Support properly saving CosXL checkpoints. 2024-04-08 00:36:22 -04:00
comfyanonymous 0a03009808 Fix issue with controlnet models getting loaded multiple times. 2024-04-06 18:38:39 -04:00
kk-89 38ed2da2dd Fix typo in lowvram patcher (#3209) 2024-04-05 12:02:13 -04:00
comfyanonymous 1088d1850f Support for CosXL models. 2024-04-05 10:53:41 -04:00
comfyanonymous 41ed7e85ea Fix object_patches_backup not being the same object across clones. 2024-04-05 00:22:44 -04:00
comfyanonymous 0f5768e038 Fix missing arguments in cfg_function. 2024-04-04 23:38:57 -04:00
comfyanonymous 1f4fc9ea0c Fix issue with get_model_object on patched model. 2024-04-04 23:01:02 -04:00
comfyanonymous 1a0486bb96 Fix model needing to be loaded on GPU to generate the sigmas. 2024-04-04 22:08:49 -04:00
comfyanonymous c6bd456c45 Make zero denoise a NOP. 2024-04-04 11:41:27 -04:00
comfyanonymous fcfd2bdf8a Small cleanup. 2024-04-04 11:16:49 -04:00
comfyanonymous 0542088ef8 Refactor sampler code for more advanced sampler nodes part 2. 2024-04-04 01:26:41 -04:00
comfyanonymous 57753c964a Refactor sampling code for more advanced sampler nodes. 2024-04-03 22:09:51 -04:00
comfyanonymous 6c6a39251f Fix saving text encoder in fp8. 2024-04-02 11:46:34 -04:00
comfyanonymous e6482fbbfc Refactor calc_cond_uncond_batch into calc_cond_batch.
calc_cond_batch can take an arbitrary amount of cond inputs.

Added a calc_cond_uncond_batch wrapper with a warning so custom nodes
won't break.
2024-04-01 18:07:47 -04:00
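The backward-compatibility wrapper described in the commit above is the standard deprecation pattern: the old two-argument API forwards to the new N-argument one and emits a warning. A hedged sketch (signatures and bodies are illustrative, not the exact ComfyUI API):

```python
import warnings

def calc_cond_batch(model, conds, x, timestep):
    """New-style entry point: accepts any number of cond inputs."""
    return [f"out_{i}" for i, _ in enumerate(conds)]  # placeholder result

def calc_cond_uncond_batch(model, cond, uncond, x, timestep):
    """Old two-input API, kept so existing custom nodes don't break."""
    warnings.warn("calc_cond_uncond_batch is deprecated; "
                  "use calc_cond_batch instead.", DeprecationWarning)
    return calc_cond_batch(model, [cond, uncond], x, timestep)
```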
comfyanonymous 575acb69e4 IP2P model loading support.
This is the code to load the model and inference it with only a text
prompt. This commit does not contain the nodes to properly use it with an
image input.

This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
2024-03-31 03:10:28 -04:00
comfyanonymous 94a5a67c32 Cleanup to support different types of inpaint models. 2024-03-29 14:44:13 -04:00
comfyanonymous 5d8898c056 Fix some performance issues with weight loading and unloading.
Lower peak memory usage when changing model.

Fix case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
comfyanonymous 327ca1313d Support SDXS 0.9 2024-03-27 23:58:58 -04:00
comfyanonymous ae77590b4e dora_scale support for lora file. 2024-03-25 18:09:23 -04:00
comfyanonymous c6de09b02e Optimize memory unload strategy for more optimized performance. 2024-03-24 02:36:30 -04:00
comfyanonymous 0624838237 Add inverse noise scaling function. 2024-03-21 14:49:11 -04:00
comfyanonymous 5d875d77fe Fix regression with lcm not working with batches. 2024-03-20 20:48:54 -04:00
comfyanonymous 4b9005e949 Fix regression with model merging. 2024-03-20 13:56:12 -04:00
comfyanonymous c18a203a8a Don't unload model weights for non weight patches. 2024-03-20 02:27:58 -04:00
comfyanonymous 150a3e946f Make LCM sampler use the model noise scaling function. 2024-03-20 01:35:59 -04:00
comfyanonymous 40e124c6be SV3D support. 2024-03-18 16:54:13 -04:00
comfyanonymous cacb022c4a Make saved SD1 checkpoints match more closely the official one. 2024-03-18 00:26:23 -04:00
comfyanonymous d7897fff2c Move cascade scale factor from stage_a to latent_formats.py 2024-03-16 14:49:35 -04:00
comfyanonymous f2fe635c9f SamplerDPMAdaptative node to test the different options. 2024-03-15 22:36:10 -04:00
comfyanonymous 448d9263a2 Fix control loras breaking. 2024-03-14 09:30:21 -04:00
comfyanonymous db8b59ecff Lower memory usage for loras in lowvram mode at the cost of perf. 2024-03-13 20:07:27 -04:00
comfyanonymous 2a813c3b09 Switch some more prints to logging. 2024-03-11 16:34:58 -04:00
comfyanonymous 0ed72befe1 Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
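The behavior in the log-level commit above (default INFO, `--verbose` switches to DEBUG) follows the usual argparse + logging pattern; a minimal stdlib sketch:

```python
import argparse
import logging

def configure_logging(argv):
    """Parse --verbose and configure the root logger accordingly."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--verbose", action="store_true",
                        help="enable debug logging")
    args = parser.parse_args(argv)
    level = logging.DEBUG if args.verbose else logging.INFO
    logging.basicConfig(level=level)
    return level

print(logging.getLevelName(configure_logging([])))            # INFO
print(logging.getLevelName(configure_logging(["--verbose"]))) # DEBUG
```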
comfyanonymous 65397ce601 Replace prints with logging and add --verbose argument. 2024-03-10 12:14:23 -04:00
comfyanonymous 5f60ee246e Support loading the sr cascade controlnet. 2024-03-07 01:22:48 -05:00
comfyanonymous 03e6e81629 Set upscale algorithm to bilinear for stable cascade controlnet. 2024-03-06 02:59:40 -05:00
comfyanonymous 03e83bb5d0 Support stable cascade canny controlnet. 2024-03-06 02:25:42 -05:00