Commit Graph

797 Commits

Author SHA1 Message Date
comfyanonymous 98f828fad9 Remove unnecessary code. 2024-05-18 09:36:44 -04:00
comfyanonymous 19300655dd Don't automatically switch to lowvram mode on GPUs with low memory. 2024-05-17 00:31:32 -04:00
comfyanonymous 46daf0a9a7 Add debug options to force on and off attention upcasting. 2024-05-16 04:09:41 -04:00
comfyanonymous 2d41642716 Fix lowvram dora issue. 2024-05-15 02:47:40 -04:00
comfyanonymous ec6f16adb6 Fix SAG. 2024-05-14 18:02:27 -04:00
comfyanonymous bb4940d837 Only enable attention upcasting on models that actually need it. 2024-05-14 17:00:50 -04:00
comfyanonymous b0ab31d06c Refactor attention upcasting code part 1. 2024-05-14 12:47:31 -04:00
Simon Lui f509c6fe21
Fix Intel GPU memory allocation accuracy and documentation update. (#3459)
* Change the calculation of total memory to be more accurate; allocated is actually smaller than reserved.

* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
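The accounting this commit relies on is that a caching allocator reserves larger pools from the driver than it actually hands out to tensors, so a free-memory estimate taken from the reserved figure alone understates what is still usable. A minimal sketch of that calculation, written against PyTorch's CUDA memory API (the XPU/IPEX counterparts are assumed to behave the same way) rather than the repository's actual code:

```python
import torch

def estimate_free_vram(device: str = "cuda") -> int:
    """Rough free-VRAM estimate in bytes: memory the driver still has free, plus
    the slack the caching allocator has reserved but not handed out to tensors."""
    free_driver, _total = torch.cuda.mem_get_info(device)   # untouched by this process's allocator
    reserved = torch.cuda.memory_reserved(device)           # pool size held by the caching allocator
    allocated = torch.cuda.memory_allocated(device)         # bytes actually backing live tensors
    return free_driver + (reserved - allocated)             # reserved-but-unallocated can be reused
```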
comfyanonymous fa6dd7e5bb Fix lowvram issue with saving checkpoints.
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous 49c20cdc70 No longer necessary. 2024-05-12 05:34:43 -04:00
comfyanonymous e1489ad257 Fix issue with lowvram mode breaking model saving. 2024-05-11 21:55:20 -04:00
comfyanonymous 93e876a3be Remove warnings that confuse people. 2024-05-09 05:29:42 -04:00
comfyanonymous cd07340d96 Typo fix. 2024-05-08 18:36:56 -04:00
comfyanonymous c61eadf69a Make the load checkpoint with config function call the regular one.
I was going to completely remove this function because it is unmaintainable,
but I think this is the best compromise.

The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
2024-05-06 20:04:39 -04:00
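A hypothetical sketch of the compromise described in that message: the config-path loader becomes a thin wrapper that delegates to the regular loader and then re-applies only the config options that still take effect (clip skip and v-prediction), while the config's fp16 vs fp32 setting is ignored. The helper names (load_checkpoint_guess_config, use_v_prediction, apply_clip_skip) are illustrative stand-ins, not the repository's exact API:

```python
import yaml

def load_checkpoint_with_config(ckpt_path: str, config_path: str):
    with open(config_path) as f:
        cfg = yaml.safe_load(f) or {}

    # Delegate to the regular, maintained loader (hypothetical name).
    model, clip, vae = load_checkpoint_guess_config(ckpt_path)

    params = cfg.get("model", {}).get("params", {})
    if params.get("parameterization") == "v":
        model = use_v_prediction(model)                    # hypothetical helper: v-prediction still honored
    if params.get("clip_skip"):
        clip = apply_clip_skip(clip, params["clip_skip"])  # hypothetical helper: clip skip still honored

    # fp16 vs fp32 flags in the config are intentionally ignored; precision now
    # follows the regular loader's own logic.
    return model, clip, vae
```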
Simon Lui a56d02efc7
Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. (#3388) 2024-05-02 03:26:50 -04:00
comfyanonymous f81a6fade8 Fix some edge cases with samplers and arrays with a single sigma. 2024-05-01 17:05:30 -04:00
comfyanonymous 2aed53c4ac Workaround xformers bug. 2024-04-30 21:23:40 -04:00
Garrett Sutula bacce529fb
Add TLS Support (#3312)
* Add TLS Support

* Add to readme

* Add guidance for Windows users on generating certificates

* Fix typo
2024-04-30 20:17:02 -04:00
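The general shape of serving an aiohttp application over HTTPS, which is what TLS support amounts to for a Python web server, is sketched below; the certificate and key can come from a standard self-signed openssl command. This is a generic illustration of the mechanism, not the PR's exact wiring or flag names:

```python
import ssl
from aiohttp import web

async def index(request: web.Request) -> web.Response:
    return web.Response(text="served over TLS")

app = web.Application()
app.router.add_get("/", index)

# Self-signed pair for local testing, e.g.:
#   openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
ssl_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ssl_ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")

# Passing an SSLContext makes aiohttp listen for HTTPS instead of plain HTTP.
web.run_app(app, port=8443, ssl_context=ssl_ctx)
```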
Jedrzej Kosinski 7990ae18c1
Fix error when more cond masks passed in than batch size (#3353) 2024-04-26 12:51:12 -04:00
comfyanonymous 8dc19e40d1 Don't init a VAE model when there are no VAE weights. 2024-04-24 09:20:31 -04:00
comfyanonymous c59fe9f254 Support VAE without quant_conv. 2024-04-18 21:05:33 -04:00
comfyanonymous 719fb2c81d Add basic PAG node. 2024-04-14 23:49:50 -04:00
comfyanonymous 258dbc06c3 Fix some memory related issues. 2024-04-14 12:08:58 -04:00
comfyanonymous 58812ab8ca Support SDXS 512 model. 2024-04-12 22:12:35 -04:00
comfyanonymous 831511a1ee Fix issue with sampling_settings persisting across models. 2024-04-09 23:20:43 -04:00
comfyanonymous 30abc324c2 Support properly saving CosXL checkpoints. 2024-04-08 00:36:22 -04:00
comfyanonymous 0a03009808 Fix issue with controlnet models getting loaded multiple times. 2024-04-06 18:38:39 -04:00
kk-89 38ed2da2dd
Fix typo in lowvram patcher (#3209) 2024-04-05 12:02:13 -04:00
comfyanonymous 1088d1850f Support for CosXL models. 2024-04-05 10:53:41 -04:00
comfyanonymous 41ed7e85ea Fix object_patches_backup not being the same object across clones. 2024-04-05 00:22:44 -04:00
comfyanonymous 0f5768e038 Fix missing arguments in cfg_function. 2024-04-04 23:38:57 -04:00
comfyanonymous 1f4fc9ea0c Fix issue with get_model_object on patched model. 2024-04-04 23:01:02 -04:00
comfyanonymous 1a0486bb96 Fix model needing to be loaded on GPU to generate the sigmas. 2024-04-04 22:08:49 -04:00
comfyanonymous c6bd456c45 Make zero denoise a NOP. 2024-04-04 11:41:27 -04:00
comfyanonymous fcfd2bdf8a Small cleanup. 2024-04-04 11:16:49 -04:00
comfyanonymous 0542088ef8 Refactor sampler code for more advanced sampler nodes part 2. 2024-04-04 01:26:41 -04:00
comfyanonymous 57753c964a Refactor sampling code for more advanced sampler nodes. 2024-04-03 22:09:51 -04:00
comfyanonymous 6c6a39251f Fix saving text encoder in fp8. 2024-04-02 11:46:34 -04:00
comfyanonymous e6482fbbfc Refactor calc_cond_uncond_batch into calc_cond_batch.
calc_cond_batch can take an arbitrary amount of cond inputs.

Added a calc_cond_uncond_batch wrapper with a warning so custom nodes
won't break.
2024-04-01 18:07:47 -04:00
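The backward-compatibility pattern that message describes, keeping the old two-cond entry point as a warning wrapper over the new list-based one, looks roughly like the sketch below; argument names and shapes here are assumptions, not the repository's exact signatures:

```python
import warnings

def calc_cond_batch(model, conds, x, timestep, model_options):
    """New entry point: evaluates the model once per cond in an arbitrary-length list."""
    return [model(x, timestep, cond=c, model_options=model_options) for c in conds]

def calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options):
    """Deprecated two-cond wrapper kept so existing custom nodes keep working."""
    warnings.warn("calc_cond_uncond_batch is deprecated, use calc_cond_batch instead.")
    cond_out, uncond_out = calc_cond_batch(model, [cond, uncond], x, timestep, model_options)
    return cond_out, uncond_out
```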
comfyanonymous 575acb69e4 IP2P model loading support.
This is the code to load the model and run inference on it with only a text
prompt. This commit does not contain the nodes needed to properly use it with an
image input.

This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
2024-03-31 03:10:28 -04:00
comfyanonymous 94a5a67c32 Cleanup to support different types of inpaint models. 2024-03-29 14:44:13 -04:00
comfyanonymous 5d8898c056 Fix some performance issues with weight loading and unloading.
Lower peak memory usage when changing model.

Fix case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
comfyanonymous 327ca1313d Support SDXS 0.9 2024-03-27 23:58:58 -04:00
comfyanonymous ae77590b4e dora_scale support for lora files. 2024-03-25 18:09:23 -04:00
comfyanonymous c6de09b02e Improve the memory unload strategy for better performance. 2024-03-24 02:36:30 -04:00
comfyanonymous 0624838237 Add inverse noise scaling function. 2024-03-21 14:49:11 -04:00
comfyanonymous 5d875d77fe Fix regression with lcm not working with batches. 2024-03-20 20:48:54 -04:00
comfyanonymous 4b9005e949 Fix regression with model merging. 2024-03-20 13:56:12 -04:00
comfyanonymous c18a203a8a Don't unload model weights for non weight patches. 2024-03-20 02:27:58 -04:00
comfyanonymous 150a3e946f Make LCM sampler use the model noise scaling function. 2024-03-20 01:35:59 -04:00