comfyanonymous
94e4fe39d8
This isn't used anywhere.
2023-09-15 12:03:03 -04:00
comfyanonymous
44361f6344
Support for text encoder models that need attention_mask.
2023-09-15 02:02:05 -04:00
comfyanonymous
0d8f376446
Setting the last layer on SD2.x models now uses the proper indexes.
...
Before, I had made the last layer the penultimate layer because some
checkpoints don't include it, but that was inconsistent with the other models.
TLDR: for SD2.x models only, CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
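A minimal sketch of the indexing change, with illustrative names (not ComfyUI's actual implementation):

    # hidden_states holds one tensor per CLIP layer, so with proper Python
    # negative indexing -1 is the true last layer and -2 the penultimate one.
    def set_last_layer(hidden_states, stop_at_layer=-1):
        return hidden_states[stop_at_layer]

    # The old SD2.x behaviour effectively returned hidden_states[-2] for -1;
    # reproducing old outputs now requires passing -2 explicitly.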
comfyanonymous
0966d3ce82
Don't run text encoders on xpu because there are issues.
2023-09-14 12:16:07 -04:00
comfyanonymous
3039b08eb1
Only parse command line args when main.py is called.
2023-09-13 11:38:20 -04:00
comfyanonymous
ed58730658
Don't leave very large hidden states in the clip vision output.
2023-09-12 15:09:10 -04:00
comfyanonymous
fb3b728203
Fix issue where autocast fp32 CLIP gave different results from regular.
2023-09-11 21:49:56 -04:00
comfyanonymous
7d401ed1d0
Add ldm format support to UNETLoader.
2023-09-11 16:36:50 -04:00
comfyanonymous
e85be36bd2
Add a penultimate_hidden_states to the clip vision output.
2023-09-08 14:06:58 -04:00
comfyanonymous
1e6b67101c
Support diffusers format t2i adapters.
2023-09-08 11:36:51 -04:00
comfyanonymous
326577d04c
Allow cancelling anything that shows a progress bar.
2023-09-07 23:37:03 -04:00
comfyanonymous
f88f7f413a
Add a ConditioningSetAreaPercentage node.
2023-09-06 03:28:27 -04:00
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force emptying the cache.
2023-09-04 00:58:18 -04:00
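A hedged sketch of what the force flag might look like; the default-empty heuristic below is an assumption, not ComfyUI's exact code:

    import torch

    def soft_empty_cache(force=False):
        if torch.cuda.is_available():
            # assumed heuristic: empty by default only on CUDA builds, since
            # emptying the cache can hurt performance on some other backends
            if force or torch.version.cuda is not None:
                torch.cuda.empty_cache()
                torch.cuda.ipc_collect()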
comfyanonymous
7746bdf7b0
Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI
2023-09-04 00:43:11 -04:00
Simon Lui
2da73b7073
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
2023-09-02 20:07:52 -07:00
comfyanonymous
a74c5dbf37
Move some functions to utils.py
2023-09-02 22:33:37 -04:00
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
comfyanonymous
77a176f9e0
Use a common function to reshape the batch.
2023-09-02 03:42:49 -04:00
comfyanonymous
7931ff0fd9
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
0e3b641172
Remove xformers related print.
2023-09-01 02:12:03 -04:00
comfyanonymous
5c363a9d86
Fix controlnet bug.
2023-09-01 02:01:08 -04:00
comfyanonymous
cfe1c54de8
Fix controlnet issue.
2023-08-31 15:16:58 -04:00
comfyanonymous
1c012d69af
It doesn't make sense for c_crossattn and c_concat to be lists.
2023-08-31 13:25:00 -04:00
comfyanonymous
7e941f9f24
Clean up DiffusersLoader node.
2023-08-30 12:57:07 -04:00
Simon Lui
18617967e5
Fix error message in model_patcher.py
...
Found while tinkering.
2023-08-30 00:25:04 -07:00
comfyanonymous
fe4c07400c
Fix "Load Checkpoint with config" node.
2023-08-29 23:58:32 -04:00
comfyanonymous
f2f5e5dcbb
Support SDXL t2i adapters with 3 channel input.
2023-08-29 16:44:57 -04:00
comfyanonymous
15adc3699f
Move beta_schedule to model_config and allow disabling unet creation.
2023-08-29 14:22:53 -04:00
comfyanonymous
bed116a1f9
Remove optimization that caused border artifacts.
2023-08-29 11:21:36 -04:00
comfyanonymous
65cae62c71
No need to check filename extensions to detect shuffle controlnet.
2023-08-28 16:49:06 -04:00
comfyanonymous
4e89b2c25a
Put clip vision outputs on the CPU.
2023-08-28 16:26:11 -04:00
comfyanonymous
a094b45c93
Load the clipvision model on the GPU for better performance.
2023-08-28 15:29:27 -04:00
comfyanonymous
1300a1bb4c
The text encoder should initially load on the offload_device, not the regular device.
2023-08-28 15:08:45 -04:00
comfyanonymous
f92074b84f
Move ModelPatcher to model_patcher.py
2023-08-28 14:51:31 -04:00
comfyanonymous
4798cf5a62
Implement loras with norm keys.
2023-08-28 11:20:06 -04:00
comfyanonymous
b8c7c770d3
Enable bf16-vae by default on ampere and up.
2023-08-27 23:06:19 -04:00
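A minimal sketch of an Ampere-and-up check, assuming the usual compute-capability convention (sm_80 and newer is Ampere):

    import torch

    def should_default_bf16_vae():
        if not torch.cuda.is_available():
            return False
        major, _minor = torch.cuda.get_device_capability()
        return major >= 8  # Ampere (sm_80) and newer handle bf16 well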
comfyanonymous
1c794a2161
Fall back to sliced attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
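A sketch of the fallback pattern, with an illustrative sliced implementation (not the exact ComfyUI code); q, k and v are assumed to be (batch, seq, dim):

    import torch
    import xformers.ops

    def sliced_attention(q, k, v, slice_size=1024):
        # chunked softmax(QK^T * scale)V: slower, but works for any shape
        out = torch.empty_like(q)
        scale = q.shape[-1] ** -0.5
        for i in range(0, q.shape[1], slice_size):
            attn = torch.softmax(q[:, i:i + slice_size] @ k.transpose(-2, -1) * scale, dim=-1)
            out[:, i:i + slice_size] = attn @ v
        return out

    def attention(q, k, v):
        try:
            return xformers.ops.memory_efficient_attention(q, k, v)
        except NotImplementedError:
            return sliced_attention(q, k, v)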
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
a57b0c797b
Fix lowvram model merging.
2023-08-26 11:52:07 -04:00
comfyanonymous
f72780a7e3
The new smart memory management makes this unnecessary.
2023-08-25 18:02:15 -04:00
comfyanonymous
c77f02e1c6
Move controlnet code to comfy/controlnet.py
2023-08-25 17:33:04 -04:00
comfyanonymous
15a7716fa6
Move lora code to comfy/lora.py
2023-08-25 17:11:51 -04:00
comfyanonymous
ec96f6d03a
Move text_projection to base clip model.
2023-08-24 23:43:48 -04:00
comfyanonymous
30eb92c3cb
Code cleanups.
2023-08-24 19:39:18 -04:00
comfyanonymous
51dde87e97
Try to free enough vram for control lora inference.
2023-08-24 17:20:54 -04:00
comfyanonymous
e3d0a9a490
Fix potential issue with text projection matrix multiplication.
2023-08-24 00:54:16 -04:00
comfyanonymous
cc44ade79e
Always shift text encoder to GPU when the device supports fp16.
2023-08-23 21:45:00 -04:00
comfyanonymous
a6ef08a46a
Even with forced fp16, the CPU device should never use it.
2023-08-23 21:38:28 -04:00
comfyanonymous
00c0b2c507
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
comfyanonymous
f081017c1a
Save memory by storing text encoder weights in fp16 in most situations.
...
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
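A hedged sketch of the store-in-fp16, compute-in-fp32 idea (illustrative, not the actual implementation): weights live in fp16 to halve memory, and each forward casts up to fp32 so results match a pure fp32 model:

    import torch

    class FP16StoredLinear(torch.nn.Module):
        def __init__(self, linear: torch.nn.Linear):
            super().__init__()
            # stored at half precision to save memory
            self.weight = linear.weight.data.to(torch.float16)
            self.bias = None if linear.bias is None else linear.bias.data.to(torch.float16)

        def forward(self, x):
            # cast back up so inference matches fp32 exactly
            w = self.weight.to(torch.float32)
            b = None if self.bias is None else self.bias.to(torch.float32)
            return torch.nn.functional.linear(x.to(torch.float32), w, b)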
comfyanonymous
afcb9cb1df
All resolutions now work with t2i adapter for SDXL.
2023-08-22 16:23:54 -04:00
comfyanonymous
85fde89d7f
T2I adapter support for SDXL.
2023-08-22 14:40:43 -04:00
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
2023-08-22 01:06:26 -04:00
comfyanonymous
763b0cf024
Fix control lora not working in fp32.
2023-08-21 20:38:31 -04:00
comfyanonymous
199d73364a
Fix ControlLora on lowvram.
2023-08-21 00:54:04 -04:00
comfyanonymous
d08e53de2e
Remove autocast from controlnet code.
2023-08-20 21:47:32 -04:00
comfyanonymous
0d7b0a4dc7
Small cleanups.
2023-08-20 14:56:47 -04:00
Simon Lui
9225465975
Further tuning and a fix for mem_free_total.
2023-08-20 14:19:53 -04:00
Simon Lui
2c096e4260
Add ipex.optimize and other enhancements for Intel GPUs based on recent memory changes.
2023-08-20 14:19:51 -04:00
comfyanonymous
e9469e732d
--disable-smart-memory now disables loading models directly to vram.
2023-08-20 04:00:53 -04:00
comfyanonymous
c9b562aed1
Free more memory before VAE encode/decode.
2023-08-19 12:13:13 -04:00
comfyanonymous
b80c3276dc
Fix issue with gligen.
2023-08-18 16:32:23 -04:00
comfyanonymous
d6e4b342e6
Support for Control Loras.
...
Control loras are controlnets where some of the weights are stored in
"lora" format: a low rank "up" and "down" matrix pair that, when multiplied
together and added to the unet weight, gives the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
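A minimal sketch of the weight reconstruction described above, with illustrative names (not ComfyUI's actual API):

    import torch

    def control_lora_weight(unet_weight, lora_up, lora_down):
        # lora_up: (out_features, rank), lora_down: (rank, in_features);
        # their product is a full-size delta added to the unet weight
        return unet_weight + lora_up @ lora_down

The saving scales with the rank: a full 4096x4096 delta stores ~16.8M parameters, while a rank 64 pair stores only 2 * 4096 * 64 ≈ 0.5M.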
comfyanonymous
39ac856a33
ReVision support: unclip nodes can now be used with SDXL.
2023-08-18 11:59:36 -04:00
comfyanonymous
76d53c4622
Add support for clip g vision model to CLIPVisionLoader.
2023-08-18 11:13:29 -04:00
Alexopus
e59fe0537a
Fix "referenced before assignment" error.
...
For https://github.com/BlenderNeko/ComfyUI_TiledKSampler/issues/13
2023-08-17 22:30:07 +02:00
comfyanonymous
be9c5e25bc
Fix issue with not freeing enough memory when sampling.
2023-08-17 15:59:56 -04:00
comfyanonymous
ac0758a1a4
Fix bug with lowvram and controlnet advanced node.
2023-08-17 13:38:51 -04:00
comfyanonymous
c28db1f315
Fix potential issues with patching models when saving checkpoints.
2023-08-17 11:07:08 -04:00
comfyanonymous
3aee33b54e
Add --disable-smart-memory for those that want the old behaviour.
2023-08-17 03:12:37 -04:00
comfyanonymous
2be2742711
Fix issue with regular torch version.
2023-08-17 01:58:54 -04:00
comfyanonymous
89a0767abf
Smarter memory management.
...
Try to keep models on the vram when possible.
Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
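A hedged sketch of the keep-models-loaded policy (illustrative, not the actual model_management code):

    import torch

    loaded_models = []  # least recently loaded first

    def free_vram():
        free, _total = torch.cuda.mem_get_info()
        return free

    def smart_load(model, required_mem, device="cuda"):
        # models stay on the GPU after use; offload the oldest ones only
        # when a new load actually needs the space
        while loaded_models and free_vram() < required_mem:
            loaded_models.pop(0).to("cpu")
        model.to(device)
        loaded_models.append(model)
        return model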
comfyanonymous
2c97c30256
Support the small diffusers controlnet, so both types are now supported.
2023-08-16 12:45:56 -04:00
comfyanonymous
53f326a3d8
Support diffusers mini controlnets.
2023-08-16 12:28:01 -04:00
comfyanonymous
58f0c616ed
Fix clip vision issue with old transformers versions.
2023-08-16 11:36:22 -04:00
comfyanonymous
ae270f79bc
Fix potential issue with batch size and clip vision.
2023-08-16 11:05:11 -04:00
comfyanonymous
a2ce9655ca
Refactor unclip code.
2023-08-14 23:48:47 -04:00
comfyanonymous
9cc12c833d
CLIPVisionEncode can now encode multiple images.
2023-08-14 16:54:05 -04:00
comfyanonymous
0cb6dac943
Remove 3m from PR #1213 because of some small issues.
2023-08-14 00:48:45 -04:00
comfyanonymous
e244b2df83
Add sgm_uniform scheduler that acts like the default one in sgm.
2023-08-14 00:29:03 -04:00
comfyanonymous
58c7da3665
GPU variant of dpmpp_3m_sde. Note: use the 3m samplers with the exponential or karras scheduler.
2023-08-14 00:28:50 -04:00
comfyanonymous
ba319a34e4
Merge branch 'dpmpp3m' of https://github.com/FizzleDorf/ComfyUI
2023-08-14 00:23:15 -04:00
FizzleDorf
3cfad03a68
Add the dpmpp_3m and dpmpp_3m_sde samplers.
2023-08-13 22:29:04 -04:00
comfyanonymous
585a062910
Print unet config when model isn't detected.
2023-08-13 01:39:48 -04:00
comfyanonymous
c8a23ce9e8
Support for yet another lora type based on diffusers.
2023-08-11 13:04:21 -04:00
comfyanonymous
2bc12d3d22
Add --temp-directory argument to set temp directory.
2023-08-11 05:13:03 -04:00
comfyanonymous
c20583286f
Support diffusers text encoder loras.
2023-08-10 20:28:28 -04:00
comfyanonymous
cf10c5592c
Disable calculating uncond when CFG is 1.0
2023-08-09 20:55:03 -04:00
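Why the uncond pass can be skipped, as a small sketch (names are illustrative): cfg_result = uncond + cfg * (cond - uncond) collapses to cond at cfg == 1.0, so the unconditional forward pass is wasted compute:

    def cfg_denoise(model, x, sigma, cond, uncond, cfg_scale):
        cond_out = model(x, sigma, cond)
        if cfg_scale == 1.0:
            return cond_out  # uncond cancels out; skip its forward pass
        uncond_out = model(x, sigma, uncond)
        return uncond_out + cfg_scale * (cond_out - uncond_out)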
comfyanonymous
1f0f4cc0bd
Add an argument to disable auto-launching the browser.
2023-08-07 02:25:12 -04:00
comfyanonymous
d8e58f0a7e
Detect hint_channels from controlnet.
2023-08-06 14:08:59 -04:00
comfyanonymous
c5d7593ccf
Support loras in diffusers format.
2023-08-05 01:40:24 -04:00
comfyanonymous
1ce0d8ad68
Add CMP 30HX card to the nvidia_16_series list.
2023-08-04 12:08:45 -04:00
comfyanonymous
c99d8002f8
Make sure the pooled output stays at the EOS token with added embeddings.
2023-08-03 20:27:50 -04:00
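Background, with a hedged sketch (illustrative, not the exact fix): CLIP's pooled output is normally taken at the EOS position via tokens.argmax(-1), which works because EOS has the highest id in the vocabulary; placeholder ids for added embeddings can break that trick, so the EOS position is located explicitly instead:

    import torch

    def pooled_from_eos(last_hidden_state, tokens, eos_token_id=49407):
        # position of the first end-of-text token in each sequence
        eos_pos = (tokens == eos_token_id).int().argmax(dim=-1)
        return last_hidden_state[torch.arange(tokens.shape[0]), eos_pos]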
comfyanonymous
4a77fcd6ab
Only shift the text encoder to vram when there are fewer than 8 CPU cores.
2023-07-31 00:08:54 -04:00
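A sketch of the core-count heuristic this commit and the next one below adjust, with illustrative names:

    import os

    def text_encoder_device(core_threshold=8):
        # with few CPU cores the text encoder is faster on the GPU
        cores = os.cpu_count() or 1
        return "cuda" if cores < core_threshold else "cpu"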
comfyanonymous
3cd31d0e24
Lower the CPU thread threshold for running the text encoder on the CPU vs. the GPU.
2023-07-30 17:18:24 -04:00
comfyanonymous
2b13939044
Remove some useless code.
2023-07-30 14:13:33 -04:00
comfyanonymous
95d796fc85
Faster VAE loading.
2023-07-29 16:28:30 -04:00
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
2023-07-29 14:51:56 -04:00
comfyanonymous
c910b4a01c
Remove unused code and torchdiffeq dependency.
2023-07-28 21:32:27 -04:00
comfyanonymous
1141029a4a
Add --disable-metadata argument to disable saving metadata in files.
2023-07-28 12:31:41 -04:00