comfyanonymous
e73ec8c4da
Not used anymore.
2023-11-01 00:01:30 -04:00
comfyanonymous
111f1b5255
Fix some issues with sampling precision.
2023-10-31 23:49:29 -04:00
comfyanonymous
7c0f255de1
Clean up percent start/end and make controlnets work with sigmas.
2023-10-31 22:14:32 -04:00
comfyanonymous
a268a574fa
Remove a bunch of useless code.
...
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.
I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
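A minimal sketch of the difference described above, assuming the usual inpaint re-noising step (function names are illustrative, not the actual ComfyUI code):

    import torch

    def inpaint_noise_randn_like(latent_image):
        # DDIM path: fresh, unseeded noise on every call
        return torch.randn_like(latent_image)

    def inpaint_noise_fixed_seed(latent_image, seed=0):
        # euler path with a fixed seed: reproducible noise for a given seed
        generator = torch.manual_seed(seed)
        return torch.randn(latent_image.size(), generator=generator,
                           dtype=latent_image.dtype).to(latent_image.device)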
comfyanonymous
1777b54d02
Sampling code changes.
...
apply_model in model_base now returns the denoised output.
This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
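For an eps-prediction model, returning the denoised output instead of the raw model output looks roughly like this (a sketch in the usual k-diffusion convention; all names are illustrative):

    import torch

    def apply_model(unet, x, sigma, cond):
        # input scaling for an eps-prediction model
        c_in = 1.0 / (sigma ** 2 + 1.0) ** 0.5
        eps = unet(x * c_in, sigma, cond)  # raw model output
        denoised = x - sigma * eps         # what apply_model returns now
        return denoised

Everything downstream of apply_model then works in denoised space, regardless of the model's parameterization.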
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
comfyanonymous
a12cc05323
Add --max-upload-size argument, the default is 100MB.
2023-10-29 03:55:46 -04:00
comfyanonymous
2a134bfab9
Fix checkpoint loader with config.
2023-10-27 22:13:55 -04:00
comfyanonymous
e60ca6929a
SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one.
2023-10-27 15:54:04 -04:00
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
2023-10-27 14:45:15 -04:00
comfyanonymous
434ce25ec0
Restrict loading embeddings from embedding folders.
2023-10-27 02:54:13 -04:00
comfyanonymous
723847f6b3
Faster clip image processing.
2023-10-26 01:53:01 -04:00
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00
comfyanonymous
7fbb217d3a
Fix uni_pc returning noisy image when steps <= 3
2023-10-25 16:08:30 -04:00
Jedrzej Kosinski
3783cb8bfd
change 'c_adm' to 'y' in ControlNet.get_control
2023-10-25 08:24:32 -05:00
comfyanonymous
d1d2fea806
Pass extra conds directly to unet.
2023-10-25 00:07:53 -04:00
comfyanonymous
036f88c621
Refactor to make it easier to add custom conds to models.
2023-10-24 23:31:12 -04:00
comfyanonymous
3fce8881ca
Sampling code refactor to make it easier to add more conds.
2023-10-24 03:38:41 -04:00
comfyanonymous
8594c8be4d
Empty the cache when the torch cache exceeds 25% of free memory.
2023-10-22 13:58:12 -04:00
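A hedged sketch of that heuristic: torch's caching allocator keeps freed blocks reserved, and empty_cache() returns them to the driver. The threshold logic below is illustrative, not the exact ComfyUI code.

    import torch

    def maybe_empty_cache(device):
        free_mem, _total = torch.cuda.mem_get_info(device)
        cached = torch.cuda.memory_reserved(device) - torch.cuda.memory_allocated(device)
        if cached > 0.25 * free_mem:
            torch.cuda.empty_cache()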
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
2023-10-22 03:59:53 -04:00
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
2023-10-22 03:51:29 -04:00
comfyanonymous
a0690f9df9
Fix t2i adapter issue.
2023-10-21 20:31:24 -04:00
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
2023-10-21 13:23:03 -04:00
comfyanonymous
4185324a1d
Fix uni_pc sampler math. This changes the images this sampler produces.
2023-10-20 04:16:53 -04:00
comfyanonymous
e6962120c6
Make sure cond_concat is on the right device.
2023-10-19 01:14:25 -04:00
comfyanonymous
45c972aba8
Refactor cond_concat into conditioning.
2023-10-18 20:36:58 -04:00
comfyanonymous
430a8334c5
Fix some potential issues.
2023-10-18 19:48:36 -04:00
comfyanonymous
782a24fce6
Refactor cond_concat into model object.
2023-10-18 16:48:37 -04:00
comfyanonymous
0d45a565da
Fix memory issue related to control loras.
...
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
c8013f73e5
Add some Quadro cards to the list of cards with broken fp16.
2023-10-16 16:48:46 -04:00
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
2023-10-16 02:31:24 -04:00
comfyanonymous
fd4c5f07e7
Add a --bf16-unet to test running the unet in bf16.
2023-10-13 14:51:10 -04:00
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
2023-10-13 14:41:17 -04:00
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00
comfyanonymous
20d3852aa1
Pull some small changes from the other repo.
2023-10-11 20:38:48 -04:00
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
2023-10-11 20:38:48 -04:00
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
...
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
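A sketch of the refactor's idea (illustrative, not the actual classes): a single CrossAttention module that is handed the attention operation, instead of one near-identical class per backend.

    import torch.nn as nn

    class CrossAttention(nn.Module):
        def __init__(self, query_dim, context_dim, heads, dim_head, attention_op):
            super().__init__()
            inner_dim = heads * dim_head
            self.heads = heads
            self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
            self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
            self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
            self.to_out = nn.Linear(inner_dim, query_dim)
            # the only part that differs: split, sub-quad, xformers, pytorch, ...
            self.attention_op = attention_op

        def forward(self, x, context):
            q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
            return self.to_out(self.attention_op(q, k, v, self.heads))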
comfyanonymous
8cc75c64ff
Let unet wrapper functions have .to attributes.
2023-10-11 01:34:38 -04:00
comfyanonymous
5e885bd9c8
Cleanup.
2023-10-10 21:46:53 -04:00
comfyanonymous
851bb87ca9
Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI
2023-10-10 21:42:35 -04:00
Yukimasa Funaoka
9eb621c95a
Supports TAESD models in safetensors format
2023-10-10 13:21:44 +09:00
comfyanonymous
d1a0abd40b
Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI
2023-10-09 01:53:29 -04:00
comfyanonymous
72188dffc3
load_checkpoint_guess_config can now optionally output the model.
2023-10-06 13:48:18 -04:00
Jairo Correa
63e5fd1790
Add an option to set the input directory
2023-10-04 19:45:15 -03:00
City
9bfec2bdbf
Fix quality loss due to low precision
2023-10-04 15:40:59 +02:00
badayvedat
0f17993d05
fix: typo in extra sampler
2023-09-29 06:09:59 +03:00
comfyanonymous
66756de100
Add SamplerDPMPP_2M_SDE node.
2023-09-28 21:56:23 -04:00
comfyanonymous
71713888c4
Print missing VAE keys.
2023-09-28 00:54:57 -04:00
comfyanonymous
d234ca558a
Add missing samplers to KSamplerSelect.
2023-09-28 00:17:03 -04:00
comfyanonymous
1adcc4c3a2
Add a SamplerCustom Node.
...
This node takes a list of sigmas and a sampler object as input.
This lets people easily implement custom schedulers and samplers as nodes.
More nodes will be added to it in the future.
2023-09-27 22:21:18 -04:00
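For context, the sigma list such a node consumes can come from any scheduler; below is a sketch of a Karras-style schedule (Karras et al. 2022) with illustrative defaults and the conventional trailing zero.

    import torch

    def karras_sigmas(n_steps, sigma_min=0.03, sigma_max=14.6, rho=7.0):
        ramp = torch.linspace(0, 1, n_steps)
        min_inv_rho = sigma_min ** (1 / rho)
        max_inv_rho = sigma_max ** (1 / rho)
        sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
        return torch.cat([sigmas, torch.zeros(1)])  # end the schedule at 0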
comfyanonymous
bf3fc2f1b7
Refactor sampling related code.
2023-09-27 16:45:22 -04:00
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
2023-09-27 12:04:07 -04:00
comfyanonymous
1d6dd83184
Scheduler code refactor.
2023-09-26 17:07:07 -04:00
comfyanonymous
446caf711c
Sampling code refactor.
2023-09-26 13:45:15 -04:00
comfyanonymous
76cdc809bf
Support more controlnet models.
2023-09-23 18:47:46 -04:00
comfyanonymous
ae87543653
Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI
2023-09-23 00:57:17 -04:00
Simon Lui
eec449ca8e
Allow Intel GPUs to do the LoRA cast on the GPU since they support BF16 natively.
2023-09-22 21:11:27 -07:00
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
2023-09-22 20:26:47 -04:00
comfyanonymous
492db2de8d
Allow having a different pooled output for each image in a batch.
2023-09-21 01:14:42 -04:00
comfyanonymous
1cdfb3dba4
Only do the cast on the device if the device supports it.
2023-09-20 17:52:41 -04:00
comfyanonymous
7c9a92f552
Don't depend on torchvision.
2023-09-19 13:12:47 -04:00
MoonRide303
2b6b178173
Added support for lanczos scaling
2023-09-19 10:40:38 +02:00
comfyanonymous
b92bf8196e
Do lora cast on GPU instead of CPU for higher performance.
2023-09-18 23:04:49 -04:00
comfyanonymous
321c5fa295
Enable pytorch attention by default on xpu.
2023-09-17 04:09:19 -04:00
comfyanonymous
61b1f67734
Support models without previews.
2023-09-16 12:59:54 -04:00
comfyanonymous
43d4935a1d
Add cond_or_uncond array to transformer_options so hooks can check what is cond and what is uncond.
2023-09-15 22:21:14 -04:00
comfyanonymous
415abb275f
Add DDPM sampler.
2023-09-15 19:22:47 -04:00
comfyanonymous
94e4fe39d8
This isn't used anywhere.
2023-09-15 12:03:03 -04:00
comfyanonymous
44361f6344
Support for text encoder models that need attention_mask.
2023-09-15 02:02:05 -04:00
comfyanonymous
0d8f376446
Setting the last layer on SD2.x models now uses the proper indexes.
...
Before, I had made the last layer the penultimate layer because some
checkpoints don't have it, but that's not consistent with the other models.
TLDR: for SD2.x models only, CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
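Illustrative indexing behind the TLDR (a hypothetical helper, not the real node code): with the full hidden-state list kept, -1 is the final layer and -2 the penultimate one, so SD2.x now matches the other models.

    def select_clip_layer(hidden_states, last_layer_index):
        # hidden_states: one entry per transformer layer, in order.
        # -1 = final layer, -2 = penultimate; the old SD2.x "-1" output
        # is now reached with -2.
        return hidden_states[last_layer_index]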
comfyanonymous
0966d3ce82
Don't run text encoders on xpu because there are issues.
2023-09-14 12:16:07 -04:00
comfyanonymous
3039b08eb1
Only parse command line args when main.py is called.
2023-09-13 11:38:20 -04:00
comfyanonymous
ed58730658
Don't leave very large hidden states in the clip vision output.
2023-09-12 15:09:10 -04:00
comfyanonymous
fb3b728203
Fix issue where autocast fp32 CLIP gave different results from regular.
2023-09-11 21:49:56 -04:00
comfyanonymous
7d401ed1d0
Add ldm format support to UNETLoader.
2023-09-11 16:36:50 -04:00
comfyanonymous
e85be36bd2
Add a penultimate_hidden_states to the clip vision output.
2023-09-08 14:06:58 -04:00
comfyanonymous
1e6b67101c
Support diffusers format t2i adapters.
2023-09-08 11:36:51 -04:00
comfyanonymous
326577d04c
Allow cancelling anything that shows a progress bar.
2023-09-07 23:37:03 -04:00
comfyanonymous
f88f7f413a
Add a ConditioningSetAreaPercentage node.
2023-09-06 03:28:27 -04:00
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
comfyanonymous
7746bdf7b0
Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI
2023-09-04 00:43:11 -04:00
Simon Lui
2da73b7073
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
2023-09-02 20:07:52 -07:00
comfyanonymous
a74c5dbf37
Move some functions to utils.py
2023-09-02 22:33:37 -04:00
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA-specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
comfyanonymous
77a176f9e0
Use a common function to reshape to the batch size.
2023-09-02 03:42:49 -04:00
comfyanonymous
7931ff0fd9
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
0e3b641172
Remove xformers related print.
2023-09-01 02:12:03 -04:00
comfyanonymous
5c363a9d86
Fix controlnet bug.
2023-09-01 02:01:08 -04:00
comfyanonymous
cfe1c54de8
Fix controlnet issue.
2023-08-31 15:16:58 -04:00
comfyanonymous
1c012d69af
It doesn't make sense for c_crossattn and c_concat to be lists.
2023-08-31 13:25:00 -04:00
comfyanonymous
7e941f9f24
Clean up DiffusersLoader node.
2023-08-30 12:57:07 -04:00
Simon Lui
18617967e5
Fix error message in model_patcher.py
...
Found while tinkering.
2023-08-30 00:25:04 -07:00
comfyanonymous
fe4c07400c
Fix "Load Checkpoint with config" node.
2023-08-29 23:58:32 -04:00
comfyanonymous
f2f5e5dcbb
Support SDXL t2i adapters with 3 channel input.
2023-08-29 16:44:57 -04:00
comfyanonymous
15adc3699f
Move beta_schedule to model_config and allow disabling unet creation.
2023-08-29 14:22:53 -04:00
comfyanonymous
bed116a1f9
Remove optimization that caused a border.
2023-08-29 11:21:36 -04:00
comfyanonymous
65cae62c71
No need to check filename extensions to detect shuffle controlnet.
2023-08-28 16:49:06 -04:00
comfyanonymous
4e89b2c25a
Put clip vision outputs on the CPU.
2023-08-28 16:26:11 -04:00
comfyanonymous
a094b45c93
Load clipvision model to GPU for faster performance.
2023-08-28 15:29:27 -04:00
comfyanonymous
1300a1bb4c
Text encoder should initially load on the offload_device, not the regular device.
2023-08-28 15:08:45 -04:00
comfyanonymous
f92074b84f
Move ModelPatcher to model_patcher.py
2023-08-28 14:51:31 -04:00
comfyanonymous
4798cf5a62
Implement loras with norm keys.
2023-08-28 11:20:06 -04:00
comfyanonymous
b8c7c770d3
Enable bf16-vae by default on ampere and up.
2023-08-27 23:06:19 -04:00
comfyanonymous
1c794a2161
Fall back to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
a57b0c797b
Fix lowvram model merging.
2023-08-26 11:52:07 -04:00
comfyanonymous
f72780a7e3
The new smart memory management makes this unnecessary.
2023-08-25 18:02:15 -04:00
comfyanonymous
c77f02e1c6
Move controlnet code to comfy/controlnet.py
2023-08-25 17:33:04 -04:00
comfyanonymous
15a7716fa6
Move lora code to comfy/lora.py
2023-08-25 17:11:51 -04:00
comfyanonymous
ec96f6d03a
Move text_projection to base clip model.
2023-08-24 23:43:48 -04:00
comfyanonymous
30eb92c3cb
Code cleanups.
2023-08-24 19:39:18 -04:00
comfyanonymous
51dde87e97
Try to free enough vram for control lora inference.
2023-08-24 17:20:54 -04:00
comfyanonymous
e3d0a9a490
Fix potential issue with text projection matrix multiplication.
2023-08-24 00:54:16 -04:00
comfyanonymous
cc44ade79e
Always shift text encoder to GPU when the device supports fp16.
2023-08-23 21:45:00 -04:00
comfyanonymous
a6ef08a46a
Even with forced fp16, the CPU device should never use it.
2023-08-23 21:38:28 -04:00
comfyanonymous
00c0b2c507
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
comfyanonymous
f081017c1a
Save memory by storing text encoder weights in fp16 in most situations.
...
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
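A hedged sketch of the scheme: weights live in fp16 to halve memory, but are cast up to fp32 for the actual computation (names illustrative, not the real implementation).

    import torch
    import torch.nn.functional as F

    def linear_fp16_storage(x, weight_fp16, bias_fp16=None):
        # stored in fp16, computed in fp32
        weight = weight_fp16.to(torch.float32)
        bias = bias_fp16.to(torch.float32) if bias_fp16 is not None else None
        return F.linear(x, weight, bias)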
comfyanonymous
afcb9cb1df
All resolutions now work with t2i adapter for SDXL.
2023-08-22 16:23:54 -04:00
comfyanonymous
85fde89d7f
T2I adapter SDXL.
2023-08-22 14:40:43 -04:00
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
2023-08-22 01:06:26 -04:00
comfyanonymous
763b0cf024
Fix control lora not working in fp32.
2023-08-21 20:38:31 -04:00
comfyanonymous
199d73364a
Fix ControlLora on lowvram.
2023-08-21 00:54:04 -04:00
comfyanonymous
d08e53de2e
Remove autocast from controlnet code.
2023-08-20 21:47:32 -04:00
comfyanonymous
0d7b0a4dc7
Small cleanups.
2023-08-20 14:56:47 -04:00
Simon Lui
9225465975
Further tuning and a fix for mem_free_total.
2023-08-20 14:19:53 -04:00
Simon Lui
2c096e4260
Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes.
2023-08-20 14:19:51 -04:00
comfyanonymous
e9469e732d
--disable-smart-memory now disables loading model directly to vram.
2023-08-20 04:00:53 -04:00
comfyanonymous
c9b562aed1
Free more memory before VAE encode/decode.
2023-08-19 12:13:13 -04:00
comfyanonymous
b80c3276dc
Fix issue with gligen.
2023-08-18 16:32:23 -04:00
comfyanonymous
d6e4b342e6
Support for Control Loras.
...
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
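A minimal sketch of the weight reconstruction described above (illustrative names): for rank r, the stored size per weight is r * (out + in) values instead of out * in.

    import torch

    def control_lora_weight(unet_weight, lora_up, lora_down):
        # lora_up: (out_features, rank), lora_down: (rank, in_features)
        return unet_weight + lora_up @ lora_down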
comfyanonymous
39ac856a33
ReVision support: unclip nodes can now be used with SDXL.
2023-08-18 11:59:36 -04:00
comfyanonymous
76d53c4622
Add support for clip g vision model to CLIPVisionLoader.
2023-08-18 11:13:29 -04:00
Alexopus
e59fe0537a
Fix 'referenced before assignment' error
...
For https://github.com/BlenderNeko/ComfyUI_TiledKSampler/issues/13
2023-08-17 22:30:07 +02:00
comfyanonymous
be9c5e25bc
Fix issue with not freeing enough memory when sampling.
2023-08-17 15:59:56 -04:00
comfyanonymous
ac0758a1a4
Fix bug with lowvram and controlnet advanced node.
2023-08-17 13:38:51 -04:00
comfyanonymous
c28db1f315
Fix potential issues with patching models when saving checkpoints.
2023-08-17 11:07:08 -04:00
comfyanonymous
3aee33b54e
Add --disable-smart-memory for those that want the old behaviour.
2023-08-17 03:12:37 -04:00
comfyanonymous
2be2742711
Fix issue with regular torch version.
2023-08-17 01:58:54 -04:00
comfyanonymous
89a0767abf
Smarter memory management.
...
Try to keep models in vram when possible.
Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
comfyanonymous
2c97c30256
Support small diffusers controlnet so both types are now supported.
2023-08-16 12:45:56 -04:00
comfyanonymous
53f326a3d8
Support diffusers mini controlnets.
2023-08-16 12:28:01 -04:00
comfyanonymous
58f0c616ed
Fix clip vision issue with old transformers versions.
2023-08-16 11:36:22 -04:00
comfyanonymous
ae270f79bc
Fix potential issue with batch size and clip vision.
2023-08-16 11:05:11 -04:00
comfyanonymous
a2ce9655ca
Refactor unclip code.
2023-08-14 23:48:47 -04:00
comfyanonymous
9cc12c833d
CLIPVisionEncode can now encode multiple images.
2023-08-14 16:54:05 -04:00
comfyanonymous
0cb6dac943
Remove 3m from PR #1213 because of some small issues.
2023-08-14 00:48:45 -04:00
comfyanonymous
e244b2df83
Add sgm_uniform scheduler that acts like the default one in sgm.
2023-08-14 00:29:03 -04:00
comfyanonymous
58c7da3665
GPU variant of dpmpp_3m_sde. Note: use 3m with the exponential or karras schedulers.
2023-08-14 00:28:50 -04:00
comfyanonymous
ba319a34e4
Merge branch 'dpmpp3m' of https://github.com/FizzleDorf/ComfyUI
2023-08-14 00:23:15 -04:00
FizzleDorf
3cfad03a68
Added dpmpp_3m and dpmpp_3m_sde samplers
2023-08-13 22:29:04 -04:00
comfyanonymous
585a062910
Print unet config when model isn't detected.
2023-08-13 01:39:48 -04:00
comfyanonymous
c8a23ce9e8
Support for yet another lora type based on diffusers.
2023-08-11 13:04:21 -04:00
comfyanonymous
2bc12d3d22
Add --temp-directory argument to set temp directory.
2023-08-11 05:13:03 -04:00
comfyanonymous
c20583286f
Support diffuser text encoder loras.
2023-08-10 20:28:28 -04:00
comfyanonymous
cf10c5592c
Disable calculating uncond when CFG is 1.0
2023-08-09 20:55:03 -04:00
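The reason the uncond pass can be skipped: classifier-free guidance computes uncond + cfg * (cond - uncond), which reduces to cond when cfg is 1.0. A sketch (illustrative, not the actual sampling_function):

    def cfg_denoised(model, x, sigma, cond, uncond, cfg_scale):
        denoised_cond = model(x, sigma, cond)
        if cfg_scale == 1.0:
            # uncond + 1.0 * (cond - uncond) == cond; skip the uncond pass
            return denoised_cond
        denoised_uncond = model(x, sigma, uncond)
        return denoised_uncond + cfg_scale * (denoised_cond - denoised_uncond)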
comfyanonymous
1f0f4cc0bd
Add argument to disable auto launching the browser.
2023-08-07 02:25:12 -04:00
comfyanonymous
d8e58f0a7e
Detect hint_channels from controlnet.
2023-08-06 14:08:59 -04:00
comfyanonymous
c5d7593ccf
Support loras in diffusers format.
2023-08-05 01:40:24 -04:00
comfyanonymous
1ce0d8ad68
Add CMP 30HX card to the nvidia_16_series list.
2023-08-04 12:08:45 -04:00
comfyanonymous
c99d8002f8
Make sure the pooled output stays at the EOS token with added embeddings.
2023-08-03 20:27:50 -04:00
comfyanonymous
4a77fcd6ab
Only shift the text encoder to vram when there are fewer than 8 CPU cores.
2023-07-31 00:08:54 -04:00
comfyanonymous
3cd31d0e24
Lower the CPU thread threshold for running the text encoder on the CPU vs GPU.
2023-07-30 17:18:24 -04:00
comfyanonymous
2b13939044
Remove some useless code.
2023-07-30 14:13:33 -04:00
comfyanonymous
95d796fc85
Faster VAE loading.
2023-07-29 16:28:30 -04:00
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
2023-07-29 14:51:56 -04:00
comfyanonymous
c910b4a01c
Remove unused code and torchdiffeq dependency.
2023-07-28 21:32:27 -04:00
comfyanonymous
1141029a4a
Add --disable-metadata argument to disable saving metadata in files.
2023-07-28 12:31:41 -04:00
comfyanonymous
fbf5c51c1c
Merge branch 'fix_batch_timesteps' of https://github.com/asagi4/ComfyUI
2023-07-27 16:13:48 -04:00
comfyanonymous
68be24eead
Remove some prints.
2023-07-27 16:12:43 -04:00
asagi4
1ea4d84691
Fix timestep ranges when batch_size > 1
2023-07-27 21:14:09 +03:00
comfyanonymous
5379051d16
Fix diffusers VAE loading.
2023-07-26 18:26:39 -04:00
comfyanonymous
727588d076
Fix some new loras.
2023-07-25 16:39:15 -04:00
comfyanonymous
4f9b6f39d1
Fix potential issue with Save Checkpoint.
2023-07-25 00:45:20 -04:00
comfyanonymous
5f75d784a1
Start is now 0.0 and end is now 1.0 for the timestep ranges.
2023-07-24 18:38:17 -04:00
comfyanonymous
7ff14b62f8
ControlNetApplyAdvanced can now define when controlnet gets applied.
2023-07-24 17:50:49 -04:00
comfyanonymous
d191c4f9ed
Add a ControlNetApplyAdvanced node.
...
The controlnet can be applied to the positive or negative prompt only by
connecting it correctly.
2023-07-24 13:35:20 -04:00
comfyanonymous
0240946ecf
Add a way to set which range of timesteps the cond gets applied to.
2023-07-24 09:25:02 -04:00
comfyanonymous
22f29d66ca
Try to fix memory issue with lora.
2023-07-22 21:38:56 -04:00
comfyanonymous
67be7eb81d
Nodes can now patch the unet function.
2023-07-22 17:01:12 -04:00
comfyanonymous
12a6e93171
Del the right object when applying lora.
2023-07-22 11:25:49 -04:00
comfyanonymous
78e7958d17
Support controlnet in diffusers format.
2023-07-21 22:58:16 -04:00
comfyanonymous
09386a3697
Fix issue with lora in some cases when combined with model merging.
2023-07-21 21:27:27 -04:00
comfyanonymous
58b2364f58
Properly support SDXL diffusers unet with UNETLoader node.
2023-07-21 14:38:56 -04:00
comfyanonymous
0115018695
Print errors and continue when lora weights are not compatible.
2023-07-20 19:56:22 -04:00
comfyanonymous
4760c29380
Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI
2023-07-20 00:34:54 -04:00
comfyanonymous
0b284f650b
Fix typo.
2023-07-19 10:20:32 -04:00
comfyanonymous
e032ca6138
Fix ddim issue with older torch versions.
2023-07-19 10:16:00 -04:00
comfyanonymous
18885f803a
Add MX450 and MX550 to list of cards with broken fp16.
2023-07-19 03:08:30 -04:00
comfyanonymous
9ba440995a
It's actually possible to torch.compile the unet now.
2023-07-18 21:36:35 -04:00
comfyanonymous
51d5477579
Add key to indicate checkpoint is v_prediction when saving.
2023-07-18 00:25:53 -04:00
comfyanonymous
ff6b047a74
Fix device print on old torch version.
2023-07-17 15:18:58 -04:00
comfyanonymous
9871a15cf9
Enable --cuda-malloc by default on torch 2.0 and up.
...
Add --disable-cuda-malloc to disable it.
2023-07-17 15:12:10 -04:00
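The cudaMallocAsync backend is selected through the PYTORCH_CUDA_ALLOC_CONF environment variable, which has to be set before torch initializes CUDA; a sketch of the startup-time toggle (not the actual ComfyUI launcher code):

    import os
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

    import torch  # must be imported after the variable is set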
comfyanonymous
55d0fca9fa
--windows-standalone-build now enables --cuda-malloc
2023-07-17 14:10:36 -04:00
comfyanonymous
1679abd86d
Add a command line argument to enable backend:cudaMallocAsync
2023-07-17 11:00:14 -04:00
comfyanonymous
3a150bad15
Only calculate randn in some samplers when it's actually being used.
2023-07-17 10:11:08 -04:00
comfyanonymous
ee8f8ee07f
Fix regression with ddim and uni_pc when batch size > 1.
2023-07-17 09:35:19 -04:00
comfyanonymous
3ded1a3a04
Refactor of sampler code to deal more easily with different model types.
2023-07-17 01:22:12 -04:00
comfyanonymous
5f57362613
Lower lora ram usage when in normal vram mode.
2023-07-16 02:59:04 -04:00
comfyanonymous
490771b7f4
Speed up lora loading a bit.
2023-07-15 13:25:22 -04:00
comfyanonymous
50b1180dde
Fix CLIPSetLastLayer not reverting when removed.
2023-07-15 01:41:21 -04:00
comfyanonymous
6fb084f39d
Reduce floating point rounding errors in loras.
2023-07-15 00:53:00 -04:00
comfyanonymous
91ed2815d5
Add a node to merge CLIP models.
2023-07-14 02:41:18 -04:00
comfyanonymous
b2f03164c7
Prevent the clip_g position_ids key from being saved in the checkpoint.
...
This is to make it match the official checkpoint.
2023-07-12 20:15:02 -04:00
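A hedged sketch of the save-time filtering (the exact key name is an assumption):

    def strip_position_ids(state_dict):
        # drop the buffer so the saved file matches the official checkpoint
        state_dict.pop("clip_g.transformer.text_model.embeddings.position_ids", None)
        return state_dict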
comfyanonymous
46dc050c9f
Fix potential tensors being on different devices issues.
2023-07-12 19:29:27 -04:00
KarryCharon
3e2309f149
Fix missing mps import
2023-07-12 10:06:34 +08:00
comfyanonymous
606a537090
Support SDXL embedding format with 2 CLIP.
2023-07-10 10:34:59 -04:00
comfyanonymous
6ad0a6d7e2
Don't patch weights when multiplier is zero.
2023-07-09 17:46:56 -04:00
comfyanonymous
d5323d16e0
latent2rgb matrix for SDXL.
2023-07-09 13:59:09 -04:00
comfyanonymous
0ae81c03bb
Empty cache after model unloading for normal vram and lower.
2023-07-09 09:56:03 -04:00
comfyanonymous
d3f5998218
Support loading clip_g from diffusers in CLIP Loader nodes.
2023-07-09 09:33:53 -04:00
comfyanonymous
a9a4ba7574
Fix merging not working when model2 of model merge node was a merge.
2023-07-08 22:31:10 -04:00
comfyanonymous
bb5fbd29e9
Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI
2023-07-07 01:52:25 -04:00
comfyanonymous
e7bee85df8
Add arguments to run the VAE in fp16 or bf16 for testing.
2023-07-06 23:23:46 -04:00
comfyanonymous
608fcc2591
Fix bug with weights when prompt is long.
2023-07-06 02:43:40 -04:00
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
2023-07-05 21:58:29 -04:00
comfyanonymous
603f02d613
Fix loras not working when loading checkpoint with config.
2023-07-05 19:42:24 -04:00
comfyanonymous
af7a49916b
Support loading unet files in diffusers format.
2023-07-05 17:38:59 -04:00
comfyanonymous
e57cba4c61
Add gpu variations of the sde samplers that are less deterministic but faster.
2023-07-05 01:39:38 -04:00
comfyanonymous
f81b192944
Add logit scale parameter so it's present when saving the checkpoint.
2023-07-04 23:01:28 -04:00
comfyanonymous
acf95191ff
Properly support SDXL diffusers loras for unet.
2023-07-04 21:15:23 -04:00
mara
c61a95f9f7
Fix size check for conditioning mask
...
The wrong dimensions were being checked: [1] and [2] are the image size,
not [2] and [3]. This results in an out-of-bounds error if one of them
actually matches.
2023-07-04 16:34:42 +02:00
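Illustrative shapes behind the fix: the conditioning mask is [batch, height, width] while the latent is [batch, channels, height, width], so the two tensors' image dimensions sit at different indexes.

    import torch

    mask = torch.zeros(1, 64, 64)       # dims [1] and [2] are the image size
    latent = torch.zeros(1, 4, 64, 64)  # dims [2] and [3] are the image size

    assert mask.shape[1] == latent.shape[2]
    assert mask.shape[2] == latent.shape[3]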
comfyanonymous
8d694cc450
Fix issue with OSX.
2023-07-04 02:09:02 -04:00
comfyanonymous
c3e96e637d
Pass device to CLIP model.
2023-07-03 16:09:37 -04:00
comfyanonymous
5e6bc824aa
Allow passing custom path to clip-g and clip-h.
2023-07-03 15:45:04 -04:00
comfyanonymous
dc9d1f31c8
Improvements for OSX.
2023-07-03 00:08:30 -04:00
comfyanonymous
103c487a89
Cleanup.
2023-07-02 11:58:23 -04:00
comfyanonymous
2c4e0b49b7
Switch to fp16 on some cards when the model is too big.
2023-07-02 10:00:57 -04:00
comfyanonymous
6f3d9f52db
Add a --force-fp16 argument to force fp16 for testing.
2023-07-01 22:42:35 -04:00
comfyanonymous
1c1b0e7299
--gpu-only now keeps the VAE on the device.
2023-07-01 15:22:40 -04:00
comfyanonymous
ce35d8c659
Lower latency by batching some text encoder inputs.
2023-07-01 15:07:39 -04:00
comfyanonymous
3b6fe51c1d
Leave text_encoder on the CPU when it can handle it.
2023-07-01 14:38:51 -04:00
comfyanonymous
b6a60fa696
Try to keep text encoders loaded and patched to increase speed.
...
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
comfyanonymous
97ee230682
Make highvram and normalvram shift the text encoders to vram and back.
...
This is faster on big text encoder models than running it on the CPU.
2023-07-01 12:37:23 -04:00
comfyanonymous
5a9ddf94eb
LoraLoader node now caches the lora file between executions.
2023-06-29 23:40:51 -04:00
comfyanonymous
9920367d3c
Fix embeddings not working with --gpu-only
2023-06-29 20:43:06 -04:00
comfyanonymous
62db11683b
Move unet to device right after loading on highvram mode.
2023-06-29 20:43:06 -04:00
comfyanonymous
4376b125eb
Remove useless code.
2023-06-29 00:26:33 -04:00
comfyanonymous
89120f1fbe
This is unused but it should be 1280.
2023-06-28 18:04:23 -04:00
comfyanonymous
2c7c14de56
Support for SDXL text encoder lora.
2023-06-28 02:22:49 -04:00
comfyanonymous
fcef47f06e
Fix bug.
2023-06-28 00:38:07 -04:00
comfyanonymous
8248babd44
Use pytorch attention by default on nvidia when xformers isn't present.
...
Add a new argument --use-quad-cross-attention
2023-06-26 13:03:44 -04:00
comfyanonymous
9b93b920be
Add CheckpointSave node to save checkpoints.
...
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.
Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32
Anything that patches the model weights like merging or loras will be
saved.
The output directory is currently set to: output/checkpoints but that might
change in the future.
2023-06-26 12:22:27 -04:00
comfyanonymous
b72a7a835a
Support loras based on the stability unet implementation.
2023-06-26 02:56:11 -04:00
comfyanonymous
c71a7e6b20
Fix ddim + inpainting not working.
2023-06-26 00:48:48 -04:00
comfyanonymous
4eab00e14b
Set the seed in the SDE samplers to make them more reproducible.
2023-06-25 03:04:57 -04:00
comfyanonymous
cef6aa62b2
Add support for TAESD decoder for SDXL.
2023-06-25 02:38:14 -04:00
comfyanonymous
20f579d91d
Add DualClipLoader to load clip models for SDXL.
...
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00