comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
2023-09-27 12:04:07 -04:00
comfyanonymous
1d6dd83184
Scheduler code refactor.
2023-09-26 17:07:07 -04:00
comfyanonymous
446caf711c
Sampling code refactor.
2023-09-26 13:45:15 -04:00
comfyanonymous
76cdc809bf
Support more controlnet models.
2023-09-23 18:47:46 -04:00
comfyanonymous
ae87543653
Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI
2023-09-23 00:57:17 -04:00
Simon Lui
eec449ca8e
Allow Intel GPUs to LoRA cast on GPU since it supports BF16 natively.
2023-09-22 21:11:27 -07:00
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
2023-09-22 20:26:47 -04:00
comfyanonymous
492db2de8d
Allow having a different pooled output for each image in a batch.
2023-09-21 01:14:42 -04:00
comfyanonymous
1cdfb3dba4
Only do the cast on the device if the device supports it.
2023-09-20 17:52:41 -04:00
comfyanonymous
7c9a92f552
Don't depend on torchvision.
2023-09-19 13:12:47 -04:00
MoonRide303
2b6b178173
Added support for lanczos scaling
2023-09-19 10:40:38 +02:00
comfyanonymous
b92bf8196e
Do lora cast on GPU instead of CPU for higher performance.
2023-09-18 23:04:49 -04:00
comfyanonymous
321c5fa295
Enable pytorch attention by default on xpu.
2023-09-17 04:09:19 -04:00
comfyanonymous
61b1f67734
Support models without previews.
2023-09-16 12:59:54 -04:00
comfyanonymous
43d4935a1d
Add cond_or_uncond array to transformer_options so hooks can check what is cond and what is uncond.
2023-09-15 22:21:14 -04:00
comfyanonymous
415abb275f
Add DDPM sampler.
2023-09-15 19:22:47 -04:00
comfyanonymous
94e4fe39d8
This isn't used anywhere.
2023-09-15 12:03:03 -04:00
comfyanonymous
44361f6344
Support for text encoder models that need attention_mask.
2023-09-15 02:02:05 -04:00
comfyanonymous
0d8f376446
Setting the last layer on SD2.x models now uses the proper indexes.
Before, I had made the last layer the penultimate layer because some
checkpoints don't have it, but that's not consistent with the other models.
TLDR: for SD2.x models only, CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
comfyanonymous
0966d3ce82
Don't run text encoders on xpu because there are issues.
2023-09-14 12:16:07 -04:00
comfyanonymous
3039b08eb1
Only parse command line args when main.py is called.
2023-09-13 11:38:20 -04:00
comfyanonymous
ed58730658
Don't leave very large hidden states in the clip vision output.
2023-09-12 15:09:10 -04:00
comfyanonymous
fb3b728203
Fix issue where autocast fp32 CLIP gave different results from regular.
2023-09-11 21:49:56 -04:00
comfyanonymous
7d401ed1d0
Add ldm format support to UNETLoader.
2023-09-11 16:36:50 -04:00
comfyanonymous
e85be36bd2
Add a penultimate_hidden_states to the clip vision output.
2023-09-08 14:06:58 -04:00
comfyanonymous
1e6b67101c
Support diffusers format t2i adapters.
2023-09-08 11:36:51 -04:00
comfyanonymous
326577d04c
Allow cancelling of everything with a progress bar.
2023-09-07 23:37:03 -04:00
comfyanonymous
f88f7f413a
Add a ConditioningSetAreaPercentage node.
2023-09-06 03:28:27 -04:00
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
comfyanonymous
7746bdf7b0
Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI
2023-09-04 00:43:11 -04:00
Simon Lui
2da73b7073
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
2023-09-02 20:07:52 -07:00
comfyanonymous
a74c5dbf37
Move some functions to utils.py
2023-09-02 22:33:37 -04:00
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
comfyanonymous
77a176f9e0
Use a common function to reshape the batch.
2023-09-02 03:42:49 -04:00
comfyanonymous
7931ff0fd9
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
0e3b641172
Remove xformers related print.
2023-09-01 02:12:03 -04:00
comfyanonymous
5c363a9d86
Fix controlnet bug.
2023-09-01 02:01:08 -04:00
comfyanonymous
cfe1c54de8
Fix controlnet issue.
2023-08-31 15:16:58 -04:00
comfyanonymous
1c012d69af
It doesn't make sense for c_crossattn and c_concat to be lists.
2023-08-31 13:25:00 -04:00
comfyanonymous
7e941f9f24
Clean up DiffusersLoader node.
2023-08-30 12:57:07 -04:00
Simon Lui
18617967e5
Fix error message in model_patcher.py
Found while tinkering.
2023-08-30 00:25:04 -07:00
comfyanonymous
fe4c07400c
Fix "Load Checkpoint with config" node.
2023-08-29 23:58:32 -04:00
comfyanonymous
f2f5e5dcbb
Support SDXL t2i adapters with 3 channel input.
2023-08-29 16:44:57 -04:00
comfyanonymous
15adc3699f
Move beta_schedule to model_config and allow disabling unet creation.
2023-08-29 14:22:53 -04:00
comfyanonymous
bed116a1f9
Remove optimization that caused border.
2023-08-29 11:21:36 -04:00
comfyanonymous
65cae62c71
No need to check filename extensions to detect shuffle controlnet.
2023-08-28 16:49:06 -04:00
comfyanonymous
4e89b2c25a
Put clip vision outputs on the CPU.
2023-08-28 16:26:11 -04:00
comfyanonymous
a094b45c93
Load clipvision model to GPU for faster performance.
2023-08-28 15:29:27 -04:00
comfyanonymous
1300a1bb4c
Text encoder should initially load on the offload_device, not the regular device.
2023-08-28 15:08:45 -04:00
comfyanonymous
f92074b84f
Move ModelPatcher to model_patcher.py
2023-08-28 14:51:31 -04:00
comfyanonymous
4798cf5a62
Implement loras with norm keys.
2023-08-28 11:20:06 -04:00
comfyanonymous
b8c7c770d3
Enable bf16-vae by default on ampere and up.
2023-08-27 23:06:19 -04:00
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
a57b0c797b
Fix lowvram model merging.
2023-08-26 11:52:07 -04:00
comfyanonymous
f72780a7e3
The new smart memory management makes this unnecessary.
2023-08-25 18:02:15 -04:00
comfyanonymous
c77f02e1c6
Move controlnet code to comfy/controlnet.py
2023-08-25 17:33:04 -04:00
comfyanonymous
15a7716fa6
Move lora code to comfy/lora.py
2023-08-25 17:11:51 -04:00
comfyanonymous
ec96f6d03a
Move text_projection to base clip model.
2023-08-24 23:43:48 -04:00
comfyanonymous
30eb92c3cb
Code cleanups.
2023-08-24 19:39:18 -04:00
comfyanonymous
51dde87e97
Try to free enough vram for control lora inference.
2023-08-24 17:20:54 -04:00
comfyanonymous
e3d0a9a490
Fix potential issue with text projection matrix multiplication.
2023-08-24 00:54:16 -04:00
comfyanonymous
cc44ade79e
Always shift text encoder to GPU when the device supports fp16.
2023-08-23 21:45:00 -04:00
comfyanonymous
a6ef08a46a
Even with forced fp16 the cpu device should never use it.
2023-08-23 21:38:28 -04:00
comfyanonymous
00c0b2c507
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
comfyanonymous
f081017c1a
Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
comfyanonymous
afcb9cb1df
All resolutions now work with t2i adapter for SDXL.
2023-08-22 16:23:54 -04:00
comfyanonymous
85fde89d7f
T2I adapter SDXL.
2023-08-22 14:40:43 -04:00
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
2023-08-22 01:06:26 -04:00
comfyanonymous
763b0cf024
Fix control lora not working in fp32.
2023-08-21 20:38:31 -04:00
comfyanonymous
199d73364a
Fix ControlLora on lowvram.
2023-08-21 00:54:04 -04:00
comfyanonymous
d08e53de2e
Remove autocast from controlnet code.
2023-08-20 21:47:32 -04:00
comfyanonymous
0d7b0a4dc7
Small cleanups.
2023-08-20 14:56:47 -04:00
Simon Lui
9225465975
Further tuning and fix mem_free_total.
2023-08-20 14:19:53 -04:00
Simon Lui
2c096e4260
Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes.
2023-08-20 14:19:51 -04:00
comfyanonymous
e9469e732d
--disable-smart-memory now disables loading model directly to vram.
2023-08-20 04:00:53 -04:00
comfyanonymous
c9b562aed1
Free more memory before VAE encode/decode.
2023-08-19 12:13:13 -04:00
comfyanonymous
b80c3276dc
Fix issue with gligen.
2023-08-18 16:32:23 -04:00
comfyanonymous
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
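The low-rank scheme described in the entry above can be sketched in a few lines. This is a minimal illustration with made-up shapes and plain Python lists, not ComfyUI's actual implementation:

```python
# Sketch: a control lora stores an "up" and a "down" low-rank matrix; their
# product added to the base unet weight reconstructs the controlnet weight:
# W_controlnet = W_unet + up @ down

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def apply_control_lora(unet_weight, up, down):
    """Add the low-rank product to the base unet weight."""
    delta = matmul(up, down)
    return [[w + d for w, d in zip(wr, dr)]
            for wr, dr in zip(unet_weight, delta)]

# Rank-1 example on a 2x2 weight: the stored pair (2x1 and 1x2) is much
# smaller than the full 2x2 delta it reconstructs.
base = [[1.0, 0.0], [0.0, 1.0]]
up = [[1.0], [2.0]]
down = [[3.0, 4.0]]
w = apply_control_lora(base, up, down)
```

The saving grows with layer size: a rank-r pair stores r*(m+n) values instead of the full m*n weight delta.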
comfyanonymous
39ac856a33
ReVision support: unclip nodes can now be used with SDXL.
2023-08-18 11:59:36 -04:00
comfyanonymous
76d53c4622
Add support for clip g vision model to CLIPVisionLoader.
2023-08-18 11:13:29 -04:00
Alexopus
e59fe0537a
Fix referenced before assignment
For https://github.com/BlenderNeko/ComfyUI_TiledKSampler/issues/13
2023-08-17 22:30:07 +02:00
comfyanonymous
be9c5e25bc
Fix issue with not freeing enough memory when sampling.
2023-08-17 15:59:56 -04:00
comfyanonymous
ac0758a1a4
Fix bug with lowvram and controlnet advanced node.
2023-08-17 13:38:51 -04:00
comfyanonymous
c28db1f315
Fix potential issues with patching models when saving checkpoints.
2023-08-17 11:07:08 -04:00
comfyanonymous
3aee33b54e
Add --disable-smart-memory for those that want the old behaviour.
2023-08-17 03:12:37 -04:00
comfyanonymous
2be2742711
Fix issue with regular torch version.
2023-08-17 01:58:54 -04:00
comfyanonymous
89a0767abf
Smarter memory management.
Try to keep models on the vram when possible.
Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
comfyanonymous
2c97c30256
Support small diffusers controlnet so both types are now supported.
2023-08-16 12:45:56 -04:00
comfyanonymous
53f326a3d8
Support diffusers mini controlnets.
2023-08-16 12:28:01 -04:00
comfyanonymous
58f0c616ed
Fix clip vision issue with old transformers versions.
2023-08-16 11:36:22 -04:00
comfyanonymous
ae270f79bc
Fix potential issue with batch size and clip vision.
2023-08-16 11:05:11 -04:00
comfyanonymous
a2ce9655ca
Refactor unclip code.
2023-08-14 23:48:47 -04:00
comfyanonymous
9cc12c833d
CLIPVisionEncode can now encode multiple images.
2023-08-14 16:54:05 -04:00
comfyanonymous
0cb6dac943
Remove 3m from PR #1213 because of some small issues.
2023-08-14 00:48:45 -04:00
comfyanonymous
e244b2df83
Add sgm_uniform scheduler that acts like the default one in sgm.
2023-08-14 00:29:03 -04:00
comfyanonymous
58c7da3665
Gpu variant of dpmpp_3m_sde. Note: use 3m with exponential or karras.
2023-08-14 00:28:50 -04:00
comfyanonymous
ba319a34e4
Merge branch 'dpmpp3m' of https://github.com/FizzleDorf/ComfyUI
2023-08-14 00:23:15 -04:00
FizzleDorf
3cfad03a68
dpmpp 3m + dpmpp 3m sde added
2023-08-13 22:29:04 -04:00
comfyanonymous
585a062910
Print unet config when model isn't detected.
2023-08-13 01:39:48 -04:00
comfyanonymous
c8a23ce9e8
Support for yet another lora type based on diffusers.
2023-08-11 13:04:21 -04:00
comfyanonymous
2bc12d3d22
Add --temp-directory argument to set temp directory.
2023-08-11 05:13:03 -04:00
comfyanonymous
c20583286f
Support diffuser text encoder loras.
2023-08-10 20:28:28 -04:00
comfyanonymous
cf10c5592c
Disable calculating uncond when CFG is 1.0
2023-08-09 20:55:03 -04:00
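The CFG 1.0 optimization above falls out of the guidance formula itself; a minimal sketch (plain lists stand in for the model's noise predictions, not ComfyUI's actual code):

```python
# Standard classifier-free guidance combination: uncond + cfg * (cond - uncond).
# At cfg == 1.0 the uncond terms cancel, so the uncond pass can be skipped.

def cfg_combine(cond_pred, uncond_pred, cfg):
    return [u + cfg * (c - u) for c, u in zip(cond_pred, uncond_pred)]

cond = [2.0, -1.0]
uncond = [0.5, 3.0]
result = cfg_combine(cond, uncond, 1.0)  # equals cond exactly
```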
comfyanonymous
1f0f4cc0bd
Add argument to disable auto launching the browser.
2023-08-07 02:25:12 -04:00
comfyanonymous
d8e58f0a7e
Detect hint_channels from controlnet.
2023-08-06 14:08:59 -04:00
comfyanonymous
c5d7593ccf
Support loras in diffusers format.
2023-08-05 01:40:24 -04:00
comfyanonymous
1ce0d8ad68
Add CMP 30HX card to the nvidia_16_series list.
2023-08-04 12:08:45 -04:00
comfyanonymous
c99d8002f8
Make sure the pooled output stays at the EOS token with added embeddings.
2023-08-03 20:27:50 -04:00
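A rough illustration of the issue the entry above addresses; the token values and the -1 placeholder are purely hypothetical:

```python
# Sketch: CLIP's pooled output is read at the end-of-text token's position,
# so when embedding tokens are inserted the EOS index must be re-located
# rather than assumed fixed.

EOS = 49407  # CLIP end-of-text token id

def pooled_index(token_ids):
    """Position whose hidden state becomes the pooled output."""
    return token_ids.index(EOS)

tokens = [49406, 320, 1125, EOS, 0]        # <start> a photo <end> <pad>
with_embed = [49406, 320, -1, 1125, EOS]   # -1: inserted embedding slot
shift = pooled_index(with_embed) - pooled_index(tokens)
```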
comfyanonymous
4a77fcd6ab
Only shift text encoder to vram when CPU cores are under 8.
2023-07-31 00:08:54 -04:00
comfyanonymous
3cd31d0e24
Lower CPU thread check for running the text encoder on the CPU vs GPU.
2023-07-30 17:18:24 -04:00
comfyanonymous
2b13939044
Remove some useless code.
2023-07-30 14:13:33 -04:00
comfyanonymous
95d796fc85
Faster VAE loading.
2023-07-29 16:28:30 -04:00
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
2023-07-29 14:51:56 -04:00
comfyanonymous
c910b4a01c
Remove unused code and torchdiffeq dependency.
2023-07-28 21:32:27 -04:00
comfyanonymous
1141029a4a
Add --disable-metadata argument to disable saving metadata in files.
2023-07-28 12:31:41 -04:00
comfyanonymous
fbf5c51c1c
Merge branch 'fix_batch_timesteps' of https://github.com/asagi4/ComfyUI
2023-07-27 16:13:48 -04:00
comfyanonymous
68be24eead
Remove some prints.
2023-07-27 16:12:43 -04:00
asagi4
1ea4d84691
Fix timestep ranges when batch_size > 1
2023-07-27 21:14:09 +03:00
comfyanonymous
5379051d16
Fix diffusers VAE loading.
2023-07-26 18:26:39 -04:00
comfyanonymous
727588d076
Fix some new loras.
2023-07-25 16:39:15 -04:00
comfyanonymous
4f9b6f39d1
Fix potential issue with Save Checkpoint.
2023-07-25 00:45:20 -04:00
comfyanonymous
5f75d784a1
Start is now 0.0 and end is now 1.0 for the timestep ranges.
2023-07-24 18:38:17 -04:00
comfyanonymous
7ff14b62f8
ControlNetApplyAdvanced can now define when controlnet gets applied.
2023-07-24 17:50:49 -04:00
comfyanonymous
d191c4f9ed
Add a ControlNetApplyAdvanced node.
The controlnet can be applied to the positive or negative prompt only by
connecting it correctly.
2023-07-24 13:35:20 -04:00
comfyanonymous
0240946ecf
Add a way to set which range of timesteps the cond gets applied to.
2023-07-24 09:25:02 -04:00
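With ranges expressed as fractions of the schedule (start 0.0, end 1.0, per the entries above), applying a cond only inside its window can be sketched as follows (names are illustrative, not ComfyUI's API):

```python
# A cond is active only while the current relative position in the
# sampling schedule falls inside its [start, end) window.

def cond_active(start, end, step, total_steps):
    pos = step / total_steps
    return start <= pos < end

# A cond limited to the first half of sampling:
active = [cond_active(0.0, 0.5, s, 10) for s in range(10)]
```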
comfyanonymous
22f29d66ca
Try to fix memory issue with lora.
2023-07-22 21:38:56 -04:00
comfyanonymous
67be7eb81d
Nodes can now patch the unet function.
2023-07-22 17:01:12 -04:00
comfyanonymous
12a6e93171
Del the right object when applying lora.
2023-07-22 11:25:49 -04:00
comfyanonymous
78e7958d17
Support controlnet in diffusers format.
2023-07-21 22:58:16 -04:00
comfyanonymous
09386a3697
Fix issue with lora in some cases when combined with model merging.
2023-07-21 21:27:27 -04:00
comfyanonymous
58b2364f58
Properly support SDXL diffusers unet with UNETLoader node.
2023-07-21 14:38:56 -04:00
comfyanonymous
0115018695
Print errors and continue when lora weights are not compatible.
2023-07-20 19:56:22 -04:00
comfyanonymous
4760c29380
Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI
2023-07-20 00:34:54 -04:00
comfyanonymous
0b284f650b
Fix typo.
2023-07-19 10:20:32 -04:00
comfyanonymous
e032ca6138
Fix ddim issue with older torch versions.
2023-07-19 10:16:00 -04:00
comfyanonymous
18885f803a
Add MX450 and MX550 to list of cards with broken fp16.
2023-07-19 03:08:30 -04:00
comfyanonymous
9ba440995a
It's actually possible to torch.compile the unet now.
2023-07-18 21:36:35 -04:00
comfyanonymous
51d5477579
Add key to indicate checkpoint is v_prediction when saving.
2023-07-18 00:25:53 -04:00
comfyanonymous
ff6b047a74
Fix device print on old torch version.
2023-07-17 15:18:58 -04:00
comfyanonymous
9871a15cf9
Enable --cuda-malloc by default on torch 2.0 and up.
Add --disable-cuda-malloc to disable it.
2023-07-17 15:12:10 -04:00
comfyanonymous
55d0fca9fa
--windows-standalone-build now enables --cuda-malloc
2023-07-17 14:10:36 -04:00
comfyanonymous
1679abd86d
Add a command line argument to enable backend:cudaMallocAsync
2023-07-17 11:00:14 -04:00
comfyanonymous
3a150bad15
Only calculate randn in some samplers when it's actually being used.
2023-07-17 10:11:08 -04:00
comfyanonymous
ee8f8ee07f
Fix regression with ddim and uni_pc when batch size > 1.
2023-07-17 09:35:19 -04:00
comfyanonymous
3ded1a3a04
Refactor of sampler code to deal more easily with different model types.
2023-07-17 01:22:12 -04:00
comfyanonymous
5f57362613
Lower lora ram usage when in normal vram mode.
2023-07-16 02:59:04 -04:00
comfyanonymous
490771b7f4
Speed up lora loading a bit.
2023-07-15 13:25:22 -04:00
comfyanonymous
50b1180dde
Fix CLIPSetLastLayer not reverting when removed.
2023-07-15 01:41:21 -04:00
comfyanonymous
6fb084f39d
Reduce floating point rounding errors in loras.
2023-07-15 00:53:00 -04:00
comfyanonymous
91ed2815d5
Add a node to merge CLIP models.
2023-07-14 02:41:18 -04:00
comfyanonymous
b2f03164c7
Prevent the clip_g position_ids key from being saved in the checkpoint.
This is to make it match the official checkpoint.
2023-07-12 20:15:02 -04:00
comfyanonymous
46dc050c9f
Fix potential tensors being on different devices issues.
2023-07-12 19:29:27 -04:00
KarryCharon
3e2309f149
fix missing mps import
2023-07-12 10:06:34 +08:00
comfyanonymous
606a537090
Support SDXL embedding format with 2 CLIP.
2023-07-10 10:34:59 -04:00
comfyanonymous
6ad0a6d7e2
Don't patch weights when multiplier is zero.
2023-07-09 17:46:56 -04:00
comfyanonymous
d5323d16e0
latent2rgb matrix for SDXL.
2023-07-09 13:59:09 -04:00
comfyanonymous
0ae81c03bb
Empty cache after model unloading for normal vram and lower.
2023-07-09 09:56:03 -04:00
comfyanonymous
d3f5998218
Support loading clip_g from diffusers in CLIP Loader nodes.
2023-07-09 09:33:53 -04:00
comfyanonymous
a9a4ba7574
Fix merging not working when model2 of model merge node was a merge.
2023-07-08 22:31:10 -04:00
comfyanonymous
bb5fbd29e9
Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI
2023-07-07 01:52:25 -04:00
comfyanonymous
e7bee85df8
Add arguments to run the VAE in fp16 or bf16 for testing.
2023-07-06 23:23:46 -04:00
comfyanonymous
608fcc2591
Fix bug with weights when prompt is long.
2023-07-06 02:43:40 -04:00
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
2023-07-05 21:58:29 -04:00
comfyanonymous
603f02d613
Fix loras not working when loading checkpoint with config.
2023-07-05 19:42:24 -04:00
comfyanonymous
af7a49916b
Support loading unet files in diffusers format.
2023-07-05 17:38:59 -04:00
comfyanonymous
e57cba4c61
Add gpu variations of the sde samplers that are less deterministic but faster.
2023-07-05 01:39:38 -04:00
comfyanonymous
f81b192944
Add logit scale parameter so it's present when saving the checkpoint.
2023-07-04 23:01:28 -04:00
comfyanonymous
acf95191ff
Properly support SDXL diffusers loras for unet.
2023-07-04 21:15:23 -04:00
mara
c61a95f9f7
Fix size check for conditioning mask
The wrong dimensions were being checked: [1] and [2] are the image size,
not [2] and [3]. This resulted in an out-of-bounds error if one of them
actually matched.
2023-07-04 16:34:42 +02:00
comfyanonymous
8d694cc450
Fix issue with OSX.
2023-07-04 02:09:02 -04:00
comfyanonymous
c3e96e637d
Pass device to CLIP model.
2023-07-03 16:09:37 -04:00
comfyanonymous
5e6bc824aa
Allow passing custom path to clip-g and clip-h.
2023-07-03 15:45:04 -04:00
comfyanonymous
dc9d1f31c8
Improvements for OSX.
2023-07-03 00:08:30 -04:00
comfyanonymous
103c487a89
Cleanup.
2023-07-02 11:58:23 -04:00
comfyanonymous
2c4e0b49b7
Switch to fp16 on some cards when the model is too big.
2023-07-02 10:00:57 -04:00
comfyanonymous
6f3d9f52db
Add a --force-fp16 argument to force fp16 for testing.
2023-07-01 22:42:35 -04:00
comfyanonymous
1c1b0e7299
--gpu-only now keeps the VAE on the device.
2023-07-01 15:22:40 -04:00
comfyanonymous
ce35d8c659
Lower latency by batching some text encoder inputs.
2023-07-01 15:07:39 -04:00
comfyanonymous
3b6fe51c1d
Leave text_encoder on the CPU when it can handle it.
2023-07-01 14:38:51 -04:00
comfyanonymous
b6a60fa696
Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
comfyanonymous
97ee230682
Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running it on the CPU.
2023-07-01 12:37:23 -04:00
comfyanonymous
5a9ddf94eb
LoraLoader node now caches the lora file between executions.
2023-06-29 23:40:51 -04:00
comfyanonymous
9920367d3c
Fix embeddings not working with --gpu-only
2023-06-29 20:43:06 -04:00
comfyanonymous
62db11683b
Move unet to device right after loading on highvram mode.
2023-06-29 20:43:06 -04:00
comfyanonymous
4376b125eb
Remove useless code.
2023-06-29 00:26:33 -04:00
comfyanonymous
89120f1fbe
This is unused but it should be 1280.
2023-06-28 18:04:23 -04:00
comfyanonymous
2c7c14de56
Support for SDXL text encoder lora.
2023-06-28 02:22:49 -04:00
comfyanonymous
fcef47f06e
Fix bug.
2023-06-28 00:38:07 -04:00
comfyanonymous
8248babd44
Use pytorch attention by default on nvidia when xformers isn't present.
Add a new argument --use-quad-cross-attention
2023-06-26 13:03:44 -04:00
comfyanonymous
9b93b920be
Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.
Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32
Anything that patches the model weights like merging or loras will be
saved.
The output directory is currently set to: output/checkpoints but that might
change in the future.
2023-06-26 12:22:27 -04:00
comfyanonymous
b72a7a835a
Support loras based on the stability unet implementation.
2023-06-26 02:56:11 -04:00
comfyanonymous
c71a7e6b20
Fix ddim + inpainting not working.
2023-06-26 00:48:48 -04:00
comfyanonymous
4eab00e14b
Set the seed in the SDE samplers to make them more reproducible.
2023-06-25 03:04:57 -04:00
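The reproducibility fix above comes down to seeding the noise source; a minimal sketch using Python's stdlib RNG (the real code seeds torch's generator):

```python
import random

# A stochastic (SDE) sampler draws fresh noise every step; giving it a
# dedicated seeded generator makes repeated runs produce identical noise.

def make_noise(seed, n):
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = make_noise(42, 4)
b = make_noise(42, 4)  # same seed -> same noise
c = make_noise(43, 4)  # different seed -> different noise
```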
comfyanonymous
cef6aa62b2
Add support for TAESD decoder for SDXL.
2023-06-25 02:38:14 -04:00
comfyanonymous
20f579d91d
Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous
b7933960bb
Fix CLIPLoader node.
2023-06-24 13:56:46 -04:00
comfyanonymous
78d8035f73
Fix bug with controlnet.
2023-06-24 11:02:38 -04:00
comfyanonymous
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
comfyanonymous
fa28d7334b
Remove useless code.
2023-06-23 12:35:26 -04:00
comfyanonymous
8607c2d42d
Move latent scale factor from VAE to model.
2023-06-23 02:33:31 -04:00
comfyanonymous
30a3861946
Fix bug when yaml config has no clip params.
2023-06-23 01:12:59 -04:00
comfyanonymous
9e37f4c7d5
Fix error with ClipVision loader node.
2023-06-23 01:08:05 -04:00
comfyanonymous
9f83b098c9
Don't merge weights when shapes don't match and print a warning.
2023-06-22 19:08:31 -04:00
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
9fccf4aa03
Add original_shape parameter to transformer patch extra_options.
2023-06-21 13:22:01 -04:00
comfyanonymous
51581dbfa9
Fix last commits causing an issue with the text encoder lora.
2023-06-20 19:44:39 -04:00
comfyanonymous
8125b51a62
Keep a set of model_keys for faster add_patches.
2023-06-20 19:08:48 -04:00
comfyanonymous
45beebd33c
Add a type of model patch useful for model merging.
2023-06-20 17:34:11 -04:00
comfyanonymous
036a22077c
Fix k_diffusion math being off by a tiny bit during txt2img.
2023-06-19 15:28:54 -04:00
comfyanonymous
8883cb0f67
Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future proof.
2023-06-18 22:58:22 -04:00
comfyanonymous
cd930d4e7f
pop clip vision keys after loading them.
2023-06-18 21:21:17 -04:00
comfyanonymous
c9e4a8c9e5
Not needed anymore.
2023-06-18 13:06:59 -04:00
comfyanonymous
fb4bf7f591
This is not needed anymore and causes issues with alphas_cumprod.
2023-06-18 03:18:25 -04:00
comfyanonymous
45be2e92c1
Fix DDIM v-prediction.
2023-06-17 20:48:21 -04:00
comfyanonymous
e6e50ab2dd
Fix an issue when alphas_cumprod are half floats.
2023-06-16 17:16:51 -04:00
comfyanonymous
ae43f09ef7
All the unet weights should now be initialized with the right dtype.
2023-06-15 18:42:30 -04:00
comfyanonymous
f7edcfd927
Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous
7bf89ba923
Initialize more unet weights as the right dtype.
2023-06-15 15:00:10 -04:00
comfyanonymous
e21d9ad445
Initialize transformer unet block weights in right dtype at the start.
2023-06-15 14:29:26 -04:00
comfyanonymous
bb1f45d6e8
Properly disable weight initialization in clip models.
2023-06-14 20:13:08 -04:00
comfyanonymous
21f04fe632
Disable default weight values in unet conv2d for faster loading.
2023-06-14 19:46:08 -04:00
comfyanonymous
9d54066ebc
This isn't needed for inference.
2023-06-14 13:05:08 -04:00
comfyanonymous
fa2cca056c
Don't initialize CLIPVision weights to default values.
2023-06-14 12:57:02 -04:00
comfyanonymous
6b774589a5
Set model to fp16 before loading the state dict to lower ram bump.
2023-06-14 12:48:02 -04:00
comfyanonymous
0c7cad404c
Don't initialize clip weights to default values.
2023-06-14 12:47:36 -04:00
comfyanonymous
6971646b8b
Speed up model loading a bit.
Default pytorch Linear initializes the weights which is useless and slow.
2023-06-14 12:09:41 -04:00
comfyanonymous
388567f20b
sampler_cfg_function now uses a dict for the argument.
This means arguments can be added without issues.
2023-06-13 16:10:36 -04:00
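The dict-argument pattern mentioned above can be sketched like this (key names are illustrative): packing arguments into a dict lets new fields be added later without breaking existing callbacks.

```python
# A user-supplied cfg function receives one dict instead of positional
# arguments, so the caller can grow the dict without changing signatures.

def my_cfg_function(args):
    cond = args["cond"]
    uncond = args["uncond"]
    scale = args["cond_scale"]
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]

out = my_cfg_function({"cond": [1.0], "uncond": [0.0], "cond_scale": 8.0})
```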
comfyanonymous
ff9b22d79e
Turn on safe load for a few models.
2023-06-13 10:12:03 -04:00
comfyanonymous
735ac4cf81
Remove pytorch_lightning dependency.
2023-06-13 10:11:33 -04:00
comfyanonymous
2b14041d4b
Remove useless code.
2023-06-13 02:40:58 -04:00
comfyanonymous
274dff3257
Remove more useless files.
2023-06-13 02:22:19 -04:00
comfyanonymous
f0a2b81cd0
Cleanup: Remove a bunch of useless files.
2023-06-13 02:19:08 -04:00
comfyanonymous
f8c5931053
Split the batch in VAEEncode if there's not enough memory.
2023-06-12 00:21:50 -04:00
comfyanonymous
c069fc0730
Auto switch to tiled VAE encode if regular one runs out of memory.
2023-06-11 23:25:39 -04:00
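The auto-switch behavior above is a try/fallback pattern; a minimal sketch with illustrative names (the real code catches torch's CUDA out-of-memory exception rather than MemoryError):

```python
# Try the regular encode path first; if it runs out of memory, retry
# with the tiled path, which processes the image in smaller pieces.

def encode_with_fallback(encode, encode_tiled, pixels):
    try:
        return encode(pixels)
    except MemoryError:
        return encode_tiled(pixels)

calls = []
def regular(p):
    calls.append("regular")
    raise MemoryError
def tiled(p):
    calls.append("tiled")
    return "ok"

result = encode_with_fallback(regular, tiled, object())
```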
comfyanonymous
c64ca8c0b2
Refactor unCLIP noise augment out of samplers.py
2023-06-11 04:01:18 -04:00
comfyanonymous
de142eaad5
Simpler base model code.
2023-06-09 12:31:16 -04:00
comfyanonymous
23cf8ca7c5
Fix bug when embedding gets ignored because of mismatched size.
2023-06-08 23:48:14 -04:00
comfyanonymous
0e425603fb
Small refactor.
2023-06-06 13:23:01 -04:00
comfyanonymous
a3a713b6c5
Refactor previews into one command line argument.
Clean up a few things.
2023-06-06 02:13:05 -04:00
space-nuko
3e17971acb
preview method autodetection
2023-06-05 18:59:10 -05:00
space-nuko
d5a28fadaa
Add latent2rgb preview
2023-06-05 18:39:56 -05:00
space-nuko
48f7ec750c
Make previews into cli option
2023-06-05 13:19:02 -05:00
space-nuko
b4f434ee66
Preview sampled images with TAESD
2023-06-05 09:20:17 -05:00
comfyanonymous
fed0a4dd29
Some comments to say what the vram state options mean.
2023-06-04 17:51:04 -04:00
comfyanonymous
0a5fefd621
Cleanups and fixes for model_management.py
Hopefully fix regression on MPS and CPU.
2023-06-03 11:05:37 -04:00
comfyanonymous
700491d81a
Implement global average pooling for controlnet.
2023-06-03 01:49:03 -04:00
comfyanonymous
67892b5ac5
Refactor and improve model_management code related to free memory.
2023-06-02 15:21:33 -04:00
space-nuko
499641ebf1
More accurate total
2023-06-02 00:14:41 -05:00
space-nuko
b5dd15c67a
System stats endpoint
2023-06-01 23:26:23 -05:00
comfyanonymous
5c38958e49
Tweak lowvram model memory so it's closer to what it was before.
2023-06-01 04:04:35 -04:00
comfyanonymous
94680732d3
Empty cache on mps.
2023-06-01 03:52:51 -04:00
comfyanonymous
03da8a3426
This is useless for inference.
2023-05-31 13:03:24 -04:00
comfyanonymous
eb448dd8e1
Auto load model in lowvram if not enough memory.
2023-05-30 12:36:41 -04:00
comfyanonymous
b9818eb910
Add route to get safetensors metadata:
/view_metadata/loras?filename=lora.safetensors
2023-05-29 02:48:50 -04:00
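Reading that metadata is straightforward because the safetensors format starts with an 8-byte little-endian header length followed by a JSON header, with metadata under the `__metadata__` key. A sketch (function name and metadata values are illustrative):

```python
import io
import json
import struct

def read_safetensors_metadata(f):
    """Parse the JSON header of a safetensors file and return its metadata."""
    (header_len,) = struct.unpack("<Q", f.read(8))
    header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

# Build a tiny in-memory file with the same layout to exercise the parser.
header = json.dumps({"__metadata__": {"ss_network_dim": "16"}}).encode()
blob = io.BytesIO(struct.pack("<Q", len(header)) + header)
meta = read_safetensors_metadata(blob)
```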
comfyanonymous
a532888846
Support VAEs in diffusers format.
2023-05-28 02:02:09 -04:00
comfyanonymous
0fc483dcfd
Refactor diffusers model convert code to be able to reuse it.
2023-05-28 01:55:40 -04:00
comfyanonymous
eb4bd7711a
Remove einops.
2023-05-25 18:42:56 -04:00
comfyanonymous
87ab25fac7
Do operations in the same order as the code it replaces.
2023-05-25 18:31:27 -04:00
comfyanonymous
2b1fac9708
Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI
2023-05-25 14:44:16 -04:00
comfyanonymous
e1278fa925
Support old pytorch versions that don't have weights_only.
2023-05-25 13:30:59 -04:00
BlenderNeko
8b4b0c3188
vectorized bislerp
2023-05-25 19:23:47 +02:00
comfyanonymous
b8ccbec6d8
Various improvements to bislerp.
2023-05-23 11:40:24 -04:00
comfyanonymous
34887b8885
Add experimental bislerp algorithm for latent upscaling.
It's like bilinear but with slerp.
2023-05-23 03:12:56 -04:00
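The building block of "bilinear but with slerp" is spherical linear interpolation between neighboring latent vectors; a minimal sketch (plain lists, not the vectorized implementation):

```python
import math

# Slerp interpolates along the arc between two vectors, preserving magnitude
# in a way plain linear interpolation does not.

def slerp(a, b, t):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    omega = math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    if omega < 1e-6:  # nearly parallel: fall back to lerp
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(omega)
    return [(math.sin((1 - t) * omega) / s) * x +
            (math.sin(t * omega) / s) * y for x, y in zip(a, b)]

# Midpoint between orthogonal unit vectors keeps unit length,
# whereas plain lerp would shrink it to length ~0.707.
mid = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```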
comfyanonymous
6cc450579b
Auto transpose images from exif data.
2023-05-22 00:22:24 -04:00
comfyanonymous
dc198650c0
sample_dpmpp_2m_sde no longer crashes when step == 1.
2023-05-21 11:34:29 -04:00
comfyanonymous
069657fbf3
Add DPM-Solver++(2M) SDE and exponential scheduler.
exponential scheduler is the one recommended with this sampler.
2023-05-21 01:46:03 -04:00
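An exponential sigma schedule, as recommended above for this sampler, interpolates log-sigma linearly between sigma_max and sigma_min; a sketch with illustrative parameter values:

```python
import math

# Each step divides sigma by the same ratio, i.e. the sigmas are evenly
# spaced in log space from sigma_max down to sigma_min.

def exponential_schedule(n, sigma_min, sigma_max):
    return [math.exp(math.log(sigma_max) +
                     i / (n - 1) * (math.log(sigma_min) - math.log(sigma_max)))
            for i in range(n)]

sigmas = exponential_schedule(5, 0.1, 10.0)
# descends from 10.0 to 0.1, with 1.0 exactly in the middle
```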
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2023-05-20 16:01:02 -04:00
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2023-05-20 15:07:21 -04:00
comfyanonymous
ef815ba1e2
Switch default scheduler to normal.
2023-05-15 00:29:56 -04:00
comfyanonymous
68d12b530e
Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI
2023-05-14 15:39:39 -04:00
comfyanonymous
3a1f47764d
Print the torch device that is used on startup.
2023-05-13 17:11:27 -04:00
BlenderNeko
1201d2eae5
Make nodes map over input lists (#579)
* allow nodes to map over lists
* make work with IS_CHANGED and VALIDATE_INPUTS
* give list outputs distinct socket shape
* add rebatch node
* add batch index logic
* add repeat latent batch
* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
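The list-mapping behavior from the PR above can be sketched as follows (function and key names are illustrative, not ComfyUI's executor code):

```python
# When any input is a list, run the node once per element, broadcasting
# scalar inputs and cycling shorter lists; otherwise run it once.

def map_node(fn, inputs):
    list_len = max((len(v) for v in inputs.values() if isinstance(v, list)),
                   default=0)
    if list_len == 0:
        return fn(**inputs)
    def pick(v, i):
        return v[i % len(v)] if isinstance(v, list) else v
    return [fn(**{k: pick(v, i) for k, v in inputs.items()})
            for i in range(list_len)]

double = lambda x, bias: x * 2 + bias
out = map_node(double, {"x": [1, 2, 3], "bias": 10})
```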
BlenderNeko
19c014f429
comment out annoying print statement
2023-05-12 23:57:40 +02:00
BlenderNeko
d9e088ddfd
minor changes for tiled sampler
2023-05-12 23:49:09 +02:00
comfyanonymous
f7c0f75d1f
Auto batching improvements.
Try batching when cond sizes don't match with smart padding.
2023-05-10 13:59:24 -04:00
comfyanonymous
314e526c5c
Not needed anymore because sampling works with any latent size.
2023-05-09 12:18:18 -04:00
comfyanonymous
c6e34963e4
Make t2i adapter work with any latent resolution.
2023-05-08 18:15:19 -04:00
comfyanonymous
a1f12e370d
Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI
2023-05-07 17:19:03 -04:00
comfyanonymous
6fc4917634
Make maximum_batch_area take into account the pytorch 2.0 attention function.
More conservative xformers maximum_batch_area.
2023-05-06 19:58:54 -04:00
comfyanonymous
678f933d38
maximum_batch_area for xformers.
Remove useless code.
2023-05-06 19:28:46 -04:00
EllangoK
8e03c789a2
auto-launch cli arg
2023-05-06 16:59:40 -04:00
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2023-05-05 18:11:41 -04:00
comfyanonymous
af9cc1fb6a
Search recursively in subfolders for embeddings.
2023-05-05 01:28:48 -04:00
comfyanonymous
6ee11d7bc0
Fix import.
2023-05-05 00:19:35 -04:00
comfyanonymous
bae4fb4a9d
Fix imports.
2023-05-04 18:10:29 -04:00
comfyanonymous
fcf513e0b6
Refactor.
2023-05-03 17:48:35 -04:00
comfyanonymous
a74e176a24
Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI
2023-05-03 16:24:56 -04:00
pythongosssss
5eeecf3fd5
remove unused import
2023-05-03 18:21:23 +01:00
pythongosssss
8912623ea9
use comfy progress bar
2023-05-03 18:19:22 +01:00
comfyanonymous
908dc1d5a8
Add a total_steps value to sampler callback.
2023-05-03 12:58:10 -04:00
pythongosssss
fdf57325f4
Merge remote-tracking branch 'origin/master' into tiled-progress
2023-05-03 17:33:42 +01:00
pythongosssss
27df74101e
reduce duplication
2023-05-03 17:33:19 +01:00
comfyanonymous
93c64afaa9
Use sampler callback instead of tqdm hook for progress bar.
2023-05-02 23:00:49 -04:00
pythongosssss
06ad35b493
added progress to encode + upscale
2023-05-02 19:18:07 +01:00
comfyanonymous
ba8a4c3667
Change latent resolution step to 8.
2023-05-02 14:17:51 -04:00
comfyanonymous
66c8aa5c3e
Make unet work with any input shape.
2023-05-02 13:31:43 -04:00
comfyanonymous
9c335a553f
LoKR support.
2023-05-01 18:18:23 -04:00
comfyanonymous
d3293c8339
Properly disable all progress bars when disable_pbar=True
2023-05-01 15:52:17 -04:00
BlenderNeko
a2e18b1504
allow disabling of progress bar when sampling
2023-04-30 18:59:58 +02:00
comfyanonymous
071011aebe
Mask strength should be separate from area strength.
2023-04-29 20:06:53 -04:00
comfyanonymous
870fae62e7
Merge branch 'condition_by_mask_node' of https://github.com/guill/ComfyUI
2023-04-29 15:05:18 -04:00
Jacob Segal
af02393c2a
Default to sampling entire image
...
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.
I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error
2. Indices are returned as integers instead of floats so they can be
used to index into tensors.
2023-04-29 00:16:58 -07:00
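The bounding-box behaviour described in that commit (no torchvision, no error on an all-zero mask, integer indices) can be sketched in pure Python. This is an illustrative sketch only, not the repository's actual implementation; the function name `mask_bounds` is invented for the example.

```python
def mask_bounds(mask):
    """Bounding box (x1, y1, x2, y2) of nonzero pixels in a 2D mask.

    Indices are returned as plain ints so they can index into tensors,
    and an all-zero mask falls back to the full image instead of raising.
    Illustrative sketch, not the repository's implementation.
    """
    h, w = len(mask), len(mask[0])
    # Collect coordinates of all "on" pixels.
    coords = [(x, y) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v > 0]
    if not coords:
        # All-zero mask: sample the entire image rather than erroring.
        return 0, 0, w, h
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    # +1 makes the bounds usable as exclusive slice ends.
    return min(xs), min(ys), max(xs) + 1, max(ys) + 1
```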
comfyanonymous
056e5545ff
Don't try to get vram from xpu or cuda when directml is enabled.
2023-04-29 00:28:48 -04:00
comfyanonymous
2ca934f7d4
You can now select the device index with: --directml id
...
Like this for example: --directml 1
2023-04-28 16:51:35 -04:00
comfyanonymous
3baded9892
Basic torch_directml support. Use --directml to use it.
2023-04-28 14:28:57 -04:00
Jacob Segal
e214c917ae
Add Condition by Mask node
...
This PR adds support for a Condition by Mask node. This node allows
conditioning to be limited to a non-rectangle area.
2023-04-27 20:03:27 -07:00
comfyanonymous
5a971cecdb
Add callback to sampler function.
...
Callback format is: callback(step, x0, x)
2023-04-27 04:38:44 -04:00
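The documented part of that commit is the callback shape, `callback(step, x0, x)`, where `x0` is the current prediction of the final latent and `x` is the noisy latent. A minimal sketch of building such a callback (the helper name and progress-printing behaviour are assumptions for illustration):

```python
def make_progress_callback(total_steps):
    """Build a callback matching the sampler's callback(step, x0, x) shape.

    Records which steps ran and prints simple progress. The sampler passes
    x0 (current prediction of the final latent) and x (noisy latent);
    this sketch ignores them.
    """
    history = []

    def callback(step, x0, x):
        history.append(step)
        print(f"step {step + 1}/{total_steps}")

    return callback, history
```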
comfyanonymous
aa57136dae
Some fixes to the batch masks PR.
2023-04-25 01:12:40 -04:00
comfyanonymous
c50208a703
Refactor more code to sample.py
2023-04-24 23:25:51 -04:00
comfyanonymous
7983b3a975
This is cleaner this way.
2023-04-24 22:45:35 -04:00
BlenderNeko
0b07b2cc0f
gligen tuple
2023-04-24 21:47:57 +02:00
pythongosssss
c8c9926eeb
Add progress to vae decode tiled
2023-04-24 11:55:44 +01:00
BlenderNeko
d9b1595f85
made sample functions more explicit
2023-04-24 12:53:10 +02:00
BlenderNeko
5818539743
add docstrings
2023-04-23 20:09:09 +02:00
BlenderNeko
8d2de420d3
Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI
2023-04-23 20:02:18 +02:00
BlenderNeko
2a09e2aa27
refactor/split various bits of code for sampling
2023-04-23 20:02:08 +02:00
comfyanonymous
5282f56434
Implement Linear hypernetworks.
...
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
comfyanonymous
6908f9c949
This makes pytorch2.0 attention perform a bit faster.
2023-04-22 14:30:39 -04:00
comfyanonymous
907010e082
Remove some useless code.
2023-04-20 23:58:25 -04:00
comfyanonymous
96b57a9ad6
Don't pass adm to model when it doesn't support it.
2023-04-19 21:11:38 -04:00
comfyanonymous
3696d1699a
Add support for GLIGEN textbox model.
2023-04-19 11:06:32 -04:00
comfyanonymous
884ea653c8
Add a way for nodes to set a custom CFG function.
2023-04-17 11:05:15 -04:00
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2023-04-15 19:04:33 -04:00
comfyanonymous
81d1f00df3
Some refactoring: from_tokens -> encode_from_tokens
2023-04-15 18:46:58 -04:00
comfyanonymous
719c26c3c9
Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI
2023-04-15 14:16:50 -04:00
BlenderNeko
d0b1b6c6bf
fixed improper padding
2023-04-15 19:38:21 +02:00
comfyanonymous
deb2b93e79
Move code to empty gpu cache to model_management.py
2023-04-15 11:19:07 -04:00
comfyanonymous
04d9bc13af
Safely load pickled embeds that don't load with weights_only=True.
2023-04-14 15:33:43 -04:00
BlenderNeko
da115bd78d
ensure backwards compat with optional args
2023-04-14 21:16:55 +02:00
BlenderNeko
752f7a162b
align behavior with old tokenize function
2023-04-14 21:02:45 +02:00
comfyanonymous
334aab05e5
Don't stop workflow if loading embedding fails.
2023-04-14 13:54:00 -04:00
BlenderNeko
73175cf58c
split tokenizer from encoder
2023-04-13 22:06:50 +02:00
BlenderNeko
8489cba140
add unique ID per word/embedding for tokenizer
2023-04-13 22:01:01 +02:00
comfyanonymous
92eca60ec9
Fix for new transformers version.
2023-04-09 15:55:21 -04:00
comfyanonymous
1e1875f674
Print xformers version and warning about 0.0.18
2023-04-09 01:31:47 -04:00
comfyanonymous
7e254d2f69
Clarify what --windows-standalone-build does.
2023-04-07 15:52:56 -04:00
comfyanonymous
44fea05064
Cleanup.
2023-04-07 02:31:46 -04:00
comfyanonymous
58ed0f2da4
Fix loading SD1.5 diffusers checkpoint.
2023-04-07 01:30:33 -04:00
comfyanonymous
8b9ac8fedb
Merge branch 'master' of https://github.com/sALTaccount/ComfyUI
2023-04-07 01:03:43 -04:00
comfyanonymous
64557d6781
Add a --force-fp32 argument to force fp32 for debugging.
2023-04-07 00:27:54 -04:00
comfyanonymous
bceccca0e5
Small refactor.
2023-04-06 23:53:54 -04:00
comfyanonymous
28a7205739
Merge branch 'ipex' of https://github.com/kwaa/ComfyUI-IPEX
2023-04-06 23:45:29 -04:00
藍+85CD
05eeaa2de5
Merge branch 'master' into ipex
2023-04-07 09:11:30 +08:00
EllangoK
28fff5d1db
fixes lack of support for multi configs
...
also adds some metavars to argparse
2023-04-06 19:06:39 -04:00
comfyanonymous
f84f2508cc
Rename the cors parameter to something more verbose.
2023-04-06 15:24:55 -04:00
EllangoK
48efae1608
makes cors a cli parameter
2023-04-06 15:06:22 -04:00
EllangoK
01c1fc669f
set listen flag to listen on all if specified
2023-04-06 13:19:00 -04:00
藍+85CD
3e2608e12b
Fix auto lowvram detection on CUDA
2023-04-06 15:44:05 +08:00
sALTaccount
60127a8304
diffusers loader
2023-04-05 23:57:31 -07:00
藍+85CD
7cb924f684
Use separate variables instead of `vram_state`
2023-04-06 14:24:47 +08:00
藍+85CD
84b9c0ac2f
Import intel_extension_for_pytorch as ipex
2023-04-06 12:27:22 +08:00
EllangoK
e5e587b1c0
separates out arg parser and imports args
2023-04-05 23:41:23 -04:00
藍+85CD
37713e3b0a
Add basic XPU device support
...
closed #387
2023-04-05 21:22:14 +08:00
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18
2023-04-04 22:22:02 -04:00
comfyanonymous
1718730e80
Ignore embeddings when sizes don't match and print a WARNING.
2023-04-04 11:49:29 -04:00
comfyanonymous
23524ad8c5
Remove print.
2023-04-03 22:58:54 -04:00
comfyanonymous
539ff487a8
Pull latest tomesd code from upstream.
2023-04-03 15:49:28 -04:00
comfyanonymous
f50b1fec69
Add noise augmentation setting to unCLIPConditioning.
2023-04-03 13:50:29 -04:00
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
comfyanonymous
0d972b85e6
This seems to give better quality in tome.
2023-03-31 18:36:18 -04:00
comfyanonymous
18a6c1db33
Add a TomePatchModel node to the _for_testing section.
...
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
comfyanonymous
61ec3c9d5d
Add a way to pass options to the transformers blocks.
2023-03-31 13:04:39 -04:00
comfyanonymous
afd65d3819
Fix noise mask not working with > 1 batch size on ksamplers.
2023-03-30 03:50:12 -04:00
comfyanonymous
b2554bc4dd
Split VAE decode batches depending on free memory.
2023-03-29 02:24:37 -04:00
comfyanonymous
0d65cb17b7
Fix ddim_uniform crashing with 37 steps.
2023-03-28 16:29:35 -04:00
Francesco Yoshi Gobbo
f55755f0d2
code cleanup
2023-03-27 06:48:09 +02:00
Francesco Yoshi Gobbo
cf0098d539
no lowvram state if cpu only
2023-03-27 04:51:18 +02:00
comfyanonymous
f5365c9c81
Fix ddim for Mac: #264
2023-03-26 00:36:54 -04:00
comfyanonymous
4adcea7228
I don't think controlnets were being handled correctly by MPS.
2023-03-24 14:33:16 -04:00
comfyanonymous
3c6ff8821c
Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI
2023-03-24 13:56:43 -04:00
Yurii Mazurevich
fc71e7ea08
Fixed typo
2023-03-24 19:39:55 +02:00
comfyanonymous
7f0fd99b5d
Make ddim work with --cpu
2023-03-24 11:39:51 -04:00
Yurii Mazurevich
4b943d2b60
Removed unnecessary comment
2023-03-24 14:15:30 +02:00
Yurii Mazurevich
89fd5ed574
Added MPS device support
2023-03-24 14:12:56 +02:00
comfyanonymous
dd095efc2c
Support loha that use cp decomposition.
2023-03-23 04:32:25 -04:00
comfyanonymous
94a7c895f4
Add loha support.
2023-03-23 03:40:12 -04:00
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2023-03-22 14:49:00 -04:00
comfyanonymous
4039616ca6
Less seams in tiled outputs at the cost of more processing.
2023-03-22 03:29:09 -04:00
comfyanonymous
c692509c2b
Try to improve VAEEncode memory usage a bit.
2023-03-22 02:45:18 -04:00
comfyanonymous
9d0665c8d0
Add laptop quadro cards to fp32 list.
2023-03-21 16:57:35 -04:00
comfyanonymous
cc309568e1
Add support for locon mid weights.
2023-03-21 14:51:51 -04:00
comfyanonymous
edfc4ca663
Try to fix a vram issue with controlnets.
2023-03-19 10:50:38 -04:00
comfyanonymous
b4b21be707
Fix area composition feathering not working properly.
2023-03-19 02:00:52 -04:00
comfyanonymous
50099bcd96
Support multiple paths for embeddings.
2023-03-18 03:08:43 -04:00
comfyanonymous
2e73367f45
Merge T2IAdapterLoader and ControlNetLoader.
...
Workflows will be auto updated.
2023-03-17 18:17:59 -04:00
comfyanonymous
ee46bef03a
Make --cpu have priority over everything else.
2023-03-13 21:30:01 -04:00
comfyanonymous
0e836d525e
use half() on fp16 models loaded with config.
2023-03-13 21:12:48 -04:00
comfyanonymous
986dd820dc
Use half() function on model when loading in fp16.
2023-03-13 20:58:09 -04:00
comfyanonymous
54dbfaf2ec
Remove omegaconf dependency and some ci changes.
2023-03-13 14:49:18 -04:00
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2023-03-13 12:45:54 -04:00
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2023-03-13 11:36:48 -04:00
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu used.
...
Added --windows-standalone-build option; currently it only makes the code
open up ComfyUI in the browser.
2023-03-12 15:44:16 -04:00
comfyanonymous
e33dc2b33b
Add a VAEEncodeTiled node.
2023-03-11 15:28:15 -05:00
comfyanonymous
1de86851b1
Try to fix memory issue.
2023-03-11 15:15:13 -05:00
comfyanonymous
2b1fce2943
Make tiled_scale work for downscaling.
2023-03-11 14:58:55 -05:00
comfyanonymous
9db2e97b47
Tiled upscaling with the upscale models.
2023-03-11 14:04:13 -05:00
comfyanonymous
cd64111c83
Add locon support.
2023-03-09 21:41:24 -05:00
comfyanonymous
c70f0ac64b
SD2.x controlnets now work.
2023-03-08 01:13:38 -05:00
comfyanonymous
19415c3ace
Relative imports to test something.
2023-03-07 11:00:35 -05:00
edikius
165be5828a
Fixed import ( #44 )
...
* fixed import error
I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it to start the app
* Update main.py
* deleted example files
2023-03-06 11:41:40 -05:00
comfyanonymous
501f19eec6
Fix clip_skip no longer being loaded from yaml file.
2023-03-06 11:34:02 -05:00
comfyanonymous
afff30fc0a
Add --cpu to use the cpu for inference.
2023-03-06 10:50:50 -05:00
comfyanonymous
47acb3d73e
Implement support for t2i style model.
...
It needs the CLIPVision model so I added CLIPVisionLoader and CLIPVisionEncode.
Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models
StyleModelLoader to load it, StyleModelApply to apply it
ConditioningAppend to append the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
comfyanonymous
cc8baf1080
Make VAE use common function to get free memory.
2023-03-05 14:20:07 -05:00
comfyanonymous
798c90e1c0
Fix pytorch 2.0 cross attention not working.
2023-03-05 14:14:54 -05:00
comfyanonymous
16130c7546
Add support for new colour T2I adapter model.
2023-03-03 19:13:07 -05:00
comfyanonymous
9d00235b41
Update T2I adapter code to latest.
2023-03-03 18:46:49 -05:00
comfyanonymous
ebfcf0a9c9
Fix issue.
2023-03-03 13:18:01 -05:00
comfyanonymous
4215206281
Add a node to set CLIP skip.
...
Use a simpler way to detect if the model is v-prediction.
2023-03-03 13:04:36 -05:00
comfyanonymous
fed315a76a
To be really simple CheckpointLoaderSimple should pick the right type.
2023-03-03 11:07:10 -05:00
comfyanonymous
94bb0375b0
New CheckpointLoaderSimple to load checkpoints without a config.
2023-03-03 03:37:35 -05:00
comfyanonymous
c1f5855ac1
Make some cross attention functions work on the CPU.
2023-03-03 03:27:33 -05:00
comfyanonymous
1a612e1c74
Add some pytorch scaled_dot_product_attention code for testing.
...
--use-pytorch-cross-attention to use it.
2023-03-02 17:01:20 -05:00
comfyanonymous
69cc75fbf8
Add a way to interrupt current processing in the backend.
2023-03-02 14:42:03 -05:00
comfyanonymous
9502ee45c3
Hopefully fix a strange issue with xformers + lowvram.
2023-02-28 13:48:52 -05:00
comfyanonymous
b31daadc03
Try to improve memory issues with del.
2023-02-28 12:27:43 -05:00
comfyanonymous
2c5f0ec681
Small adjustment.
2023-02-27 20:04:18 -05:00
comfyanonymous
86721d5158
Enable highvram automatically when vram >> ram
2023-02-27 19:57:39 -05:00
comfyanonymous
75fa162531
Remove sample_ from some sampler names.
...
Old workflows will still work.
2023-02-27 01:43:06 -05:00
comfyanonymous
9f4214e534
Preparing to add another function to load checkpoints.
2023-02-26 17:29:01 -05:00
comfyanonymous
3cd7d84b53
Fix uni_pc sampler not working with 1 or 2 steps.
2023-02-26 04:01:01 -05:00
comfyanonymous
dfb397e034
Fix multiple controlnets not working.
2023-02-25 22:12:22 -05:00
comfyanonymous
af3cc1b5fb
Fixed issue when batched image was used as a controlnet input.
2023-02-25 14:57:28 -05:00
comfyanonymous
d2da346b0b
Fix missing variable.
2023-02-25 12:19:03 -05:00
comfyanonymous
4e6b83a80a
Add a T2IAdapterLoader node to load T2I-Adapter models.
...
They are loaded as CONTROL_NET objects because they are similar.
2023-02-25 01:24:56 -05:00
comfyanonymous
fcb25d37db
Prepare for t2i adapter.
2023-02-24 23:36:17 -05:00
comfyanonymous
cf5a211efc
Remove some useless imports
2023-02-24 12:36:55 -05:00
comfyanonymous
87b00b37f6
Added an experimental VAEDecodeTiled.
...
This decodes the image with the VAE in tiles which should be faster and
use less vram.
It's in the _for_testing section, so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs, which means don't depend too much on it.
2023-02-24 02:10:10 -05:00
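The idea behind tiled decoding is a walk over overlapping tile origins so seams can be blended, trading extra compute for lower vram. Here is a minimal sketch of computing such origins; the function name, tile size, and overlap are illustrative assumptions, not the node's actual parameters.

```python
def tile_coords(height, width, tile=64, overlap=16):
    """Return (y, x) origins of overlapping tiles covering the image.

    Origins advance by (tile - overlap) and the last tile is clamped so
    it never runs past the edge. Illustrative sketch of the tiling walk
    a node like VAEDecodeTiled might perform.
    """
    stride = tile - overlap
    ys = sorted({max(min(y, height - tile), 0) for y in range(0, height, stride)})
    xs = sorted({max(min(x, width - tile), 0) for x in range(0, width, stride)})
    return [(y, x) for y in ys for x in xs]
```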
comfyanonymous
62df8dd62a
Add a node to load diff controlnets.
2023-02-22 23:22:03 -05:00
comfyanonymous
f04dc2c2f4
Implement DDIM sampler.
2023-02-22 21:10:19 -05:00
comfyanonymous
2976c1ad28
Uni_PC: make max denoise behave more like other samplers.
...
On the KSamplers, a denoise of 1.0 is the same as txt2img, but there was a
small difference on UniPC.
2023-02-22 02:21:06 -05:00
comfyanonymous
c9daec4c89
Remove prints that are useless when xformers is enabled.
2023-02-21 22:16:13 -05:00
comfyanonymous
a7328e4945
Add uni_pc bh2 variant.
2023-02-21 16:11:48 -05:00
comfyanonymous
d80af7ca30
ControlNetApply now stacks.
...
It can be used to apply multiple control nets at the same time.
2023-02-21 01:18:53 -05:00
comfyanonymous
00a9189e30
Support old pytorch.
2023-02-19 16:59:03 -05:00
comfyanonymous
137ae2606c
Support people putting commas after the embedding name in the prompt.
2023-02-19 02:50:48 -05:00
comfyanonymous
2326ff1263
Add: --highvram for when you want models to stay on the vram.
2023-02-17 21:27:02 -05:00
comfyanonymous
09f1d76ed8
Fix an OOM issue.
2023-02-17 16:21:01 -05:00
comfyanonymous
d66415c021
Low vram mode for controlnets.
2023-02-17 15:48:16 -05:00
comfyanonymous
220a72d36b
Use fp16 for fp16 control nets.
2023-02-17 15:31:38 -05:00
comfyanonymous
6135a21ee8
Add a way to control controlnet strength.
2023-02-16 18:08:01 -05:00
comfyanonymous
4efa67fa12
Add ControlNet support.
2023-02-16 10:38:08 -05:00
comfyanonymous
bc69fb5245
Use inpaint models the proper way by using VAEEncodeForInpaint.
2023-02-15 20:44:51 -05:00
comfyanonymous
cef2cc3cb0
Support for inpaint models.
2023-02-15 16:38:20 -05:00
comfyanonymous
07db00355f
Add masks to samplers code for inpainting.
2023-02-15 13:16:38 -05:00
comfyanonymous
e3451cea4f
uni_pc now works with KSamplerAdvanced return_with_leftover_noise.
2023-02-13 12:29:21 -05:00
comfyanonymous
f542f248f1
Show the right amount of steps in the progress bar for uni_pc.
...
The extra step doesn't actually call the unet so it doesn't belong in
the progress bar.
2023-02-11 14:59:42 -05:00
comfyanonymous
f10b8948c3
768-v support for uni_pc sampler.
2023-02-11 04:34:58 -05:00
comfyanonymous
ce0aeb109e
Remove print.
2023-02-11 03:41:40 -05:00
comfyanonymous
5489d5af04
Add uni_pc sampler to KSampler* nodes.
2023-02-11 03:34:09 -05:00
comfyanonymous
1a4edd19cd
Fix overflow issue with inplace softmax.
2023-02-10 11:47:41 -05:00
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2023-02-10 03:13:49 -05:00
comfyanonymous
7e1e193f39
Automatically enable lowvram mode if vram is less than 4GB.
...
Use: --normalvram to disable it.
2023-02-10 00:47:56 -05:00
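The decision described above (auto-enable low-vram mode under roughly 4GB, with --normalvram as an override) reduces to a small threshold check. A minimal sketch, with the function name, mode strings, and exact threshold as assumptions for illustration:

```python
def pick_vram_mode(total_vram_mb, forced_normal=False):
    """Choose a vram mode from total device memory in megabytes.

    Under ~4 GB low-vram mode is enabled automatically; passing the
    equivalent of --normalvram overrides the auto-detection.
    Illustrative sketch, not the repository's actual logic.
    """
    if forced_normal:
        return "NORMAL_VRAM"
    return "LOW_VRAM" if total_vram_mb < 4096 else "NORMAL_VRAM"
```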
comfyanonymous
324273fff2
Fix embedding not working when on new line.
2023-02-09 14:12:02 -05:00
comfyanonymous
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2023-02-09 13:47:36 -05:00
comfyanonymous
773cdabfce
Same thing but for the other places where it's used.
2023-02-09 12:43:29 -05:00
comfyanonymous
df40d4f3bf
torch.cuda.OutOfMemoryError is not present on older pytorch versions.
2023-02-09 12:33:27 -05:00
comfyanonymous
e8c499ddd4
Split optimization for VAE attention block.
2023-02-08 22:04:20 -05:00
comfyanonymous
5b4e312749
Use inplace operations for less OOM issues.
2023-02-08 22:04:13 -05:00
comfyanonymous
3fd87cbd21
Slightly smarter batching behaviour.
...
Try to keep batch sizes more consistent which seems to improve things on
AMD GPUs.
2023-02-08 17:28:43 -05:00
comfyanonymous
bbdcf0b737
Use relative imports for k_diffusion.
2023-02-08 16:51:19 -05:00
comfyanonymous
708138c77d
Remove print.
2023-02-08 14:51:18 -05:00
comfyanonymous
047775615b
Lower the chances of an OOM.
2023-02-08 14:24:27 -05:00
comfyanonymous
853e96ada3
Increase it/s by batching together some stuff sent to unet.
2023-02-08 14:24:00 -05:00
comfyanonymous
c92633eaa2
Auto calculate amount of memory to use for --lowvram
2023-02-08 11:42:37 -05:00
comfyanonymous
534736b924
Add some low vram modes: --lowvram and --novram
2023-02-08 11:37:10 -05:00
comfyanonymous
a84cd0d1ad
Don't unload/reload model from CPU uselessly.
2023-02-08 03:40:43 -05:00
comfyanonymous
b1a7c9ebf6
Embeddings/textual inversion support for SD2.x
2023-02-05 15:49:03 -05:00
comfyanonymous
1de5aa6a59
Add a CLIPLoader node to load standalone clip weights.
...
Put them in models/clip
2023-02-05 15:20:18 -05:00
comfyanonymous
56d802e1f3
Use transformers CLIP instead of open_clip for SD2.x
...
This should make things a bit cleaner.
2023-02-05 14:36:28 -05:00
comfyanonymous
bf9ccffb17
Small fix for SD2.x loras.
2023-02-05 11:38:25 -05:00
comfyanonymous
678105fade
SD2.x CLIP support for Loras.
2023-02-05 01:54:09 -05:00
comfyanonymous
ef90e9c376
Add a LoraLoader node to apply loras to models and clip.
...
The models are modified in place before being used and unpatched after.
I think this is better than monkeypatching since it might make it easier
to use faster non pytorch unet inference in the future.
2023-02-03 02:46:24 -05:00
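The patch-in-place-then-unpatch approach described in that commit can be sketched with a tiny class: weight deltas are added to the live weights before sampling and subtracted afterwards, with no monkeypatching. The class and method names are invented for the example.

```python
class PatchedModel:
    """Sketch of patching weights in place and unpatching after use.

    Deltas (e.g. from a lora) are added to the live weights before
    inference and subtracted afterwards, restoring the original model.
    Illustrative only, not ComfyUI's actual patcher.
    """

    def __init__(self, weights):
        self.weights = dict(weights)
        self._patches = {}

    def add_patch(self, name, delta):
        self._patches[name] = delta

    def patch(self):
        # Apply every registered delta to the live weights.
        for name, delta in self._patches.items():
            self.weights[name] += delta

    def unpatch(self):
        # Reverse the deltas, restoring the original weights.
        for name, delta in self._patches.items():
            self.weights[name] -= delta
```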
comfyanonymous
69df7eba94
Add KSamplerAdvanced node.
...
This node exposes more sampling options and makes it possible for example
to sample the first few steps on the latent image, do some operations on it
and then do the rest of the sampling steps. This can be achieved using the
start_at_step and end_at_step options.
2023-01-31 03:09:38 -05:00
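The start_at_step/end_at_step mechanics described above amount to slicing one step schedule across two sampler runs, so a second pass can pick up exactly where the first left off. A minimal sketch (the function name is an illustrative assumption):

```python
def split_schedule(steps, start_at_step, end_at_step):
    """Return the step indices one sampler pass should run.

    With two passes, e.g. (steps=20, start=0, end=10) then
    (steps=20, start=10, end=10000), the concatenation covers all steps,
    letting you edit the latent between passes. Illustrative sketch.
    """
    end = min(end_at_step, steps)
    return list(range(max(start_at_step, 0), end))
```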
comfyanonymous
1daccf3678
Run softmax in place if it OOMs.
2023-01-30 19:55:01 -05:00
comfyanonymous
f73e57d881
Add support for textual inversion embedding for SD1.x CLIP.
2023-01-29 18:46:44 -05:00
comfyanonymous
50db297cf6
Try to fix OOM issues with cards that have less vram than mine.
2023-01-29 00:50:46 -05:00
comfyanonymous
73f60740c8
Slightly cleaner code.
2023-01-28 02:14:22 -05:00
comfyanonymous
0108616b77
Fix issue with some models.
2023-01-28 01:38:42 -05:00
comfyanonymous
2973ff24c5
Round CLIP position ids to fix float issues in some checkpoints.
2023-01-28 00:19:33 -05:00
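The fix above addresses checkpoints whose CLIP position ids were stored as floats with small rounding error (e.g. 2.9999999), which truncation would turn into the wrong index. Rounding before casting restores the intended integers. A pure-Python sketch of the idea, with the function name invented for illustration:

```python
def fix_position_ids(position_ids):
    """Round float position ids back to exact integer indices.

    Truncating 2.9999999 would give 2; rounding first recovers the
    intended 3. Sketch of the idea behind the fix, not the actual code.
    """
    return [int(round(p)) for p in position_ids]
```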
comfyanonymous
c4b02059d0
Add ConditioningSetArea node.
...
to apply conditioning/prompts only to a specific area of the image.
Add ConditioningCombine node
so that multiple conditioning/prompts can be applied to the image at the
same time.
2023-01-26 12:06:48 -05:00
comfyanonymous
acdc6f42e0
Fix loading some malformed checkpoints?
2023-01-25 15:20:55 -05:00
comfyanonymous
051f472e8f
Fix sub quadratic attention for SD2 and make it the default optimization.
2023-01-25 01:22:43 -05:00
comfyanonymous
220afe3310
Initial commit.
2023-01-16 22:37:14 -05:00