comfyanonymous
539ff487a8
Pull latest tomesd code from upstream.
2023-04-03 15:49:28 -04:00
comfyanonymous
f50b1fec69
Add noise augmentation setting to unCLIPConditioning.
2023-04-03 13:50:29 -04:00
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning and takes as input a
CLIPVisionEncode output, which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
comfyanonymous
0d972b85e6
This seems to give better quality in tome.
2023-03-31 18:36:18 -04:00
comfyanonymous
18a6c1db33
Add a TomePatchModel node to the _for_testing section.
...
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
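For context on the speed/quality trade-off mentioned above, here is a minimal sketch of ToMe-style bipartite soft matching, heavily simplified (plain averaging, no unmerge step, collisions between merged tokens ignored), so it illustrates the technique rather than the upstream tomesd code:

```python
import torch

def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    # Simplified sketch, not the tomesd implementation: split the tokens into
    # two sets, find the r most similar (a, b) pairs by cosine similarity, and
    # average each merged a-token into its matched b-token.
    a, b = x[:, ::2], x[:, 1::2]
    a_n = a / a.norm(dim=-1, keepdim=True)
    b_n = b / b.norm(dim=-1, keepdim=True)
    scores = a_n @ b_n.transpose(-1, -2)              # (batch, Na, Nb) similarities
    best_val, best_idx = scores.max(dim=-1)           # best b-match for every a-token
    order = best_val.argsort(dim=-1, descending=True)
    merged, kept = order[:, :r], order[:, r:]         # which a-tokens get merged away

    batch = torch.arange(x.shape[0]).unsqueeze(-1)
    dst = b.clone()
    dst_idx = best_idx[batch, merged]
    dst[batch, dst_idx] = (dst[batch, dst_idx] + a[batch, merged]) / 2
    return torch.cat([a[batch, kept], dst], dim=1)    # (batch, N - r, channels)
```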
comfyanonymous
61ec3c9d5d
Add a way to pass options to the transformers blocks.
2023-03-31 13:04:39 -04:00
comfyanonymous
afd65d3819
Fix noise mask not working with > 1 batch size on ksamplers.
2023-03-30 03:50:12 -04:00
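A minimal sketch of the kind of batched masking this fix concerns; the tensor layout and the repeat strategy are assumptions for illustration, not the actual ksampler code:

```python
import torch

def apply_noise_mask(latent: torch.Tensor, noise: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    # Illustrative only: a single-image mask has to be repeated across the
    # batch before it can gate where noise is injected, otherwise batch sizes
    # greater than 1 end up paired with a mismatched mask.
    if mask.shape[0] != latent.shape[0]:
        mask = mask.repeat(latent.shape[0], 1, 1, 1)[:latent.shape[0]]
    return latent * (1.0 - mask) + noise * mask
```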
comfyanonymous
b2554bc4dd
Split VAE decode batches depending on free memory.
2023-03-29 02:24:37 -04:00
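A rough sketch of memory-aware batch splitting, assuming free VRAM is queried with torch.cuda.mem_get_info(); decode_fn and bytes_per_sample are illustrative stand-ins, not ComfyUI's actual interfaces:

```python
import torch

def batched_vae_decode(decode_fn, latents: torch.Tensor, bytes_per_sample: int):
    # Size the decode batch from the VRAM that is currently free,
    # then decode the latents chunk by chunk and stitch the results.
    free_mem, _total = torch.cuda.mem_get_info()
    batch = max(1, int(free_mem // bytes_per_sample))
    pieces = [decode_fn(latents[i:i + batch])
              for i in range(0, latents.shape[0], batch)]
    return torch.cat(pieces, dim=0)
```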
comfyanonymous
0d65cb17b7
Fix ddim_uniform crashing with 37 steps.
2023-03-28 16:29:35 -04:00
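An illustrative reconstruction of one plausible failure mode (the classic off-by-one in uniform DDIM timestep spacing) and a spacing that avoids it; this is a sketch, not necessarily the exact cause or fix:

```python
import numpy as np

def ddim_uniform_timesteps(num_steps: int, num_train_timesteps: int = 1000) -> np.ndarray:
    # With 37 steps, 1000 // 37 == 27 and range(0, 1000, 27) ends at 999, so
    # the usual "+ 1" shift indexes timestep 1000, one past the end of
    # alphas_cumprod.  Spacing the steps with linspace keeps every index valid.
    steps = np.linspace(0, num_train_timesteps - 1, num_steps)
    return steps.round().astype(int)
```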
Francesco Yoshi Gobbo
f55755f0d2
code cleanup
2023-03-27 06:48:09 +02:00
Francesco Yoshi Gobbo
cf0098d539
no lowvram state if cpu only
2023-03-27 04:51:18 +02:00
comfyanonymous
f5365c9c81
Fix ddim for Mac: #264
2023-03-26 00:36:54 -04:00
comfyanonymous
4adcea7228
I don't think controlnets were being handled correctly by MPS.
2023-03-24 14:33:16 -04:00
comfyanonymous
3c6ff8821c
Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI
2023-03-24 13:56:43 -04:00
Yurii Mazurevich
fc71e7ea08
Fixed typo
2023-03-24 19:39:55 +02:00
comfyanonymous
7f0fd99b5d
Make ddim work with --cpu
2023-03-24 11:39:51 -04:00
Yurii Mazurevich
4b943d2b60
Removed unnecessary comment
2023-03-24 14:15:30 +02:00
Yurii Mazurevich
89fd5ed574
Added MPS device support
2023-03-24 14:12:56 +02:00
comfyanonymous
dd095efc2c
Support loha that use cp decomposition.
2023-03-23 04:32:25 -04:00
comfyanonymous
94a7c895f4
Add loha support.
2023-03-23 03:40:12 -04:00
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2023-03-22 14:49:00 -04:00
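A minimal sketch of the retry logic, assuming the decode and tiled-decode entry points shown here; torch.cuda.OutOfMemoryError requires PyTorch 1.13+:

```python
import torch

def decode_with_fallback(vae, samples):
    # Try the regular decode first and only drop to the slower tiled path
    # when CUDA reports it is out of memory.
    try:
        return vae.decode(samples)
    except torch.cuda.OutOfMemoryError:
        print("Regular VAE decode ran out of memory, retrying with tiled decode")
        torch.cuda.empty_cache()
        return vae.decode_tiled(samples)
```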
comfyanonymous
4039616ca6
Less seams in tiled outputs at the cost of more processing.
2023-03-22 03:29:09 -04:00
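A minimal sketch of why fewer seams cost more processing: overlapping tiles are blended with a ramped weight mask, so the overlapped pixels are computed more than once. The mask below is illustrative, not the repository's tiling code:

```python
import torch

def feather_mask(tile_h: int, tile_w: int, overlap: int) -> torch.Tensor:
    # Build a per-tile weight that ramps from ~0 to 1 over the overlap region
    # on every edge, so neighbouring tiles blend instead of butting together.
    mask = torch.ones(tile_h, tile_w)
    ramp = torch.linspace(1.0 / overlap, 1.0, overlap)
    mask[:overlap, :] *= ramp.view(-1, 1)            # fade in from the top edge
    mask[-overlap:, :] *= ramp.flip(0).view(-1, 1)   # fade out toward the bottom
    mask[:, :overlap] *= ramp.view(1, -1)            # fade in from the left
    mask[:, -overlap:] *= ramp.flip(0).view(1, -1)   # fade out toward the right
    return mask
```

Accumulate mask-weighted tiles plus the mask itself over the output canvas, then divide at the end to normalize the overlapping contributions.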
comfyanonymous
c692509c2b
Try to improve VAEEncode memory usage a bit.
2023-03-22 02:45:18 -04:00
comfyanonymous
9d0665c8d0
Add laptop quadro cards to fp32 list.
2023-03-21 16:57:35 -04:00
comfyanonymous
cc309568e1
Add support for locon mid weights.
2023-03-21 14:51:51 -04:00
comfyanonymous
edfc4ca663
Try to fix a vram issue with controlnets.
2023-03-19 10:50:38 -04:00
comfyanonymous
b4b21be707
Fix area composition feathering not working properly.
2023-03-19 02:00:52 -04:00
comfyanonymous
50099bcd96
Support multiple paths for embeddings.
2023-03-18 03:08:43 -04:00
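A minimal sketch of multi-directory embedding lookup; the extension list and lookup order are assumptions, not the exact loader behaviour:

```python
import os
from typing import List, Optional

def find_embedding(name: str, embedding_dirs: List[str]) -> Optional[str]:
    # Return the first file matching the embedding name across all of the
    # configured embedding folders, trying a few common extensions.
    for directory in embedding_dirs:
        for ext in ("", ".safetensors", ".pt", ".bin"):
            path = os.path.join(directory, name + ext)
            if os.path.isfile(path):
                return path
    return None
```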
comfyanonymous
2e73367f45
Merge T2IAdapterLoader and ControlNetLoader.
...
Workflows will be auto updated.
2023-03-17 18:17:59 -04:00
comfyanonymous
ee46bef03a
Make --cpu have priority over everything else.
2023-03-13 21:30:01 -04:00
comfyanonymous
0e836d525e
use half() on fp16 models loaded with config.
2023-03-13 21:12:48 -04:00
comfyanonymous
986dd820dc
Use half() function on model when loading in fp16.
2023-03-13 20:58:09 -04:00
comfyanonymous
54dbfaf2ec
Remove omegaconf dependency and some ci changes.
2023-03-13 14:49:18 -04:00
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2023-03-13 12:45:54 -04:00
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2023-03-13 11:36:48 -04:00
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu is used.
...
Added --windows-standalone-build option; currently it only makes
comfyui open up in the browser.
2023-03-12 15:44:16 -04:00
comfyanonymous
e33dc2b33b
Add a VAEEncodeTiled node.
2023-03-11 15:28:15 -05:00
comfyanonymous
1de86851b1
Try to fix memory issue.
2023-03-11 15:15:13 -05:00
comfyanonymous
2b1fce2943
Make tiled_scale work for downscaling.
2023-03-11 14:58:55 -05:00
comfyanonymous
9db2e97b47
Tiled upscaling with the upscale models.
2023-03-11 14:04:13 -05:00
comfyanonymous
cd64111c83
Add locon support.
2023-03-09 21:41:24 -05:00
comfyanonymous
c70f0ac64b
SD2.x controlnets now work.
2023-03-08 01:13:38 -05:00
comfyanonymous
19415c3ace
Relative imports to test something.
2023-03-07 11:00:35 -05:00
edikius
165be5828a
Fixed import ( #44 )
...
* fixed import error
I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app can start.
* Update main.py
* deleted example files
2023-03-06 11:41:40 -05:00
comfyanonymous
501f19eec6
Fix clip_skip no longer being loaded from yaml file.
2023-03-06 11:34:02 -05:00
comfyanonymous
afff30fc0a
Add --cpu to use the cpu for inference.
2023-03-06 10:50:50 -05:00
comfyanonymous
47acb3d73e
Implement support for t2i style model.
...
It needs the CLIPVision model, so I added CLIPVisionLoader and CLIPVisionEncode.
Put the clip vision model in models/clip_vision.
Put the t2i style model in models/style_models.
StyleModelLoader loads it, StyleModelApply applies it, and
ConditioningAppend appends the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
comfyanonymous
cc8baf1080
Make VAE use common function to get free memory.
2023-03-05 14:20:07 -05:00
comfyanonymous
798c90e1c0
Fix pytorch 2.0 cross attention not working.
2023-03-05 14:14:54 -05:00
comfyanonymous
16130c7546
Add support for new colour T2I adapter model.
2023-03-03 19:13:07 -05:00
comfyanonymous
9d00235b41
Update T2I adapter code to latest.
2023-03-03 18:46:49 -05:00
comfyanonymous
ebfcf0a9c9
Fix issue.
2023-03-03 13:18:01 -05:00
comfyanonymous
4215206281
Add a node to set CLIP skip.
...
Use a simpler way to detect if the model is v-prediction.
2023-03-03 13:04:36 -05:00
comfyanonymous
fed315a76a
To be really simple, CheckpointLoaderSimple should pick the right type.
2023-03-03 11:07:10 -05:00
comfyanonymous
94bb0375b0
New CheckpointLoaderSimple to load checkpoints without a config.
2023-03-03 03:37:35 -05:00
comfyanonymous
c1f5855ac1
Make some cross attention functions work on the CPU.
2023-03-03 03:27:33 -05:00
comfyanonymous
1a612e1c74
Add some pytorch scaled_dot_product_attention code for testing.
...
--use-pytorch-cross-attention to use it.
2023-03-02 17:01:20 -05:00
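A minimal sketch of what routing attention through PyTorch 2.0's torch.nn.functional.scaled_dot_product_attention looks like; the reshaping assumes (batch, sequence, heads * dim_head) inputs and is illustrative rather than the repository's implementation:

```python
import torch
import torch.nn.functional as F

def pytorch_cross_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                            heads: int) -> torch.Tensor:
    # Split heads, call the fused PyTorch 2.0 kernel, then merge heads back.
    b, _, inner = q.shape
    dim_head = inner // heads
    q, k, v = (t.view(b, -1, heads, dim_head).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, -1, heads * dim_head)
```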
comfyanonymous
69cc75fbf8
Add a way to interrupt current processing in the backend.
2023-03-02 14:42:03 -05:00
comfyanonymous
9502ee45c3
Hopefully fix a strange issue with xformers + lowvram.
2023-02-28 13:48:52 -05:00
comfyanonymous
b31daadc03
Try to improve memory issues with del.
2023-02-28 12:27:43 -05:00
comfyanonymous
2c5f0ec681
Small adjustment.
2023-02-27 20:04:18 -05:00
comfyanonymous
86721d5158
Enable highvram automatically when vram >> ram
2023-02-27 19:57:39 -05:00
comfyanonymous
75fa162531
Remove sample_ from some sampler names.
...
Old workflows will still work.
2023-02-27 01:43:06 -05:00
comfyanonymous
9f4214e534
Preparing to add another function to load checkpoints.
2023-02-26 17:29:01 -05:00
comfyanonymous
3cd7d84b53
Fix uni_pc sampler not working with 1 or 2 steps.
2023-02-26 04:01:01 -05:00
comfyanonymous
dfb397e034
Fix multiple controlnets not working.
2023-02-25 22:12:22 -05:00
comfyanonymous
af3cc1b5fb
Fixed issue when batched image was used as a controlnet input.
2023-02-25 14:57:28 -05:00
comfyanonymous
d2da346b0b
Fix missing variable.
2023-02-25 12:19:03 -05:00
comfyanonymous
4e6b83a80a
Add a T2IAdapterLoader node to load T2I-Adapter models.
...
They are loaded as CONTROL_NET objects because they are similar.
2023-02-25 01:24:56 -05:00
comfyanonymous
fcb25d37db
Prepare for t2i adapter.
2023-02-24 23:36:17 -05:00
comfyanonymous
cf5a211efc
Remove some useless imports
2023-02-24 12:36:55 -05:00
comfyanonymous
87b00b37f6
Added an experimental VAEDecodeTiled.
...
This decodes the image with the VAE in tiles, which should be faster and
use less vram.
It's in the _for_testing section, so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs, which means don't depend too much on it.
2023-02-24 02:10:10 -05:00
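A minimal sketch of the tiled-decode idea; decode_fn, the tile size, and the 8x spatial upscale are assumptions (as for a Stable Diffusion VAE), and seam blending is omitted:

```python
import torch

def vae_decode_tiled(decode_fn, latent: torch.Tensor, tile: int = 64) -> torch.Tensor:
    # Decode the latent tile by tile so peak VRAM stays bounded,
    # then stitch the decoded pixels back together.
    b, _, h, w = latent.shape
    out = torch.zeros(b, 3, h * 8, w * 8, device=latent.device)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            piece = latent[:, :, y:y + tile, x:x + tile]
            ph, pw = piece.shape[-2], piece.shape[-1]
            out[:, :, y * 8:(y + ph) * 8, x * 8:(x + pw) * 8] = decode_fn(piece)
    return out
```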
comfyanonymous
62df8dd62a
Add a node to load diff controlnets.
2023-02-22 23:22:03 -05:00
comfyanonymous
f04dc2c2f4
Implement DDIM sampler.
2023-02-22 21:10:19 -05:00
comfyanonymous
2976c1ad28
Uni_PC: make max denoise behave more like other samplers.
...
On the KSamplers a denoise of 1.0 is the same as txt2img, but there was a
small difference on UniPC.
2023-02-22 02:21:06 -05:00
comfyanonymous
c9daec4c89
Remove prints that are useless when xformers is enabled.
2023-02-21 22:16:13 -05:00
comfyanonymous
a7328e4945
Add uni_pc bh2 variant.
2023-02-21 16:11:48 -05:00
comfyanonymous
d80af7ca30
ControlNetApply now stacks.
...
It can be used to apply multiple control nets at the same time.
2023-02-21 01:18:53 -05:00
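A minimal sketch of the stacking idea using an illustrative data layout (not ComfyUI's real conditioning or control net structures): each apply wraps the previously applied control net, so several hints remain active during sampling:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlHint:
    # Illustrative stand-in for an applied control net: the hint image, its
    # strength, and a link back to the previously applied control net.
    image: object
    strength: float
    previous: Optional["ControlHint"] = None

def apply_controlnet(conditioning, image, strength):
    # Wrap whatever control net the conditioning already carried.
    out = []
    for text_cond, opts in conditioning:
        opts = dict(opts)
        opts["control"] = ControlHint(image, strength, previous=opts.get("control"))
        out.append((text_cond, opts))
    return out
```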
comfyanonymous
00a9189e30
Support old pytorch.
2023-02-19 16:59:03 -05:00
comfyanonymous
137ae2606c
Support people putting commas after the embedding name in the prompt.
2023-02-19 02:50:48 -05:00
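A minimal sketch of the tolerant parsing, with a hypothetical helper name: a trailing comma after "embedding:name" should not change which embedding gets resolved:

```python
from typing import Optional

def parse_embedding_token(token: str) -> Optional[str]:
    # Strip a trailing comma so "embedding:name," resolves like "embedding:name".
    prefix = "embedding:"
    if not token.startswith(prefix):
        return None
    return token[len(prefix):].rstrip(",")
```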
comfyanonymous
2326ff1263
Add: --highvram for when you want models to stay on the vram.
2023-02-17 21:27:02 -05:00
comfyanonymous
09f1d76ed8
Fix an OOM issue.
2023-02-17 16:21:01 -05:00
comfyanonymous
d66415c021
Low vram mode for controlnets.
2023-02-17 15:48:16 -05:00
comfyanonymous
220a72d36b
Use fp16 for fp16 control nets.
2023-02-17 15:31:38 -05:00
comfyanonymous
6135a21ee8
Add a way to control controlnet strength.
2023-02-16 18:08:01 -05:00
comfyanonymous
4efa67fa12
Add ControlNet support.
2023-02-16 10:38:08 -05:00
comfyanonymous
bc69fb5245
Use inpaint models the proper way by using VAEEncodeForInpaint.
2023-02-15 20:44:51 -05:00
comfyanonymous
cef2cc3cb0
Support for inpaint models.
2023-02-15 16:38:20 -05:00
comfyanonymous
07db00355f
Add masks to samplers code for inpainting.
2023-02-15 13:16:38 -05:00
comfyanonymous
e3451cea4f
uni_pc now works with KSamplerAdvanced return_with_leftover_noise.
2023-02-13 12:29:21 -05:00
comfyanonymous
f542f248f1
Show the right amount of steps in the progress bar for uni_pc.
...
The extra step doesn't actually call the unet, so it doesn't belong in
the progress bar.
2023-02-11 14:59:42 -05:00
comfyanonymous
f10b8948c3
768-v support for uni_pc sampler.
2023-02-11 04:34:58 -05:00
comfyanonymous
ce0aeb109e
Remove print.
2023-02-11 03:41:40 -05:00
comfyanonymous
5489d5af04
Add uni_pc sampler to KSampler* nodes.
2023-02-11 03:34:09 -05:00
comfyanonymous
1a4edd19cd
Fix overflow issue with inplace softmax.
2023-02-10 11:47:41 -05:00
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2023-02-10 03:13:49 -05:00
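A minimal sketch of why an exact softmax matters here: subtracting the row maximum before exponentiating keeps half-precision attention scores from overflowing. torch.softmax already does this internally; the function below is purely illustrative:

```python
import torch

def stable_softmax(scores: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Shift by the max so exp() never sees large positive values.
    scores = scores - scores.amax(dim=dim, keepdim=True)
    exp = scores.exp()
    return exp / exp.sum(dim=dim, keepdim=True)
```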
comfyanonymous
7e1e193f39
Automatically enable lowvram mode if vram is less than 4GB.
...
Use: --normalvram to disable it.
2023-02-10 00:47:56 -05:00
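A minimal sketch of the auto-detection, checking total VRAM via torch.cuda.get_device_properties; the function name, return values, and exact threshold handling are hypothetical:

```python
import torch

def pick_vram_mode(normalvram: bool = False) -> str:
    # Cards reporting less than 4GB of total VRAM get lowvram mode
    # unless the user opts out with --normalvram.
    if normalvram or not torch.cuda.is_available():
        return "normal"
    total = torch.cuda.get_device_properties(0).total_memory
    return "lowvram" if total < 4 * 1024 * 1024 * 1024 else "normal"
```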
comfyanonymous
324273fff2
Fix embedding not working when on new line.
2023-02-09 14:12:02 -05:00
comfyanonymous
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2023-02-09 13:47:36 -05:00
comfyanonymous
773cdabfce
Same thing but for the other places where it's used.
2023-02-09 12:43:29 -05:00