comfyanonymous
40e124c6be
SV3D support.
2024-03-18 16:54:13 -04:00
comfyanonymous
cacb022c4a
Make saved SD1 checkpoints match the official ones more closely.
2024-03-18 00:26:23 -04:00
comfyanonymous
d7897fff2c
Move cascade scale factor from stage_a to latent_formats.py
2024-03-16 14:49:35 -04:00
comfyanonymous
f2fe635c9f
Add SamplerDPMAdaptative node to test the different options.
2024-03-15 22:36:10 -04:00
comfyanonymous
448d9263a2
Fix control loras breaking.
2024-03-14 09:30:21 -04:00
comfyanonymous
db8b59ecff
Lower memory usage for loras in lowvram mode at the cost of perf.
2024-03-13 20:07:27 -04:00
comfyanonymous
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
comfyanonymous
0ed72befe1
Change log levels.
...
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
2024-03-10 12:14:23 -04:00
comfyanonymous
5f60ee246e
Support loading the sr cascade controlnet.
2024-03-07 01:22:48 -05:00
comfyanonymous
03e6e81629
Set upscale algorithm to bilinear for stable cascade controlnet.
2024-03-06 02:59:40 -05:00
comfyanonymous
03e83bb5d0
Support stable cascade canny controlnet.
2024-03-06 02:25:42 -05:00
comfyanonymous
10860bcd28
Add compression_ratio to controlnet code.
2024-03-05 15:15:20 -05:00
comfyanonymous
478f71a249
Remove useless check.
2024-03-04 08:51:25 -05:00
comfyanonymous
12c1080ebc
Simplify differential diffusion code.
2024-03-03 15:34:42 -05:00
Shiimizu
727021bdea
Implement Differential Diffusion (#2876)
...
* Implement Differential Diffusion
* Cleanup.
* Fix.
* Masks should be applied at full strength.
* Fix colors.
* Register the node.
* Cleaner code.
* Fix issue with getting unipc sampler.
* Adjust thresholds.
* Switch to linear thresholds.
* Only calculate nearest_idx on valid thresholds.
2024-03-03 15:34:13 -05:00
comfyanonymous
1abf8374ec
utils.set_attr can now be used to set any attribute.
...
The old set_attr has been renamed to set_attr_param.
2024-03-02 17:27:23 -05:00
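A minimal sketch of the split described above, not the verbatim ComfyUI implementation: set_attr now assigns any (possibly dotted) attribute path directly, while set_attr_param keeps the old behavior of wrapping the value in a torch.nn.Parameter first.

```python
import torch

def set_attr(obj, attr, value):
    # Walk a dotted attribute path (e.g. "diffusion_model.out.0.weight"),
    # assign the final attribute, and return the previous value.
    attrs = attr.split(".")
    for name in attrs[:-1]:
        obj = getattr(obj, name)
    prev = getattr(obj, attrs[-1])
    setattr(obj, attrs[-1], value)
    return prev

def set_attr_param(obj, attr, value):
    # The renamed old behavior: wrap the value in a Parameter before assigning.
    return set_attr(obj, attr, torch.nn.Parameter(value, requires_grad=False))
```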
comfyanonymous
dce3555339
Add some Tesla Pascal GPUs to the list of cards where fp16 works but is slower.
2024-03-02 17:16:31 -05:00
comfyanonymous
51df846598
Let conditioning specify custom concat conds.
2024-03-02 11:44:06 -05:00
comfyanonymous
9f71e4b62d
Let model patches patch sub objects.
2024-03-02 11:43:27 -05:00
comfyanonymous
00425563c0
Cleanup: Use sampling noise scaling function for inpainting.
2024-03-01 14:24:41 -05:00
comfyanonymous
c62e836167
Move noise scaling to object with sampling math.
2024-03-01 12:54:38 -05:00
comfyanonymous
cb7c3a2921
Allow image_only_indicator to be None.
2024-02-29 13:11:30 -05:00
comfyanonymous
b3e97fc714
Koala 700M and 1B support.
...
To use them, load the unet file with the UNET Loader node.
2024-02-28 12:10:11 -05:00
comfyanonymous
37a86e4618
Remove duplicate text_projection key from some saved models.
2024-02-28 03:57:41 -05:00
comfyanonymous
8daedc5bf2
Auto detect playground v2.5 model.
2024-02-27 18:03:03 -05:00
comfyanonymous
d46583ecec
Playground V2.5 support with ModelSamplingContinuousEDM node.
...
Use ModelSamplingContinuousEDM with edm_playground_v2.5 selected.
2024-02-27 15:12:33 -05:00
comfyanonymous
1e0fcc9a65
Make XL checkpoints save in a more standard format.
2024-02-27 02:07:40 -05:00
comfyanonymous
b416be7d78
Make the text projection saved in the checkpoint the right format.
2024-02-27 01:52:23 -05:00
comfyanonymous
03c47fc0f2
Add a min_length property to tokenizer class.
2024-02-26 21:36:37 -05:00
comfyanonymous
8ac69f62e5
Make return_projected_pooled settable from the __init__.
2024-02-25 14:49:13 -05:00
comfyanonymous
ca7c310a0e
Support loading old CLIP models saved with CLIPSave.
2024-02-25 08:29:12 -05:00
comfyanonymous
c2cb8e889b
Always return unprojected pooled output for gligen.
2024-02-25 07:33:13 -05:00
comfyanonymous
1cb3f6a83b
Move text projection into the CLIP model code.
...
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
comfyanonymous
6533b172c1
Support text encoder text_projection in lora.
2024-02-24 23:50:46 -05:00
comfyanonymous
1e5f0f66be
Support lora keys with lora_prior_unet_ and lora_prior_te_
2024-02-23 12:21:20 -05:00
logtd
e1cb93c383
Fix model and cond transformer options merge
2024-02-23 01:19:43 -07:00
comfyanonymous
10847dfafe
Cleanup uni_pc inpainting.
...
This causes some small changes to the uni pc inpainting behavior but it
seems to improve results slightly.
2024-02-23 02:39:35 -05:00
comfyanonymous
18c151b3e3
Add some latent2rgb matrices for previews.
2024-02-20 10:57:24 -05:00
comfyanonymous
0d0fbabd1d
Pass pooled CLIP to stage b.
2024-02-20 04:24:45 -05:00
comfyanonymous
c6b7a157ed
Align simple scheduling closer to official stable cascade scheduler.
2024-02-20 04:24:39 -05:00
comfyanonymous
88f300401c
Enable fp16 by default on mps.
2024-02-19 12:00:48 -05:00
comfyanonymous
e93cdd0ad0
Remove print.
2024-02-19 11:47:26 -05:00
comfyanonymous
3711b31dff
Support Stable Cascade in checkpoint format.
2024-02-19 11:20:48 -05:00
comfyanonymous
d91f45ef28
Some cleanups to how the text encoders are loaded.
2024-02-19 10:46:30 -05:00
comfyanonymous
a7b5eaa7e3
Forgot to commit this.
2024-02-19 04:25:46 -05:00
comfyanonymous
3b2e579926
Support loading the Stable Cascade effnet and previewer as a VAE.
...
The effnet can be used to encode images for img2img with Stage C.
2024-02-19 04:10:01 -05:00
comfyanonymous
dccca1daa5
Fix gligen lowvram mode.
2024-02-18 02:20:23 -05:00
comfyanonymous
8b60d33bb7
Add ModelSamplingStableCascade to control the shift sampling parameter.
...
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
2024-02-18 00:55:23 -05:00
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
2024-02-17 16:15:18 -05:00
comfyanonymous
11e3221f1f
fp8 weight support for Stable Cascade.
2024-02-17 15:27:31 -05:00
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
2024-02-17 15:22:21 -05:00
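A hedged sketch of the kind of fix the mask commits above describe: a per-sample attention mask has to be repeated so its leading dimension matches the (batch * heads) layout of the flattened q/k/v tensors. Tensor names here are illustrative, not ComfyUI's.

```python
import torch

def expand_mask(mask: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # mask: (batch, q_len, k_len); q: (batch * heads, q_len, head_dim)
    if mask.ndim == 3 and mask.shape[0] != q.shape[0]:
        heads = q.shape[0] // mask.shape[0]
        mask = mask.repeat_interleave(heads, dim=0)  # one copy of the mask per head
    return mask
```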
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
2024-02-17 12:13:13 -05:00
comfyanonymous
5b40e7a5ed
Implement shift schedule for cascade stage C.
2024-02-17 11:38:47 -05:00
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
2024-02-17 09:01:17 -05:00
comfyanonymous
6c875d846b
Fix clip attention mask issues on some hardware.
2024-02-17 07:53:52 -05:00
comfyanonymous
805c36ac9c
Make Stable Cascade work on old pytorch 2.0
2024-02-17 00:42:30 -05:00
comfyanonymous
f2d1d16f4f
Support Stable Cascade Stage B lite.
2024-02-16 23:41:23 -05:00
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
2024-02-16 23:01:54 -05:00
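A minimal sketch of the precedence this commit implies, with illustrative names rather than ComfyUI's model_management code: forcing fp32 now wins over both bf16 and fp16 autodetection.

```python
import torch

def pick_unet_dtype(force_fp32: bool, bf16_ok: bool, fp16_ok: bool) -> torch.dtype:
    if force_fp32:
        return torch.float32   # --force-fp32 overrides everything, including bf16
    if bf16_ok:
        return torch.bfloat16
    if fp16_ok:
        return torch.float16
    return torch.float32
```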
comfyanonymous
97d03ae04a
StableCascade CLIP model support.
2024-02-16 13:29:04 -05:00
comfyanonymous
667c92814e
Stable Cascade Stage B.
2024-02-16 13:02:03 -05:00
comfyanonymous
f83109f09b
Stable Cascade Stage C.
2024-02-16 10:55:08 -05:00
comfyanonymous
5e06baf112
Stable Cascade Stage A.
2024-02-16 06:30:39 -05:00
comfyanonymous
aeaeca10bd
Small refactor of is_device_* functions.
2024-02-15 21:10:10 -05:00
comfyanonymous
38b7ac6e26
Don't init the CLIP model when the checkpoint has no CLIP weights.
2024-02-13 00:01:08 -05:00
comfyanonymous
7dd352cbd7
Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI
2024-02-11 12:23:30 -05:00
Jedrzej Kosinski
f44225fd5f
Fix a possible infinite while loop in ddim_scheduler.
2024-02-09 17:11:34 -06:00
comfyanonymous
25a4805e51
Add a way to set different conditioning for the controlnet.
2024-02-09 14:13:31 -05:00
blepping
a352c021ec
Allow custom samplers to request discard penultimate sigma
2024-02-08 02:24:23 -07:00
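What "discard penultimate sigma" means in practice, as a hedged sketch: drop the second-to-last sigma from the schedule while keeping the final (usually zero) one, which some samplers request for better results.

```python
import torch

def discard_penultimate_sigma(sigmas: torch.Tensor) -> torch.Tensor:
    # [s0, s1, ..., s_{n-2}, s_{n-1}] -> [s0, s1, ..., s_{n-3}, s_{n-1}]
    return torch.cat((sigmas[:-2], sigmas[-1:]))
```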
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
2024-02-07 18:52:51 -05:00
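A hedged illustration of computing a sigma schedule directly in torch instead of going through numpy. The formula is the standard Karras et al. (2022) schedule; parameter names are illustrative.

```python
import torch

def karras_sigmas(n: int, sigma_min: float, sigma_max: float, rho: float = 7.0) -> torch.Tensor:
    ramp = torch.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
    return torch.cat([sigmas, sigmas.new_zeros(1)])  # append the final zero sigma
```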
comfyanonymous
236bda2683
Make minimum tile size the size of the overlap.
2024-02-05 01:29:26 -05:00
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
2024-02-04 20:53:35 -05:00
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
2024-02-04 13:23:43 -05:00
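Manual cast, roughly: weights stay stored in a compact dtype (fp16) to save memory, but are cast up to the activation dtype (e.g. fp32) at each forward pass, since full fp16 inference misbehaves on GTX 16xx series cards. A sketch under those assumptions, not ComfyUI's own ops code:

```python
import torch

class ManualCastLinear(torch.nn.Linear):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cast the stored (e.g. fp16) weight/bias to the input's compute dtype on the fly.
        weight = self.weight.to(dtype=x.dtype, device=x.device)
        bias = self.bias.to(dtype=x.dtype, device=x.device) if self.bias is not None else None
        return torch.nn.functional.linear(x, weight, bias)
```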
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
2024-02-02 10:02:49 -05:00
comfyanonymous
da7a8df0d2
Put VAE key name in model config.
2024-01-30 02:24:38 -05:00
comfyanonymous
89507f8adf
Remove some unused imports.
2024-01-25 23:42:37 -05:00
Dr.Lt.Data
05cd00695a
Typo fix: calculate_sigmas_scheduler (#2619)
...
self.scheduler -> scheduler_name
Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
2024-01-23 03:47:01 -05:00
comfyanonymous
4871a36458
Cleanup some unused imports.
2024-01-21 21:51:22 -05:00
comfyanonymous
78a70fda87
Remove useless import.
2024-01-19 15:38:05 -05:00
comfyanonymous
d76a04b6ea
Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
...
This node is unfinished; SVD checkpoints saved with it will work with
ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous
f9e55d8463
Only auto enable bf16 VAE on nvidia GPUs that actually support it.
2024-01-15 03:10:22 -05:00
comfyanonymous
2395ae740a
Make unclip more deterministic.
...
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
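A hedged sketch of the determinism idea: draw the unclip noise from a generator seeded by the explicit seed argument rather than from global RNG state, so the same seed reproduces the same augmentation. Names are illustrative.

```python
import torch

def seeded_noise_like(x: torch.Tensor, seed: int) -> torch.Tensor:
    gen = torch.Generator(device="cpu").manual_seed(seed)
    noise = torch.randn(x.shape, generator=gen)
    return noise.to(dtype=x.dtype, device=x.device)
```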
comfyanonymous
53c8a99e6c
Make server storage the default.
...
Remove --server-storage argument.
2024-01-11 17:21:40 -05:00
comfyanonymous
977eda19a6
Don't round noise mask.
2024-01-11 03:29:58 -05:00
comfyanonymous
10f2609fdd
Add InpaintModelConditioning node.
...
This is an alternative to VAE Encode for inpainting that should work with
lower denoise values.
This is a different take on #2501
2024-01-11 03:15:27 -05:00
comfyanonymous
1a57423d30
Fix issue when using multiple t2i adapters with batched images.
2024-01-10 04:00:49 -05:00
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
pythongosssss
235727fed7
Store user settings/data on the server and add multi-user support (#2160)
...
* wip per user data
* Rename, hide menu
* better error
rework default user
* store pretty
* Add userdata endpoints
Change nodetemplates to userdata
* add multi user message
* make normal arg
* Fix tests
* Ignore user dir
* user tests
* Changed to default to browser storage and add server-storage arg
* fix crash on empty templates
* fix settings added before load
* ignore parse errors
2024-01-08 17:06:44 -05:00
comfyanonymous
c6951548cf
Update the optimized_attention_for_device function for new functions that
support masked attention.
2024-01-07 13:52:08 -05:00
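A minimal sketch of what an attention helper with optional mask support looks like, using PyTorch's built-in scaled_dot_product_attention; this illustrates the capability the attention-mask commits add, not ComfyUI's optimized_attention_for_device itself.

```python
import torch
import torch.nn.functional as F

def attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, head_dim)
    # mask broadcasts to (batch, heads, q_len, k_len); True means "may attend".
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

q = k = v = torch.randn(1, 8, 16, 64)
mask = torch.ones(1, 1, 16, 16, dtype=torch.bool)
out = attention(q, k, v, mask)  # (1, 8, 16, 64)
```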
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
2024-01-03 14:27:11 -05:00
comfyanonymous
ef4f6037cb
Fix model patches not working in custom sampling scheduler nodes.
2024-01-03 12:16:30 -05:00
comfyanonymous
a7874d1a8b
Add support for the stable diffusion x4 upscaling model.
...
This is an old model.
Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
2c4e92a98b
Fix regression.
2024-01-02 14:41:33 -05:00
comfyanonymous
5eddfdd80c
Refactor VAE code.
...
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
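The refactor replaces hard-coded constants with per-model downscale_ratio and latent_channels values. A hedged sketch of how those two numbers determine latent tensor shapes, with typical SD1-style defaults shown purely for illustration:

```python
def latent_shape(batch: int, height: int, width: int,
                 downscale_ratio: int = 8, latent_channels: int = 4):
    # A (B, 3, H, W) image maps to a (B, latent_channels, H // ratio, W // ratio) latent.
    return (batch, latent_channels, height // downscale_ratio, width // downscale_ratio)

print(latent_shape(1, 512, 512))  # -> (1, 4, 64, 64)
```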
comfyanonymous
a47f609f90
Auto detect out_channels from model.
2024-01-02 01:50:57 -05:00
comfyanonymous
79f73a4b33
Remove useless code.
2024-01-02 01:50:29 -05:00
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
2023-12-30 05:49:07 -05:00