comfyanonymous
6b774589a5
Set model to fp16 before loading the state dict to lower the peak RAM usage.
2023-06-14 12:48:02 -04:00
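The idea is that converting the freshly constructed model to half precision before loading the weights keeps the fp32 parameters from sitting in memory next to the incoming checkpoint tensors. A minimal sketch of the pattern in plain PyTorch (not the actual loading path in this repo):

```python
import torch

def load_fp16_checkpoint(model: torch.nn.Module, state_dict: dict) -> torch.nn.Module:
    # Convert to fp16 first so the initial fp32 parameters shrink before the
    # checkpoint tensors are copied in, lowering peak RAM during loading.
    model.half()
    model.load_state_dict(state_dict)
    return model
```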
comfyanonymous
388567f20b
sampler_cfg_function now uses a dict for the argument.
...
This means new arguments can be added later without breaking existing functions.
2023-06-13 16:10:36 -04:00
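With this change a custom CFG function receives a single dict instead of positional arguments, so unknown keys are simply ignored by older callbacks. A sketch of such a function (the key names "cond", "uncond" and "cond_scale" are assumptions based on the standard classifier-free guidance formula):

```python
def my_cfg_function(args: dict):
    # Standard classifier-free guidance, reading everything from the dict so
    # keys added in later versions don't break this callback.
    cond = args["cond"]
    uncond = args["uncond"]
    cond_scale = args["cond_scale"]
    return uncond + (cond - uncond) * cond_scale

# Registered on a patched model, e.g. via the custom CFG function hook
# mentioned further down this log.
```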
comfyanonymous
ff9b22d79e
Turn on safe load for a few models.
2023-06-13 10:12:03 -04:00
comfyanonymous
f0a2b81cd0
Cleanup: Remove a bunch of useless files.
2023-06-13 02:19:08 -04:00
comfyanonymous
f8c5931053
Split the batch in VAEEncode if there's not enough memory.
2023-06-12 00:21:50 -04:00
comfyanonymous
c069fc0730
Auto switch to tiled VAE encode if regular one runs out of memory.
2023-06-11 23:25:39 -04:00
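The fallback pattern is roughly: try the regular full-image encode and, if the GPU raises an out-of-memory error, retry with the tiled encoder. A generic sketch (the `encode`/`encode_tiled` method names mirror the VAE described in this log; the exception handling is simplified and assumes PyTorch 1.13+):

```python
import torch

def encode_with_fallback(vae, pixels):
    try:
        return vae.encode(pixels)
    except torch.cuda.OutOfMemoryError:
        # Free what we can and retry with the tiled path, which only keeps
        # one tile's worth of activations on the GPU at a time.
        torch.cuda.empty_cache()
        return vae.encode_tiled(pixels)
```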
comfyanonymous
de142eaad5
Simpler base model code.
2023-06-09 12:31:16 -04:00
comfyanonymous
0e425603fb
Small refactor.
2023-06-06 13:23:01 -04:00
comfyanonymous
700491d81a
Implement global average pooling for controlnet.
2023-06-03 01:49:03 -04:00
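Global average pooling here means averaging each controlnet output map over its spatial dimensions before it is added to the UNet features, which some controlnet variants (e.g. the shuffle model) expect. A one-line sketch of the operation:

```python
def global_average_pool(control_output):
    # Collapse H and W to a single averaged value per channel, keeping the
    # dimensions so the result still broadcasts over the UNet feature map.
    return control_output.mean(dim=(2, 3), keepdim=True)
```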
comfyanonymous
03da8a3426
This is useless for inference.
2023-05-31 13:03:24 -04:00
comfyanonymous
eb448dd8e1
Auto load model in lowvram if not enough memory.
2023-05-30 12:36:41 -04:00
comfyanonymous
a532888846
Support VAEs in diffusers format.
2023-05-28 02:02:09 -04:00
BlenderNeko
19c014f429
comment out annoying print statement
2023-05-12 23:57:40 +02:00
BlenderNeko
d9e088ddfd
minor changes for tiled sampler
2023-05-12 23:49:09 +02:00
comfyanonymous
bae4fb4a9d
Fix imports.
2023-05-04 18:10:29 -04:00
comfyanonymous
fcf513e0b6
Refactor.
2023-05-03 17:48:35 -04:00
pythongosssss
5eeecf3fd5
remove unused import
2023-05-03 18:21:23 +01:00
pythongosssss
8912623ea9
use comfy progress bar
2023-05-03 18:19:22 +01:00
pythongosssss
fdf57325f4
Merge remote-tracking branch 'origin/master' into tiled-progress
2023-05-03 17:33:42 +01:00
pythongosssss
27df74101e
reduce duplication
2023-05-03 17:33:19 +01:00
pythongosssss
06ad35b493
added progress to encode + upscale
2023-05-02 19:18:07 +01:00
comfyanonymous
9c335a553f
LoKR support.
2023-05-01 18:18:23 -04:00
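LoKr represents the weight update as a Kronecker product of two much smaller matrices, which keeps the file small while still covering a full-size weight. A sketch of the usual formulation (the scaling and optional extra low-rank factor vary between trainers):

```python
import torch

def lokr_delta(w1: torch.Tensor, w2: torch.Tensor, alpha: float, rank: float) -> torch.Tensor:
    # kron of (a, b) and (c, d) has shape (a*c, b*d), so two small factors
    # reconstruct a full-size weight update.
    return torch.kron(w1, w2) * (alpha / rank)

delta = lokr_delta(torch.randn(8, 8), torch.randn(40, 96), alpha=4.0, rank=8.0)
print(delta.shape)  # torch.Size([320, 768])
```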
pythongosssss
c8c9926eeb
Add progress to vae decode tiled
2023-04-24 11:55:44 +01:00
comfyanonymous
5282f56434
Implement Linear hypernetworks.
...
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
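A "linear" hypernetwork in this sense is a small MLP applied to the cross-attention context for the keys and values. A rough sketch of one such module (layer sizes, activation, and whether the output is residual depend on how the hypernetwork was trained, so treat this as illustrative only):

```python
import torch
import torch.nn as nn

class LinearHypernetModule(nn.Module):
    def __init__(self, dim: int, mult: int = 2):
        super().__init__()
        # "Linear" activation type: two Linear layers, no nonlinearity.
        self.net = nn.Sequential(nn.Linear(dim, dim * mult),
                                 nn.Linear(dim * mult, dim))

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # Applied to the cross-attention context before it is projected to
        # keys/values; the original context is kept via a residual add.
        return context + self.net(context)
```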
comfyanonymous
3696d1699a
Add support for GLIGEN textbox model.
2023-04-19 11:06:32 -04:00
comfyanonymous
884ea653c8
Add a way for nodes to set a custom CFG function.
2023-04-17 11:05:15 -04:00
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2023-04-15 19:04:33 -04:00
comfyanonymous
81d1f00df3
Some refactoring: from_tokens -> encode_from_tokens
2023-04-15 18:46:58 -04:00
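Together with the tokenizer/encoder split just below, this gives the two-step text path: tokenize first, then encode the tokens. A hypothetical usage sketch (assuming `clip` is a loaded CLIP object from one of the loaders in this log):

```python
# Hypothetical usage; `clip` would come from a checkpoint or CLIP loader.
tokens = clip.tokenize("a photo of a cat")
cond = clip.encode_from_tokens(tokens)
```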
BlenderNeko
da115bd78d
ensure backwards compat with optional args
2023-04-14 21:16:55 +02:00
BlenderNeko
73175cf58c
split tokenizer from encoder
2023-04-13 22:06:50 +02:00
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning; it takes as input a
CLIPVisionEncode output, which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
comfyanonymous
18a6c1db33
Add a TomePatchModel node to the _for_testing section.
...
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
comfyanonymous
b2554bc4dd
Split VAE decode batches depending on free memory.
2023-03-29 02:24:37 -04:00
comfyanonymous
dd095efc2c
Support loha weights that use CP decomposition.
2023-03-23 04:32:25 -04:00
comfyanonymous
94a7c895f4
Add loha support.
2023-03-23 03:40:12 -04:00
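LoHa (low-rank Hadamard) builds the weight update as the element-wise product of two low-rank matrices; the CP-decomposition commit above extends the same idea. A sketch of the basic formulation:

```python
import torch

def loha_delta(w1_a, w1_b, w2_a, w2_b, alpha: float, rank: float):
    # Each (wN_a @ wN_b) is a low-rank full-size matrix; their Hadamard
    # (element-wise) product forms the update, scaled by alpha / rank.
    return (w1_a @ w1_b) * (w2_a @ w2_b) * (alpha / rank)
```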
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2023-03-22 14:49:00 -04:00
comfyanonymous
4039616ca6
Fewer seams in tiled outputs at the cost of more processing.
2023-03-22 03:29:09 -04:00
comfyanonymous
cc309568e1
Add support for locon mid weights.
2023-03-21 14:51:51 -04:00
comfyanonymous
edfc4ca663
Try to fix a vram issue with controlnets.
2023-03-19 10:50:38 -04:00
comfyanonymous
2e73367f45
Merge T2IAdapterLoader and ControlNetLoader.
...
Workflows will be auto-updated.
2023-03-17 18:17:59 -04:00
comfyanonymous
0e836d525e
Use half() on fp16 models loaded with a config.
2023-03-13 21:12:48 -04:00
comfyanonymous
986dd820dc
Use half() function on model when loading in fp16.
2023-03-13 20:58:09 -04:00
comfyanonymous
54dbfaf2ec
Remove omegaconf dependency and some ci changes.
2023-03-13 14:49:18 -04:00
comfyanonymous
e33dc2b33b
Add a VAEEncodeTiled node.
2023-03-11 15:28:15 -05:00
comfyanonymous
9db2e97b47
Tiled upscaling with the upscale models.
2023-03-11 14:04:13 -05:00
comfyanonymous
cd64111c83
Add locon support.
2023-03-09 21:41:24 -05:00
comfyanonymous
c70f0ac64b
SD2.x controlnets now work.
2023-03-08 01:13:38 -05:00
comfyanonymous
19415c3ace
Relative imports to test something.
2023-03-07 11:00:35 -05:00
comfyanonymous
501f19eec6
Fix clip_skip no longer being loaded from yaml file.
2023-03-06 11:34:02 -05:00
comfyanonymous
afff30fc0a
Add --cpu to use the cpu for inference.
2023-03-06 10:50:50 -05:00
comfyanonymous
47acb3d73e
Implement support for t2i style model.
...
It needs the CLIPVision model, so I added CLIPVisionLoader and CLIPVisionEncode.
Put the CLIP vision model in models/clip_vision.
Put the t2i style model in models/style_models.
StyleModelLoader loads it, StyleModelApply applies it, and
ConditioningAppend appends the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
comfyanonymous
16130c7546
Add support for new colour T2I adapter model.
2023-03-03 19:13:07 -05:00
comfyanonymous
4215206281
Add a node to set CLIP skip.
...
Use a simpler way to detect if the model uses v-prediction.
2023-03-03 13:04:36 -05:00
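"CLIP skip" means taking the text encoder's hidden state from an earlier transformer layer instead of the final one. A minimal sketch of the selection (the indexing convention is an assumption; the node itself expresses it as a negative "stop at" layer):

```python
def select_clip_layer(hidden_states: list, clip_skip: int = 1):
    # clip_skip=1 keeps the usual last layer, clip_skip=2 stops one layer
    # earlier, and so on.
    return hidden_states[-clip_skip]
```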
comfyanonymous
fed315a76a
To be really simple, CheckpointLoaderSimple should pick the right type.
2023-03-03 11:07:10 -05:00
comfyanonymous
94bb0375b0
New CheckpointLoaderSimple to load checkpoints without a config.
2023-03-03 03:37:35 -05:00
comfyanonymous
b31daadc03
Try to improve memory issues with del.
2023-02-28 12:27:43 -05:00
comfyanonymous
9f4214e534
Preparing to add another function to load checkpoints.
2023-02-26 17:29:01 -05:00
comfyanonymous
dfb397e034
Fix multiple controlnets not working.
2023-02-25 22:12:22 -05:00
comfyanonymous
af3cc1b5fb
Fixed an issue when a batched image was used as a controlnet input.
2023-02-25 14:57:28 -05:00
comfyanonymous
d2da346b0b
Fix missing variable.
2023-02-25 12:19:03 -05:00
comfyanonymous
4e6b83a80a
Add a T2IAdapterLoader node to load T2I-Adapter models.
...
They are loaded as CONTROL_NET objects because they are similar.
2023-02-25 01:24:56 -05:00
comfyanonymous
fcb25d37db
Prepare for t2i adapter.
2023-02-24 23:36:17 -05:00
comfyanonymous
87b00b37f6
Added an experimental VAEDecodeTiled.
...
This decodes the image with the VAE in tiles, which should be faster and
use less VRAM.
It's in the _for_testing section, so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs; in other words, don't depend too much on it.
2023-02-24 02:10:10 -05:00
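The tiled decode trades one big decode for many small ones so only a tile's worth of activations is ever on the GPU. A minimal sketch with no overlap or seam blending (the real node blends overlapping tiles; the 8x spatial factor assumes an SD-style VAE):

```python
import torch

def vae_decode_tiled(decode_fn, latent: torch.Tensor, tile: int = 64) -> torch.Tensor:
    # latent: (B, C, H, W) in latent space; decode_fn maps a latent tile to pixels.
    b, _, h, w = latent.shape
    out = torch.zeros((b, 3, h * 8, w * 8))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = latent[:, :, y:y + tile, x:x + tile]
            out[:, :, y * 8:(y + patch.shape[2]) * 8,
                      x * 8:(x + patch.shape[3]) * 8] = decode_fn(patch).cpu()
    return out
```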
comfyanonymous
62df8dd62a
Add a node to load diff controlnets.
2023-02-22 23:22:03 -05:00
comfyanonymous
d80af7ca30
ControlNetApply now stacks.
...
It can be used to apply multiple control nets at the same time.
2023-02-21 01:18:53 -05:00
comfyanonymous
d66415c021
Low vram mode for controlnets.
2023-02-17 15:48:16 -05:00
comfyanonymous
220a72d36b
Use fp16 for fp16 control nets.
2023-02-17 15:31:38 -05:00
comfyanonymous
6135a21ee8
Add a way to control controlnet strength.
2023-02-16 18:08:01 -05:00
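Strength is essentially a multiplier on the residuals the controlnet adds to the UNet's intermediate features, so 0.0 disables it and 1.0 applies it fully. A sketch of the idea (not the exact code path in this repo):

```python
def scale_control(control_outputs: list, strength: float) -> list:
    # Each entry is a feature-map residual added to a UNet block's output.
    return [c * strength for c in control_outputs]
```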
comfyanonymous
4efa67fa12
Add ControlNet support.
2023-02-16 10:38:08 -05:00
comfyanonymous
a84cd0d1ad
Don't unload/reload the model from the CPU needlessly.
2023-02-08 03:40:43 -05:00
comfyanonymous
b1a7c9ebf6
Embeddings/textual inversion support for SD2.x
2023-02-05 15:49:03 -05:00
comfyanonymous
1de5aa6a59
Add a CLIPLoader node to load standalone clip weights.
...
Put them in models/clip
2023-02-05 15:20:18 -05:00
comfyanonymous
56d802e1f3
Use transformers CLIP instead of open_clip for SD2.x
...
This should make things a bit cleaner.
2023-02-05 14:36:28 -05:00
comfyanonymous
bf9ccffb17
Small fix for SD2.x loras.
2023-02-05 11:38:25 -05:00
comfyanonymous
678105fade
SD2.x CLIP support for Loras.
2023-02-05 01:54:09 -05:00
comfyanonymous
ef90e9c376
Add a LoraLoader node to apply loras to models and clip.
...
The models are modified in place before being used and unpatched afterwards.
I think this is better than monkeypatching since it might make it easier
to use faster non-PyTorch UNet inference in the future.
2023-02-03 02:46:24 -05:00
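"Modified in place and unpatched after" works because a LoRA is an additive low-rank delta on existing weights, so the same delta can be subtracted again once sampling is done. A sketch of the patch step (the alpha/rank scaling follows the common LoRA convention):

```python
import torch

def lora_patch(weight: torch.Tensor, up: torch.Tensor, down: torch.Tensor,
               alpha: float, strength: float = 1.0) -> torch.Tensor:
    # W' = W + strength * (alpha / rank) * (up @ down); returning the delta
    # lets the caller subtract it later to restore the original weight.
    rank = down.shape[0]
    delta = strength * (alpha / rank) * (up @ down)
    weight += delta
    return delta
```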
comfyanonymous
f73e57d881
Add support for textual inversion embedding for SD1.x CLIP.
2023-01-29 18:46:44 -05:00
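A textual inversion embedding is just a few learned token vectors; supporting it means splicing those vectors into the prompt's token embedding sequence in place of the placeholder token. A rough sketch:

```python
import torch

def splice_embedding(token_embeds: torch.Tensor, position: int,
                     learned: torch.Tensor) -> torch.Tensor:
    # token_embeds: (seq_len, dim); learned: (n_vectors, dim).  The single
    # placeholder token at `position` is replaced by the learned vectors.
    return torch.cat([token_embeds[:position],
                      learned,
                      token_embeds[position + 1:]], dim=0)
```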
comfyanonymous
73f60740c8
Slightly cleaner code.
2023-01-28 02:14:22 -05:00
comfyanonymous
0108616b77
Fix issue with some models.
2023-01-28 01:38:42 -05:00
comfyanonymous
2973ff24c5
Round CLIP position ids to fix float issues in some checkpoints.
2023-01-28 00:19:33 -05:00
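Some checkpoints ship CLIP position ids that were saved as floats and drifted slightly off their integer values; rounding before casting restores the intended indices. A tiny illustration of the fix described above:

```python
import torch

position_ids = torch.tensor([0.0, 0.9999999, 2.0000002])
fixed = position_ids.round().long()
print(fixed)  # tensor([0, 1, 2])
```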
comfyanonymous
acdc6f42e0
Fix loading some malformed checkpoints?
2023-01-25 15:20:55 -05:00
comfyanonymous
220afe3310
Initial commit.
2023-01-16 22:37:14 -05:00