comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
...
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
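
A rough Python equivalent of the node setup above, as a sketch only: the file paths are placeholders, and the comfy.sd call signatures, the CLIPType.MOCHI name and the Mochi latent factors (12 channels, 6x temporal, 8x8 spatial) are best-effort assumptions to verify against the code at this commit.

    import torch
    import comfy.sd
    import comfy.utils

    # "Load CLIP" node: t5xxl weights with type mochi
    clip = comfy.sd.load_clip(ckpt_paths=["models/clip/t5xxl_fp16.safetensors"],
                              clip_type=comfy.sd.CLIPType.MOCHI)

    # "Load Diffusion Model" node: the mochi dit file
    model = comfy.sd.load_diffusion_model("models/diffusion_models/mochi_dit.safetensors")

    # "Load VAE" node: the mochi vae file
    vae = comfy.sd.VAE(sd=comfy.utils.load_torch_file("models/vae/mochi_vae.safetensors"))

    # EmptyMochiLatentVideo: an all-zero video latent derived from width/height/frames
    width, height, frames = 848, 480, 25
    latent = {"samples": torch.zeros([1, 12, ((frames - 1) // 6) + 1, height // 8, width // 8])}

    # Sampling then uses euler with the linear_quadratic scheduler in KSampler.
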
comfyanonymous
0075c6d096
Mixed precision diffusion models with scaled fp8.
...
This change adds support for diffusion models where all the linear layers are
scaled fp8 while the other weights keep their original precision.
2024-10-21 18:12:51 -04:00
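
The gist of the scaled fp8 format, as a minimal sketch rather than ComfyUI's actual ops code (the real implementation lives in the custom linear ops, and the per-tensor scale layout shown here is an assumption):

    import torch

    def dequantize_scaled_fp8(weight_fp8, scale, compute_dtype=torch.bfloat16):
        # the stored scale restores the approximate magnitude of the fp8 weight
        return weight_fp8.to(compute_dtype) * scale.to(compute_dtype)

    # Linear layers: fp8 weight + scale, dequantized on the fly for the matmul.
    # Everything else (norms, embeddings, ...) stays in its original precision.
    w_fp8 = torch.randn(4096, 4096).clamp(-1, 1).to(torch.float8_e4m3fn)
    scale = torch.tensor(3.2)
    w = dequantize_scaled_fp8(w_fp8, scale)
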
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
comfyanonymous
a68bbafddb
Support diffusion models with scaled fp8 weights.
2024-10-19 23:47:42 -04:00
comfyanonymous
6632365e16
model_options consistency between functions.
...
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
comfyanonymous
1b80895285
Make clip loader nodes support loading sd3 t5xxl in lower precision.
...
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
Dr.Lt.Data
5f9d5a244b
Hotfix for the division-by-zero that occurs when memory_used_encode is 0 (#5121)
...
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
comfyanonymous
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
...
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
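
At the API level this roughly corresponds to the loader passing an fp8 dtype plus a flag for the fast fp8 matmul path through model_options; the key names below are assumptions, so check the node code for the exact ones. The fast path only pays off on GPUs with native fp8 matmul support (Ada/Hopper class hardware).

    import torch
    import comfy.sd

    model = comfy.sd.load_diffusion_model(
        "models/diffusion_models/some_model.safetensors",   # placeholder path
        model_options={
            "dtype": torch.float8_e4m3fn,    # load the weights in fp8
            "fp8_optimizations": True,       # assumed key enabling fp8 matrix multiplication
        },
    )
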
comfyanonymous
7d2467e830
Some minor cleanups.
2024-10-05 13:22:39 -04:00
comfyanonymous
abcd006b8c
Allow more permutations of clip/t5 in dual clip loader.
2024-10-03 09:26:11 -04:00
comfyanonymous
d985d1d7dc
CLIP Loader node now supports clip_l and clip_g only for SD3.
2024-10-02 04:25:17 -04:00
comfyanonymous
d1cdf51e1b
Refactor some of the TE detection code.
2024-10-01 07:08:41 -04:00
comfyanonymous
ad66f7c7d8
Add model_options to load_controlnet function.
2024-09-19 08:23:35 -04:00
comfyanonymous
d514bb38ee
Add some options to model_options for the text encoder.
...
load_device, offload_device and initial_device can now be set.
2024-09-17 03:49:54 -04:00
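
For example, a sketch assuming these options are passed through the CLIP loading path in comfy/sd.py (the entry point and defaults may differ; the key names are the ones given in the commit message):

    import torch
    import comfy.sd

    clip = comfy.sd.load_clip(
        ckpt_paths=["models/clip/t5xxl_fp16.safetensors"],
        clip_type=comfy.sd.CLIPType.SD3,
        model_options={
            "load_device": torch.device("cuda:0"),   # device the text encoder runs on
            "offload_device": torch.device("cpu"),   # where it is parked when idle
            "initial_device": torch.device("cpu"),   # where the weights land right after loading
        },
    )
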
comfyanonymous
e813abbb2c
Long CLIP L support for SDXL, SD3 and Flux.
...
Use the *CLIPLoader nodes.
2024-09-15 07:59:38 -04:00
comfyanonymous
d1a6bd6845
Support loading the long clip_l model with the CLIP loader node.
2024-08-20 10:46:36 -04:00
comfyanonymous
9eee470244
New load_text_encoder_state_dicts function.
...
Now you can load text encoders straight from a list of state dicts.
2024-08-19 17:36:35 -04:00
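
A sketch of the new entry point; the keyword names are best-effort and should be checked against comfy/sd.py:

    import comfy.sd
    import comfy.utils

    sd_clip_l = comfy.utils.load_torch_file("models/clip/clip_l.safetensors")
    sd_t5 = comfy.utils.load_torch_file("models/clip/t5xxl_fp16.safetensors")

    # build a CLIP object directly from already-loaded state dicts
    clip = comfy.sd.load_text_encoder_state_dicts(
        state_dicts=[sd_clip_l, sd_t5],
        clip_type=comfy.sd.CLIPType.FLUX,
    )
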
comfyanonymous
4f7a3cb6fb
unet -> diffusion_models.
2024-08-17 21:31:04 -04:00
comfyanonymous
fca42836f2
Add model_options for text encoder.
2024-08-17 11:17:20 -04:00
comfyanonymous
74e124f4d7
Fix some issues with TE being in lowvram mode.
2024-08-12 23:42:21 -04:00
comfyanonymous
a562c17e8a
load_unet -> load_diffusion_model with a model_options argument.
2024-08-12 23:20:57 -04:00
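
In other words, roughly (a sketch; whether the old load_unet name remains as a compatibility wrapper is not stated in the commit message):

    import comfy.sd

    # before: comfy.sd.load_unet("models/diffusion_models/model.safetensors")
    model = comfy.sd.load_diffusion_model(
        "models/diffusion_models/model.safetensors",
        model_options={},   # dtype, custom ops, etc. can be passed here
    )
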
comfyanonymous
ad76574cb8
Fix some potential issues with the previous commits.
2024-08-12 00:23:29 -04:00
comfyanonymous
9acfe4df41
Support loading directly to vram with CLIPLoader node.
2024-08-12 00:06:01 -04:00
comfyanonymous
9829b013ea
Fix mistake in last commit.
2024-08-12 00:00:17 -04:00
comfyanonymous
5c69cde037
Load TE model straight to vram if certain conditions are met.
2024-08-11 23:52:43 -04:00
comfyanonymous
e9589d6d92
Add a way to set model dtype and ops from load_checkpoint_guess_config.
2024-08-11 08:50:34 -04:00
comfyanonymous
0d82a798a5
Remove the ckpt_path from load_state_dict_guess_config.
2024-08-11 08:37:35 -04:00
ljleb
925fff26fd
alternative to `load_checkpoint_guess_config` that accepts a loaded state dict (#4249)
...
* make alternative fn
* add back ckpt path as 2nd argument?
2024-08-11 08:36:52 -04:00
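
Roughly, a sketch of the new call (keyword names and the exact return tuple may differ at this commit; the ckpt_path second argument mentioned above was removed again in commit 0d82a798a5 above):

    import comfy.sd
    import comfy.utils

    state_dict = comfy.utils.load_torch_file("models/checkpoints/some_checkpoint.safetensors")
    out = comfy.sd.load_state_dict_guess_config(state_dict, output_vae=True, output_clip=True)
    model, clip, vae = out[0], out[1], out[2]
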
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
comfyanonymous
2ba5cc8b86
Fix some issues.
2024-08-03 15:06:40 -04:00
comfyanonymous
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
The checkpoint contains weights in fp8.
There isn't enough memory to load the diffusion model in GPU VRAM.
2024-08-03 13:45:19 -04:00
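
A minimal sketch of that fallback policy with illustrative names only (the real decision is made inside ComfyUI's model loading and memory management code):

    import torch

    def pick_diffusion_model_dtype(checkpoint_dtype, model_size_bytes, free_vram_bytes):
        fp8_dtypes = (torch.float8_e4m3fn, torch.float8_e5m2)
        # keep the stored fp8 weights only when the model would not otherwise fit in VRAM
        if checkpoint_dtype in fp8_dtypes and model_size_bytes > free_vram_bytes:
            return checkpoint_dtype
        return torch.float16   # otherwise load at higher precision as before
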
comfyanonymous
d7430a1651
Add a way to load the diffusion model in fp8 with UNETLoader node.
2024-08-01 13:30:51 -04:00
comfyanonymous
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
2024-08-01 09:49:29 -04:00
comfyanonymous
4ba7fa0244
Refactor: Move sd2_clip.py to text_encoders folder.
2024-07-28 01:19:20 -04:00
comfyanonymous
a5f4292f9f
Basic hunyuan dit implementation. (#4102)
...
* Let tokenizers return weights to be stored in the saved checkpoint.
* Basic hunyuan dit implementation.
* Fix some resolutions not working.
* Support hydit checkpoint save.
* Init with right dtype.
* Switch to optimized attention in pooler.
* Fix black images on hunyuan dit.
2024-07-25 18:21:08 -04:00
comfyanonymous
f87810cd3e
Let tokenizers return weights to be stored in the saved checkpoint.
2024-07-25 10:52:09 -04:00
comfyanonymous
10c919f4c7
Make it possible to load tokenizer data from checkpoints.
2024-07-24 16:43:53 -04:00
comfyanonymous
1305fb294c
Refactor: Move some code to the comfy/text_encoders folder.
2024-07-15 17:36:24 -04:00
comfyanonymous
a3dffc447a
Support AuraFlow Lora and loading model weights in diffusers format.
...
You can load model weights in diffusers format using the UNETLoader node.
2024-07-13 13:51:40 -04:00
comfyanonymous
9f291d75b3
AuraFlow model implementation.
2024-07-11 16:52:26 -04:00
comfyanonymous
f45157e3ac
Fix error message never being shown.
2024-07-11 11:46:51 -04:00
comfyanonymous
5e1fced639
Cleaner support for loading different diffusion model types.
2024-07-11 11:37:31 -04:00
comfyanonymous
ffe0bb0a33
Remove useless code.
2024-07-10 20:33:12 -04:00
comfyanonymous
391c1046cf
More flexibility with text encoder return values.
...
Text encoders can now return other values to the CONDITIONING besides the cond
and pooled output.
2024-07-10 20:06:50 -04:00
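
In practice a CONDITIONING entry carries the cond tensor plus a dict of extras; a sketch of the shape this change allows, where attention_mask is only an illustrative example of an "other value" (the encoder-side API isn't shown in the commit message):

    import torch

    # hypothetical text encoder output
    cond = torch.zeros([1, 77, 4096])
    extras = {
        "pooled_output": torch.zeros([1, 2048]),
        "attention_mask": torch.ones([1, 77], dtype=torch.bool),  # an extra, model-specific value
    }

    # a CONDITIONING entry: the dict may now carry more than just pooled_output
    conditioning = [[cond, extras]]
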
comfyanonymous
4040491149
Better T5xxl detection.
2024-07-06 00:53:33 -04:00
comfyanonymous
d7484ef30c
Support loading checkpoints with the UNETLoader node.
2024-07-03 11:34:32 -04:00
comfyanonymous
537f35c7bc
Don't update dict if contiguous.
2024-07-02 20:21:51 -04:00
Alex "mcmonkey" Goodwin
3f46362d22
fix non-contiguous tensor saving (from channels-last) (#3932)
2024-07-02 20:16:33 -04:00
comfyanonymous
8ceb5a02a3
Support saving stable audio checkpoint that can be loaded back.
2024-06-27 11:06:52 -04:00