comfyanonymous
391c1046cf
More flexibility with text encoder return values.
...
Text encoders can now return values other than the cond and pooled output
to the CONDITIONING.
2024-07-10 20:06:50 -04:00
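A minimal sketch of what this enables, assuming the usual [tensor, dict] conditioning pair; key names other than "pooled_output" are illustrative:

```python
import torch

# Illustrative only, not ComfyUI's exact code: a conditioning entry is a
# [tensor, dict] pair, so an encoder can hand extra values to downstream
# nodes by adding keys next to "pooled_output".
cond = torch.zeros(1, 77, 768)             # token-level embeddings
extras = {
    "pooled_output": torch.zeros(1, 768),  # the usual pooled value
    "attention_mask": torch.ones(1, 77),   # an example extra return value
}
conditioning = [[cond, extras]]
```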
comfyanonymous
e44fa5667f
Support returning text encoder attention masks.
2024-07-10 19:31:22 -04:00
comfyanonymous
bb663bcd6c
Rename clip_t5base to t5base for the Stable Audio text encoder.
2024-07-08 08:53:55 -04:00
comfyanonymous
80c4590998
Allow specifying the padding token for the tokenizer.
2024-07-06 00:06:49 -04:00
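Roughly the behavior being made configurable, as a hypothetical helper (the name and ids are made up; CLIP tokenizers commonly pad with either 0 or the EOS id 49407):

```python
def pad_to_length(token_ids, length, pad_token_id):
    # Pad a token list with a configurable id instead of a hard-coded one.
    return token_ids + [pad_token_id] * (length - len(token_ids))

pad_to_length([49406, 320, 49407], 8, pad_token_id=49407)  # pad with EOS
```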
comfyanonymous
ce649d61c0
Allow zeroing out of embeds with unused attention mask.
2024-07-05 23:48:17 -04:00
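The zeroing itself is a broadcasted multiply; a sketch with SD1-style illustrative shapes:

```python
import torch

def zero_masked(embeds, attention_mask):
    # Zero the embedding vectors at positions the mask marks as padding,
    # so they contribute nothing downstream.
    return embeds * attention_mask.unsqueeze(-1).to(embeds.dtype)

embeds = torch.randn(1, 77, 768)
mask = torch.ones(1, 77)
mask[:, 10:] = 0                 # everything past position 10 is padding
zeroed = zero_masked(embeds, mask)
```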
Mario Klingemann
eee815ec99
Update sd1_clip.py (#3684)
...
Made the token instance check more flexible so it also works with integers from numpy arrays or long tensors.
2024-06-19 16:42:41 -04:00
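The gist of that check, sketched with a hypothetical helper name:

```python
import numbers
import numpy as np
import torch

def is_token_id(t):
    # Accept plain ints, numpy integer scalars, and 0-dim integer tensors
    # alike, rather than testing `type(t) is int`.
    if isinstance(t, torch.Tensor):
        return t.dim() == 0 and not t.is_floating_point()
    return isinstance(t, numbers.Integral)

print(is_token_id(5), is_token_id(np.int64(5)), is_token_id(torch.tensor(5)))
```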
comfyanonymous
0e49211a11
Load the SD3 T5xxl model in the same dtype stored in the checkpoint.
2024-06-11 17:03:26 -04:00
comfyanonymous
742d5720d1
Support zeroing out text embeddings with the attention mask.
2024-06-09 16:51:58 -04:00
comfyanonymous
56333d4850
Use the end token for the text encoder attention mask.
2024-06-07 03:05:23 -04:00
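One way to build such a mask, assuming CLIP-style ids (49407 = EOS); illustrative, not the repository's exact code:

```python
import torch

def mask_through_eos(tokens, eos_id):
    # Attend up to and including the first EOS token; padding after it
    # stays masked out.
    eos_pos = (tokens == eos_id).int().argmax(dim=-1, keepdim=True)
    positions = torch.arange(tokens.shape[-1])
    return (positions <= eos_pos).long()

tokens = torch.tensor([[49406, 320, 1125, 49407, 49407, 49407]])
print(mask_through_eos(tokens, 49407))  # tensor([[1, 1, 1, 1, 0, 0]])
```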
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
2024-03-10 12:14:23 -04:00
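In the spirit of that change (ComfyUI's actual flag handling may differ):

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
args = parser.parse_args(["--verbose"])  # fixed argv just for the demo

logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
logging.debug("shown at DEBUG level; a bare print() would always show")
```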
comfyanonymous
03c47fc0f2
Add a min_length property to tokenizer class.
2024-02-26 21:36:37 -05:00
comfyanonymous
8ac69f62e5
Make return_projected_pooled settable from __init__.
2024-02-25 14:49:13 -05:00
comfyanonymous
c2cb8e889b
Always return unprojected pooled output for gligen.
2024-02-25 07:33:13 -05:00
comfyanonymous
1cb3f6a83b
Move text projection into the CLIP model code.
...
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
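The three commits above revolve around the same knob. A sketch of the idea, with made-up names and sizes:

```python
import torch

def pooled_output(pooled, text_projection, return_projected_pooled=True):
    # Consumers like gligen want the raw pooled vector; most others want
    # it multiplied by the CLIP text projection matrix.
    if return_projected_pooled and text_projection is not None:
        return pooled @ text_projection
    return pooled

pooled = torch.randn(1, 1024)
proj = torch.randn(1024, 768)
print(pooled_output(pooled, proj).shape)         # torch.Size([1, 768])
print(pooled_output(pooled, proj, False).shape)  # torch.Size([1, 1024])
```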
comfyanonymous
97d03ae04a
Stable Cascade CLIP model support.
2024-02-16 13:29:04 -05:00
comfyanonymous
4871a36458
Cleanup some unused imports.
2024-01-21 21:51:22 -05:00
comfyanonymous
57926635e8
Switch text encoder to manual cast.
...
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
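Manual casting boils down to keeping weights in fp16 and upcasting per op; a minimal sketch, not the repository's ops code:

```python
import torch

weight = torch.randn(768, 768, dtype=torch.float16)  # stored in fp16

def manual_cast_linear(x, weight):
    # Cast the fp16 weight up to the input's compute dtype on the fly, so
    # storage stays small while the math runs at the requested precision.
    return x @ weight.to(x.dtype).t()

x = torch.randn(1, 77, 768)        # fp32 input, e.g. CPU inference
y = manual_cast_linear(x, weight)  # fp32 result
```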
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of CPU memory.
2023-12-08 02:35:45 -05:00
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to
implement using the transformers implementation.
2023-12-06 23:50:03 -05:00
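For scale, a toy skeleton of what a hand-rolled text model looks like; the real one also needs a causal attention mask and CLIP's quick-GELU, so treat this purely as a shape sketch:

```python
import torch
import torch.nn as nn

class TinyCLIPText(nn.Module):
    def __init__(self, vocab=49408, dim=768, heads=12, layers=2, ctx=77):
        super().__init__()
        self.token_embed = nn.Embedding(vocab, dim)
        self.pos_embed = nn.Parameter(torch.zeros(ctx, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, layers)
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, tokens):
        x = self.token_embed(tokens) + self.pos_embed[: tokens.shape[1]]
        return self.final_norm(self.blocks(x))

out = TinyCLIPText()(torch.randint(0, 49408, (1, 77)))  # (1, 77, 768)
```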
comfyanonymous
be3468ddd5
Less useless downcasting.
2023-12-04 12:53:46 -05:00
comfyanonymous
728613bb3e
Fix last PR.
2023-11-14 14:41:31 -05:00
Jianqi Pan
f2e49b1d57
fix: adaptation to older versions of pytorch
2023-11-14 14:32:05 +09:00
comfyanonymous
656c0b5d90
CLIP code refactor and improvements.
...
A more generic clip model class that can be used with more types of text
encoders.
Don't apply the weighting algorithm when the weight is 1.0.
Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
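The two "Don't" items above combine into a fast path: with every weight at 1.0 the weighted mix is a no-op, so neither the interpolation nor the empty-prompt encode is needed. A sketch of one common weighting scheme (illustrative, not necessarily ComfyUI's exact formula):

```python
import torch

def apply_weights(cond, empty_cond, weights):
    # Fast path: with all weights at 1.0 there is nothing to interpolate,
    # and empty_cond never has to be computed by the caller.
    if all(w == 1.0 for w in weights):
        return cond
    w = torch.tensor(weights).reshape(1, -1, 1)
    # Pull each token's output toward the empty-prompt output.
    return empty_cond + (cond - empty_cond) * w

cond = torch.randn(1, 3, 768)
empty = torch.zeros(1, 3, 768)
apply_weights(cond, empty, [1.0, 1.0, 1.0])  # returned untouched
apply_weights(cond, empty, [1.0, 1.2, 0.8])  # weighted mix
```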
comfyanonymous
b3fcd64c6c
Make SDTokenizer class work with more types of tokenizers.
2023-11-06 01:09:18 -05:00
comfyanonymous
2a134bfab9
Fix checkpoint loader with config.
2023-10-27 22:13:55 -04:00
comfyanonymous
e60ca6929a
SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one.
2023-10-27 15:54:04 -04:00
comfyanonymous
434ce25ec0
Restrict loading embeddings from embedding folders.
2023-10-27 02:54:13 -04:00
comfyanonymous
44361f6344
Support for text encoder models that need attention_mask.
2023-09-15 02:02:05 -04:00
comfyanonymous
fb3b728203
Fix issue where autocast fp32 CLIP gave different results from regular.
2023-09-11 21:49:56 -04:00
comfyanonymous
ec96f6d03a
Move text_projection to base clip model.
2023-08-24 23:43:48 -04:00
comfyanonymous
e3d0a9a490
Fix potential issue with text projection matrix multiplication.
2023-08-24 00:54:16 -04:00
comfyanonymous
00c0b2c507
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
comfyanonymous
f081017c1a
Save memory by storing text encoder weights in fp16 in most situations.
...
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
comfyanonymous
c99d8002f8
Make sure the pooled output stays at the EOS token with added embeddings.
2023-08-03 20:27:50 -04:00
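Context for that fix: stock CLIP pooling typically grabs the hidden state at tokens.argmax(-1), which assumes the EOS id is the largest in the sequence; ids introduced for custom embeddings can violate that. A sketch of pooling from the first real EOS instead:

```python
import torch

def pooled_from_eos(last_hidden, tokens, eos_id):
    # Pool from the first actual EOS position instead of tokens.argmax(),
    # which misfires once ids larger than the EOS id appear in the prompt.
    eos_pos = (tokens == eos_id).int().argmax(dim=-1)
    return last_hidden[torch.arange(last_hidden.shape[0]), eos_pos]

tokens = torch.tensor([[49406, 320, 49407, 0, 0]])
hidden = torch.randn(1, 5, 768)
pooled = pooled_from_eos(hidden, tokens, 49407)  # shape (1, 768)
```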
comfyanonymous
50b1180dde
Fix CLIPSetLastLayer not reverting when removed.
2023-07-15 01:41:21 -04:00
comfyanonymous
46dc050c9f
Fix potential issues with tensors being on different devices.
2023-07-12 19:29:27 -04:00
comfyanonymous
606a537090
Support SDXL embedding format with 2 CLIP.
2023-07-10 10:34:59 -04:00
comfyanonymous
608fcc2591
Fix bug with weights when prompt is long.
2023-07-06 02:43:40 -04:00
comfyanonymous
ce35d8c659
Lower latency by batching some text encoder inputs.
2023-07-01 15:07:39 -04:00
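The batching trick in one picture: concatenate the prompts, run a single forward pass, split the result (the encoder here is a stand-in):

```python
import torch

def encode(tokens):
    # Stand-in for the text encoder forward pass.
    return torch.randn(tokens.shape[0], tokens.shape[1], 768)

pos = torch.randint(0, 49408, (1, 77))
neg = torch.randint(0, 49408, (1, 77))
out = encode(torch.cat([pos, neg]))  # one forward pass instead of two
pos_out, neg_out = out.chunk(2)
```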
comfyanonymous
b6a60fa696
Try to keep text encoders loaded and patched to increase speed.
...
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
comfyanonymous
97ee230682
Make highvram and normalvram shift the text encoders to vram and back.
...
This is faster on big text encoder models than running them on the CPU.
2023-07-01 12:37:23 -04:00
comfyanonymous
9920367d3c
Fix embeddings not working with --gpu-only
2023-06-29 20:43:06 -04:00
comfyanonymous
20f579d91d
Add DualClipLoader to load clip models for SDXL.
...
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
...
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
f7edcfd927
Add a --gpu-only argument to keep and run everything on the GPU.
...
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous
bb1f45d6e8
Properly disable weight initialization in clip models.
2023-06-14 20:13:08 -04:00
comfyanonymous
0c7cad404c
Don't initialize clip weights to default values.
2023-06-14 12:47:36 -04:00
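Both init commits above chase the same waste: randomly initializing weights that a checkpoint immediately overwrites. One modern way to skip it (PyTorch 2.1+, not necessarily what ComfyUI does):

```python
import torch

# Build the module on the "meta" device so no real weight memory is
# allocated or initialized, then materialize it from a state dict.
with torch.device("meta"):
    linear = torch.nn.Linear(768, 768)

state = {"weight": torch.randn(768, 768), "bias": torch.zeros(768)}
linear.load_state_dict(state, assign=True)  # assigns the real tensors
```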
comfyanonymous
23cf8ca7c5
Fix bug where an embedding gets ignored because of mismatched size.
2023-06-08 23:48:14 -04:00
comfyanonymous
af9cc1fb6a
Search recursively in subfolders for embeddings.
2023-05-05 01:28:48 -04:00
comfyanonymous
81d1f00df3
Some refactoring: from_tokens -> encode_from_tokens
2023-04-15 18:46:58 -04:00