Commit Graph

10 Commits

Author SHA1 Message Date
comfyanonymous 47acb3d73e Implement support for the t2i style model.
It needs the CLIPVision model, so I added CLIPVisionLoader and CLIPVisionEncode.

Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models

StyleModelLoader to load it, StyleModelApply to apply it, and
ConditioningAppend to append the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
comfyanonymous 4e6b83a80a Add a T2IAdapterLoader node to load T2I-Adapter models.
They are loaded as CONTROL_NET objects because they are similar to ControlNets.
2023-02-25 01:24:56 -05:00
comfyanonymous 191af3ef71 Add the config for the SD1.x inpainting model. 2023-02-19 14:58:00 -05:00
comfyanonymous 56498d505a Create controlnet directory. 2023-02-16 10:50:30 -05:00
comfyanonymous 59bef84bc8 Add the config for SD2.x inpainting models. 2023-02-15 17:52:34 -05:00
comfyanonymous 1de5aa6a59 Add a CLIPLoader node to load standalone CLIP weights.
Put them in models/clip
2023-02-05 15:20:18 -05:00
comfyanonymous ef90e9c376 Add a LoraLoader node to apply LoRAs to models and CLIP.
The models are patched in place before being used and unpatched afterwards.
I think this is better than monkeypatching, since it might make it easier
to use faster non-PyTorch UNet inference in the future.
2023-02-03 02:46:24 -05:00
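The patch-in-place/unpatch scheme described in the LoraLoader commit above can be sketched roughly as follows. This is a minimal illustration of the idea only, not ComfyUI's actual implementation: the function names (`apply_patches`, `unpatch`) and the plain-list weight representation are hypothetical stand-ins for real tensors and LoRA low-rank deltas.

```python
def apply_patches(weights, patches, strength=1.0):
    """Add scaled deltas to the weights in place; return backups for unpatching.

    `weights` maps layer names to mutable lists of floats (a stand-in for
    tensors); `patches` maps the same names to delta lists of equal length.
    """
    backups = {}
    for key, delta in patches.items():
        backups[key] = list(weights[key])      # snapshot the original values
        for i, d in enumerate(delta):
            weights[key][i] += strength * d    # modify the weight in place
    return backups

def unpatch(weights, backups):
    """Restore the original weights after inference."""
    for key, original in backups.items():
        weights[key][:] = original             # write back in place
```

Because the original values are snapshotted and restored rather than the forward pass being monkeypatched, the same (temporarily modified) weights could in principle be handed to any inference backend, which matches the motivation stated in the commit message.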
comfyanonymous f73e57d881 Add support for textual inversion embedding for SD1.x CLIP. 2023-01-29 18:46:44 -05:00
comfyanonymous 36ec5690a6 Add some more model configs including some to use SD1 models in fp16. 2023-01-28 23:23:49 -05:00
comfyanonymous 220afe3310 Initial commit. 2023-01-16 22:37:14 -05:00