comfyanonymous
3fd87cbd21
Slightly smarter batching behaviour.
Try to keep batch sizes more consistent, which seems to improve things on
AMD GPUs.
2023-02-08 17:28:43 -05:00
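The commit body above only hints at the mechanism. As a rough illustration (not the repository's actual code), "more consistent batch sizes" could mean splitting work into near-equal chunks instead of a run of full batches followed by a small remainder:

```python
import math

def split_evenly(items, max_batch):
    # Split `items` into the fewest chunks that fit under `max_batch`,
    # keeping every chunk close to the same size instead of producing
    # a run of full batches followed by one small leftover batch.
    if not items:
        return []
    num_chunks = math.ceil(len(items) / max_batch)
    base, rem = divmod(len(items), num_chunks)
    chunks, start = [], 0
    for i in range(num_chunks):
        size = base + (1 if i < rem else 0)
        chunks.append(items[start:start + size])
        start += size
    return chunks

# Example: 10 items with max_batch=4 -> chunk sizes [4, 3, 3] instead of [4, 4, 2].
print([len(c) for c in split_evenly(list(range(10)), 4)])
```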
comfyanonymous
bbdcf0b737
Use relative imports for k_diffusion.
2023-02-08 16:51:19 -05:00
comfyanonymous
3e22815a9a
Fix k_diffusion not getting imported from the folder.
2023-02-08 16:29:22 -05:00
comfyanonymous
708138c77d
Remove print.
2023-02-08 14:51:18 -05:00
comfyanonymous
047775615b
Lower the chances of an OOM.
2023-02-08 14:24:27 -05:00
comfyanonymous
853e96ada3
Increase it/s by batching together some stuff sent to unet.
2023-02-08 14:24:00 -05:00
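A minimal sketch of the batching idea behind the it/s gain, assuming a generic `unet(x, t, context)` call: the conditional and unconditional passes used for classifier-free guidance are concatenated and run as one forward pass instead of two. The real code paths differ.

```python
import torch

def cfg_denoise_batched(unet, x, timestep, cond, uncond, cfg_scale):
    # Instead of two separate UNet forward passes (conditional and
    # unconditional), concatenate them along the batch dimension, run a
    # single pass, then split the result for the CFG mix.
    x_in = torch.cat([x, x])
    t_in = torch.cat([timestep, timestep])
    c_in = torch.cat([cond, uncond])
    eps_cond, eps_uncond = unet(x_in, t_in, context=c_in).chunk(2)
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)
```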
comfyanonymous
c92633eaa2
Auto-calculate the amount of memory to use for --lowvram.
2023-02-08 11:42:37 -05:00
comfyanonymous
534736b924
Add some low vram modes: --lowvram and --novram
2023-02-08 11:37:10 -05:00
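The two commits above add --novram/--lowvram and later auto-calculate the memory budget for the low-VRAM path. A hedged sketch of what that selection could look like, using torch.cuda.get_device_properties to read total VRAM; the help strings, fraction, and wiring are illustrative, not the project's actual logic:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--lowvram", action="store_true",
                    help="Keep only part of the model on the GPU at a time.")
parser.add_argument("--novram", action="store_true",
                    help="Keep everything in system RAM; slowest mode.")
args = parser.parse_args()

def vram_budget_bytes(device=0, fraction=0.5):
    # Illustrative auto-calculation: take a fraction of the card's total
    # memory as the budget for model weights in --lowvram mode.
    total = torch.cuda.get_device_properties(device).total_memory
    return int(total * fraction)

if args.novram:
    budget = 0
elif args.lowvram:
    budget = vram_budget_bytes()
else:
    budget = None  # normal mode: load the whole model onto the GPU
```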
comfyanonymous
a84cd0d1ad
Don't unload/reload model from CPU uselessly.
2023-02-08 03:40:43 -05:00
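One reading of "don't unload/reload the model from CPU uselessly" is to remember which model is already resident on the GPU and only move weights when that changes. The class below is an assumption about the approach, not the repository's code:

```python
class ModelCache:
    def __init__(self, device="cuda"):
        self.device = device
        self.current = None  # the model currently resident on the GPU

    def load(self, model):
        if model is self.current:
            return model                 # already on the GPU, nothing to do
        if self.current is not None:
            self.current.to("cpu")       # evict the previous model first
        model.to(self.device)
        self.current = model
        return model
```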
comfyanonymous
e3e65947f2
Add a --help to main.py
2023-02-07 22:13:42 -05:00
comfyanonymous
1f18221e17
Add --port to set custom port.
2023-02-07 21:57:17 -05:00
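Both the --help and --port commits fall out of standard argparse usage; a minimal sketch (the default port and the print are placeholders, not ComfyUI's actual defaults):

```python
import argparse

parser = argparse.ArgumentParser(description="Start the server.")  # --help comes for free
parser.add_argument("--port", type=int, default=8080, help="Port to listen on.")
args = parser.parse_args()

# e.g. run as: python main.py --port 9000
print(f"Listening on port {args.port}")
```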
comfyanonymous
6e40393b6b
Fix delete sometimes not properly refreshing queue state.
2023-02-07 00:07:31 -05:00
comfyanonymous
d71d0c88e5
Add some simple queue management to the GUI.
2023-02-06 23:40:38 -05:00
comfyanonymous
b1a7c9ebf6
Embeddings/textual inversion support for SD2.x
2023-02-05 15:49:03 -05:00
comfyanonymous
1de5aa6a59
Add a CLIPLoader node to load standalone clip weights.
Put them in models/clip
2023-02-05 15:20:18 -05:00
comfyanonymous
56d802e1f3
Use transformers CLIP instead of open_clip for SD2.x
This should make things a bit cleaner.
2023-02-05 14:36:28 -05:00
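Switching the SD2.x text encoder from open_clip to transformers' CLIP roughly means loading a CLIPTextModel/CLIPTokenizer pair and feeding the hidden states to the unet as conditioning. The checkpoint name below is only an example; SD2.x actually uses an OpenCLIP ViT-H based text encoder, so the real config differs:

```python
from transformers import CLIPTextModel, CLIPTokenizer

# Example checkpoint name, not necessarily what the project loads.
name = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(name)
text_model = CLIPTextModel.from_pretrained(name)

tokens = tokenizer(["a photo of a cat"], return_tensors="pt", padding=True)
hidden = text_model(**tokens).last_hidden_state  # conditioning fed to the unet
```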
comfyanonymous
bf9ccffb17
Small fix for SD2.x loras.
2023-02-05 11:38:25 -05:00
comfyanonymous
678105fade
SD2.x CLIP support for Loras.
2023-02-05 01:54:09 -05:00
comfyanonymous
3f3d77a324
Fix image node always executing instead of only when the image changed.
2023-02-04 16:08:29 -05:00
comfyanonymous
4225d1cb9f
Add a basic ImageScale node.
It's pretty much the same as the LatentUpscale node for now but for images
in pixel space.
2023-02-04 16:01:01 -05:00
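A minimal sketch of what an ImageScale-style node can look like under ComfyUI's node conventions (INPUT_TYPES / RETURN_TYPES / FUNCTION); the class, field names, and options here are illustrative rather than the real node:

```python
import torch

class ImageScaleExample:
    # Hypothetical node, shaped like ComfyUI's custom node convention.
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "image": ("IMAGE",),
            "width": ("INT", {"default": 512, "min": 1, "max": 8192}),
            "height": ("INT", {"default": 512, "min": 1, "max": 8192}),
        }}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "upscale"
    CATEGORY = "image"

    def upscale(self, image, width, height):
        # ComfyUI image tensors are [batch, height, width, channels];
        # interpolate expects channels-first, so permute around the resize.
        x = image.permute(0, 3, 1, 2)
        x = torch.nn.functional.interpolate(x, size=(height, width), mode="nearest")
        return (x.permute(0, 2, 3, 1),)
```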
comfyanonymous
bff0e11941
Add a LatentCrop node.
2023-02-04 15:21:46 -05:00
comfyanonymous
43c795f462
Add a --listen argument to listen on 0.0.0.0
2023-02-04 12:01:53 -05:00
comfyanonymous
41a7532c15
A bit bigger.
2023-02-03 13:56:00 -05:00
comfyanonymous
7bc3f91bd6
Add some instructions on how to use the venv from another SD install.
2023-02-03 13:54:45 -05:00
comfyanonymous
149a4de3f2
Fix potential issue if exception happens when patching model.
2023-02-03 03:55:50 -05:00
comfyanonymous
ef90e9c376
Add a LoraLoader node to apply loras to models and clip.
The models are modified in place before being used and unpatched afterwards.
I think this is better than monkeypatching since it might make it easier
to use faster non-PyTorch unet inference in the future.
2023-02-03 02:46:24 -05:00
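The commit body describes patching weights in place and unpatching afterwards. A hedged sketch of that pattern: keep a backup of each touched weight, add the low-rank delta, restore later. The real LoraLoader key mapping is considerably more involved:

```python
import torch

class WeightPatcher:
    def __init__(self, model):
        self.model = model
        self.backup = {}

    def patch_lora(self, name, up, down, alpha=1.0):
        # W <- W + alpha * (up @ down): modify the weight in place and keep a
        # copy so the model can be restored to its original state afterwards.
        weight = dict(self.model.named_parameters())[name]
        self.backup[name] = weight.detach().clone()
        with torch.no_grad():
            weight += alpha * (up @ down).reshape(weight.shape)

    def unpatch(self):
        # Put every patched weight back exactly as it was.
        with torch.no_grad():
            for name, original in self.backup.items():
                dict(self.model.named_parameters())[name].copy_(original)
        self.backup = {}
```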
comfyanonymous
96664f5d5e
Web interface bug fix for multiple inputs from the same node.
2023-02-03 00:39:28 -05:00
comfyanonymous
1d84a44b08
Fix some small annoyances with the UI.
2023-02-02 14:36:11 -05:00
comfyanonymous
e65a20e62a
Add a button to queue prompts to the front of the queue.
2023-02-01 22:34:59 -05:00
comfyanonymous
4b08314257
Add more features to the backend queue code.
The queue can now be queried, entries can be deleted, and prompts can easily
be queued to the front of the queue.
Just need to expose it in the UI next.
2023-02-01 22:33:10 -05:00
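The features listed above (query, delete, queue to the front) map onto a small thread-safe queue; the class below is an illustrative sketch, not the server's actual implementation:

```python
import threading
from collections import deque

class PromptQueue:
    def __init__(self):
        self.lock = threading.Lock()
        self.items = deque()  # (prompt_id, prompt) pairs, front = next to run

    def put(self, prompt_id, prompt, front=False):
        with self.lock:
            if front:
                self.items.appendleft((prompt_id, prompt))  # "queue to the front"
            else:
                self.items.append((prompt_id, prompt))

    def delete(self, prompt_id):
        with self.lock:
            self.items = deque(i for i in self.items if i[0] != prompt_id)

    def snapshot(self):
        # Query the current queue state, e.g. to refresh the GUI.
        with self.lock:
            return list(self.items)
```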
comfyanonymous
9d611a90e8
Small web interface fixes.
2023-01-31 03:37:34 -05:00
comfyanonymous
fef41d0a72
Add LatentComposite node.
This can be used to "paste" one latent image on top of another.
2023-01-31 03:35:03 -05:00
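Pasting one latent on top of another is essentially a slice assignment in latent space. A minimal sketch, assuming both latents are [batch, channels, height, width] tensors and ignoring any blending the real node may do:

```python
def composite_latents(destination, source, x, y):
    # Paste `source` onto `destination` at offset (x, y), clipping the source
    # if it runs past the destination's edges. Latents are [B, C, H, W].
    out = destination.clone()
    h = max(0, min(source.shape[2], out.shape[2] - y))
    w = max(0, min(source.shape[3], out.shape[3] - x))
    out[:, :, y:y + h, x:x + w] = source[:, :, :h, :w]
    return out
```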
comfyanonymous
3fa009f4cc
Add a LatentFlip node.
2023-01-31 03:28:38 -05:00
comfyanonymous
69df7eba94
Add KSamplerAdvanced node.
This node exposes more sampling options and makes it possible, for example,
to sample the first few steps on the latent image, do some operations on it,
and then do the rest of the sampling steps. This can be achieved using the
start_at_step and end_at_step options.
2023-01-31 03:09:38 -05:00
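start_at_step and end_at_step amount to slicing the noise schedule so one sampler node runs only part of it and another finishes the rest. A simplified, runnable sketch of that slicing (the schedule values are arbitrary stand-ins):

```python
import torch

def split_schedule(sigmas, start_at_step, end_at_step):
    # KSamplerAdvanced-style slicing of a noise schedule: a given node only
    # samples the steps in [start_at_step, end_at_step].
    return sigmas[start_at_step:end_at_step + 1]

full = torch.linspace(14.6, 0.0, 31)        # stand-in schedule for 30 steps
first_half = split_schedule(full, 0, 12)    # sample steps 0..12, keep leftover noise
# ...the latent can be edited between the two nodes (crop, composite, etc.)...
second_half = split_schedule(full, 12, 30)  # a second node finishes steps 12..30
```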
comfyanonymous
f8f165e2c3
Add a LatentRotate node.
2023-01-31 02:28:07 -05:00
comfyanonymous
1daccf3678
Run softmax in place if it OOMs.
2023-01-30 19:55:01 -05:00
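One way to read "run softmax in place if it OOMs": try the normal softmax, and on a CUDA out-of-memory error redo it with in-place ops so no second attention-sized buffer is allocated. A sketch of that pattern, with the surrounding attention code omitted:

```python
import torch

def softmax_lowmem_fallback(scores):
    # `scores` holds the attention logits; softmax over the last dim either way.
    try:
        return torch.softmax(scores, dim=-1)
    except RuntimeError as e:  # CUDA OOM surfaces as a RuntimeError in older torch
        if "out of memory" not in str(e):
            raise
        torch.cuda.empty_cache()
        # Manual in-place softmax: subtract the row max for stability, then
        # exponentiate and normalize without allocating a second large tensor.
        scores -= scores.max(dim=-1, keepdim=True).values
        scores.exp_()
        scores /= scores.sum(dim=-1, keepdim=True)
        return scores
```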
comfyanonymous
0d8ad93852
Add link to examples github page.
2023-01-30 01:09:35 -05:00
comfyanonymous
f73e57d881
Add support for textual inversion embedding for SD1.x CLIP.
2023-01-29 18:46:44 -05:00
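Textual inversion support boils down to loading learned embedding vectors from disk and splicing them into the CLIP token embedding sequence where the placeholder token appears. The sketch below makes assumptions about the file layout (the "string_to_param" key is one common convention; formats vary per trainer) and uses hypothetical helper names:

```python
import torch

def load_embedding(path):
    # Hedged loader: one common layout stores the vectors under
    # "string_to_param"; other files are just the tensor itself.
    data = torch.load(path, map_location="cpu")
    if isinstance(data, dict) and "string_to_param" in data:
        return next(iter(data["string_to_param"].values()))
    return data

def splice_embedding(token_embeds, position, vectors):
    # Replace the placeholder token's embedding at `position` with the learned
    # vectors; token_embeds is [sequence_len, hidden_dim], vectors [n, hidden_dim].
    return torch.cat([token_embeds[:position], vectors, token_embeds[position + 1:]], dim=0)
```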
comfyanonymous
702ac43d0c
Readme formatting.
2023-01-29 13:23:57 -05:00
comfyanonymous
da6f56235b
Add section to readme explaining how to get better speeds.
2023-01-29 13:15:03 -05:00
comfyanonymous
3661e10648
Add a command line option to disable upcasting in some cross attention ops.
2023-01-29 13:12:22 -05:00
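Upcasting here means computing the attention logits and softmax in fp32 even when the model runs in fp16, trading speed for numerical stability; the new option lets users turn that off. A sketch of the gate, with illustrative wiring rather than the project's actual flag handling:

```python
import torch

def attention_scores(q, k, upcast=True):
    # With upcasting, the matmul that produces the attention logits runs in
    # float32 to avoid fp16 overflow; without it, everything stays in fp16.
    if upcast:
        q, k = q.float(), k.float()
    scale = q.shape[-1] ** -0.5
    return torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
```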
comfyanonymous
50db297cf6
Try to fix OOM issues with cards that have less vram than mine.
2023-01-29 00:50:46 -05:00
comfyanonymous
36ec5690a6
Add some more model configs including some to use SD1 models in fp16.
2023-01-28 23:23:49 -05:00
comfyanonymous
484b957c7a
Quick fix for chrome issue.
2023-01-28 12:43:43 -05:00
comfyanonymous
2706c0b7a5
Some VAEs come in .pt files.
2023-01-28 12:28:29 -05:00
comfyanonymous
d133cf4f06
Added some AMD stuff to readme.
2023-01-28 04:06:25 -05:00
comfyanonymous
73f60740c8
Slightly cleaner code.
2023-01-28 02:14:22 -05:00
comfyanonymous
0108616b77
Fix issue with some models.
2023-01-28 01:38:42 -05:00
comfyanonymous
2973ff24c5
Round CLIP position ids to fix float issues in some checkpoints.
2023-01-28 00:19:33 -05:00
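Some checkpoints store CLIP's position_ids as floats that drift slightly from whole numbers; rounding them before the cast restores the intended indices. A sketch of such a fix applied to a state dict (the key name is an example based on the usual transformers CLIP layout):

```python
import torch

def fix_position_ids(state_dict, key="text_model.embeddings.position_ids"):
    # Round float position ids (e.g. 76.9999) back to exact integer indices
    # before they are used for the embedding lookup.
    if key in state_dict:
        state_dict[key] = state_dict[key].round().to(torch.int64)
    return state_dict
```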
comfyanonymous
e615d40ca1
Fix UI annoyance with multiline textboxes sometimes getting stuck.
2023-01-27 23:33:27 -05:00