Commit Graph

2834 Commits

Author SHA1 Message Date
comfyanonymous 773cdabfce Same OutOfMemoryError fix, applied to the other places where it's used. 2023-02-09 12:43:29 -05:00
comfyanonymous df40d4f3bf torch.cuda.OutOfMemoryError is not present on older pytorch versions. 2023-02-09 12:33:27 -05:00
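The usual fix for this is a small compatibility shim; a minimal sketch, assuming the constant name `OOM_EXCEPTION` (the actual guard in the codebase may differ):

```python
import torch

# torch.cuda.OutOfMemoryError only exists in newer PyTorch releases,
# so fall back to the broader RuntimeError it subclasses when absent.
OOM_EXCEPTION = getattr(torch.cuda, "OutOfMemoryError", RuntimeError)
```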
comfyanonymous 1d9ec62cfb Use absolute output directory path. 2023-02-09 09:59:43 -05:00
comfyanonymous 05d571fe7f Merge branch 'master' of https://github.com/bazettfraga/ComfyUI into merge_pr2 2023-02-09 00:44:38 -05:00
comfyanonymous e8c499ddd4 Split optimization for VAE attention block. 2023-02-08 22:04:20 -05:00
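The commit doesn't show the code, but split attention generally looks like this sketch (function name, chunk size, and tensor layout are assumptions): compute the softmax(QK^T)V product over slices of the queries so the full attention matrix never has to fit in memory at once.

```python
import torch

# Chunked attention sketch: q, k, v are (batch, tokens, channels).
def split_attention(q, k, v, chunk=1024):
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for i in range(0, q.shape[1], chunk):
        # Attention matrix for this slice of queries only.
        scores = torch.bmm(q[:, i:i + chunk], k.transpose(1, 2)) * scale
        out[:, i:i + chunk] = torch.bmm(torch.softmax(scores, dim=-1), v)
    return out
```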
comfyanonymous 5b4e312749 Use inplace operations for less OOM issues. 2023-02-08 22:04:13 -05:00
BazettFraga e58887dfa7 Forgot Windows doubles backslashes in paths because backslash is its escape character. 2023-02-09 01:30:06 +01:00
BazettFraga 81082045c2 Add recursive_search, swap relevant os.listdir calls. 2023-02-09 01:22:33 +01:00
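A minimal sketch of what a `recursive_search` replacing flat `os.listdir` calls might look like (the helper name is from the commit; the body is an assumption):

```python
import os

def recursive_search(directory):
    found = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            # os.path.join uses the platform separator, which on Windows
            # is the backslash that doubles as the string escape character.
            found.append(os.path.relpath(os.path.join(root, name), directory))
    return found
```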
comfyanonymous 3fd87cbd21 Slightly smarter batching behaviour.
Try to keep batch sizes more consistent, which seems to improve performance on
AMD GPUs.
2023-02-08 17:28:43 -05:00
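A sketch of the idea, assuming nothing about the real implementation: instead of one full batch plus a small remainder (e.g. 8 + 1), split the work into near-equal batches (5 + 4).

```python
def split_consistent(items, max_batch_size):
    # Keep batch sizes near-equal rather than max-sized with a remainder.
    if not items:
        return []
    n_batches = -(-len(items) // max_batch_size)  # ceil division
    size = -(-len(items) // n_batches)            # near-equal batch size
    return [items[i:i + size] for i in range(0, len(items), size)]
```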
comfyanonymous bbdcf0b737 Use relative imports for k_diffusion. 2023-02-08 16:51:19 -05:00
comfyanonymous 3e22815a9a Fix k_diffusion not getting imported from the folder. 2023-02-08 16:29:22 -05:00
comfyanonymous 708138c77d Remove print. 2023-02-08 14:51:18 -05:00
comfyanonymous 047775615b Lower the chances of an OOM. 2023-02-08 14:24:27 -05:00
comfyanonymous 853e96ada3 Increase it/s by batching together some stuff sent to unet. 2023-02-08 14:24:00 -05:00
comfyanonymous c92633eaa2 Auto-calculate the amount of memory to use for --lowvram 2023-02-08 11:42:37 -05:00
comfyanonymous 534736b924 Add some low vram modes: --lowvram and --novram 2023-02-08 11:37:10 -05:00
comfyanonymous a84cd0d1ad Don't needlessly unload/reload the model from CPU. 2023-02-08 03:40:43 -05:00
comfyanonymous e3e65947f2 Add a --help to main.py 2023-02-07 22:13:42 -05:00
comfyanonymous 1f18221e17 Add --port to set custom port. 2023-02-07 21:57:17 -05:00
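Taken together with the --listen and VRAM flags elsewhere in this log, the command line surface sketches out roughly like this (defaults and help strings paraphrase the commit messages, not the real main.py):

```python
import argparse

parser = argparse.ArgumentParser(description="ComfyUI")  # --help comes free
parser.add_argument("--port", type=int, default=8188, help="set a custom port")
parser.add_argument("--listen", action="store_true", help="listen on 0.0.0.0")
parser.add_argument("--lowvram", action="store_true", help="use less vram")
parser.add_argument("--novram", action="store_true", help="keep weights off the GPU")
args = parser.parse_args()

host = "0.0.0.0" if args.listen else "127.0.0.1"
```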
comfyanonymous 6e40393b6b Fix delete sometimes not properly refreshing queue state. 2023-02-07 00:07:31 -05:00
comfyanonymous d71d0c88e5 Add some simple queue management to the GUI. 2023-02-06 23:40:38 -05:00
comfyanonymous b1a7c9ebf6 Embeddings/textual inversion support for SD2.x 2023-02-05 15:49:03 -05:00
comfyanonymous 1de5aa6a59 Add a CLIPLoader node to load standalone CLIP weights.
Put them in models/clip.
2023-02-05 15:20:18 -05:00
comfyanonymous 56d802e1f3 Use transformers CLIP instead of open_clip for SD2.x
This should make things a bit cleaner.
2023-02-05 14:36:28 -05:00
comfyanonymous bf9ccffb17 Small fix for SD2.x LoRAs. 2023-02-05 11:38:25 -05:00
comfyanonymous 678105fade SD2.x CLIP support for LoRAs. 2023-02-05 01:54:09 -05:00
comfyanonymous 3f3d77a324 Fix image node always executing instead of only when the image changed. 2023-02-04 16:08:29 -05:00
comfyanonymous 4225d1cb9f Add a basic ImageScale node.
It's pretty much the same as the LatentUpscale node for now but for images
in pixel space.
2023-02-04 16:01:01 -05:00
comfyanonymous bff0e11941 Add a LatentCrop node. 2023-02-04 15:21:46 -05:00
comfyanonymous 43c795f462 Add a --listen argument to listen on 0.0.0.0 2023-02-04 12:01:53 -05:00
comfyanonymous 41a7532c15 A bit bigger. 2023-02-03 13:56:00 -05:00
comfyanonymous 7bc3f91bd6 Add some instructions on how to use the venv from another SD install. 2023-02-03 13:54:45 -05:00
comfyanonymous 149a4de3f2 Fix potential issue if exception happens when patching model. 2023-02-03 03:55:50 -05:00
comfyanonymous ef90e9c376 Add a LoraLoader node to apply LoRAs to models and CLIP.
The models are modified in place before being used and unpatched afterwards.
I think this is better than monkeypatching since it might make it easier
to use faster non-PyTorch unet inference in the future.
2023-02-03 02:46:24 -05:00
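A rough sketch of the in-place patch/unpatch pattern the message describes (class name and details are made up):

```python
class ModelPatcher:
    """Apply LoRA weight deltas in place, keeping backups to undo them."""

    def __init__(self, model):
        self.model = model
        self.backup = {}

    def patch(self, deltas):  # deltas: {weight_name: delta_tensor}
        state = self.model.state_dict()
        for key, delta in deltas.items():
            self.backup[key] = state[key].clone()
            state[key] += delta  # modify the real weight in place

    def unpatch(self):
        state = self.model.state_dict()
        for key, saved in self.backup.items():
            state[key].copy_(saved)  # restore the original weight
        self.backup = {}
```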
comfyanonymous 96664f5d5e Web interface bug fix for multiple inputs from the same node. 2023-02-03 00:39:28 -05:00
comfyanonymous 1d84a44b08 Fix some small annoyances with the UI. 2023-02-02 14:36:11 -05:00
comfyanonymous e65a20e62a Add a button to queue prompts to the front of the queue. 2023-02-01 22:34:59 -05:00
comfyanonymous 4b08314257 Add more features to the backend queue code.
The queue can now be queried, entries can be deleted, and prompts can easily be
queued to the front of the queue.

Just need to expose it in the UI next.
2023-02-01 22:33:10 -05:00
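A sketch of those operations with a thread-safe deque (names and locking are assumptions, not the actual backend):

```python
import threading
from collections import deque

class PromptQueue:
    def __init__(self):
        self._queue = deque()
        self._lock = threading.Lock()

    def put(self, prompt, front=False):
        with self._lock:
            if front:
                self._queue.appendleft(prompt)  # jump the line
            else:
                self._queue.append(prompt)

    def get_state(self):
        with self._lock:
            return list(self._queue)  # snapshot the UI can query

    def delete(self, index):
        with self._lock:
            del self._queue[index]
```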
comfyanonymous 9d611a90e8 Small web interface fixes. 2023-01-31 03:37:34 -05:00
comfyanonymous fef41d0a72 Add LatentComposite node.
This can be used to "paste" one latent image on top of the other.
2023-01-31 03:35:03 -05:00
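The paste amounts to a slice assignment, assuming NCHW latent tensors (the real node's argument handling is more involved):

```python
import torch

def composite(destination, source, x, y):
    out = destination.clone()
    h, w = source.shape[2], source.shape[3]
    out[:, :, y:y + h, x:x + w] = source  # paste source at (x, y)
    return out
```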
comfyanonymous 3fa009f4cc Add a LatentFlip node. 2023-01-31 03:28:38 -05:00
comfyanonymous 69df7eba94 Add KSamplerAdvanced node.
This node exposes more sampling options and makes it possible, for example,
to sample the first few steps on the latent image, do some operations on it,
and then do the rest of the sampling steps. This can be achieved using the
start_at_step and end_at_step options.
2023-01-31 03:09:38 -05:00
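A sketch of that two-stage flow; `sample` here is a placeholder for whatever the node calls internally, not a real API:

```python
def sample_in_two_stages(sample, model, latent, steps=20, split=10):
    # First stage: run only the early steps on the latent image.
    latent = sample(model, latent, steps=steps, start_at_step=0, end_at_step=split)
    # ...do some operation on the partial result here (flip, rotate, ...)
    # Second stage: resume from where the first stage stopped.
    return sample(model, latent, steps=steps, start_at_step=split, end_at_step=steps)
```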
comfyanonymous f8f165e2c3 Add a LatentRotate node. 2023-01-31 02:28:07 -05:00
comfyanonymous 1daccf3678 Run softmax in place if it OOMs. 2023-01-30 19:55:01 -05:00
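A sketch of that fallback (function name assumed; the except clause needs a recent PyTorch for torch.cuda.OutOfMemoryError, as the 2023-02-09 commit above notes):

```python
import torch

def softmax_or_inplace(scores):
    try:
        return torch.softmax(scores, dim=-1)
    except torch.cuda.OutOfMemoryError:
        # Redo the softmax in place so no second tensor is allocated.
        scores -= scores.max(dim=-1, keepdim=True).values  # for stability
        scores.exp_()
        scores /= scores.sum(dim=-1, keepdim=True)
        return scores
```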
comfyanonymous 0d8ad93852 Add link to examples github page. 2023-01-30 01:09:35 -05:00
comfyanonymous f73e57d881 Add support for textual inversion embedding for SD1.x CLIP. 2023-01-29 18:46:44 -05:00
comfyanonymous 702ac43d0c Readme formatting. 2023-01-29 13:23:57 -05:00
comfyanonymous da6f56235b Add section to readme explaining how to get better speeds. 2023-01-29 13:15:03 -05:00
comfyanonymous 3661e10648 Add a command line option to disable upcasting in some cross attention ops. 2023-01-29 13:12:22 -05:00
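What "upcasting" means here, as a sketch with assumed names: compute the attention scores in fp32 for precision, unless the flag opts out for speed.

```python
import torch

def attention_scores(q, k, disable_upcast=False):
    if not disable_upcast:
        q, k = q.float(), k.float()  # upcast fp16 inputs to fp32
    return q @ k.transpose(-2, -1) * (q.shape[-1] ** -0.5)
```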
comfyanonymous 50db297cf6 Try to fix OOM issues with cards that have less vram than mine. 2023-01-29 00:50:46 -05:00