comfyanonymous
3fd6b7027c
Support dark mode in GUI.
2023-02-12 13:29:34 -05:00
comfyanonymous
05ad64d22c
Add a feather option to the latent composite node.
2023-02-12 13:01:52 -05:00
pythongosssss
9d1edfde1f
Changed polling to use websockets
2023-02-12 15:54:22 +00:00
pythongosssss
5d14e9b959
Changed HTTP Server + Added WebSockets
Moved the existing API endpoints to use aiohttp and added websocket notifications
2023-02-12 15:53:48 +00:00
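The idea behind the websocket change above is push instead of poll: the server notifies connected clients when queue state changes, rather than clients re-requesting status on a timer. A minimal stdlib sketch of that pattern (hypothetical names, not ComfyUI's actual API, which uses aiohttp websockets):

```python
import asyncio

class NotificationHub:
    # Toy stand-in for the server side: one queue per connected client,
    # and a broadcast that pushes events to all of them.
    def __init__(self):
        self._clients = set()

    def connect(self):
        q = asyncio.Queue()
        self._clients.add(q)
        return q

    def disconnect(self, q):
        self._clients.discard(q)

    def broadcast(self, event):
        # Push the event to every connected client; no polling needed.
        for q in self._clients:
            q.put_nowait(event)

async def demo():
    hub = NotificationHub()
    client = hub.connect()
    hub.broadcast({"type": "status", "queue_remaining": 2})
    return await client.get()

print(asyncio.run(demo()))
```

In the real server each client queue would be drained by a websocket send loop instead of returned directly.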
comfyanonymous
f542f248f1
Show the right amount of steps in the progress bar for uni_pc.
The extra step doesn't actually call the unet so it doesn't belong in
the progress bar.
2023-02-11 14:59:42 -05:00
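The off-by-one described above is common in samplers that iterate over pairs of adjacent timesteps: there is one more timestep than there are actual unet calls, so using the timestep count as the progress-bar total leaves the bar one step short of full. A sketch of the counting logic (hypothetical function name):

```python
def progress_total(timesteps):
    # Samplers like uni_pc step between adjacent timesteps, so the number
    # of denoising (unet) calls is one less than the number of timesteps.
    # The final timestep does not call the unet and should not be counted.
    return max(len(timesteps) - 1, 0)

# e.g. 21 timesteps -> 20 unet calls shown in the progress bar
assert progress_total(range(21)) == 20
```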
comfyanonymous
f10b8948c3
768-v support for uni_pc sampler.
2023-02-11 04:34:58 -05:00
comfyanonymous
ce0aeb109e
Remove print.
2023-02-11 03:41:40 -05:00
comfyanonymous
5489d5af04
Add uni_pc sampler to KSampler* nodes.
2023-02-11 03:34:09 -05:00
comfyanonymous
1a4edd19cd
Fix overflow issue with inplace softmax.
2023-02-10 11:47:41 -05:00
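The standard fix for softmax overflow (likely what the commit above applies, though the details are not shown here) is to subtract the row maximum before exponentiating; softmax is shift-invariant, so the result is unchanged while every exponent becomes <= 0. A plain-Python sketch:

```python
import math

def softmax_stable(xs):
    # exp(x) overflows quickly in low precision (float32 above roughly 88,
    # fp16 above roughly 11). Subtracting the max keeps all exponents <= 0
    # without changing the result.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# A naive exp(1000.0) would overflow; the shifted version is fine.
probs = softmax_stable([1000.0, 1001.0, 1002.0])
```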
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2023-02-10 03:13:49 -05:00
comfyanonymous
7e1e193f39
Automatically enable lowvram mode if vram is less than 4GB.
Use: --normalvram to disable it.
2023-02-10 00:47:56 -05:00
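The selection logic above can be sketched as a small decision function. This is an illustration of the rule as the commit states it, not ComfyUI's actual code; the real implementation would query the device (e.g. via `torch.cuda.get_device_properties`) for the total VRAM:

```python
def pick_vram_mode(total_vram_mb, normalvram_flag=False):
    # Hypothetical sketch: devices reporting less than 4GB of VRAM default
    # to low-VRAM mode, unless the user opts out with --normalvram.
    if normalvram_flag:
        return "normal"
    return "lowvram" if total_vram_mb < 4096 else "normal"

assert pick_vram_mode(2048) == "lowvram"
assert pick_vram_mode(2048, normalvram_flag=True) == "normal"
assert pick_vram_mode(8192) == "normal"
```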
comfyanonymous
e9d3ac2ba0
Typo.
2023-02-09 16:31:54 -05:00
comfyanonymous
9ae4e1b0ff
Add a list of features to the Readme.
2023-02-09 16:27:52 -05:00
comfyanonymous
324273fff2
Fix embedding not working when on new line.
2023-02-09 14:12:02 -05:00
comfyanonymous
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2023-02-09 13:47:36 -05:00
BazettFraga
642516a3a6
create output dir if none is present
2023-02-09 12:49:31 -05:00
comfyanonymous
773cdabfce
Same thing but for the other places where it's used.
2023-02-09 12:43:29 -05:00
comfyanonymous
df40d4f3bf
torch.cuda.OutOfMemoryError is not present on older pytorch versions.
2023-02-09 12:33:27 -05:00
comfyanonymous
1d9ec62cfb
Use absolute output directory path.
2023-02-09 09:59:43 -05:00
comfyanonymous
05d571fe7f
Merge branch 'master' of https://github.com/bazettfraga/ComfyUI into merge_pr2
2023-02-09 00:44:38 -05:00
comfyanonymous
e8c499ddd4
Split optimization for VAE attention block.
2023-02-08 22:04:20 -05:00
comfyanonymous
5b4e312749
Use inplace operations for less OOM issues.
2023-02-08 22:04:13 -05:00
BazettFraga
e58887dfa7
Forgot Windows uses double backslashes for paths due to backslash being the escape char.
2023-02-09 01:30:06 +01:00
BazettFraga
81082045c2
add recursive_search, swap relevant os.listdirs
2023-02-09 01:22:33 +01:00
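The two commits above replace flat `os.listdir` calls with a recursive search and then fix the Windows path-separator fallout. A sketch of how such a helper might look (illustrative, not ComfyUI's actual `recursive_search`), normalizing to forward slashes to sidestep the backslash escape issue:

```python
import os

def recursive_search(directory):
    # Walk the whole tree and return file paths relative to the root,
    # instead of only the top level that os.listdir sees.
    found = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), directory)
            # os.walk yields "\\"-separated paths on Windows; normalize.
            found.append(rel.replace("\\", "/"))
    return sorted(found)
```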
comfyanonymous
3fd87cbd21
Slightly smarter batching behaviour.
Try to keep batch sizes more consistent which seems to improve things on
AMD GPUs.
2023-02-08 17:28:43 -05:00
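One way to make batch sizes "more consistent" in the sense the commit above describes is to split the work into the same number of batches a greedy split would use, but with sizes differing by at most one, avoiding a large batch followed by a tiny remainder. This is a hypothetical sketch of that idea, not the actual implementation:

```python
import math

def even_batches(n_items, max_batch):
    # e.g. 10 items with max batch 4 -> [4, 3, 3] instead of [4, 4, 2].
    if n_items == 0:
        return []
    num_batches = math.ceil(n_items / max_batch)
    base, extra = divmod(n_items, num_batches)
    # 'extra' batches get one more item than the rest.
    return [base + 1] * extra + [base] * (num_batches - extra)

assert even_batches(10, 4) == [4, 3, 3]
```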
comfyanonymous
bbdcf0b737
Use relative imports for k_diffusion.
2023-02-08 16:51:19 -05:00
comfyanonymous
3e22815a9a
Fix k_diffusion not getting imported from the folder.
2023-02-08 16:29:22 -05:00
comfyanonymous
708138c77d
Remove print.
2023-02-08 14:51:18 -05:00
comfyanonymous
047775615b
Lower the chances of an OOM.
2023-02-08 14:24:27 -05:00
comfyanonymous
853e96ada3
Increase it/s by batching together some stuff sent to unet.
2023-02-08 14:24:00 -05:00
comfyanonymous
c92633eaa2
Auto calculate amount of memory to use for --lowvram
2023-02-08 11:42:37 -05:00
comfyanonymous
534736b924
Add some low vram modes: --lowvram and --novram
2023-02-08 11:37:10 -05:00
comfyanonymous
a84cd0d1ad
Don't unload/reload model from CPU uselessly.
2023-02-08 03:40:43 -05:00
comfyanonymous
e3e65947f2
Add a --help to main.py
2023-02-07 22:13:42 -05:00
comfyanonymous
1f18221e17
Add --port to set custom port.
2023-02-07 21:57:17 -05:00
comfyanonymous
6e40393b6b
Fix delete sometimes not properly refreshing queue state.
2023-02-07 00:07:31 -05:00
comfyanonymous
d71d0c88e5
Add some simple queue management to the GUI.
2023-02-06 23:40:38 -05:00
comfyanonymous
b1a7c9ebf6
Embeddings/textual inversion support for SD2.x
2023-02-05 15:49:03 -05:00
comfyanonymous
1de5aa6a59
Add a CLIPLoader node to load standalone clip weights.
Put them in models/clip
2023-02-05 15:20:18 -05:00
comfyanonymous
56d802e1f3
Use transformers CLIP instead of open_clip for SD2.x
This should make things a bit cleaner.
2023-02-05 14:36:28 -05:00
comfyanonymous
bf9ccffb17
Small fix for SD2.x loras.
2023-02-05 11:38:25 -05:00
comfyanonymous
678105fade
SD2.x CLIP support for Loras.
2023-02-05 01:54:09 -05:00
comfyanonymous
3f3d77a324
Fix image node always executing instead of only when the image changed.
2023-02-04 16:08:29 -05:00
comfyanonymous
4225d1cb9f
Add a basic ImageScale node.
It's pretty much the same as the LatentUpscale node for now but for images
in pixel space.
2023-02-04 16:01:01 -05:00
comfyanonymous
bff0e11941
Add a LatentCrop node.
2023-02-04 15:21:46 -05:00
comfyanonymous
43c795f462
Add a --listen argument to listen on 0.0.0.0
2023-02-04 12:01:53 -05:00
comfyanonymous
41a7532c15
A bit bigger.
2023-02-03 13:56:00 -05:00
comfyanonymous
7bc3f91bd6
Add some instructions on how to use the venv from another SD install.
2023-02-03 13:54:45 -05:00
comfyanonymous
149a4de3f2
Fix potential issue if exception happens when patching model.
2023-02-03 03:55:50 -05:00
comfyanonymous
ef90e9c376
Add a LoraLoader node to apply loras to models and clip.
The models are modified in place before being used and unpatched after.
I think this is better than monkeypatching since it might make it easier
to use faster non-pytorch unet inference in the future.
2023-02-03 02:46:24 -05:00
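The patch-in-place scheme described above can be sketched with plain numbers standing in for weight tensors: deltas are added directly to the weights before sampling, the saved originals are restored afterwards, and no wrapper or monkeypatched forward() is involved. Names here are illustrative, not ComfyUI's actual API:

```python
class PatchableModel:
    def __init__(self, weights):
        self.weights = dict(weights)
        self._backup = {}

    def patch(self, deltas, strength=1.0):
        # Save the originals, then modify the weights in place.
        for key, delta in deltas.items():
            self._backup[key] = self.weights[key]
            self.weights[key] = self.weights[key] + strength * delta

    def unpatch(self):
        # Restore the saved weights after sampling.
        for key, original in self._backup.items():
            self.weights[key] = original
        self._backup.clear()

model = PatchableModel({"unet.w": 1.0})
model.patch({"unet.w": 0.5}, strength=2.0)
# weights are 2.0 while sampling runs; unpatch() restores 1.0
model.unpatch()
```

Because the modified weights are ordinary tensors rather than wrapped modules, the same patched model could in principle be handed to a non-pytorch inference backend, which matches the commit's stated motivation.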