realazthat
1b3d65bd84
Add error, status to /history endpoint
2024-01-11 10:16:42 -05:00
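A minimal sketch of querying the updated /history endpoint; the default host/port and the exact layout of the new status/error fields are assumptions, not taken from the commit:

```python
import json, urllib.request

# Fetch execution history; entries should now carry status/error information.
with urllib.request.urlopen("http://127.0.0.1:8188/history") as resp:
    history = json.load(resp)

for prompt_id, entry in history.items():
    print(prompt_id, entry.get("status"))  # field name assumed from the commit subject
```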
comfyanonymous
977eda19a6
Don't round noise mask.
2024-01-11 03:29:58 -05:00
comfyanonymous
10f2609fdd
Add InpaintModelConditioning node.
...
This is an alternative to VAE Encode for inpainting that should work with
lower denoise values.
This is a different take on #2501
2024-01-11 03:15:27 -05:00
comfyanonymous
b4e915e745
Skip SAG when latent is too small.
2024-01-10 04:08:43 -05:00
comfyanonymous
1a57423d30
Fix issue when using multiple t2i adapters with batched images.
2024-01-10 04:00:49 -05:00
comfyanonymous
2c80d9acb9
Round up to nearest power of 2 in SAG node to fix some resolution issues.
2024-01-09 15:12:12 -05:00
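For reference, rounding a dimension up to the nearest power of two can be computed like this (an illustrative one-liner, not necessarily the SAG node's actual code):

```python
def round_up_pow2(n: int) -> int:
    # Smallest power of two >= n, e.g. 96 -> 128 (illustrative only).
    return 1 if n <= 1 else 1 << (n - 1).bit_length()
```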
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
comfyanonymous
b3b5ddb07a
Support I mode images in LoadImageMask.
2024-01-08 17:08:17 -05:00
comfyanonymous
2d74fc4360
Fix issue with user manager parent dir not being created.
2024-01-08 17:08:00 -05:00
pythongosssss
235727fed7
Store user settings/data on the server and add multi-user support (#2160)
...
* wip per user data
* Rename, hide menu
* better error
rework default user
* store pretty
* Add userdata endpoints
Change nodetemplates to userdata
* add multi user message
* make normal arg
* Fix tests
* Ignore user dir
* user tests
* Changed to default to browser storage and add server-storage arg
* fix crash on empty templates
* fix settings added before load
* ignore parse errors
2024-01-08 17:06:44 -05:00
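A hedged sketch of how the new per-user data storage could be exercised over HTTP; the /userdata path, the settings file name, and the default port are assumptions based on the PR description, not verified endpoints:

```python
import urllib.request

BASE = "http://127.0.0.1:8188"  # default ComfyUI address (assumed)

# Store a settings file for the current user (endpoint path assumed).
req = urllib.request.Request(
    f"{BASE}/userdata/comfy.settings.json",
    data=b'{"example.setting": true}',
    method="POST",
)
urllib.request.urlopen(req)

# Read it back.
with urllib.request.urlopen(f"{BASE}/userdata/comfy.settings.json") as resp:
    print(resp.read().decode())
```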
comfyanonymous
6a10640f0d
Support properly loading images with mode I.
2024-01-08 03:46:36 -05:00
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
2024-01-07 13:52:08 -05:00
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
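The three commits above add the same capability to different attention backends. Conceptually, an additive mask is applied to the attention scores before the softmax; a generic PyTorch sketch, not ComfyUI's optimized implementations:

```python
import torch

def masked_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, tokens, dim); mask: additive, broadcastable to the scores.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    if mask is not None:
        scores = scores + mask  # masked positions typically hold -inf
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 8, 16, 64)
mask = torch.zeros(1, 1, 16, 16)
mask[..., 8:] = float("-inf")  # hide the last 8 keys from every query
out = masked_attention(q, k, v, mask)
```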
ramyma
af94eb14e3
fix: `/free` handler function name
2024-01-06 04:27:09 +02:00
comfyanonymous
7c9a0f7e0a
Fix BasicScheduler issue with Loras.
2024-01-05 12:31:13 -05:00
comfyanonymous
35322a3766
StableZero123_Conditioning_Batched node.
...
This node lets you generate a batch of images with different elevations or
azimuths by setting the elevation_batch_increment and/or
azimuth_batch_increment.
It also sets the batch index for the latents so that the same init noise is
used on each frame.
2024-01-05 04:20:03 -05:00
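Conceptually, the node applies the increments per batch index, roughly like this (an illustrative sketch with example values, not the node's implementation):

```python
batch_size = 4
elevation, azimuth = 10.0, 0.0                       # starting angles (example values)
elevation_batch_increment, azimuth_batch_increment = 0.0, 30.0

angles = [(elevation + i * elevation_batch_increment,
           azimuth + i * azimuth_batch_increment)
          for i in range(batch_size)]
print(angles)  # [(10.0, 0.0), (10.0, 30.0), (10.0, 60.0), (10.0, 90.0)]
```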
comfyanonymous
6d281b4ff4
Add a /free route to unload models or free all memory.
...
A POST request to /free with: {"unload_models":true}
will unload models from vram.
A POST request to /free with: {"free_memory":true}
will unload models and free all cached data from the last run workflow.
2024-01-04 17:15:22 -05:00
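A small sketch of calling the /free route described above; the host and port are the usual ComfyUI defaults and are an assumption here:

```python
import json, urllib.request

def post_free(payload, host="http://127.0.0.1:8188"):
    req = urllib.request.Request(
        f"{host}/free",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

post_free({"unload_models": True})  # unload models from vram
post_free({"free_memory": True})    # also free cached data from the last run workflow
```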
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
2024-01-03 14:27:11 -05:00
comfyanonymous
ef4f6037cb
Fix model patches not working in custom sampling scheduler nodes.
2024-01-03 12:16:30 -05:00
comfyanonymous
a7874d1a8b
Add support for the stable diffusion x4 upscaling model.
...
This is an old model.
Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
2c4e92a98b
Fix regression.
2024-01-02 14:41:33 -05:00
comfyanonymous
5eddfdd80c
Refactor VAE code.
...
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
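For context, the two values describe the VAE's geometry; a rough sketch of what they mean, using typical SD VAE numbers rather than the refactored code itself:

```python
downscale_ratio = 8   # image pixels per latent cell along each axis (typical SD VAE)
latent_channels = 4   # channels of the latent tensor (typical SD VAE)

def latent_shape(batch, height, width):
    return (batch, latent_channels, height // downscale_ratio, width // downscale_ratio)

print(latent_shape(1, 512, 512))  # (1, 4, 64, 64)
```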
comfyanonymous
8e2c99e3cf
Fix issue where the websocket is deleted while data is being sent.
2024-01-02 11:50:00 -05:00
comfyanonymous
a47f609f90
Auto detect out_channels from model.
2024-01-02 01:50:57 -05:00
comfyanonymous
79f73a4b33
Remove useless code.
2024-01-02 01:50:29 -05:00
comfyanonymous
66831eb6e9
Add node id and prompt id to websocket progress packet.
2024-01-01 14:27:56 -05:00
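Roughly what a progress packet might look like after this change; the surrounding field names are assumed from ComfyUI's usual websocket message shape, with node and prompt_id being the newly added identifiers:

```python
progress_packet = {
    "type": "progress",
    "data": {
        "value": 5,            # current sampling step
        "max": 20,             # total steps
        "node": "3",           # id of the node currently executing (added)
        "prompt_id": "abc123", # id of the queued prompt (added)
    },
}
```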
comfyanonymous
d1f3637a5a
Add a denoise parameter to BasicScheduler node.
2023-12-31 15:37:20 -05:00
comfyanonymous
36e15f2507
Reregister nodes when pressing refresh button.
2023-12-31 05:05:14 -05:00
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
2023-12-30 05:49:07 -05:00
comfyanonymous
144e6580a4
This cache timeout is pretty useless in practice.
2023-12-29 17:47:24 -05:00
comfyanonymous
04b713dda1
Fix VALIDATE_INPUTS getting called multiple times.
...
Allow VALIDATE_INPUTS to only validate specific inputs.
2023-12-29 17:36:40 -05:00
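A hedged sketch of the "validate specific inputs" idea in a custom node: declaring only some inputs in the VALIDATE_INPUTS signature limits validation to those. The node layout follows the usual ComfyUI custom-node conventions; treat the details as assumptions rather than the exact mechanism from this commit:

```python
class ClampedInt:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"value": ("INT", {"default": 0}),
                             "limit": ("INT", {"default": 100})}}

    RETURN_TYPES = ("INT",)
    FUNCTION = "run"
    CATEGORY = "example"

    @classmethod
    def VALIDATE_INPUTS(cls, value):
        # Only `value` is named here, so only that input is validated by this hook.
        return True if value >= 0 else "value must be non-negative"

    def run(self, value, limit):
        return (min(value, limit),)
```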
comfyanonymous
12e822c6c8
Use function to calculate model size in model patcher.
2023-12-28 21:46:20 -05:00
comfyanonymous
e1e322cf69
Load weights that can't be lowvramed to target device.
2023-12-28 21:41:10 -05:00
comfyanonymous
a8baa40d85
Cleanup.
2023-12-28 12:23:07 -05:00
comfyanonymous
c782144433
Fix clip vision lowvram mode not working.
2023-12-27 13:50:57 -05:00
comfyanonymous
e478b1794e
Only add _meta title to api prompt when dev mode is enabled in UI.
2023-12-27 01:07:02 -05:00
AYF
f15dce71fd
Add title to the API workflow json. (#2380)
...
* Add `title` to the API workflow json.
* API: Move `title` to the `_meta` dictionary to imply it is unused.
2023-12-27 00:55:11 -05:00
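Roughly how a node entry in the API-format prompt looks with the added _meta title (shape partly assumed; per the commit above it is only emitted when dev mode is enabled in the UI):

```python
api_prompt_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {"seed": 0, "steps": 20},  # abbreviated
        "_meta": {"title": "KSampler"},      # added; informational only, unused by execution
    }
}
```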
comfyanonymous
f21bb41787
Fix taesd VAE in lowvram mode.
2023-12-26 12:52:21 -05:00
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
2023-12-26 05:02:02 -05:00
shiimizu
392878a262
Fix hiding dom widgets.
2023-12-25 19:17:40 -08:00
comfyanonymous
257c2eaaa4
Merge branch 'patch-1' of https://github.com/savolla/ComfyUI
2023-12-25 12:24:31 -05:00
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
2023-12-24 07:13:18 -05:00
comfyanonymous
a252963f95
--disable-smart-memory now unloads everything like it did originally.
2023-12-23 04:25:06 -05:00
comfyanonymous
36a7953142
Greatly improve lowvram sampling speed by getting rid of accelerate.
...
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
2023-12-22 04:05:42 -05:00
comfyanonymous
d35267e85a
Litegraph updates.
...
Update from upstream repo.
Auto select value in prompt.
Increase maximum number of nodes to 10k.
2023-12-21 13:21:25 -05:00
comfyanonymous
6781b181ef
Fix potential tensor device issue with ImageCompositeMasked.
2023-12-21 02:35:01 -05:00
comfyanonymous
a1e1c69f7d
LoadImage now loads all the frames from animated images as a batch.
2023-12-20 16:39:09 -05:00