comfyanonymous
b8c7c770d3
Enable bf16-vae by default on Ampere and up.
2023-08-27 23:06:19 -04:00
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
412596d325
Merge branch 'increase_client_max_size' of https://github.com/ramyma/ComfyUI
2023-08-27 13:12:39 -04:00
Dr.Lt.Data
d9f4922993
fix: cannot disable dynamicPrompts (#1327)
* fix: cannot disable dynamicPrompts
* indent fix
---------
Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
2023-08-27 12:34:24 -04:00
ramyma
0b6cf7a558
Increase client_max_size to allow bigger request bodies
2023-08-26 19:48:20 +03:00
comfyanonymous
a57b0c797b
Fix lowvram model merging.
2023-08-26 11:52:07 -04:00
comfyanonymous
f72780a7e3
The new smart memory management makes this unnecessary.
2023-08-25 18:02:15 -04:00
comfyanonymous
c77f02e1c6
Move controlnet code to comfy/controlnet.py
2023-08-25 17:33:04 -04:00
comfyanonymous
15a7716fa6
Move lora code to comfy/lora.py
2023-08-25 17:11:51 -04:00
comfyanonymous
ec96f6d03a
Move text_projection to base clip model.
2023-08-24 23:43:48 -04:00
comfyanonymous
30eb92c3cb
Code cleanups.
2023-08-24 19:39:18 -04:00
comfyanonymous
51dde87e97
Try to free enough vram for control lora inference.
2023-08-24 17:20:54 -04:00
comfyanonymous
e3d0a9a490
Fix potential issue with text projection matrix multiplication.
2023-08-24 00:54:16 -04:00
comfyanonymous
cc44ade79e
Always shift text encoder to GPU when the device supports fp16.
2023-08-23 21:45:00 -04:00
comfyanonymous
a6ef08a46a
Even with forced fp16, the CPU device should never use it.
2023-08-23 21:38:28 -04:00
comfyanonymous
00c0b2c507
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
comfyanonymous
f081017c1a
Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays exactly the same, as sketched below.
2023-08-23 01:08:51 -04:00
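The entry above describes a common mixed-precision pattern: keep the stored weights in fp16 to halve memory, then upcast to fp32 for the forward pass so results match full precision. A minimal standalone PyTorch sketch of that pattern (FP16StoredLinear is a hypothetical name, not ComfyUI's actual code):

```python
# Minimal sketch, assuming nothing about ComfyUI internals: store weights
# in fp16 to halve memory, upcast to fp32 for the matmul so the numerics
# match a plain fp32 layer.
import torch
import torch.nn as nn

class FP16StoredLinear(nn.Module):
    """Hypothetical layer: fp16 storage, fp32 compute."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Weights live in half precision: half the memory of fp32 storage.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features, dtype=torch.float16)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Upcast both operands to fp32 before the matmul.
        return x.float() @ self.weight.float().t()

layer = FP16StoredLinear(768, 768)
out = layer(torch.randn(2, 768))
print(out.dtype)  # torch.float32
```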
comfyanonymous
d7b3b0f8c1
Don't hardcode node names for image upload widget.
2023-08-22 19:41:49 -04:00
comfyanonymous
afcb9cb1df
All resolutions now work with t2i adapter for SDXL.
2023-08-22 16:23:54 -04:00
comfyanonymous
85fde89d7f
Add T2I adapter support for SDXL.
2023-08-22 14:40:43 -04:00
comfyanonymous
f2a7cc9121
Add control lora links to colab notebook.
2023-08-22 01:55:09 -04:00
comfyanonymous
e2256b4087
Add clip_vision_g download command to colab notebook for ReVision.
2023-08-22 01:44:31 -04:00
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
2023-08-22 01:06:26 -04:00
comfyanonymous
763b0cf024
Fix control lora not working in fp32.
2023-08-21 20:38:31 -04:00
comfyanonymous
bc76b3829f
Merge branch 'custom-node-js' of https://github.com/pythongosssss/ComfyUI
2023-08-21 00:58:38 -04:00
comfyanonymous
199d73364a
Fix ControlLora on lowvram.
2023-08-21 00:54:04 -04:00
comfyanonymous
d08e53de2e
Remove autocast from controlnet code.
2023-08-20 21:47:32 -04:00
pythongosssss
cdaf65ceb1
remove log
2023-08-20 20:01:25 +01:00
comfyanonymous
0d7b0a4dc7
Small cleanups.
2023-08-20 14:56:47 -04:00
pythongosssss
9b1d5a587c
Allow loading js extensions without copying to /web folder
2023-08-20 19:55:48 +01:00
Simon Lui
9225465975
Further tuning and a fix for mem_free_total.
2023-08-20 14:19:53 -04:00
Simon Lui
2c096e4260
Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes.
2023-08-20 14:19:51 -04:00
comfyanonymous
8ee0473687
Merge branch 'parallel-extensions-load' of https://github.com/NoCrypt/ComfyUI
2023-08-20 14:14:01 -04:00
comfyanonymous
e9469e732d
--disable-smart-memory now disables loading models directly to vram.
2023-08-20 04:00:53 -04:00
comfyanonymous
c9b562aed1
Free more memory before VAE encode/decode.
2023-08-19 12:13:13 -04:00
ncpt
81ccacaa7c
Make the extensions load in parallel instead of waiting one by one
2023-08-19 17:36:13 +07:00
comfyanonymous
b80c3276dc
Fix issue with gligen.
2023-08-18 16:32:23 -04:00
comfyanonymous
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an "up" and a "down" low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight
(sketched below).
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
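A minimal sketch of the reconstruction described above, using hypothetical tensor names rather than ComfyUI's actual loading code:

```python
# Sketch of the low-rank idea behind control loras: the controlnet weight
# is recovered by adding the product of the "up" and "down" matrices to
# the shared unet weight. Names and shapes here are illustrative only.
import torch

out_ch, in_ch, rank = 320, 320, 64

unet_weight = torch.randn(out_ch, in_ch)   # weight already present in the unet
lora_up = torch.randn(out_ch, rank)        # shipped in the control lora file
lora_down = torch.randn(rank, in_ch)       # shipped in the control lora file

# Full controlnet weight = unet weight + low-rank delta.
controlnet_weight = unet_weight + lora_up @ lora_down

# The low-rank pair stores (out_ch + in_ch) * rank values instead of
# out_ch * in_ch, which is why the file is much smaller whenever
# rank << min(out_ch, in_ch).
```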
comfyanonymous
39ac856a33
ReVision support: unclip nodes can now be used with SDXL.
2023-08-18 11:59:36 -04:00
comfyanonymous
76d53c4622
Add support for clip g vision model to CLIPVisionLoader.
2023-08-18 11:13:29 -04:00
comfyanonymous
fc99fa56a9
Add node to scale an image to a total number of pixels while keeping aspect ratio.
2023-08-18 02:32:39 -04:00
comfyanonymous
eb5c991a8c
Merge branch 'add-user-css' of https://github.com/pythongosssss/ComfyUI
2023-08-17 16:41:54 -04:00
comfyanonymous
bd7321c8ac
Update aiohttp in nightly workflow.
2023-08-17 16:41:24 -04:00
Alexopus
e59fe0537a
Fix "referenced before assignment" error
For https://github.com/BlenderNeko/ComfyUI_TiledKSampler/issues/13
2023-08-17 22:30:07 +02:00
comfyanonymous
be9c5e25bc
Fix issue with not freeing enough memory when sampling.
2023-08-17 15:59:56 -04:00
comfyanonymous
ac0758a1a4
Fix bug with lowvram and controlnet advanced node.
2023-08-17 13:38:51 -04:00
comfyanonymous
c28db1f315
Fix potential issues with patching models when saving checkpoints.
2023-08-17 11:07:08 -04:00
pythongosssss
c828543a77
Allow user-customizable CSS
2023-08-17 13:36:55 +01:00
comfyanonymous
1498f1a342
Merge branch 'add-growmask-node' of https://github.com/coreyryanhanson/ComfyUI
2023-08-17 03:21:20 -04:00