Commit Graph

1856 Commits

Author SHA1 Message Date
comfyanonymous 90aa597099 Add back roundRect to fix issue on firefox ESR. 2023-07-12 02:07:48 -04:00
KarryCharon 3e2309f149 fix missing mps import 2023-07-12 10:06:34 +08:00
comfyanonymous f4b9390623 Add a random string to the temp prefix for PreviewImage. 2023-07-11 17:35:55 -04:00
comfyanonymous 2b2a1474f7 Move to litegraph. 2023-07-11 03:12:00 -04:00
comfyanonymous cef30cc6b6 Merge branch 'hidpi-canvas' of https://github.com/EHfive/ComfyUI 2023-07-11 03:04:10 -04:00
comfyanonymous 880c9b928b Update litegraph to latest. 2023-07-11 03:00:52 -04:00
Huang-Huang Bao 05e6eac7b3
Scale graph canvas based on DPI factor
Similar to fixes in litegraph.js editor demo:
3ef215cf11/editor/js/code.js (L19-L28)

Also works around a viewport problem of litegraph.js in DPI scaling scenarios.

Fixes #161
2023-07-11 14:47:58 +08:00
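The DPI fix above boils down to scaling the canvas backing store by the device's DPI factor while the CSS size stays at the logical resolution. A minimal sketch of the arithmetic (the function name and signature are illustrative, not from the ComfyUI source):

```python
def scale_canvas(css_width, css_height, dpi_factor):
    """Compute the backing-store size for a canvas.

    Sketch only: the canvas buffer is scaled by the device's DPI
    factor (e.g. devicePixelRatio) so drawing stays sharp on hi-DPI
    screens, while the element keeps its logical CSS size.
    """
    return round(css_width * dpi_factor), round(css_height * dpi_factor)
```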
Dr.Lt.Data 99abcbef41
feat/startup-script: Feature to avoid package installation errors when installing custom nodes. (#856)
* support startup script for installation without locking on windows

* modified: execute prestartup_script.py for each custom node instead of running scripts from the startup-scripts directory.
2023-07-11 02:33:21 -04:00
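The per-node prestartup mechanism described above can be sketched as a loop over the custom-node directories, running each node's prestartup_script.py if it exists (a hypothetical sketch; the function name and error handling are assumptions, not ComfyUI's actual implementation):

```python
import os
import runpy


def run_prestartup_scripts(custom_nodes_dir):
    # Hypothetical sketch: for each custom node directory, execute its
    # prestartup_script.py before the main startup proceeds.
    for name in sorted(os.listdir(custom_nodes_dir)):
        script = os.path.join(custom_nodes_dir, name, "prestartup_script.py")
        if os.path.isfile(script):
            # run_path executes the script in a fresh namespace
            runpy.run_path(script)
```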
comfyanonymous 606a537090 Support SDXL embedding format with 2 CLIP. 2023-07-10 10:34:59 -04:00
Alex "mcmonkey" Goodwin 5797ff89b0 use relative paths for all web connections
This enables local reverse-proxies to host ComfyUI under a path, e.g. "http://example.com/ComfyUI/"; at least everything I tested works this way. Without this patch, proxying ComfyUI like this yields errors.
2023-07-10 02:09:03 -07:00
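Why relative paths matter behind a reverse proxy can be shown with standard URL resolution: a relative reference stays under the proxied prefix, while a root-absolute one escapes it (the URLs below are examples, not actual ComfyUI endpoints):

```python
from urllib.parse import urljoin

base = "http://example.com/ComfyUI/"  # example proxy prefix

# A relative path resolves under the prefix and keeps working:
rel = urljoin(base, "view?filename=a.png")

# A root-absolute path escapes the prefix and breaks behind the proxy:
root = urljoin(base, "/view?filename=a.png")
```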
comfyanonymous 6ad0a6d7e2 Don't patch weights when multiplier is zero. 2023-07-09 17:46:56 -04:00
comfyanonymous af15add967 Fix annoyance with textbox unselecting in chromium. 2023-07-09 15:41:19 -04:00
comfyanonymous d5323d16e0 latent2rgb matrix for SDXL. 2023-07-09 13:59:09 -04:00
comfyanonymous 0ae81c03bb Empty cache after model unloading for normal vram and lower. 2023-07-09 09:56:03 -04:00
comfyanonymous d3f5998218 Support loading clip_g from diffusers in CLIP Loader nodes. 2023-07-09 09:33:53 -04:00
comfyanonymous a9a4ba7574 Fix merging not working when model2 of model merge node was a merge. 2023-07-08 22:31:10 -04:00
comfyanonymous febea8c101 Merge branch 'bugfix/img-offset' of https://github.com/ltdrdata/ComfyUI 2023-07-08 03:45:37 -04:00
Dr.Lt.Data 9caab9380d
fix: Image.ANTIALIAS is no longer available. (#847)
* modify deprecated api call

* prevent breaking old Pillow users

* change LANCZOS to BILINEAR
2023-07-08 02:36:48 -04:00
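The Image.ANTIALIAS fix above addresses Pillow removing that constant (it was an alias of LANCZOS, replaced by Image.Resampling.LANCZOS in Pillow 9.1+). A compatibility shim along these lines keeps older Pillow users working (a sketch of the pattern, not the exact patch):

```python
from PIL import Image

# Image.ANTIALIAS was removed in Pillow 10; the replacement is
# Image.Resampling.LANCZOS (available since Pillow 9.1). Fall back
# for older versions:
try:
    LANCZOS = Image.Resampling.LANCZOS
except AttributeError:  # Pillow < 9.1
    LANCZOS = Image.ANTIALIAS

img = Image.new("RGB", (64, 64))
resized = img.resize((32, 32), LANCZOS)
```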
Dr.Lt.Data d43cff2105 bugfix: image widget was mis-aligned when the node has a multiline widget 2023-07-08 01:42:33 +09:00
comfyanonymous c2d407b0f7 Merge branch 'Yaruze66-patch-1' of https://github.com/Yaruze66/ComfyUI 2023-07-07 01:55:10 -04:00
comfyanonymous bb5fbd29e9 Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI 2023-07-07 01:52:25 -04:00
comfyanonymous 2c9d98f3e6 CLIPTextEncodeSDXL now works when prompts are of very different sizes. 2023-07-06 23:23:54 -04:00
comfyanonymous e7bee85df8 Add arguments to run the VAE in fp16 or bf16 for testing. 2023-07-06 23:23:46 -04:00
comfyanonymous f5232c4869 Fix 7z error when extracting package. 2023-07-06 04:18:36 -04:00
comfyanonymous 608fcc2591 Fix bug with weights when prompt is long. 2023-07-06 02:43:40 -04:00
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 2023-07-05 21:58:29 -04:00
comfyanonymous 603f02d613 Fix loras not working when loading checkpoint with config. 2023-07-05 19:42:24 -04:00
comfyanonymous ccb1b25908 Add a conditioning concat node. 2023-07-05 17:40:22 -04:00
comfyanonymous af7a49916b Support loading unet files in diffusers format. 2023-07-05 17:38:59 -04:00
comfyanonymous e57cba4c61 Add gpu variations of the sde samplers that are less deterministic
but faster.
2023-07-05 01:39:38 -04:00
comfyanonymous f81b192944 Add logit scale parameter so it's present when saving the checkpoint. 2023-07-04 23:01:28 -04:00
comfyanonymous acf95191ff Properly support SDXL diffusers loras for unet. 2023-07-04 21:15:23 -04:00
mara c61a95f9f7 Fix size check for conditioning mask
The wrong dimensions were being checked: [1] and [2] are the image size,
not [2] and [3]. This results in an out-of-bounds error if one of them
actually matches.
2023-07-04 16:34:42 +02:00
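The dimension mix-up described above can be illustrated with a mask assumed shaped (batch, height, width): dims [1] and [2] carry the image size, and indexing [3] on such a shape is out of bounds (an illustrative sketch, not the actual ComfyUI code):

```python
def mask_matches_image(mask_shape, height, width):
    # Mask assumed shaped (batch, height, width): dims [1] and [2]
    # are the image size. Checking [2] and [3] instead would index
    # past the end of a 3-dim shape.
    return mask_shape[1] == height and mask_shape[2] == width
```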
comfyanonymous 8d694cc450 Fix issue with OSX. 2023-07-04 02:09:02 -04:00
comfyanonymous c02f3baeaf Now the model merge blocks node will use the longest match. 2023-07-04 00:51:17 -04:00
comfyanonymous 3a09fac835 ConditioningAverage now also averages the pooled output. 2023-07-03 21:44:37 -04:00
comfyanonymous d94ddd8548 Add text encode nodes to control the extra parameters in SDXL. 2023-07-03 19:11:36 -04:00
comfyanonymous c3e96e637d Pass device to CLIP model. 2023-07-03 16:09:37 -04:00
comfyanonymous 5e6bc824aa Allow passing custom path to clip-g and clip-h. 2023-07-03 15:45:04 -04:00
comfyanonymous dc9d1f31c8 Improvements for OSX. 2023-07-03 00:08:30 -04:00
Yaruze66 9ae6ff65bc
Update extra_model_paths.yaml.example: add RealESRGAN path 2023-07-02 22:59:55 +05:00
comfyanonymous 103c487a89 Cleanup. 2023-07-02 11:58:23 -04:00
comfyanonymous ae948b42fa Add taesd weights to standalones. 2023-07-02 11:47:30 -04:00
comfyanonymous 2c4e0b49b7 Switch to fp16 on some cards when the model is too big. 2023-07-02 10:00:57 -04:00
comfyanonymous 6f3d9f52db Add a --force-fp16 argument to force fp16 for testing. 2023-07-01 22:42:35 -04:00
comfyanonymous 1c1b0e7299 --gpu-only now keeps the VAE on the device. 2023-07-01 15:22:40 -04:00
comfyanonymous ce35d8c659 Lower latency by batching some text encoder inputs. 2023-07-01 15:07:39 -04:00
comfyanonymous 3b6fe51c1d Leave text_encoder on the CPU when it can handle it. 2023-07-01 14:38:51 -04:00
comfyanonymous b6a60fa696 Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
comfyanonymous 97ee230682 Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running it on the CPU.
2023-07-01 12:37:23 -04:00