Commit Graph

1318 Commits

Author SHA1 Message Date
comfyanonymous c910b4a01c Remove unused code and torchdiffeq dependency. 2023-07-28 21:32:27 -04:00
comfyanonymous 1141029a4a Add --disable-metadata argument to disable saving metadata in files. 2023-07-28 12:31:41 -04:00
comfyanonymous fbf5c51c1c Merge branch 'fix_batch_timesteps' of https://github.com/asagi4/ComfyUI 2023-07-27 16:13:48 -04:00
comfyanonymous 68be24eead Remove some prints. 2023-07-27 16:12:43 -04:00
asagi4 1ea4d84691 Fix timestep ranges when batch_size > 1 2023-07-27 21:14:09 +03:00
comfyanonymous 4ab75d9cb8 Update colab notebook with SDXL links. 2023-07-26 21:50:44 -04:00
comfyanonymous 5379051d16 Fix diffusers VAE loading. 2023-07-26 18:26:39 -04:00
comfyanonymous 00da9b3268 Merge branch 'fix/types' of https://github.com/melMass/ComfyUI 2023-07-26 01:55:55 -04:00
comfyanonymous 5e3ac1928a Implement modelspec metadata in CheckpointSave for SDXL and refiner. 2023-07-25 22:02:34 -04:00
comfyanonymous 727588d076 Fix some new loras. 2023-07-25 16:39:15 -04:00
comfyanonymous 315ba30c81 Update nightly ROCm pytorch command in readme to 5.6 2023-07-25 15:48:26 -04:00
comfyanonymous 4f9b6f39d1 Fix potential issue with Save Checkpoint. 2023-07-25 00:45:20 -04:00
comfyanonymous 7c0a5a3e0e Disable cuda malloc on a bunch of quadro cards. 2023-07-25 00:09:01 -04:00
comfyanonymous a51f33ee49 Use bigger tiles when upscaling with model and fallback on OOM. 2023-07-24 19:47:32 -04:00
comfyanonymous 5f75d784a1 Start is now 0.0 and end is now 1.0 for the timestep ranges. 2023-07-24 18:38:17 -04:00
comfyanonymous 7ff14b62f8 ControlNetApplyAdvanced can now define when controlnet gets applied. 2023-07-24 17:50:49 -04:00
comfyanonymous d191c4f9ed Add a ControlNetApplyAdvanced node. The controlnet can be applied to the positive or negative prompt only by connecting it correctly. 2023-07-24 13:35:20 -04:00
comfyanonymous 0240946ecf Add a way to set which range of timesteps the cond gets applied to. 2023-07-24 09:25:02 -04:00
comfyanonymous 30de083dd0 Disable cuda malloc on all the 9xx series. 2023-07-23 13:29:14 -04:00
comfyanonymous 22f29d66ca Try to fix memory issue with lora. 2023-07-22 21:38:56 -04:00
comfyanonymous 67be7eb81d Nodes can now patch the unet function. 2023-07-22 17:01:12 -04:00
comfyanonymous 12a6e93171 Del the right object when applying lora. 2023-07-22 11:25:49 -04:00
comfyanonymous 85a8900a14 Disable cuda malloc on regular GTX 960. 2023-07-22 11:05:33 -04:00
comfyanonymous 78e7958d17 Support controlnet in diffusers format. 2023-07-21 22:58:16 -04:00
comfyanonymous 09386a3697 Fix issue with lora in some cases when combined with model merging. 2023-07-21 21:27:27 -04:00
comfyanonymous 58b2364f58 Properly support SDXL diffusers unet with UNETLoader node. 2023-07-21 14:38:56 -04:00
melMass 5190aa284d fix: small type fix. getCustomWidgets expects a plain record and not an array of records. 2023-07-21 13:19:05 +02:00
comfyanonymous 0115018695 Print errors and continue when lora weights are not compatible. 2023-07-20 19:56:22 -04:00
comfyanonymous 4760c29380 Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI 2023-07-20 00:34:54 -04:00
comfyanonymous ccb6b70de1 Move image encoding outside of sampling loop for better preview perf. 2023-07-19 18:06:58 -04:00
comfyanonymous 39c58b227f Disable cuda malloc on GTX 750 Ti. 2023-07-19 15:14:10 -04:00
comfyanonymous d5c0765f4e Update how to get the prompt in api format in the example. 2023-07-19 15:07:12 -04:00
comfyanonymous 799c08a4ce Auto disable cuda malloc on some GPUs on windows. 2023-07-19 14:43:55 -04:00
comfyanonymous 0b284f650b Fix typo. 2023-07-19 10:20:32 -04:00
comfyanonymous e032ca6138 Fix ddim issue with older torch versions. 2023-07-19 10:16:00 -04:00
comfyanonymous 18885f803a Add MX450 and MX550 to list of cards with broken fp16. 2023-07-19 03:08:30 -04:00
comfyanonymous 9ba440995a It's actually possible to torch.compile the unet now. 2023-07-18 21:36:35 -04:00
comfyanonymous 51d5477579 Add key to indicate checkpoint is v_prediction when saving. 2023-07-18 00:25:53 -04:00
comfyanonymous ff6b047a74 Fix device print on old torch version. 2023-07-17 15:18:58 -04:00
comfyanonymous 9871a15cf9 Enable --cuda-malloc by default on torch 2.0 and up. Add --disable-cuda-malloc to disable it. 2023-07-17 15:12:10 -04:00
comfyanonymous 55d0fca9fa --windows-standalone-build now enables --cuda-malloc 2023-07-17 14:10:36 -04:00
comfyanonymous 1679abd86d Add a command line argument to enable backend:cudaMallocAsync 2023-07-17 11:00:14 -04:00
comfyanonymous 3a150bad15 Only calculate randn in some samplers when it's actually being used. 2023-07-17 10:11:08 -04:00
comfyanonymous ee8f8ee07f Fix regression with ddim and uni_pc when batch size > 1. 2023-07-17 09:35:19 -04:00
comfyanonymous 3ded1a3a04 Refactor of sampler code to deal more easily with different model types. 2023-07-17 01:22:12 -04:00
comfyanonymous ac9c038ac2 Merge branch 'master' of https://github.com/ComfyUI-Community/ComfyUI 2023-07-16 03:04:45 -04:00
comfyanonymous 5f57362613 Lower lora ram usage when in normal vram mode. 2023-07-16 02:59:04 -04:00
ComfyUI-Community a8f3bbc35d Patch del self.loaded_lora to prevent error with persistent lora_name swapping 2023-07-15 17:11:12 -07:00
comfyanonymous 490771b7f4 Speed up lora loading a bit. 2023-07-15 13:25:22 -04:00
comfyanonymous 50b1180dde Fix CLIPSetLastLayer not reverting when removed. 2023-07-15 01:41:21 -04:00