Commit Graph

599 Commits

Author SHA1 Message Date
comfyanonymous 5c363a9d86 Fix controlnet bug. 2023-09-01 02:01:08 -04:00
comfyanonymous cfe1c54de8 Fix controlnet issue. 2023-08-31 15:16:58 -04:00
comfyanonymous 1c012d69af It doesn't make sense for c_crossattn and c_concat to be lists. 2023-08-31 13:25:00 -04:00
comfyanonymous 7e941f9f24 Clean up DiffusersLoader node. 2023-08-30 12:57:07 -04:00
Simon Lui 18617967e5 Fix error message in model_patcher.py 2023-08-30 00:25:04 -07:00
    Found while tinkering.
comfyanonymous fe4c07400c Fix "Load Checkpoint with config" node. 2023-08-29 23:58:32 -04:00
comfyanonymous f2f5e5dcbb Support SDXL t2i adapters with 3 channel input. 2023-08-29 16:44:57 -04:00
comfyanonymous 15adc3699f Move beta_schedule to model_config and allow disabling unet creation. 2023-08-29 14:22:53 -04:00
comfyanonymous bed116a1f9 Remove optimization that caused border. 2023-08-29 11:21:36 -04:00
comfyanonymous 65cae62c71 No need to check filename extensions to detect shuffle controlnet. 2023-08-28 16:49:06 -04:00
comfyanonymous 4e89b2c25a Put clip vision outputs on the CPU. 2023-08-28 16:26:11 -04:00
comfyanonymous a094b45c93 Load clipvision model to GPU for faster performance. 2023-08-28 15:29:27 -04:00
comfyanonymous 1300a1bb4c Text encoder should initially load on the offload_device, not the regular device. 2023-08-28 15:08:45 -04:00
comfyanonymous f92074b84f Move ModelPatcher to model_patcher.py 2023-08-28 14:51:31 -04:00
comfyanonymous 4798cf5a62 Implement loras with norm keys. 2023-08-28 11:20:06 -04:00
comfyanonymous b8c7c770d3 Enable bf16-vae by default on ampere and up. 2023-08-27 23:06:19 -04:00
comfyanonymous 1c794a2161 Fall back to sliced attention if xformers doesn't support the operation. 2023-08-27 22:24:42 -04:00
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0 2023-08-27 21:33:53 -04:00
comfyanonymous a57b0c797b Fix lowvram model merging. 2023-08-26 11:52:07 -04:00
comfyanonymous f72780a7e3 The new smart memory management makes this unnecessary. 2023-08-25 18:02:15 -04:00
comfyanonymous c77f02e1c6 Move controlnet code to comfy/controlnet.py 2023-08-25 17:33:04 -04:00
comfyanonymous 15a7716fa6 Move lora code to comfy/lora.py 2023-08-25 17:11:51 -04:00
comfyanonymous ec96f6d03a Move text_projection to base clip model. 2023-08-24 23:43:48 -04:00
comfyanonymous 30eb92c3cb Code cleanups. 2023-08-24 19:39:18 -04:00
comfyanonymous 51dde87e97 Try to free enough vram for control lora inference. 2023-08-24 17:20:54 -04:00
comfyanonymous e3d0a9a490 Fix potential issue with text projection matrix multiplication. 2023-08-24 00:54:16 -04:00
comfyanonymous cc44ade79e Always shift text encoder to GPU when the device supports fp16. 2023-08-23 21:45:00 -04:00
comfyanonymous a6ef08a46a Even with forced fp16, the cpu device should never use it. 2023-08-23 21:38:28 -04:00
comfyanonymous 00c0b2c507 Initialize text encoder to target dtype. 2023-08-23 21:01:15 -04:00
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations. 2023-08-23 01:08:51 -04:00
    Do inference in fp32 to make sure quality stays exactly the same.
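
The dtype split described above, sketched for a single linear layer (a hedged illustration, not ComfyUI's actual code path): the weights live in fp16, but the matmul runs in fp32.

```python
import torch
import torch.nn.functional as F

class FP16StorageLinear(torch.nn.Linear):
    # weights are stored in fp16 (half the memory) but upcast for the
    # matmul so outputs match a pure fp32 module
    def forward(self, x):
        bias = self.bias.float() if self.bias is not None else None
        return F.linear(x.float(), self.weight.float(), bias)

layer = FP16StorageLinear(768, 768).half()   # fp16 storage
out = layer(torch.randn(1, 768))             # fp32 compute
print(out.dtype)                             # torch.float32
```
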
comfyanonymous afcb9cb1df All resolutions now work with t2i adapter for SDXL. 2023-08-22 16:23:54 -04:00
comfyanonymous 85fde89d7f T2I adapter SDXL. 2023-08-22 14:40:43 -04:00
comfyanonymous cf5ae46928 Controlnet/t2iadapter cleanup. 2023-08-22 01:06:26 -04:00
comfyanonymous 763b0cf024 Fix control lora not working in fp32. 2023-08-21 20:38:31 -04:00
comfyanonymous 199d73364a Fix ControlLora on lowvram. 2023-08-21 00:54:04 -04:00
comfyanonymous d08e53de2e Remove autocast from controlnet code. 2023-08-20 21:47:32 -04:00
comfyanonymous 0d7b0a4dc7 Small cleanups. 2023-08-20 14:56:47 -04:00
Simon Lui 9225465975 Further tuning and fix mem_free_total. 2023-08-20 14:19:53 -04:00
Simon Lui 2c096e4260 Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes. 2023-08-20 14:19:51 -04:00
comfyanonymous e9469e732d --disable-smart-memory now disables loading model directly to vram. 2023-08-20 04:00:53 -04:00
comfyanonymous c9b562aed1 Free more memory before VAE encode/decode. 2023-08-19 12:13:13 -04:00
comfyanonymous b80c3276dc Fix issue with gligen. 2023-08-18 16:32:23 -04:00
comfyanonymous d6e4b342e6 Support for Control Loras. 2023-08-18 11:59:51 -04:00
    Control loras are controlnets where some of the weights are stored in
    "lora" format: an up and a down low-rank matrix that, when multiplied
    together and added to the unet weight, give the controlnet weight.

    This allows a much smaller memory footprint, depending on the rank of
    the matrices.

    These controlnets are used just like regular ones.
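
The reconstruction described above, as a minimal PyTorch sketch (tensor names and shapes are illustrative, not ComfyUI's actual code):

```python
import torch

def control_lora_weight(unet_weight, lora_up, lora_down):
    # W_controlnet = W_unet + up @ down; only the two rank-r factors are
    # stored on disk, which is where the memory saving comes from
    return unet_weight + lora_up @ lora_down

out_f, in_f, rank = 320, 320, 64
w_unet = torch.randn(out_f, in_f)
up, down = torch.randn(out_f, rank), torch.randn(rank, in_f)
w_cn = control_lora_weight(w_unet, up, down)
# delta storage: rank * (out_f + in_f) values instead of out_f * in_f
# for the full difference matrix
```
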
comfyanonymous 39ac856a33 ReVision support: unclip nodes can now be used with SDXL. 2023-08-18 11:59:36 -04:00
comfyanonymous 76d53c4622 Add support for clip g vision model to CLIPVisionLoader. 2023-08-18 11:13:29 -04:00
Alexopus e59fe0537a Fix "referenced before assignment" error 2023-08-17 22:30:07 +02:00
    For https://github.com/BlenderNeko/ComfyUI_TiledKSampler/issues/13
comfyanonymous be9c5e25bc Fix issue with not freeing enough memory when sampling. 2023-08-17 15:59:56 -04:00
comfyanonymous ac0758a1a4 Fix bug with lowvram and controlnet advanced node. 2023-08-17 13:38:51 -04:00
comfyanonymous c28db1f315 Fix potential issues with patching models when saving checkpoints. 2023-08-17 11:07:08 -04:00
comfyanonymous 3aee33b54e Add --disable-smart-memory for those that want the old behaviour. 2023-08-17 03:12:37 -04:00
comfyanonymous 2be2742711 Fix issue with regular torch version. 2023-08-17 01:58:54 -04:00
comfyanonymous 89a0767abf Smarter memory management. 2023-08-17 01:06:34 -04:00
    Try to keep models in vram when possible.

    Better lowvram mode for controlnets.
comfyanonymous 2c97c30256 Support small diffusers controlnet so both types are now supported. 2023-08-16 12:45:56 -04:00
comfyanonymous 53f326a3d8 Support diffusers mini controlnets. 2023-08-16 12:28:01 -04:00
comfyanonymous 58f0c616ed Fix clip vision issue with old transformers versions. 2023-08-16 11:36:22 -04:00
comfyanonymous ae270f79bc Fix potential issue with batch size and clip vision. 2023-08-16 11:05:11 -04:00
comfyanonymous a2ce9655ca Refactor unclip code. 2023-08-14 23:48:47 -04:00
comfyanonymous 9cc12c833d CLIPVisionEncode can now encode multiple images. 2023-08-14 16:54:05 -04:00
comfyanonymous 0cb6dac943 Remove 3m from PR #1213 because of some small issues. 2023-08-14 00:48:45 -04:00
comfyanonymous e244b2df83 Add sgm_uniform scheduler that acts like the default one in sgm. 2023-08-14 00:29:03 -04:00
comfyanonymous 58c7da3665 Gpu variant of dpmpp_3m_sde. Note: use 3m with exponential or karras. 2023-08-14 00:28:50 -04:00
comfyanonymous ba319a34e4 Merge branch 'dpmpp3m' of https://github.com/FizzleDorf/ComfyUI 2023-08-14 00:23:15 -04:00
FizzleDorf 3cfad03a68 Add dpmpp 3m and dpmpp 3m sde samplers. 2023-08-13 22:29:04 -04:00
comfyanonymous 585a062910 Print unet config when model isn't detected. 2023-08-13 01:39:48 -04:00
comfyanonymous c8a23ce9e8 Support for yet another lora type based on diffusers. 2023-08-11 13:04:21 -04:00
comfyanonymous 2bc12d3d22 Add --temp-directory argument to set temp directory. 2023-08-11 05:13:03 -04:00
comfyanonymous c20583286f Support diffuser text encoder loras. 2023-08-10 20:28:28 -04:00
comfyanonymous cf10c5592c Disable calculating uncond when CFG is 1.0 2023-08-09 20:55:03 -04:00
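
The reason the uncond pass can be dropped: classifier-free guidance computes uncond + scale * (cond - uncond), which reduces to cond alone when scale is 1.0. A minimal sketch:

```python
def cfg_sample(model, x, t, cond, uncond, cond_scale):
    cond_pred = model(x, t, cond)
    if cond_scale == 1.0:
        # uncond + 1.0 * (cond - uncond) == cond, so the second forward
        # pass would be wasted work
        return cond_pred
    uncond_pred = model(x, t, uncond)
    return uncond_pred + cond_scale * (cond_pred - uncond_pred)
```
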
comfyanonymous 1f0f4cc0bd Add argument to disable auto launching the browser. 2023-08-07 02:25:12 -04:00
comfyanonymous d8e58f0a7e Detect hint_channels from controlnet. 2023-08-06 14:08:59 -04:00
comfyanonymous c5d7593ccf Support loras in diffusers format. 2023-08-05 01:40:24 -04:00
comfyanonymous 1ce0d8ad68 Add CMP 30HX card to the nvidia_16_series list. 2023-08-04 12:08:45 -04:00
comfyanonymous c99d8002f8 Make sure the pooled output stays at the EOS token with added embeddings. 2023-08-03 20:27:50 -04:00
comfyanonymous 4a77fcd6ab Only shift text encoder to vram when there are fewer than 8 CPU cores. 2023-07-31 00:08:54 -04:00
comfyanonymous 3cd31d0e24 Lower CPU thread check for running the text encoder on the CPU vs GPU. 2023-07-30 17:18:24 -04:00
comfyanonymous 2b13939044 Remove some useless code. 2023-07-30 14:13:33 -04:00
comfyanonymous 95d796fc85 Faster VAE loading. 2023-07-29 16:28:30 -04:00
comfyanonymous 4b957a0010 Initialize the unet directly on the target device. 2023-07-29 14:51:56 -04:00
comfyanonymous c910b4a01c Remove unused code and torchdiffeq dependency. 2023-07-28 21:32:27 -04:00
comfyanonymous 1141029a4a Add --disable-metadata argument to disable saving metadata in files. 2023-07-28 12:31:41 -04:00
comfyanonymous fbf5c51c1c Merge branch 'fix_batch_timesteps' of https://github.com/asagi4/ComfyUI 2023-07-27 16:13:48 -04:00
comfyanonymous 68be24eead Remove some prints. 2023-07-27 16:12:43 -04:00
asagi4 1ea4d84691 Fix timestep ranges when batch_size > 1 2023-07-27 21:14:09 +03:00
comfyanonymous 5379051d16 Fix diffusers VAE loading. 2023-07-26 18:26:39 -04:00
comfyanonymous 727588d076 Fix some new loras. 2023-07-25 16:39:15 -04:00
comfyanonymous 4f9b6f39d1 Fix potential issue with Save Checkpoint. 2023-07-25 00:45:20 -04:00
comfyanonymous 5f75d784a1 Start is now 0.0 and end is now 1.0 for the timestep ranges. 2023-07-24 18:38:17 -04:00
comfyanonymous 7ff14b62f8 ControlNetApplyAdvanced can now define when controlnet gets applied. 2023-07-24 17:50:49 -04:00
comfyanonymous d191c4f9ed Add a ControlNetApplyAdvanced node. 2023-07-24 13:35:20 -04:00
    The controlnet can be applied to only the positive or only the negative
    prompt by connecting it accordingly.
comfyanonymous 0240946ecf Add a way to set which range of timesteps the cond gets applied to. 2023-07-24 09:25:02 -04:00
comfyanonymous 22f29d66ca Try to fix memory issue with lora. 2023-07-22 21:38:56 -04:00
comfyanonymous 67be7eb81d Nodes can now patch the unet function. 2023-07-22 17:01:12 -04:00
comfyanonymous 12a6e93171 Del the right object when applying lora. 2023-07-22 11:25:49 -04:00
comfyanonymous 78e7958d17 Support controlnet in diffusers format. 2023-07-21 22:58:16 -04:00
comfyanonymous 09386a3697 Fix issue with lora in some cases when combined with model merging. 2023-07-21 21:27:27 -04:00
comfyanonymous 58b2364f58 Properly support SDXL diffusers unet with UNETLoader node. 2023-07-21 14:38:56 -04:00
comfyanonymous 0115018695 Print errors and continue when lora weights are not compatible. 2023-07-20 19:56:22 -04:00
comfyanonymous 4760c29380 Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI 2023-07-20 00:34:54 -04:00
comfyanonymous 0b284f650b Fix typo. 2023-07-19 10:20:32 -04:00
comfyanonymous e032ca6138 Fix ddim issue with older torch versions. 2023-07-19 10:16:00 -04:00
comfyanonymous 18885f803a Add MX450 and MX550 to list of cards with broken fp16. 2023-07-19 03:08:30 -04:00
comfyanonymous 9ba440995a It's actually possible to torch.compile the unet now. 2023-07-18 21:36:35 -04:00
comfyanonymous 51d5477579 Add key to indicate checkpoint is v_prediction when saving. 2023-07-18 00:25:53 -04:00
comfyanonymous ff6b047a74 Fix device print on old torch version. 2023-07-17 15:18:58 -04:00
comfyanonymous 9871a15cf9 Enable --cuda-malloc by default on torch 2.0 and up. 2023-07-17 15:12:10 -04:00
    Add --disable-cuda-malloc to disable it.
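
Under the hood this selects PyTorch's asynchronous CUDA allocator, which must be configured before CUDA is initialized. A sketch of the mechanism using PyTorch's documented allocator config variable (how ComfyUI wires this up internally may differ):

```python
import os

# must be set before torch initializes CUDA, so in practice before
# "import torch" runs anywhere in the process
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

import torch  # noqa: E402
```
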
comfyanonymous 55d0fca9fa --windows-standalone-build now enables --cuda-malloc 2023-07-17 14:10:36 -04:00
comfyanonymous 1679abd86d Add a command line argument to enable backend:cudaMallocAsync 2023-07-17 11:00:14 -04:00
comfyanonymous 3a150bad15 Only calculate randn in some samplers when it's actually being used. 2023-07-17 10:11:08 -04:00
comfyanonymous ee8f8ee07f Fix regression with ddim and uni_pc when batch size > 1. 2023-07-17 09:35:19 -04:00
comfyanonymous 3ded1a3a04 Refactor of sampler code to deal more easily with different model types. 2023-07-17 01:22:12 -04:00
comfyanonymous 5f57362613 Lower lora ram usage when in normal vram mode. 2023-07-16 02:59:04 -04:00
comfyanonymous 490771b7f4 Speed up lora loading a bit. 2023-07-15 13:25:22 -04:00
comfyanonymous 50b1180dde Fix CLIPSetLastLayer not reverting when removed. 2023-07-15 01:41:21 -04:00
comfyanonymous 6fb084f39d Reduce floating point rounding errors in loras. 2023-07-15 00:53:00 -04:00
comfyanonymous 91ed2815d5 Add a node to merge CLIP models. 2023-07-14 02:41:18 -04:00
comfyanonymous b2f03164c7 Prevent the clip_g position_ids key from being saved in the checkpoint. 2023-07-12 20:15:02 -04:00
    This is to make it match the official checkpoint.
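
A hedged sketch of the idea, assuming a flat state dict (the actual commit targets one specific clip_g key rather than filtering by suffix):

```python
def strip_position_ids(state_dict):
    # drop position_ids buffers so the saved file matches checkpoints
    # that never serialize them
    return {k: v for k, v in state_dict.items()
            if not k.endswith("position_ids")}
```
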
comfyanonymous 46dc050c9f Fix potential tensors being on different devices issues. 2023-07-12 19:29:27 -04:00
KarryCharon 3e2309f149 Fix missing mps import. 2023-07-12 10:06:34 +08:00
comfyanonymous 606a537090 Support SDXL embedding format with 2 CLIP. 2023-07-10 10:34:59 -04:00
comfyanonymous 6ad0a6d7e2 Don't patch weights when multiplier is zero. 2023-07-09 17:46:56 -04:00
comfyanonymous d5323d16e0 latent2rgb matrix for SDXL. 2023-07-09 13:59:09 -04:00
comfyanonymous 0ae81c03bb Empty cache after model unloading for normal vram and lower. 2023-07-09 09:56:03 -04:00
comfyanonymous d3f5998218 Support loading clip_g from diffusers in CLIP Loader nodes. 2023-07-09 09:33:53 -04:00
comfyanonymous a9a4ba7574 Fix merging not working when model2 of model merge node was a merge. 2023-07-08 22:31:10 -04:00
comfyanonymous bb5fbd29e9 Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI 2023-07-07 01:52:25 -04:00
comfyanonymous e7bee85df8 Add arguments to run the VAE in fp16 or bf16 for testing. 2023-07-06 23:23:46 -04:00
comfyanonymous 608fcc2591 Fix bug with weights when prompt is long. 2023-07-06 02:43:40 -04:00
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 2023-07-05 21:58:29 -04:00
comfyanonymous 603f02d613 Fix loras not working when loading checkpoint with config. 2023-07-05 19:42:24 -04:00
comfyanonymous af7a49916b Support loading unet files in diffusers format. 2023-07-05 17:38:59 -04:00
comfyanonymous e57cba4c61 Add gpu variations of the sde samplers that are less deterministic but faster. 2023-07-05 01:39:38 -04:00
comfyanonymous f81b192944 Add logit scale parameter so it's present when saving the checkpoint. 2023-07-04 23:01:28 -04:00
comfyanonymous acf95191ff Properly support SDXL diffusers loras for unet. 2023-07-04 21:15:23 -04:00
mara c61a95f9f7 Fix size check for conditioning mask 2023-07-04 16:34:42 +02:00
    The wrong dimensions were being checked: [1] and [2] are the image size,
    not [2] and [3]. This results in an out-of-bounds error if one of them
    actually matches.
comfyanonymous 8d694cc450 Fix issue with OSX. 2023-07-04 02:09:02 -04:00
comfyanonymous c3e96e637d Pass device to CLIP model. 2023-07-03 16:09:37 -04:00
comfyanonymous 5e6bc824aa Allow passing custom path to clip-g and clip-h. 2023-07-03 15:45:04 -04:00
comfyanonymous dc9d1f31c8 Improvements for OSX. 2023-07-03 00:08:30 -04:00
comfyanonymous 103c487a89 Cleanup. 2023-07-02 11:58:23 -04:00
comfyanonymous 2c4e0b49b7 Switch to fp16 on some cards when the model is too big. 2023-07-02 10:00:57 -04:00
comfyanonymous 6f3d9f52db Add a --force-fp16 argument to force fp16 for testing. 2023-07-01 22:42:35 -04:00
comfyanonymous 1c1b0e7299 --gpu-only now keeps the VAE on the device. 2023-07-01 15:22:40 -04:00
comfyanonymous ce35d8c659 Lower latency by batching some text encoder inputs. 2023-07-01 15:07:39 -04:00
comfyanonymous 3b6fe51c1d Leave text_encoder on the CPU when it can handle it. 2023-07-01 14:38:51 -04:00
comfyanonymous b6a60fa696 Try to keep text encoders loaded and patched to increase speed. 2023-07-01 13:28:07 -04:00
    load_model_gpu() is now used with the text encoder models instead of
    just the unet.
comfyanonymous 97ee230682 Make highvram and normalvram shift the text encoders to vram and back. 2023-07-01 12:37:23 -04:00
    This is faster on big text encoder models than running them on the CPU.
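
The shift-to-vram-and-back pattern in miniature, assuming a callable encoder module (ComfyUI's model management does this with more bookkeeping):

```python
import torch

def encode_on_gpu(text_encoder, tokens, device="cuda", offload="cpu"):
    text_encoder.to(device)          # shift weights to vram for the forward pass
    try:
        with torch.no_grad():
            out = text_encoder(tokens.to(device))
    finally:
        text_encoder.to(offload)     # shift back, freeing vram for the unet
    return out
```
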
comfyanonymous 5a9ddf94eb LoraLoader node now caches the lora file between executions. 2023-06-29 23:40:51 -04:00
comfyanonymous 9920367d3c Fix embeddings not working with --gpu-only 2023-06-29 20:43:06 -04:00
comfyanonymous 62db11683b Move unet to device right after loading on highvram mode. 2023-06-29 20:43:06 -04:00
comfyanonymous 4376b125eb Remove useless code. 2023-06-29 00:26:33 -04:00
comfyanonymous 89120f1fbe This is unused but it should be 1280. 2023-06-28 18:04:23 -04:00
comfyanonymous 2c7c14de56 Support for SDXL text encoder lora. 2023-06-28 02:22:49 -04:00
comfyanonymous fcef47f06e Fix bug. 2023-06-28 00:38:07 -04:00
comfyanonymous 8248babd44 Use pytorch attention by default on nvidia when xformers isn't present. 2023-06-26 13:03:44 -04:00
    Add a new argument: --use-quad-cross-attention
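
A sketch of that backend selection, assuming torch 2.0's fused attention is what the commit means by "pytorch attention" (the quad/split fallbacks are omitted here):

```python
import torch
import torch.nn.functional as F

try:
    import xformers.ops
    XFORMERS_AVAILABLE = True
except ImportError:
    XFORMERS_AVAILABLE = False

def attention(q, k, v):
    # q, k, v: (batch, heads, tokens, dim_head)
    if XFORMERS_AVAILABLE:
        # xformers wants (batch, tokens, heads, dim_head)
        out = xformers.ops.memory_efficient_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
        return out.transpose(1, 2)
    # "pytorch attention": torch 2.0's fused kernel
    return F.scaled_dot_product_attention(q, k, v)
```
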
comfyanonymous 9b93b920be Add CheckpointSave node to save checkpoints. 2023-06-26 12:22:27 -04:00
    The created checkpoints contain workflow metadata that can be loaded by
    dragging them on top of the UI or loading them with the "Load" button.

    Checkpoints will be saved in fp16 or fp32, depending on the format
    ComfyUI is using for inference on your hardware. To force fp32, use
    --force-fp32.

    Anything that patches the model weights, like merging or loras, will
    be saved.

    The output directory is currently set to output/checkpoints, but that
    might change in the future.
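
A hedged sketch of saving a patched model with embedded workflow metadata, assuming the safetensors format (whose header holds string-to-string metadata); key names and layout are illustrative, not the node's actual code:

```python
import json
from safetensors.torch import save_file

def save_checkpoint(state_dict, workflow, path, force_fp32=False):
    if not force_fp32:
        # match the inference dtype to keep the file small
        state_dict = {k: v.half() if v.is_floating_point() else v
                      for k, v in state_dict.items()}
    # the workflow json rides along in the safetensors header, which is
    # how dragging the file onto the UI can restore the graph
    save_file(state_dict, path, metadata={"workflow": json.dumps(workflow)})
```
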
comfyanonymous b72a7a835a Support loras based on the stability unet implementation. 2023-06-26 02:56:11 -04:00
comfyanonymous c71a7e6b20 Fix ddim + inpainting not working. 2023-06-26 00:48:48 -04:00
comfyanonymous 4eab00e14b Set the seed in the SDE samplers to make them more reproducible. 2023-06-25 03:04:57 -04:00
comfyanonymous cef6aa62b2 Add support for TAESD decoder for SDXL. 2023-06-25 02:38:14 -04:00
comfyanonymous 20f579d91d Add DualClipLoader to load clip models for SDXL. 2023-06-25 01:40:38 -04:00
    Update LoadClip to load clip models for the SDXL refiner.
comfyanonymous b7933960bb Fix CLIPLoader node. 2023-06-24 13:56:46 -04:00
comfyanonymous 78d8035f73 Fix bug with controlnet. 2023-06-24 11:02:38 -04:00
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras. 2023-06-24 03:30:22 -04:00
    Tomesd now uses q instead of x to decide which tokens to merge because
    it seems to give better results.
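
A toy sketch of q-keyed token merging in the tomesd style, under the assumptions that the token count is even and merged tokens are mean-reduced; the real tomesd also unmerges tokens after attention, and this is not its actual code:

```python
import torch

def tome_merge(x, q, r):
    # score similarity on the attention queries q rather than the raw
    # features x, then fold the r most similar "source" tokens into
    # their best-matching "destination" tokens
    B, N, C = x.shape
    metric = q / q.norm(dim=-1, keepdim=True)
    src, dst = metric[:, ::2], metric[:, 1::2]        # bipartite split
    scores = src @ dst.transpose(-1, -2)              # cosine similarities
    best_val, best_dst = scores.max(dim=-1)           # best match per src token
    order = best_val.argsort(dim=-1, descending=True)
    merged, kept = order[:, :r], order[:, r:]         # src tokens to fold/keep

    x_src, x_dst = x[:, ::2], x[:, 1::2]
    idx = best_dst.gather(1, merged).unsqueeze(-1).expand(-1, -1, C)
    vals = x_src.gather(1, merged.unsqueeze(-1).expand(-1, -1, C))
    x_dst = x_dst.scatter_reduce(1, idx, vals, reduce="mean")

    x_kept = x_src.gather(1, kept.unsqueeze(-1).expand(-1, -1, C))
    return torch.cat([x_kept, x_dst], dim=1)          # (B, N - r, C)

x = torch.randn(2, 64, 320)
out = tome_merge(x, q=torch.randn(2, 64, 320), r=16)
print(out.shape)  # torch.Size([2, 48, 320])
```
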
comfyanonymous fa28d7334b Remove useless code. 2023-06-23 12:35:26 -04:00
comfyanonymous 8607c2d42d Move latent scale factor from VAE to model. 2023-06-23 02:33:31 -04:00
comfyanonymous 30a3861946 Fix bug when yaml config has no clip params. 2023-06-23 01:12:59 -04:00
comfyanonymous 9e37f4c7d5 Fix error with ClipVision loader node. 2023-06-23 01:08:05 -04:00
comfyanonymous 9f83b098c9 Don't merge weights when shapes don't match and print a warning. 2023-06-22 19:08:31 -04:00
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models. 2023-06-22 13:03:50 -04:00
    Large refactor of the model detection and loading code.
comfyanonymous 9fccf4aa03 Add original_shape parameter to transformer patch extra_options. 2023-06-21 13:22:01 -04:00
comfyanonymous 51581dbfa9 Fix last commits causing an issue with the text encoder lora. 2023-06-20 19:44:39 -04:00
comfyanonymous 8125b51a62 Keep a set of model_keys for faster add_patches. 2023-06-20 19:08:48 -04:00
comfyanonymous 45beebd33c Add a type of model patch useful for model merging. 2023-06-20 17:34:11 -04:00
comfyanonymous 036a22077c Fix k_diffusion math being off by a tiny bit during txt2img. 2023-06-19 15:28:54 -04:00
comfyanonymous 8883cb0f67 Add a way to set patches that modify the attn2 output. 2023-06-18 22:58:22 -04:00
    Change the transformer patches function format to be more future-proof.
comfyanonymous cd930d4e7f Pop clip vision keys after loading them. 2023-06-18 21:21:17 -04:00
comfyanonymous c9e4a8c9e5 Not needed anymore. 2023-06-18 13:06:59 -04:00
comfyanonymous fb4bf7f591 This is not needed anymore and causes issues with alphas_cumprod. 2023-06-18 03:18:25 -04:00
comfyanonymous 45be2e92c1 Fix DDIM v-prediction. 2023-06-17 20:48:21 -04:00
comfyanonymous e6e50ab2dd Fix an issue when alphas_cumprod are half floats. 2023-06-16 17:16:51 -04:00
comfyanonymous ae43f09ef7 All the unet weights should now be initialized with the right dtype. 2023-06-15 18:42:30 -04:00
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU. 2023-06-15 15:38:52 -04:00
    Make the CLIP model work on the GPU.
comfyanonymous 7bf89ba923 Initialize more unet weights as the right dtype. 2023-06-15 15:00:10 -04:00
comfyanonymous e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 2023-06-15 14:29:26 -04:00
comfyanonymous bb1f45d6e8 Properly disable weight initialization in clip models. 2023-06-14 20:13:08 -04:00
comfyanonymous 21f04fe632 Disable default weight values in unet conv2d for faster loading. 2023-06-14 19:46:08 -04:00
comfyanonymous 9d54066ebc This isn't needed for inference. 2023-06-14 13:05:08 -04:00
comfyanonymous fa2cca056c Don't initialize CLIPVision weights to default values. 2023-06-14 12:57:02 -04:00
comfyanonymous 6b774589a5 Set model to fp16 before loading the state dict to lower ram bump. 2023-06-14 12:48:02 -04:00
comfyanonymous 0c7cad404c Don't initialize clip weights to default values. 2023-06-14 12:47:36 -04:00
comfyanonymous 6971646b8b Speed up model loading a bit. 2023-06-14 12:09:41 -04:00
    PyTorch's default Linear initializes the weights, which is useless and
    slow when they are about to be overwritten by a checkpoint load.
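
What skipping initialization buys, shown with PyTorch's stock skip_init helper (ComfyUI patches the init differently; this only demonstrates the cost being avoided):

```python
import time
import torch

t0 = time.time()
m1 = torch.nn.Linear(8192, 8192)    # runs kaiming init on ~67M params
t1 = time.time()
m2 = torch.nn.utils.skip_init(torch.nn.Linear, 8192, 8192)  # leaves weights uninitialized
t2 = time.time()
print(f"with init: {t1 - t0:.3f}s  skip_init: {t2 - t1:.3f}s")
# either way, load_state_dict() supplies the real weights afterwards
```
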
comfyanonymous 388567f20b sampler_cfg_function now uses a dict for the argument. 2023-06-13 16:10:36 -04:00
    This means new arguments can be added without breaking existing callbacks.
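
Why a dict beats positional arguments for a hook: fields can be added later without changing every callback's signature. A sketch of a CFG patch in that style (the key names here are assumptions, not a confirmed API):

```python
def my_cfg_function(args):
    # pull fields by name; future keys in the dict are simply ignored
    cond, uncond = args["cond"], args["uncond"]     # assumed key names
    scale = args["cond_scale"]                      # assumed key name
    return uncond + scale * (cond - uncond)

# hooked up roughly like: model_options["sampler_cfg_function"] = my_cfg_function
```
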
comfyanonymous ff9b22d79e Turn on safe load for a few models. 2023-06-13 10:12:03 -04:00
comfyanonymous 735ac4cf81 Remove pytorch_lightning dependency. 2023-06-13 10:11:33 -04:00
comfyanonymous 2b14041d4b Remove useless code. 2023-06-13 02:40:58 -04:00
comfyanonymous 274dff3257 Remove more useless files. 2023-06-13 02:22:19 -04:00
comfyanonymous f0a2b81cd0 Cleanup: Remove a bunch of useless files. 2023-06-13 02:19:08 -04:00
comfyanonymous f8c5931053 Split the batch in VAEEncode if there's not enough memory. 2023-06-12 00:21:50 -04:00
comfyanonymous c069fc0730 Auto switch to tiled VAE encode if regular one runs out of memory. 2023-06-11 23:25:39 -04:00
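
The auto-switch pattern in miniature: attempt the full encode and fall back to tiles on an out-of-memory error. torch.cuda.OutOfMemoryError needs torch 1.13+ (older versions raise a plain RuntimeError), and encode_tiled here stands in for a hypothetical tiled path:

```python
import torch

def vae_encode_safe(vae, pixels, tile=512):
    try:
        return vae.encode(pixels)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        # encode_tiled is a hypothetical tiled path: encode overlapping
        # crops and stitch the latents back together
        return vae.encode_tiled(pixels, tile_x=tile, tile_y=tile)
```
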
comfyanonymous c64ca8c0b2 Refactor unCLIP noise augment out of samplers.py 2023-06-11 04:01:18 -04:00