Commit Graph

353 Commits

Author SHA1 Message Date
comfyanonymous 5a9ddf94eb LoraLoader node now caches the lora file between executions. 2023-06-29 23:40:51 -04:00
comfyanonymous 9920367d3c Fix embeddings not working with --gpu-only 2023-06-29 20:43:06 -04:00
comfyanonymous 62db11683b Move unet to device right after loading on highvram mode. 2023-06-29 20:43:06 -04:00
comfyanonymous 4376b125eb Remove useless code. 2023-06-29 00:26:33 -04:00
comfyanonymous 89120f1fbe This is unused but it should be 1280. 2023-06-28 18:04:23 -04:00
comfyanonymous 2c7c14de56 Support for SDXL text encoder lora. 2023-06-28 02:22:49 -04:00
comfyanonymous fcef47f06e Fix bug. 2023-06-28 00:38:07 -04:00
comfyanonymous 8248babd44 Use pytorch attention by default on nvidia when xformers isn't present.
Add a new argument --use-quad-cross-attention
2023-06-26 13:03:44 -04:00
comfyanonymous 9b93b920be Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.

Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32

Anything that patches the model weights, such as merges or loras, will be
saved.

The output directory is currently set to output/checkpoints, but that might
change in the future.
2023-06-26 12:22:27 -04:00
comfyanonymous b72a7a835a Support loras based on the stability unet implementation. 2023-06-26 02:56:11 -04:00
comfyanonymous c71a7e6b20 Fix ddim + inpainting not working. 2023-06-26 00:48:48 -04:00
comfyanonymous 4eab00e14b Set the seed in the SDE samplers to make them more reproducible. 2023-06-25 03:04:57 -04:00
comfyanonymous cef6aa62b2 Add support for TAESD decoder for SDXL. 2023-06-25 02:38:14 -04:00
comfyanonymous 20f579d91d Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous b7933960bb Fix CLIPLoader node. 2023-06-24 13:56:46 -04:00
comfyanonymous 78d8035f73 Fix bug with controlnet. 2023-06-24 11:02:38 -04:00
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
comfyanonymous fa28d7334b Remove useless code. 2023-06-23 12:35:26 -04:00
comfyanonymous 8607c2d42d Move latent scale factor from VAE to model. 2023-06-23 02:33:31 -04:00
comfyanonymous 30a3861946 Fix bug when yaml config has no clip params. 2023-06-23 01:12:59 -04:00
comfyanonymous 9e37f4c7d5 Fix error with ClipVision loader node. 2023-06-23 01:08:05 -04:00
comfyanonymous 9f83b098c9 Don't merge weights when shapes don't match and print a warning. 2023-06-22 19:08:31 -04:00
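
A hedged sketch of the shape guard this commit describes; the function name and warning text are illustrative, not the actual ComfyUI code:

```python
import torch

def apply_patch(weight: torch.Tensor, patch: torch.Tensor, key: str) -> torch.Tensor:
    # Skip patches whose shape doesn't match the destination weight,
    # warning instead of failing mid-merge.
    if weight.shape != patch.shape:
        print(f"warning: shape mismatch for {key}: {weight.shape} vs {patch.shape}, skipping")
        return weight
    return weight + patch
```
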
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous 9fccf4aa03 Add original_shape parameter to transformer patch extra_options. 2023-06-21 13:22:01 -04:00
comfyanonymous 51581dbfa9 Fix last commits causing an issue with the text encoder lora. 2023-06-20 19:44:39 -04:00
comfyanonymous 8125b51a62 Keep a set of model_keys for faster add_patches. 2023-06-20 19:08:48 -04:00
comfyanonymous 45beebd33c Add a type of model patch useful for model merging. 2023-06-20 17:34:11 -04:00
comfyanonymous 036a22077c Fix k_diffusion math being off by a tiny bit during txt2img. 2023-06-19 15:28:54 -04:00
comfyanonymous 8883cb0f67 Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future proof.
2023-06-18 22:58:22 -04:00
comfyanonymous cd930d4e7f pop clip vision keys after loading them. 2023-06-18 21:21:17 -04:00
comfyanonymous c9e4a8c9e5 Not needed anymore. 2023-06-18 13:06:59 -04:00
comfyanonymous fb4bf7f591 This is not needed anymore and causes issues with alphas_cumprod. 2023-06-18 03:18:25 -04:00
comfyanonymous 45be2e92c1 Fix DDIM v-prediction. 2023-06-17 20:48:21 -04:00
comfyanonymous e6e50ab2dd Fix an issue when alphas_cumprod are half floats. 2023-06-16 17:16:51 -04:00
comfyanonymous ae43f09ef7 All the unet weights should now be initialized with the right dtype. 2023-06-15 18:42:30 -04:00
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous 7bf89ba923 Initialize more unet weights as the right dtype. 2023-06-15 15:00:10 -04:00
comfyanonymous e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 2023-06-15 14:29:26 -04:00
comfyanonymous bb1f45d6e8 Properly disable weight initialization in clip models. 2023-06-14 20:13:08 -04:00
comfyanonymous 21f04fe632 Disable default weight values in unet conv2d for faster loading. 2023-06-14 19:46:08 -04:00
comfyanonymous 9d54066ebc This isn't needed for inference. 2023-06-14 13:05:08 -04:00
comfyanonymous fa2cca056c Don't initialize CLIPVision weights to default values. 2023-06-14 12:57:02 -04:00
comfyanonymous 6b774589a5 Set model to fp16 before loading the state dict to lower ram bump. 2023-06-14 12:48:02 -04:00
comfyanonymous 0c7cad404c Don't initialize clip weights to default values. 2023-06-14 12:47:36 -04:00
comfyanonymous 6971646b8b Speed up model loading a bit.
Default pytorch Linear initializes the weights which is useless and slow.
2023-06-14 12:09:41 -04:00
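
A minimal sketch of the technique behind this run of loading speedups, assuming the standard trick of making reset_parameters a no-op so PyTorch's default initialization never runs (the class name is illustrative):

```python
import torch.nn as nn

class UninitializedLinear(nn.Linear):
    def reset_parameters(self) -> None:
        # The checkpoint's state dict overwrites these weights anyway,
        # so skip the default (slow, useless-for-inference) initialization.
        return None
```
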
comfyanonymous 388567f20b sampler_cfg_function now uses a dict for the argument.
This means arguments can be added without issues.
2023-06-13 16:10:36 -04:00
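
Under the dict-based convention a custom CFG function might look like the sketch below; the key names cond, uncond, and cond_scale are assumptions drawn from standard classifier-free guidance terminology, not confirmed by the commit itself:

```python
def custom_cfg(args: dict):
    cond, uncond = args["cond"], args["uncond"]  # assumed key names
    scale = args["cond_scale"]                   # assumed key name
    # Standard classifier-free guidance combination.
    return uncond + scale * (cond - uncond)
```

Because everything arrives in one dict, new fields can be added later without breaking existing callbacks, which is the stated motivation.
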
comfyanonymous ff9b22d79e Turn on safe load for a few models. 2023-06-13 10:12:03 -04:00
comfyanonymous 735ac4cf81 Remove pytorch_lightning dependency. 2023-06-13 10:11:33 -04:00
comfyanonymous 2b14041d4b Remove useless code. 2023-06-13 02:40:58 -04:00
comfyanonymous 274dff3257 Remove more useless files. 2023-06-13 02:22:19 -04:00
comfyanonymous f0a2b81cd0 Cleanup: Remove a bunch of useless files. 2023-06-13 02:19:08 -04:00
comfyanonymous f8c5931053 Split the batch in VAEEncode if there's not enough memory. 2023-06-12 00:21:50 -04:00
comfyanonymous c069fc0730 Auto switch to tiled VAE encode if regular one runs out of memory. 2023-06-11 23:25:39 -04:00
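
The fallback in these two commits presumably follows this pattern (a sketch; the encode/encode_tiled method names and the use of torch.cuda.OutOfMemoryError are assumptions):

```python
import torch

def vae_encode_safe(vae, pixels):
    try:
        # Regular full-resolution encode first.
        return vae.encode(pixels)
    except torch.cuda.OutOfMemoryError:
        # Tiled encoding processes the image in chunks, trading speed
        # for a much lower peak VRAM allocation.
        return vae.encode_tiled(pixels)
```
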
comfyanonymous c64ca8c0b2 Refactor unCLIP noise augment out of samplers.py 2023-06-11 04:01:18 -04:00
comfyanonymous de142eaad5 Simpler base model code. 2023-06-09 12:31:16 -04:00
comfyanonymous 23cf8ca7c5 Fix bug where an embedding gets ignored because of a mismatched size. 2023-06-08 23:48:14 -04:00
comfyanonymous 0e425603fb Small refactor. 2023-06-06 13:23:01 -04:00
comfyanonymous a3a713b6c5 Refactor previews into one command line argument.
Clean up a few things.
2023-06-06 02:13:05 -04:00
space-nuko 3e17971acb preview method autodetection 2023-06-05 18:59:10 -05:00
space-nuko d5a28fadaa Add latent2rgb preview 2023-06-05 18:39:56 -05:00
space-nuko 48f7ec750c Make previews into cli option 2023-06-05 13:19:02 -05:00
space-nuko b4f434ee66 Preview sampled images with TAESD 2023-06-05 09:20:17 -05:00
comfyanonymous fed0a4dd29 Some comments to say what the vram state options mean. 2023-06-04 17:51:04 -04:00
comfyanonymous 0a5fefd621 Cleanups and fixes for model_management.py
Hopefully fix regression on MPS and CPU.
2023-06-03 11:05:37 -04:00
comfyanonymous 700491d81a Implement global average pooling for controlnet. 2023-06-03 01:49:03 -04:00
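
For reference, global average pooling collapses a feature map to its spatial mean; in PyTorch terms:

```python
import torch

def global_average_pool(x: torch.Tensor) -> torch.Tensor:
    # Collapse the spatial dimensions of an NCHW feature map to 1x1.
    return x.mean(dim=(2, 3), keepdim=True)
```
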
comfyanonymous 67892b5ac5 Refactor and improve model_management code related to free memory. 2023-06-02 15:21:33 -04:00
space-nuko 499641ebf1 More accurate total 2023-06-02 00:14:41 -05:00
space-nuko b5dd15c67a System stats endpoint 2023-06-01 23:26:23 -05:00
comfyanonymous 5c38958e49 Tweak lowvram model memory so it's closer to what it was before. 2023-06-01 04:04:35 -04:00
comfyanonymous 94680732d3 Empty cache on mps. 2023-06-01 03:52:51 -04:00
comfyanonymous 03da8a3426 This is useless for inference. 2023-05-31 13:03:24 -04:00
comfyanonymous eb448dd8e1 Auto load model in lowvram if not enough memory. 2023-05-30 12:36:41 -04:00
comfyanonymous b9818eb910 Add route to get safetensors metadata:
/view_metadata/loras?filename=lora.safetensors
2023-05-29 02:48:50 -04:00
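
Querying the route from a local instance, assuming the usual default address 127.0.0.1:8188:

```python
import json
import urllib.request

url = "http://127.0.0.1:8188/view_metadata/loras?filename=lora.safetensors"
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))  # safetensors header metadata as JSON
```
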
comfyanonymous a532888846 Support VAEs in diffusers format. 2023-05-28 02:02:09 -04:00
comfyanonymous 0fc483dcfd Refactor diffusers model convert code to be able to reuse it. 2023-05-28 01:55:40 -04:00
comfyanonymous eb4bd7711a Remove einops. 2023-05-25 18:42:56 -04:00
comfyanonymous 87ab25fac7 Do operations in the same order as the code it replaces. 2023-05-25 18:31:27 -04:00
comfyanonymous 2b1fac9708 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-05-25 14:44:16 -04:00
comfyanonymous e1278fa925 Support old pytorch versions that don't have weights_only. 2023-05-25 13:30:59 -04:00
BlenderNeko 8b4b0c3188 vectorized bislerp 2023-05-25 19:23:47 +02:00
comfyanonymous b8ccbec6d8 Various improvements to bislerp. 2023-05-23 11:40:24 -04:00
comfyanonymous 34887b8885 Add experimental bislerp algorithm for latent upscaling.
It's like bilinear but with slerp.
2023-05-23 03:12:56 -04:00
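
The slerp half of "bilinear but with slerp" interpolates along the arc between two vectors rather than the straight line. A sketch; the real implementation is vectorized and must also handle the degenerate near-parallel case where sin(theta) approaches zero:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float) -> torch.Tensor:
    # Angle between the two directions along the last dimension.
    a_n = a / a.norm(dim=-1, keepdim=True)
    b_n = b / b.norm(dim=-1, keepdim=True)
    theta = torch.acos((a_n * b_n).sum(-1, keepdim=True).clamp(-1.0, 1.0))
    sin_theta = torch.sin(theta)
    # Interpolate along the arc instead of the chord.
    return (torch.sin((1.0 - t) * theta) * a + torch.sin(t * theta) * b) / sin_theta
```
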
comfyanonymous 6cc450579b Auto transpose images from exif data. 2023-05-22 00:22:24 -04:00
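
This is most likely Pillow's standard orientation helper (hedged, shown for illustration):

```python
from PIL import Image, ImageOps

img = Image.open("photo.jpg")
img = ImageOps.exif_transpose(img)  # rotate/flip per the EXIF Orientation tag
```
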
comfyanonymous dc198650c0 sample_dpmpp_2m_sde no longer crashes when step == 1. 2023-05-21 11:34:29 -04:00
comfyanonymous 069657fbf3 Add DPM-Solver++(2M) SDE and exponential scheduler.
The exponential scheduler is the one recommended with this sampler.
2023-05-21 01:46:03 -04:00
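
An exponential schedule spaces noise levels evenly in log space, as in k-diffusion's get_sigmas_exponential (a minimal sketch; the real schedule also appends a final zero sigma):

```python
import math
import torch

def sigmas_exponential(n: int, sigma_min: float, sigma_max: float) -> torch.Tensor:
    # Noise levels spaced evenly in log space, from sigma_max down to sigma_min.
    return torch.linspace(math.log(sigma_max), math.log(sigma_min), n).exp()
```
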
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2023-05-20 16:01:02 -04:00
comfyanonymous 797c4e8d3b Simplify and improve some vae attention code. 2023-05-20 15:07:21 -04:00
comfyanonymous ef815ba1e2 Switch default scheduler to normal. 2023-05-15 00:29:56 -04:00
comfyanonymous 68d12b530e Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI 2023-05-14 15:39:39 -04:00
comfyanonymous 3a1f47764d Print the torch device that is used on startup. 2023-05-13 17:11:27 -04:00
BlenderNeko 1201d2eae5 Make nodes map over input lists (#579)
* allow nodes to map over lists

* make work with IS_CHANGED and VALIDATE_INPUTS

* give list outputs distinct socket shape

* add rebatch node

* add batch index logic

* add repeat latent batch

* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
BlenderNeko 19c014f429 comment out annoying print statement 2023-05-12 23:57:40 +02:00
BlenderNeko d9e088ddfd minor changes for tiled sampler 2023-05-12 23:49:09 +02:00
comfyanonymous f7c0f75d1f Auto batching improvements.
Try batching with smart padding when cond sizes don't match.
2023-05-10 13:59:24 -04:00
comfyanonymous 314e526c5c Not needed anymore because sampling works with any latent size. 2023-05-09 12:18:18 -04:00
comfyanonymous c6e34963e4 Make t2i adapter work with any latent resolution. 2023-05-08 18:15:19 -04:00
comfyanonymous a1f12e370d Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI 2023-05-07 17:19:03 -04:00
comfyanonymous 6fc4917634 Make maximum_batch_area take into account the pytorch 2.0 attention function.
More conservative xformers maximum_batch_area.
2023-05-06 19:58:54 -04:00
comfyanonymous 678f933d38 maximum_batch_area for xformers.
Remove useless code.
2023-05-06 19:28:46 -04:00
EllangoK 8e03c789a2 auto-launch cli arg 2023-05-06 16:59:40 -04:00