comfyanonymous | 21f04fe632 | Disable default weight values in unet conv2d for faster loading. | 2023-06-14 19:46:08 -04:00
comfyanonymous | 9d54066ebc | This isn't needed for inference. | 2023-06-14 13:05:08 -04:00
comfyanonymous | fa2cca056c | Don't initialize CLIPVision weights to default values. | 2023-06-14 12:57:02 -04:00
comfyanonymous | 6b774589a5 | Set the model to fp16 before loading the state dict to lower the RAM bump. | 2023-06-14 12:48:02 -04:00
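A minimal sketch of the idea behind 6b774589a5, using a stand-in model (in ComfyUI this would be the diffusion UNet): casting to fp16 before `load_state_dict` means the parameters being overwritten are already half-size while the checkpoint tensors are copied in, which lowers peak host RAM during loading.

```python
import torch.nn as nn

# Stand-in model; in ComfyUI this would be the UNet built from the model config.
model = nn.Sequential(nn.Linear(320, 1280), nn.Linear(1280, 320))

# Cast to fp16 *before* loading so full-size fp32 parameter copies never
# coexist with the checkpoint tensors in RAM.
model = model.half()

# Pretend checkpoint; normally this would come from torch.load()/safetensors.
state_dict = {k: v.clone() for k, v in model.state_dict().items()}
model.load_state_dict(state_dict)
```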
comfyanonymous | 0c7cad404c | Don't initialize CLIP weights to default values. | 2023-06-14 12:47:36 -04:00
comfyanonymous | 6971646b8b | Speed up model loading a bit. Default pytorch Linear initializes the weights, which is useless (the checkpoint overwrites them) and slow. | 2023-06-14 12:09:41 -04:00
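A sketch of one way to get the speedup described in 6971646b8b (not necessarily the exact mechanism used here): subclass the layers and make `reset_parameters` a no-op, so constructing the model allocates tensors without running the default Kaiming/uniform init that the checkpoint immediately overwrites.

```python
import torch.nn as nn

class LinearNoInit(nn.Linear):
    def reset_parameters(self):
        # Skip PyTorch's default init; the checkpoint's state dict will
        # overwrite these values right after construction anyway.
        pass

class Conv2dNoInit(nn.Conv2d):
    def reset_parameters(self):
        pass

layer = LinearNoInit(4096, 4096)  # allocates weights without initializing them
```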
comfyanonymous | 388567f20b | sampler_cfg_function now takes a dict as its argument, so new arguments can be added without breaking existing functions. | 2023-06-13 16:10:36 -04:00
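With 388567f20b a custom CFG callback receives one dict instead of positional arguments, so new keys can appear later without breaking existing callbacks. A sketch of such a callback; the key names (`cond`, `uncond`, `cond_scale`) are assumed from standard classifier-free guidance.

```python
def my_cfg_function(args):
    # Classifier-free guidance: uncond + (cond - uncond) * scale.
    cond = args["cond"]
    uncond = args["uncond"]
    cond_scale = args["cond_scale"]
    return uncond + (cond - uncond) * cond_scale

# A custom node would typically register this on a cloned model patcher,
# e.g. model.set_model_sampler_cfg_function(my_cfg_function).
```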
comfyanonymous | ff9b22d79e | Turn on safe load for a few models. | 2023-06-13 10:12:03 -04:00
comfyanonymous | 735ac4cf81 | Remove pytorch_lightning dependency. | 2023-06-13 10:11:33 -04:00
comfyanonymous | 2b14041d4b | Remove useless code. | 2023-06-13 02:40:58 -04:00
comfyanonymous | 274dff3257 | Remove more useless files. | 2023-06-13 02:22:19 -04:00
comfyanonymous | f0a2b81cd0 | Cleanup: Remove a bunch of useless files. | 2023-06-13 02:19:08 -04:00
comfyanonymous | f8c5931053 | Split the batch in VAEEncode if there's not enough memory. | 2023-06-12 00:21:50 -04:00
comfyanonymous | c069fc0730 | Auto switch to tiled VAE encode if the regular one runs out of memory. | 2023-06-11 23:25:39 -04:00
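The pattern behind f8c5931053 and c069fc0730 is roughly: try the straightforward full-resolution encode and, if the backend reports out-of-memory, retry with a tiled (or smaller-batch) path. A generic sketch; `encode_full` and `encode_tiled` are hypothetical stand-ins.

```python
import torch

def vae_encode_with_fallback(vae, pixels, encode_full, encode_tiled):
    """Try a regular VAE encode; fall back to tiled encoding on OOM."""
    try:
        return encode_full(vae, pixels)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        # The tiled path trades speed for a much smaller peak VRAM footprint.
        return encode_tiled(vae, pixels)
```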
comfyanonymous | c64ca8c0b2 | Refactor unCLIP noise augment out of samplers.py | 2023-06-11 04:01:18 -04:00
comfyanonymous | de142eaad5 | Simpler base model code. | 2023-06-09 12:31:16 -04:00
comfyanonymous | 23cf8ca7c5 | Fix bug where an embedding gets ignored because of a mismatched size. | 2023-06-08 23:48:14 -04:00
comfyanonymous | 0e425603fb | Small refactor. | 2023-06-06 13:23:01 -04:00
comfyanonymous | a3a713b6c5 | Refactor previews into one command line argument. Clean up a few things. | 2023-06-06 02:13:05 -04:00
space-nuko | 3e17971acb | Preview method autodetection | 2023-06-05 18:59:10 -05:00
space-nuko | d5a28fadaa | Add latent2rgb preview | 2023-06-05 18:39:56 -05:00
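latent2rgb approximates a preview by projecting the 4 latent channels straight to RGB with a small fixed matrix, which is far cheaper than decoding through the VAE. A sketch; the coefficients below are illustrative, and real implementations tune them per model family.

```python
import torch

# Approximate factors mapping a 4-channel SD latent to RGB (illustrative values).
latent_rgb_factors = torch.tensor([
    [0.35, 0.23, 0.32],
    [0.33, 0.50, 0.24],
    [-0.28, 0.18, 0.27],
    [-0.21, -0.26, -0.72],
])

def latent_to_rgb_preview(latent):  # latent: [B, 4, H, W]
    rgb = torch.einsum("bchw,cr->brhw", latent, latent_rgb_factors)
    return rgb.clamp(-1, 1).add(1).mul(127.5).to(torch.uint8)  # [B, 3, H, W], 0-255
```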
space-nuko | 48f7ec750c | Make previews into a CLI option | 2023-06-05 13:19:02 -05:00
space-nuko | b4f434ee66 | Preview sampled images with TAESD | 2023-06-05 09:20:17 -05:00
comfyanonymous | fed0a4dd29 | Add comments explaining what the vram state options mean. | 2023-06-04 17:51:04 -04:00
comfyanonymous | 0a5fefd621 | Cleanups and fixes for model_management.py. Hopefully fix regression on MPS and CPU. | 2023-06-03 11:05:37 -04:00
comfyanonymous | 700491d81a | Implement global average pooling for controlnet. | 2023-06-03 01:49:03 -04:00
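Global average pooling here is the ControlNet variant (e.g. the shuffle model) whose residuals are averaged over the spatial dimensions before being added to the UNet features. A minimal sketch of that pooling step.

```python
def global_average_pool(control_outputs):
    # Collapse each control residual to its spatial mean, keeping the dims so
    # it still broadcasts over the UNet feature map it is added to.
    return [x.mean(dim=(2, 3), keepdim=True) for x in control_outputs]
```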
comfyanonymous | 67892b5ac5 | Refactor and improve model_management code related to free memory. | 2023-06-02 15:21:33 -04:00
space-nuko | 499641ebf1 | More accurate total | 2023-06-02 00:14:41 -05:00
space-nuko | b5dd15c67a | System stats endpoint | 2023-06-01 23:26:23 -05:00
comfyanonymous | 5c38958e49 | Tweak lowvram model memory so it's closer to what it was before. | 2023-06-01 04:04:35 -04:00
comfyanonymous | 94680732d3 | Empty cache on mps. | 2023-06-01 03:52:51 -04:00
comfyanonymous | 03da8a3426 | This is useless for inference. | 2023-05-31 13:03:24 -04:00
comfyanonymous | eb448dd8e1 | Auto load model in lowvram if not enough memory. | 2023-05-30 12:36:41 -04:00
comfyanonymous | b9818eb910 | Add route to get safetensors metadata: /view_metadata/loras?filename=lora.safetensors | 2023-05-29 02:48:50 -04:00
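A usage sketch for the b9818eb910 route, assuming a local server on the default port and a file named `lora.safetensors` in the loras folder; the response is the JSON metadata stored in the safetensors header.

```python
import requests  # third-party: pip install requests

resp = requests.get(
    "http://127.0.0.1:8188/view_metadata/loras",
    params={"filename": "lora.safetensors"},
)
print(resp.json())  # e.g. training metadata embedded by the LoRA trainer
```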
comfyanonymous | a532888846 | Support VAEs in diffusers format. | 2023-05-28 02:02:09 -04:00
comfyanonymous | 0fc483dcfd | Refactor diffusers model convert code to be able to reuse it. | 2023-05-28 01:55:40 -04:00
comfyanonymous | eb4bd7711a | Remove einops. | 2023-05-25 18:42:56 -04:00
comfyanonymous | 87ab25fac7 | Do operations in the same order as the code it replaces. | 2023-05-25 18:31:27 -04:00
comfyanonymous | 2b1fac9708 | Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI | 2023-05-25 14:44:16 -04:00
comfyanonymous | e1278fa925 | Support old pytorch versions that don't have weights_only. | 2023-05-25 13:30:59 -04:00
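`weights_only` only exists in newer PyTorch releases, so supporting older versions means detecting whether `torch.load` accepts it and falling back to a plain load otherwise. A sketch of that check (the project's exact detection may differ).

```python
import inspect
import torch

def load_checkpoint(path):
    # weights_only=True restricts unpickling to plain tensors, avoiding
    # arbitrary code execution from a malicious .ckpt file.
    if "weights_only" in inspect.signature(torch.load).parameters:
        return torch.load(path, map_location="cpu", weights_only=True)
    # Older PyTorch without the keyword: fall back to a regular load.
    return torch.load(path, map_location="cpu")
```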
BlenderNeko | 8b4b0c3188 | Vectorized bislerp | 2023-05-25 19:23:47 +02:00
comfyanonymous | b8ccbec6d8 | Various improvements to bislerp. | 2023-05-23 11:40:24 -04:00
comfyanonymous | 34887b8885 | Add experimental bislerp algorithm for latent upscaling. It's like bilinear but with slerp. | 2023-05-23 03:12:56 -04:00
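Bislerp follows the bilinear sampling pattern but blends neighbouring latent vectors with spherical linear interpolation, which preserves vector magnitude better than a plain lerp. A sketch of the slerp building block it relies on.

```python
import torch

def slerp(a, b, t, eps=1e-8):
    """Spherical interpolation between batches of vectors a, b at ratio t,
    rescaled to a linearly interpolated magnitude."""
    a_mag = a.norm(dim=-1, keepdim=True)
    b_mag = b.norm(dim=-1, keepdim=True)
    a_n = a / (a_mag + eps)
    b_n = b / (b_mag + eps)
    dot = (a_n * b_n).sum(dim=-1, keepdim=True).clamp(-1.0, 1.0)
    omega = torch.acos(dot)
    sin_omega = torch.sin(omega).clamp_min(eps)
    # Fall back to plain lerp where the vectors are (nearly) parallel.
    use_slerp = torch.sin(omega) > 1e-4
    w_a = torch.where(use_slerp, torch.sin((1.0 - t) * omega) / sin_omega,
                      torch.full_like(omega, 1.0 - t))
    w_b = torch.where(use_slerp, torch.sin(t * omega) / sin_omega,
                      torch.full_like(omega, t))
    mag = (1.0 - t) * a_mag + t * b_mag
    return (w_a * a_n + w_b * b_n) * mag
```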
comfyanonymous | 6cc450579b | Auto transpose images from exif data. | 2023-05-22 00:22:24 -04:00
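Auto-orienting on load is typically done with Pillow's `ImageOps.exif_transpose`, which applies the rotation or flip recorded in the EXIF Orientation tag; the filename below is a placeholder.

```python
from PIL import Image, ImageOps

img = Image.open("photo_from_phone.jpg")  # placeholder path
img = ImageOps.exif_transpose(img)  # rotate/flip pixels to match the EXIF Orientation tag
```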
comfyanonymous | dc198650c0 | sample_dpmpp_2m_sde no longer crashes when step == 1. | 2023-05-21 11:34:29 -04:00
comfyanonymous | 069657fbf3 | Add DPM-Solver++(2M) SDE and the exponential scheduler; the exponential scheduler is the one recommended for this sampler. | 2023-05-21 01:46:03 -04:00
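The exponential scheduler spaces the noise levels log-uniformly between sigma_max and sigma_min, as in k-diffusion's `get_sigmas_exponential`. A sketch; the example sigma range is typical of SD1.x-style models but is an assumption here.

```python
import math
import torch

def sigmas_exponential(n, sigma_min, sigma_max):
    # Log-uniform spacing from sigma_max down to sigma_min, plus the final 0.
    sigmas = torch.exp(torch.linspace(math.log(sigma_max), math.log(sigma_min), n))
    return torch.cat([sigmas, sigmas.new_zeros(1)])

print(sigmas_exponential(20, 0.0292, 14.6146))
```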
comfyanonymous | b8636a44aa | Make scaled_dot_product switch to sliced attention on OOM. | 2023-05-20 16:01:02 -04:00
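Same fallback idea as the tiled VAE encode: attempt `scaled_dot_product_attention`, and on out-of-memory recompute it in slices along the query dimension so only part of the attention matrix is materialized at once. A sketch; the fixed slice count is an assumption.

```python
import torch
import torch.nn.functional as F

def attention_with_fallback(q, k, v, slices=4):
    try:
        return F.scaled_dot_product_attention(q, k, v)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        # Sliced attention: process chunks of queries so only a fraction of the
        # (queries x keys) matrix exists in memory at a time.
        chunks = [F.scaled_dot_product_attention(qc, k, v)
                  for qc in q.chunk(slices, dim=-2)]
        return torch.cat(chunks, dim=-2)
```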
comfyanonymous | 797c4e8d3b | Simplify and improve some vae attention code. | 2023-05-20 15:07:21 -04:00
comfyanonymous | ef815ba1e2 | Switch default scheduler to normal. | 2023-05-15 00:29:56 -04:00
comfyanonymous | 68d12b530e | Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI | 2023-05-14 15:39:39 -04:00