comfyanonymous
719fa0866f
Set the clip vision model to eval mode so it works without inference mode.
2023-12-15 18:53:08 -05:00
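PyTorch modules are created in training mode; a minimal sketch of the fix described above, assuming the clip vision model is a plain torch.nn.Module (the helper name is hypothetical):

```python
import torch

def prepare_clip_vision(model: torch.nn.Module) -> torch.nn.Module:
    # Modules default to training mode; eval() disables dropout and other
    # train-only behavior so the model also gives correct results when it
    # is called outside torch.inference_mode().
    model.eval()
    return model
```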
Hari
574363a8a6
Implement Perp-Neg
2023-12-16 00:28:16 +05:30
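Perp-Neg keeps only the part of the guidance that the negative prompt does not oppose, using a vector projection between the two prompt directions. A rough sketch of that idea (tensor names and default scales are illustrative, not the repo's exact API):

```python
import torch

def perp_neg(e_pos, e_neg, e_uncond, cond_scale=7.5, neg_scale=1.0):
    # Guidance directions relative to the unconditional prediction.
    pos = e_pos - e_uncond
    neg = e_neg - e_uncond
    # Project the positive direction onto the negative one; subtracting the
    # scaled projection cancels only the component the negative prompt
    # objects to, leaving the rest of the positive guidance intact.
    proj = (torch.sum(pos * neg) / torch.norm(neg) ** 2) * neg
    return e_uncond + cond_scale * (pos - neg_scale * proj)
```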
comfyanonymous
a5056cfb1f
Remove useless code.
2023-12-15 01:28:16 -05:00
comfyanonymous
329c571993
Improve code legibility.
2023-12-14 11:41:49 -05:00
comfyanonymous
6c5990f7db
Fix cfg being calculated more than once when sampler_cfg_function is set.
2023-12-13 20:28:04 -05:00
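The point of the fix is that the cond/uncond mix should be computed exactly once per step, whether or not a custom function is installed. A minimal sketch (the callback signature here is assumed, not the repo's exact one):

```python
def cfg_result(cond, uncond, cond_scale, sampler_cfg_function=None):
    # Exactly one branch runs, so the CFG math happens a single time.
    if sampler_cfg_function is not None:
        return sampler_cfg_function(cond, uncond, cond_scale)  # assumed signature
    return uncond + cond_scale * (cond - uncond)
```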
comfyanonymous
ba04a87d10
Refactor and improve the sag node.
...
Moved all the sag-related code to comfy_extras/nodes_sag.py
2023-12-13 16:11:26 -05:00
Rafie Walker
6761233e9d
Implement Self-Attention Guidance (#2201)
...
* First SAG test
* need to put extra options on the model instead of patcher
* no errors and results seem not-broken
* Use @ashen-uncensored formula, which works better!!!
* Fix a crash when using weird resolutions. Remove an unnecessary UNet call
* Improve comments, optimize memory in blur routine
* SAG works with sampler_cfg_function
2023-12-13 15:52:11 -05:00
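Self-Attention Guidance degrades the conditional prediction by blurring the regions the self-attention map highlights, then steers the sample away from that degraded result. A minimal sketch of the blur-and-guide core (kernel size, sigma, and scale values are illustrative):

```python
import torch
import torch.nn.functional as F

def gaussian_blur_2d(img, kernel_size=9, sigma=1.0):
    # Separable Gaussian blur: one horizontal pass, then one vertical pass.
    x = torch.arange(kernel_size, dtype=img.dtype, device=img.device) - kernel_size // 2
    k = torch.exp(-(x ** 2) / (2 * sigma ** 2))
    k = k / k.sum()
    c = img.shape[1]
    kh = k.view(1, 1, 1, -1).repeat(c, 1, 1, 1)
    kv = k.view(1, 1, -1, 1).repeat(c, 1, 1, 1)
    img = F.conv2d(img, kh, padding=(0, kernel_size // 2), groups=c)
    return F.conv2d(img, kv, padding=(kernel_size // 2, 0), groups=c)

def sag_guidance(cond_pred, degraded_pred, sag_scale=0.5):
    # Push the result away from the blurred ("degraded") prediction.
    return cond_pred + sag_scale * (cond_pred - degraded_pred)
```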
comfyanonymous
b454a67bb9
Support segmind vega model.
2023-12-12 19:09:53 -05:00
comfyanonymous
824e4935f5
Add dtype parameter to VAE object.
2023-12-12 12:03:29 -05:00
comfyanonymous
32b7e7e769
Add manual cast to controlnet.
2023-12-12 11:32:42 -05:00
comfyanonymous
3152023fbc
Use inference dtype for unet memory usage estimation.
2023-12-11 23:50:38 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what these ops actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
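Based on the commit message, the pattern behind disable_weight_init is layer subclasses whose random initialization is a no-op, since checkpoint weights overwrite them immediately anyway. A sketch of that pattern:

```python
import torch.nn as nn

class disable_weight_init:
    # Layer variants that skip the default random init; the weights are
    # replaced by checkpoint values right after construction, so running
    # the initializers would be wasted work.
    class Linear(nn.Linear):
        def reset_parameters(self):
            return None

    class Conv2d(nn.Conv2d):
        def reset_parameters(self):
            return None
```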
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
2023-12-11 18:36:29 -05:00
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
2023-12-11 18:24:44 -05:00
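Manual casting stores the weights in a compact dtype and converts a copy to the compute dtype only for the duration of each op. A minimal sketch (the class name is hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManualCastLinear(nn.Linear):  # hypothetical name
    def forward(self, x):
        # The stored weight may be fp8 (e.g. torch.float8_e4m3fn); cast it
        # to the input's dtype just for this matmul.
        weight = self.weight.to(dtype=x.dtype, device=x.device)
        bias = None if self.bias is None else self.bias.to(dtype=x.dtype, device=x.device)
        return F.linear(x, weight, bias)
```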
comfyanonymous
57926635e8
Switch text encoder to manual cast.
...
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
comfyanonymous
340177e6e8
Disable non-blocking transfers on mps.
2023-12-10 01:30:35 -05:00
comfyanonymous
614b7e731f
Implement GLora.
2023-12-09 18:15:26 -05:00
comfyanonymous
cb63e230b4
Make lora code a bit cleaner.
2023-12-09 14:15:09 -05:00
comfyanonymous
174eba8e95
Use own clip vision model implementation.
2023-12-09 11:56:31 -05:00
comfyanonymous
97015b6b38
Cleanup.
2023-12-08 16:02:08 -05:00
comfyanonymous
a4ec54a40d
Add linear_start and linear_end to model_config.sampling_settings
2023-12-08 02:49:30 -05:00
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of CPU memory.
2023-12-08 02:35:45 -05:00
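A hypothetical helper showing the device choice this flag controls (name and flag handling are illustrative):

```python
import torch

def intermediate_device(gpu_only: bool) -> torch.device:
    # With --gpu-only, intermediate tensors such as latents stay on the
    # GPU instead of being copied back to system RAM between stages.
    return torch.device("cuda") if gpu_only else torch.device("cpu")
```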
comfyanonymous
efb704c758
Support attention masking in CLIP implementation.
2023-12-07 02:51:02 -05:00
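Attention masking in a hand-rolled CLIP usually means an additive mask applied to the attention scores before the softmax. A generic sketch, not the repo's exact code:

```python
import torch

def masked_attention(q, k, v, mask):
    # mask is additive: 0 where attending is allowed, a large negative
    # value where it is not, so those positions vanish after softmax.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores + mask, dim=-1) @ v
```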
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
comfyanonymous
2db86b4676
Slightly faster lora application.
2023-12-06 05:13:14 -05:00
comfyanonymous
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
9b655d4fd7
Fix memory issue with control loras.
2023-12-04 21:55:19 -05:00
comfyanonymous
26b1c0a771
Fix control lora on fp8.
2023-12-04 13:47:41 -05:00
comfyanonymous
be3468ddd5
Less useless downcasting.
2023-12-04 12:53:46 -05:00
comfyanonymous
ca82ade765
Use .itemsize to get dtype size for fp8.
2023-12-04 11:52:06 -05:00
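With a recent PyTorch the per-element byte size is available directly on the dtype, which also covers the fp8 formats that older size-lookup tables missed:

```python
import torch

print(torch.float8_e4m3fn.itemsize)  # 1
print(torch.float16.itemsize)        # 2
print(torch.float32.itemsize)        # 4
```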
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet select the two fp8 formats
supported by PyTorch.
2023-12-04 11:10:00 -05:00
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
comfyanonymous
61a123a1e0
A different way of handling multiple images passed to SVD.
...
Previously, when a list of 3 images [0, 1, 2] was used for a 6-frame video,
they were concatenated like this:
[0, 1, 2, 0, 1, 2]
Now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
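The difference between the two orderings corresponds to tiling the batch versus repeating each element in place:

```python
import torch

# Stand-in batch of 3 "images"; a real batch would be [3, H, W, C].
images = torch.tensor([[0.0], [1.0], [2.0]])

old = images.repeat(2, 1)                                # rows: 0, 1, 2, 0, 1, 2
new = torch.repeat_interleave(images, repeats=2, dim=0)  # rows: 0, 0, 1, 1, 2, 2
```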
comfyanonymous
c97be4db91
Support SD2.1 turbo checkpoint.
2023-11-30 19:27:03 -05:00
comfyanonymous
983ebc5792
Use smart model management for VAE to decrease latency.
2023-11-28 04:58:51 -05:00
comfyanonymous
c45d1b9b67
Add a function to load a unet from a state dict.
2023-11-27 17:41:29 -05:00
comfyanonymous
f30b992b18
.sigma and .timestep now return tensors on the same device as the input.
2023-11-27 16:41:33 -05:00
comfyanonymous
13fdee6abf
Try to free memory for both cond+uncond before inference.
2023-11-27 14:55:40 -05:00
comfyanonymous
be71bb5e13
Tweak memory inference calculations a bit.
2023-11-27 14:04:16 -05:00
comfyanonymous
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous
5d6dfce548
Fix importing diffusers unets.
2023-11-24 20:35:29 -05:00
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
410bf07771
Make VAE memory estimation take dtype into account.
2023-11-22 18:17:19 -05:00
comfyanonymous
32447f0c39
Add sampling_settings so models can specify specific sampling settings.
2023-11-22 17:24:00 -05:00
comfyanonymous
c3ae99a749
Allow controlling downscale and upscale methods in PatchModelAddDownscale.
2023-11-22 03:23:16 -05:00
comfyanonymous
72741105a6
Remove useless code.
2023-11-21 17:27:28 -05:00
comfyanonymous
6a491ebe27
Allow model config to preprocess the vae state dict on load.
2023-11-21 16:29:18 -05:00
comfyanonymous
cd4fc77d5f
Add taesd and taesdxl to VAELoader node.
...
They will show up if both the encoder and decoder model files
(taesd_encoder/taesd_decoder, or the taesdxl equivalents) are present in
the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
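A hypothetical sketch of the detection logic described above (file naming and helper name are illustrative):

```python
import os

def taesd_entries(vae_approx_dir="models/vae_approx"):
    # An entry is offered only when both its encoder and decoder are present.
    files = os.listdir(vae_approx_dir)
    entries = []
    for name in ("taesd", "taesdxl"):
        if any(f.startswith(name + "_encoder") for f in files) and \
           any(f.startswith(name + "_decoder") for f in files):
            entries.append(name)
    return entries
```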