comfyanonymous
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
comfyanonymous
ffc4b7c30e
Fix DORA strength.
This is a different version of #3298 with more correct behavior.
2024-05-25 02:50:11 -04:00
comfyanonymous
efa5a711b2
Reduce memory usage when applying DORA: #3557
2024-05-24 23:36:48 -04:00
comfyanonymous
6c23854f54
Fix OSX latent2rgb previews.
2024-05-22 13:56:28 -04:00
Chenlei Hu
7718ada4ed
Add type annotation UnetWrapperFunction (#3531)
* Add type annotation UnetWrapperFunction
* nit
* Add types.py
2024-05-22 02:07:27 -04:00
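For readers wondering what that annotation covers: a unet wrapper is essentially a callable that receives the model's apply function together with its packed parameters and returns the model output. A minimal sketch of such a type, using illustrative names rather than the exact contents of the new types.py:

    import torch
    from typing import Callable, Protocol, TypedDict

    class UnetApplyFunction(Protocol):
        # The model's apply call: noisy latents plus timestep in, prediction out.
        def __call__(self, x: torch.Tensor, t: torch.Tensor, **kwargs) -> torch.Tensor: ...

    class UnetParams(TypedDict):
        # Arguments packed up for the wrapper; field names here are illustrative.
        input: torch.Tensor
        timestep: torch.Tensor
        c: dict
        cond_or_uncond: list

    # A unet wrapper intercepts the call: it gets the apply function and the
    # packed params, may do extra work around the call, and returns the output.
    UnetWrapperFunction = Callable[[UnetApplyFunction, UnetParams], torch.Tensor]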
comfyanonymous
8508df2569
Work around black image bug on Mac 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
comfyanonymous
83d969e397
Disable xformers when tracing model.
2024-05-21 13:55:49 -04:00
comfyanonymous
1900e5119f
Fix potential issue.
2024-05-20 08:19:54 -04:00
comfyanonymous
09e069ae6c
Log the pytorch version.
2024-05-20 06:22:29 -04:00
comfyanonymous
11a2ad5110
Fix controlnet not upcasting on models that have it enabled.
2024-05-19 17:58:03 -04:00
comfyanonymous
0bdc2b15c7
Cleanup.
2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
comfyanonymous
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
2024-05-17 00:31:32 -04:00
comfyanonymous
46daf0a9a7
Add debug options to force attention upcasting on or off.
2024-05-16 04:09:41 -04:00
comfyanonymous
2d41642716
Fix lowvram dora issue.
2024-05-15 02:47:40 -04:00
comfyanonymous
ec6f16adb6
Fix SAG.
2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
2024-05-14 17:00:50 -04:00
comfyanonymous
b0ab31d06c
Refactor attention upcasting code part 1.
2024-05-14 12:47:31 -04:00
Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and update documentation. (#3459)
* Change the total memory calculation to be more accurate; allocated memory is actually smaller than reserved.
* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
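The "allocated is smaller than reserved" point is the key detail: PyTorch's caching allocator reserves memory from the driver and serves allocations out of it, so memory that is reserved but not currently allocated is still reusable by torch and shouldn't be counted as spent. A rough sketch of that accounting, shown with the CUDA-side APIs purely as an illustration (the XPU path uses the Intel equivalents; this is not the actual ComfyUI code):

    import torch

    def estimate_free_vram(dev):
        # active/allocated <= reserved: the allocator keeps reserved-but-unused
        # blocks around, and they remain available for future tensors.
        stats = torch.cuda.memory_stats(dev)
        active = stats['active_bytes.all.current']
        reserved = stats['reserved_bytes.all.current']
        free_inside_torch = reserved - active
        free_on_device, _total = torch.cuda.mem_get_info(dev)
        return free_on_device + free_inside_torch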
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous
49c20cdc70
No longer necessary.
2024-05-12 05:34:43 -04:00
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
2024-05-11 21:55:20 -04:00
comfyanonymous
93e876a3be
Remove warnings that confuse people.
2024-05-09 05:29:42 -04:00
comfyanonymous
cd07340d96
Typo fix.
2024-05-08 18:36:56 -04:00
comfyanonymous
c61eadf69a
Make the load checkpoint with config function call the regular one.
I was going to completely remove this function because it is unmaintainable,
but I think this is the best compromise.
The clip skip and v_prediction parts of the configs should still work, but
not the fp16 vs fp32 part.
2024-05-06 20:04:39 -04:00
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, xpu device initialization, and remove workaround for text node issue from older IPEX. (#3388)
2024-05-02 03:26:50 -04:00
comfyanonymous
f81a6fade8
Fix some edge cases with samplers and arrays with a single sigma.
2024-05-01 17:05:30 -04:00
comfyanonymous
2aed53c4ac
Work around xformers bug.
2024-04-30 21:23:40 -04:00
Garrett Sutula
bacce529fb
Add TLS Support (#3312)
* Add TLS Support
* Add to readme
* Add guidance for Windows users on generating certificates
* Fix typo
2024-04-30 20:17:02 -04:00
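For context on what TLS support amounts to here: ComfyUI's web server is aiohttp, and serving it over https boils down to loading a certificate/key pair into an ssl.SSLContext and handing that to the listening site. A minimal sketch of the pattern, with placeholder file names and without ComfyUI's actual argument plumbing:

    import ssl
    from aiohttp import web

    async def serve_tls(app: web.Application, address: str = "0.0.0.0", port: int = 8443):
        # Load the server certificate and private key into a TLS server context.
        ssl_ctx = ssl.SSLContext(protocol=ssl.PROTOCOL_TLS_SERVER)
        ssl_ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")

        runner = web.AppRunner(app)
        await runner.setup()
        # Passing ssl_context makes aiohttp serve https:// instead of http://.
        site = web.TCPSite(runner, address, port, ssl_context=ssl_ctx)
        await site.start()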
Jedrzej Kosinski
7990ae18c1
Fix error when more cond masks are passed in than the batch size (#3353)
2024-04-26 12:51:12 -04:00
comfyanonymous
8dc19e40d1
Don't init a VAE model when there are no VAE weights.
2024-04-24 09:20:31 -04:00
comfyanonymous
c59fe9f254
Support VAE without quant_conv.
2024-04-18 21:05:33 -04:00
comfyanonymous
719fb2c81d
Add basic PAG node.
2024-04-14 23:49:50 -04:00
comfyanonymous
258dbc06c3
Fix some memory related issues.
2024-04-14 12:08:58 -04:00
comfyanonymous
58812ab8ca
Support SDXS 512 model.
2024-04-12 22:12:35 -04:00
comfyanonymous
831511a1ee
Fix issue with sampling_settings persisting across models.
2024-04-09 23:20:43 -04:00
comfyanonymous
30abc324c2
Support properly saving CosXL checkpoints.
2024-04-08 00:36:22 -04:00
comfyanonymous
0a03009808
Fix issue with controlnet models getting loaded multiple times.
2024-04-06 18:38:39 -04:00
kk-89
38ed2da2dd
Fix typo in lowvram patcher (#3209)
2024-04-05 12:02:13 -04:00
comfyanonymous
1088d1850f
Support for CosXL models.
2024-04-05 10:53:41 -04:00
comfyanonymous
41ed7e85ea
Fix object_patches_backup not being the same object across clones.
2024-04-05 00:22:44 -04:00
comfyanonymous
0f5768e038
Fix missing arguments in cfg_function.
2024-04-04 23:38:57 -04:00
comfyanonymous
1f4fc9ea0c
Fix issue with get_model_object on patched model.
2024-04-04 23:01:02 -04:00
comfyanonymous
1a0486bb96
Fix model needing to be loaded on GPU to generate the sigmas.
2024-04-04 22:08:49 -04:00
comfyanonymous
c6bd456c45
Make zero denoise a NOP.
2024-04-04 11:41:27 -04:00
comfyanonymous
fcfd2bdf8a
Small cleanup.
2024-04-04 11:16:49 -04:00
comfyanonymous
0542088ef8
Refactor sampler code for more advanced sampler nodes part 2.
2024-04-04 01:26:41 -04:00
comfyanonymous
57753c964a
Refactor sampling code for more advanced sampler nodes.
2024-04-03 22:09:51 -04:00
comfyanonymous
6c6a39251f
Fix saving text encoder in fp8.
2024-04-02 11:46:34 -04:00
comfyanonymous
e6482fbbfc
Refactor calc_cond_uncond_batch into calc_cond_batch.
calc_cond_batch can take an arbitrary number of cond inputs.
Added a calc_cond_uncond_batch wrapper with a warning so custom nodes won't break.
2024-04-01 18:07:47 -04:00
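A minimal sketch of what such a backward-compatibility shim can look like, with argument names assumed for illustration (it forwards to the new list-based calc_cond_batch introduced by this commit):

    import logging

    def calc_cond_uncond_batch(model, cond, uncond, x_in, timestep, model_options):
        # Deprecated shim: forwards the old (cond, uncond) pair to the new
        # list-based calc_cond_batch and warns callers to migrate.
        logging.warning("calc_cond_uncond_batch is deprecated, use calc_cond_batch instead.")
        return tuple(calc_cond_batch(model, [cond, uncond], x_in, timestep, model_options))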