comfyanonymous | 103c487a89 | Cleanup. | 2023-07-02 11:58:23 -04:00
comfyanonymous | 78d8035f73 | Fix bug with controlnet. | 2023-06-24 11:02:38 -04:00
comfyanonymous | 05676942b7 | Add some more transformer hooks and move tomesd to comfy_extras. Tomesd now uses q instead of x to decide which tokens to merge because it seems to give better results. | 2023-06-24 03:30:22 -04:00
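A minimal sketch of the similarity metric that change is about, assuming the standard ToMe bipartite token split; the function name is illustrative, not ComfyUI's API:

```python
import torch

def tome_merge_scores(metric: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities used to decide which tokens to merge.

    `metric` is whatever tensor tokens are compared on; the commit above
    switches it from the block input x to the attention query q.
    """
    metric = metric / metric.norm(dim=-1, keepdim=True)
    a, b = metric[..., ::2, :], metric[..., 1::2, :]  # bipartite token split
    return a @ b.transpose(-1, -2)  # [..., N/2, N/2] similarity scores
```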
comfyanonymous | f87ec10a97 | Support base SDXL and SDXL refiner models. Large refactor of the model detection and loading code. | 2023-06-22 13:03:50 -04:00
comfyanonymous | 9fccf4aa03 | Add original_shape parameter to transformer patch extra_options. | 2023-06-21 13:22:01 -04:00
comfyanonymous | 8883cb0f67 | Add a way to set patches that modify the attn2 output. Change the transformer patches function format to be more future proof. | 2023-06-18 22:58:22 -04:00
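A sketch of what such a patch can look like, based on the two entries above; the registration helper named in the comment is an assumption and may differ by version:

```python
def scale_attn2_output(out, extra_options):
    # out: the attn2 (cross-attention) output of one transformer block.
    # extra_options: context dict; per the newer commit above it includes
    # "original_shape" (the latent tensor shape). Return the modified tensor.
    return out * 0.9  # e.g. uniformly damp the cross-attention influence

# Assumed ModelPatcher-style registration (illustrative name):
# model.set_model_attn2_output_patch(scale_attn2_output)
```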
comfyanonymous | ae43f09ef7 | All the unet weights should now be initialized with the right dtype. | 2023-06-15 18:42:30 -04:00
comfyanonymous | e21d9ad445 | Initialize transformer unet block weights in the right dtype at the start. | 2023-06-15 14:29:26 -04:00
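For context on the two dtype commits above: PyTorch layer constructors accept a dtype argument, so blocks can be built directly in fp16 instead of being created in fp32 and cast afterwards. A minimal illustration:

```python
import torch

# Building the layer in the target dtype up front avoids allocating and
# initializing an fp32 copy that would immediately be cast away.
linear = torch.nn.Linear(320, 320, dtype=torch.float16)
assert linear.weight.dtype == torch.float16
```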
comfyanonymous | 9d54066ebc | This isn't needed for inference. | 2023-06-14 13:05:08 -04:00
comfyanonymous | 6971646b8b | Speed up model loading a bit. PyTorch's default Linear initializes its weights, which is useless and slow when they are about to be overwritten by a checkpoint. | 2023-06-14 12:09:41 -04:00
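A minimal sketch of that loading speedup, assuming the weights come from a checkpoint's state_dict; overriding reset_parameters is one way to get it, and torch.nn.utils.skip_init is the stock helper for the same effect:

```python
import torch
from torch.nn.utils import skip_init

class Linear(torch.nn.Linear):
    def reset_parameters(self):
        # Skip the default Kaiming init: these weights are about to be
        # replaced by the checkpoint's state_dict anyway.
        return None

# The stock helper achieves the same: parameters are allocated but
# left uninitialized.
layer = skip_init(torch.nn.Linear, 320, 320)
```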
comfyanonymous | cb1551b819 | Lowvram mode for gligen and fix some lowvram issues. | 2023-05-05 18:11:41 -04:00
comfyanonymous | bae4fb4a9d | Fix imports. | 2023-05-04 18:10:29 -04:00
comfyanonymous | 5282f56434 | Implement Linear hypernetworks. Add a HypernetworkLoader node to use hypernetworks. | 2023-04-23 12:35:25 -04:00
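A sketch of the shape of a linear hypernetwork module; this follows the common A1111-style layout (a small residual MLP applied to the cross-attention keys and values at inference time) and is illustrative rather than ComfyUI's exact code:

```python
import torch

class LinearHypernetworkModule(torch.nn.Module):
    # Applied at inference time to the cross-attention k and v inputs.
    def __init__(self, dim: int):
        super().__init__()
        self.linear1 = torch.nn.Linear(dim, dim * 2)
        self.linear2 = torch.nn.Linear(dim * 2, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.linear2(self.linear1(x))  # residual modulation
```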
comfyanonymous | 6908f9c949 | This makes pytorch 2.0 attention perform a bit faster. | 2023-04-22 14:30:39 -04:00
comfyanonymous | 3696d1699a | Add support for the GLIGEN textbox model. | 2023-04-19 11:06:32 -04:00
comfyanonymous | 73c3e11e83 | Fix model_management import so it doesn't get executed twice. | 2023-04-15 19:04:33 -04:00
EllangoK | e5e587b1c0 | Separates out the arg parser and imports args. | 2023-04-05 23:41:23 -04:00
comfyanonymous | 18a6c1db33 | Add a TomePatchModel node to the _for_testing section. Tome increases sampling speed at the expense of quality. | 2023-03-31 17:19:58 -04:00
comfyanonymous | 61ec3c9d5d | Add a way to pass options to the transformers blocks. | 2023-03-31 13:04:39 -04:00
comfyanonymous | 3ed4a4e4e6 | Try again with vae tiled decoding if regular decoding fails because of OOM. | 2023-03-22 14:49:00 -04:00
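A minimal sketch of that fallback pattern; the decode_tiled method name is an assumption:

```python
import torch

def decode_latents(vae, latents):
    try:
        return vae.decode(latents)
    except torch.cuda.OutOfMemoryError:
        # The regular decode ran out of VRAM: free the failed allocation
        # and retry in tiles, trading speed for peak memory.
        torch.cuda.empty_cache()
        return vae.decode_tiled(latents)  # assumed tiled-decode method
```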
comfyanonymous | 83f23f82b8 | Add pytorch attention support to VAE. | 2023-03-13 12:45:54 -04:00
comfyanonymous | a256a2abde | --disable-xformers should not even try to import xformers. | 2023-03-13 11:36:48 -04:00
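The import-guard shape that commit implies, sketched with illustrative names:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--disable-xformers", action="store_true")
args = parser.parse_args()

XFORMERS_IS_AVAILABLE = False
if not args.disable_xformers:  # don't even attempt the import when disabled
    try:
        import xformers
        import xformers.ops
        XFORMERS_IS_AVAILABLE = True
    except ImportError:
        pass
```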
comfyanonymous | 0f3ba7482f | Xformers is now properly disabled when --cpu is used. Added a --windows-standalone-build option; currently it only makes ComfyUI open in the browser. | 2023-03-12 15:44:16 -04:00
comfyanonymous | 798c90e1c0 | Fix pytorch 2.0 cross attention not working. | 2023-03-05 14:14:54 -05:00
comfyanonymous | c1f5855ac1 | Make some cross attention functions work on the CPU. | 2023-03-03 03:27:33 -05:00
comfyanonymous | 1a612e1c74 | Add some pytorch scaled_dot_product_attention code for testing; --use-pytorch-cross-attention to use it. | 2023-03-02 17:01:20 -05:00
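A minimal sketch of cross attention via PyTorch 2.0's fused kernel, as enabled by --use-pytorch-cross-attention; the head-splitting details are illustrative:

```python
import torch
import torch.nn.functional as F

def pytorch_attention(q, k, v, heads):
    # q, k, v: [batch, tokens, heads * head_dim]
    b, _, inner = q.shape
    head_dim = inner // heads
    q, k, v = (t.view(b, -1, heads, head_dim).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)  # fused, memory-efficient
    return out.transpose(1, 2).reshape(b, -1, inner)
```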
comfyanonymous | 9502ee45c3 | Hopefully fix a strange issue with xformers + lowvram. | 2023-02-28 13:48:52 -05:00
comfyanonymous | c9daec4c89 | Remove prints that are useless when xformers is enabled. | 2023-02-21 22:16:13 -05:00
comfyanonymous | 773cdabfce | Same thing but for the other places where it's used. | 2023-02-09 12:43:29 -05:00
comfyanonymous | 50db297cf6 | Try to fix OOM issues with cards that have less VRAM than mine. | 2023-01-29 00:50:46 -05:00
comfyanonymous | 051f472e8f | Fix sub quadratic attention for SD2 and make it the default optimization. | 2023-01-25 01:22:43 -05:00
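For context, sub-quadratic attention avoids materializing the full attention matrix by processing it in chunks. A simplified query-chunked sketch; the real technique (after Rabe & Staats) also chunks the keys with a numerically stable streaming softmax:

```python
import torch

def query_chunked_attention(q, k, v, chunk=1024):
    # Only a [chunk, N] slice of the score matrix exists at any moment,
    # never the full [N, N] matrix.
    scale = q.shape[-1] ** -0.5
    out = []
    for i in range(0, q.shape[-2], chunk):
        scores = (q[..., i:i + chunk, :] * scale) @ k.transpose(-1, -2)
        out.append(scores.softmax(dim=-1) @ v)
    return torch.cat(out, dim=-2)
```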
comfyanonymous | 220afe3310 | Initial commit. | 2023-01-16 22:37:14 -05:00