comfyanonymous | 10b43ceea5 | Remove duplicate code. | 2024-07-24 01:12:59 -04:00
comfyanonymous | 9f291d75b3 | AuraFlow model implementation. | 2024-07-11 16:52:26 -04:00
comfyanonymous | f8f7568d03 | Basic SD3 controlnet implementation. Still missing the node to properly use it. | 2024-06-27 18:43:11 -04:00
comfyanonymous | 66aaa14001 | Controlnet refactor. | 2024-06-27 18:43:11 -04:00
comfyanonymous | 8ddc151a4c | Squash deprecation warning on new pytorch. | 2024-06-16 13:06:23 -04:00
comfyanonymous | bb1969cab7 | Initial support for the stable audio open model. | 2024-06-15 12:14:56 -04:00
comfyanonymous | 1281f933c1 | Small optimization. | 2024-06-15 02:44:38 -04:00
comfyanonymous | 605e64f6d3 | Fix lowvram issue. | 2024-06-12 10:39:33 -04:00
Dango233 | 73ce178021 | Remove redundancy in mmdit.py (#3685) | 2024-06-11 06:30:25 -04:00
comfyanonymous | 8c4a9befa7 | SD3 Support. | 2024-06-10 14:06:23 -04:00
comfyanonymous | 0920e0e5fe | Remove some unused imports. | 2024-05-27 19:08:27 -04:00
comfyanonymous | 8508df2569 | Work around black image bug on Mac 14.5 by forcing attention upcasting. | 2024-05-21 16:56:33 -04:00
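
Note: the attention-upcasting commits in this log (8508df2569 here, plus 46daf0a9a7 and bb4940d837 below) revolve around one technique: running the query/key product and the softmax in fp32 so fp16 activations cannot overflow into NaNs, which is what produced the black images on Mac 14.5. A minimal sketch of the pattern in plain PyTorch, with illustrative names, not ComfyUI's actual code:

    import torch

    def attention_upcast(q, k, v):
        # Illustrative sketch, not ComfyUI's implementation: force the
        # score computation and softmax into fp32, then cast back.
        out_dtype = q.dtype
        q, k = q.float(), k.float()
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        weights = torch.softmax(scores, dim=-1)
        return (weights.to(v.dtype) @ v).to(out_dtype)
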
comfyanonymous | 83d969e397 | Disable xformers when tracing model. | 2024-05-21 13:55:49 -04:00
comfyanonymous | 1900e5119f | Fix potential issue. | 2024-05-20 08:19:54 -04:00
comfyanonymous | 0bdc2b15c7 | Cleanup. | 2024-05-18 10:11:44 -04:00
comfyanonymous | 98f828fad9 | Remove unnecessary code. | 2024-05-18 09:36:44 -04:00
comfyanonymous | 46daf0a9a7 | Add debug options to force on and off attention upcasting. | 2024-05-16 04:09:41 -04:00
comfyanonymous | ec6f16adb6 | Fix SAG. | 2024-05-14 18:02:27 -04:00
comfyanonymous | bb4940d837 | Only enable attention upcasting on models that actually need it. | 2024-05-14 17:00:50 -04:00
comfyanonymous | b0ab31d06c | Refactor attention upcasting code part 1. | 2024-05-14 12:47:31 -04:00
comfyanonymous | 2aed53c4ac | Workaround xformers bug. | 2024-04-30 21:23:40 -04:00
comfyanonymous | d7897fff2c | Move cascade scale factor from stage_a to latent_formats.py | 2024-03-16 14:49:35 -04:00
comfyanonymous | 2a813c3b09 | Switch some more prints to logging. | 2024-03-11 16:34:58 -04:00
comfyanonymous | 5f60ee246e | Support loading the sr cascade controlnet. | 2024-03-07 01:22:48 -05:00
comfyanonymous | 03e6e81629 | Set upscale algorithm to bilinear for stable cascade controlnet. | 2024-03-06 02:59:40 -05:00
comfyanonymous | 03e83bb5d0 | Support stable cascade canny controlnet. | 2024-03-06 02:25:42 -05:00
comfyanonymous | cb7c3a2921 | Allow image_only_indicator to be None. | 2024-02-29 13:11:30 -05:00
comfyanonymous | b3e97fc714 | Koala 700M and 1B support. Use the UNET Loader node to load the unet file to use them. | 2024-02-28 12:10:11 -05:00
comfyanonymous | e93cdd0ad0 | Remove print. | 2024-02-19 11:47:26 -05:00
comfyanonymous | a7b5eaa7e3 | Forgot to commit this. | 2024-02-19 04:25:46 -05:00
comfyanonymous | 6bcf57ff10 | Fix attention masks properly for multiple batches. | 2024-02-17 16:15:18 -05:00
comfyanonymous | 11e3221f1f | fp8 weight support for Stable Cascade. | 2024-02-17 15:27:31 -05:00
comfyanonymous | f8706546f3 | Fix attention mask batch size in some attention functions. | 2024-02-17 15:22:21 -05:00
comfyanonymous | 3b9969c1c5 | Properly fix attention masks in CLIP with batches. | 2024-02-17 12:13:13 -05:00
comfyanonymous | 805c36ac9c | Make Stable Cascade work on old pytorch 2.0 | 2024-02-17 00:42:30 -05:00
comfyanonymous | 667c92814e | Stable Cascade Stage B. | 2024-02-16 13:02:03 -05:00
comfyanonymous | f83109f09b | Stable Cascade Stage C. | 2024-02-16 10:55:08 -05:00
comfyanonymous | 5e06baf112 | Stable Cascade Stage A. | 2024-02-16 06:30:39 -05:00
comfyanonymous | c661a8b118 | Don't use numpy for calculating sigmas. | 2024-02-07 18:52:51 -05:00
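
Note: computing sigmas directly in torch (c661a8b118) avoids numpy round-trips, which force CPU transfers and dtype conversions. A minimal sketch of a sigma schedule built purely in torch; the helper name and default values are hypothetical, the formula is the standard Karras et al. (2022) schedule:

    import torch

    def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
        # Hypothetical helper, not the repo's code: interpolate in
        # sigma^(1/rho) space, raise back, staying in torch throughout.
        ramp = torch.linspace(0, 1, n)
        min_inv = sigma_min ** (1 / rho)
        max_inv = sigma_max ** (1 / rho)
        sigmas = (max_inv + ramp * (min_inv - max_inv)) ** rho
        return torch.cat([sigmas, sigmas.new_zeros(1)])  # trailing zero sigma
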
comfyanonymous | 89507f8adf | Remove some unused imports. | 2024-01-25 23:42:37 -05:00
comfyanonymous | 2395ae740a | Make unclip more deterministic. Pass a seed argument; note that this might make old unclip images different. | 2024-01-14 17:28:31 -05:00
comfyanonymous | 6a7bc35db8 | Use basic attention implementation for small inputs on old pytorch. | 2024-01-09 13:46:52 -05:00
comfyanonymous | c6951548cf | Update optimized_attention_for_device function for new functions that support masked attention. | 2024-01-07 13:52:08 -05:00
comfyanonymous | aaa9017302 | Add attention mask support to sub quad attention. | 2024-01-07 04:13:58 -05:00
comfyanonymous | 0c2c9fbdfa | Support attention mask in split attention. | 2024-01-06 13:16:48 -05:00
comfyanonymous | 3ad0191bfb | Implement attention mask on xformers. | 2024-01-06 04:33:03 -05:00
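
Note: the three mask commits above (aaa9017302, 0c2c9fbdfa, 3ad0191bfb) add the same capability to three different attention backends: an optional additive mask applied to the attention scores before the softmax. A generic sketch of that pattern, with illustrative names rather than the repo's backend code:

    import torch

    def masked_attention(q, k, v, mask=None):
        # Generic sketch: an additive mask (e.g. large negative values at
        # padded positions) is added to the scores so the softmax zeroes
        # out masked entries; the mask broadcasts over batch/head dims.
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        if mask is not None:
            scores = scores + mask
        return torch.softmax(scores, dim=-1) @ v
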
comfyanonymous | 8c6493578b | Implement noise augmentation for SD 4X upscale model. | 2024-01-03 14:27:11 -05:00
comfyanonymous | 79f73a4b33 | Remove useless code. | 2024-01-02 01:50:29 -05:00
comfyanonymous | 61b3f15f8f | Fix lowvram mode not working with unCLIP and Revision code. | 2023-12-26 05:02:02 -05:00
comfyanonymous | d0165d819a | Fix SVD lowvram mode. | 2023-12-24 07:13:18 -05:00