comfyanonymous
8e012043a9
Add a ModelSamplingAuraFlow node to change the shift value.
...
Set the default AuraFlow shift value to 1.73 (sqrt(3)).
2024-07-11 17:57:36 -04:00
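(Background note: the "shift" in flow-model sampling is usually applied as a warp on the sigma/timestep schedule. A minimal sketch, assuming the common formula shift * sigma / (1 + (shift - 1) * sigma); the actual node internals may differ.)

```python
import math

def shift_sigma(sigma: float, shift: float = math.sqrt(3)) -> float:
    # Assumed flow time-shift: pushes mid-schedule sigmas toward higher noise
    # when shift > 1; shift == 1 leaves the schedule unchanged.
    return shift * sigma / (1 + (shift - 1) * sigma)

print(shift_sigma(0.5))        # ~0.63 with the sqrt(3) (~1.73) default
print(shift_sigma(0.5, 1.0))   # 0.5, unchanged
```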
comfyanonymous
9f291d75b3
AuraFlow model implementation.
2024-07-11 16:52:26 -04:00
comfyanonymous
f45157e3ac
Fix error message never being shown.
2024-07-11 11:46:51 -04:00
comfyanonymous
5e1fced639
Cleaner support for loading different diffusion model types.
2024-07-11 11:37:31 -04:00
comfyanonymous
ffe0bb0a33
Remove useless code.
2024-07-10 20:33:12 -04:00
comfyanonymous
391c1046cf
More flexibility with text encoder return values.
...
Text encoders can now return values other than the cond and pooled output
to the CONDITIONING.
2024-07-10 20:06:50 -04:00
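(Background note: CONDITIONING in ComfyUI is a list of [cond_tensor, extras_dict] pairs; this change means the text encoder can contribute extra keys to that dict beyond pooled_output. A rough illustration; the extra key name here is hypothetical.)

```python
import torch

cond = torch.zeros(1, 77, 768)    # token embeddings from the encoder
pooled = torch.zeros(1, 768)      # pooled output
extra_mask = torch.ones(1, 77)    # hypothetical extra return value

# CONDITIONING: list of [tensor, dict]; extra encoder outputs simply become
# additional dict entries next to pooled_output.
conditioning = [[cond, {"pooled_output": pooled, "attention_mask": extra_mask}]]
```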
comfyanonymous
e44fa5667f
Support returning text encoder attention masks.
2024-07-10 19:31:22 -04:00
Extraltodeus
f1a01c2c7e
Add sampler_pre_cfg_function ( #3979 )
...
* Update samplers.py
* Update model_patcher.py
2024-07-09 16:20:49 -04:00
comfyanonymous
ade7aa1b0c
Remove useless import.
2024-07-09 11:05:05 -04:00
comfyanonymous
faa57430b0
Controlnet union model basic implementation.
...
This is only the model code itself. It currently defaults to an empty
embedding [0] * 6, which seems to work better than treating it like a
regular controlnet.
TODO: Add nodes to select the image type.
2024-07-08 23:49:02 -04:00
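(Illustration of the [0] * 6 default above: the union model takes a 6-way control-type indicator, and all zeros means no specific image type is selected. The type list and order below are assumptions, since the selection node is still a TODO.)

```python
# Hypothetical control types for the 6-slot indicator (order is an assumption).
CONTROL_TYPES = ["openpose", "depth", "scribble", "canny", "normal", "segment"]

def control_type_embedding(selected=None):
    emb = [0] * 6                # default: no explicit image type
    if selected is not None:
        emb[CONTROL_TYPES.index(selected)] = 1
    return emb

print(control_type_embedding())         # [0, 0, 0, 0, 0, 0]
print(control_type_embedding("depth"))  # [0, 1, 0, 0, 0, 0]
```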
comfyanonymous
bb663bcd6c
Rename clip_t5base to t5base for stable audio text encoder.
2024-07-08 08:53:55 -04:00
comfyanonymous
2dc84d1444
Add a way to set the timestep multiplier in the flow sampling.
2024-07-06 04:06:03 -04:00
comfyanonymous
ff63893d10
Support other types of T5 models.
2024-07-06 02:42:53 -04:00
comfyanonymous
4040491149
Better T5xxl detection.
2024-07-06 00:53:33 -04:00
comfyanonymous
b8e58a9394
Cleanup T5 code a bit.
2024-07-06 00:36:49 -04:00
comfyanonymous
80c4590998
Allow specifying the padding token for the tokenizer.
2024-07-06 00:06:49 -04:00
comfyanonymous
ce649d61c0
Allow zeroing out of embeds with unused attention mask.
2024-07-05 23:48:17 -04:00
comfyanonymous
739b76630e
Remove useless code.
2024-07-04 15:14:13 -04:00
comfyanonymous
d7484ef30c
Support loading checkpoints with the UNETLoader node.
2024-07-03 11:34:32 -04:00
comfyanonymous
537f35c7bc
Don't update dict if contiguous.
2024-07-02 20:21:51 -04:00
Alex "mcmonkey" Goodwin
3f46362d22
fix non-contiguous tensor saving (from channels-last) ( #3932 )
2024-07-02 20:16:33 -04:00
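(The two commits above deal with saving tensors that are in channels-last memory format. A minimal sketch of the pattern: make tensors contiguous before saving, and only touch the dict entries that actually need it.)

```python
import torch

def make_contiguous_for_save(sd: dict) -> dict:
    for k, v in sd.items():
        if not v.is_contiguous():     # only update the dict when needed
            sd[k] = v.contiguous()
    return sd

sd = {"w": torch.randn(1, 3, 8, 8).to(memory_format=torch.channels_last)}
torch.save(make_contiguous_for_save(sd), "example.pt")
```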
Chenlei Hu
9dd549e253
Add `--no-custom-node` cmd flag ( #3903 )
...
* Add --no-custom-node cmd flag
* nit
2024-07-01 17:54:03 -04:00
comfyanonymous
05e831697a
Switch to the real cfg++ method in the samplers.
...
The old _pp ones will be updated automatically to the regular ones with 2x
the cfg.
My fault for not checking what the "_pp" samplers actually did.
2024-06-29 11:59:48 -04:00
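(Background note on CFG++: the sampler steps toward the CFG-guided denoised estimate but takes the step direction from the unconditional prediction. A heavily simplified Euler-style sketch, not the actual sampler code.)

```python
def euler_cfg_pp_step(x, sigma, sigma_next, denoised_cfg, denoised_uncond):
    # CFG++ idea (simplified): direction from the unconditional prediction,
    # target from the CFG-guided denoised estimate.
    d = (x - denoised_uncond) / sigma
    return denoised_cfg + d * sigma_next
```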
comfyanonymous
264caca20e
ControlNetApplySD3 node can now be used to apply SD3 controlnets.
2024-06-27 18:43:11 -04:00
comfyanonymous
f8f7568d03
Basic SD3 controlnet implementation.
...
Still missing the node to properly use it.
2024-06-27 18:43:11 -04:00
comfyanonymous
66aaa14001
Controlnet refactor.
2024-06-27 18:43:11 -04:00
comfyanonymous
8ceb5a02a3
Support saving stable audio checkpoints that can be loaded back.
2024-06-27 11:06:52 -04:00
comfyanonymous
4f9d2b057c
Remove print.
2024-06-27 02:54:15 -04:00
comfyanonymous
44947e7ad4
Add DEIS order 3 sampler.
...
Order 4 seems to give bad results.
2024-06-26 22:40:05 -04:00
comfyanonymous
69d710e40f
Implement my alternative take on CFG++ as the euler_pp sampler.
...
Add euler_ancestral_pp which is the ancestral version of euler with the
same modification.
2024-06-25 07:41:52 -04:00
comfyanonymous
73ca780019
Add SamplerEulerCFG++ node.
...
This node should match the DDIM implementation of CFG++ when "regular" is
selected.
"alternative" is a slightly different take on CFG++.
2024-06-23 13:21:18 -04:00
comfyanonymous
2f360ae898
Support OneTrainer SD3 lora format.
2024-06-22 13:08:04 -04:00
comfyanonymous
4ef1479dcd
Multi dimension tiled scale function and tiled VAE audio encoding fallback.
2024-06-22 11:57:49 -04:00
comfyanonymous
1e2839f4d9
More proper tiled audio decoding.
2024-06-20 16:50:31 -04:00
comfyanonymous
d5efde89b7
Add ipndm_v sampler, works best with the exponential scheduler.
2024-06-20 08:51:49 -04:00
comfyanonymous
028a583bef
Fix issue with full diffusers SD3 loras.
2024-06-19 22:32:04 -04:00
comfyanonymous
0d6a57938e
Support loading diffusers SD3 model format with UNETLoader node.
2024-06-19 22:21:18 -04:00
comfyanonymous
b08a9dd04b
Remove empty line.
2024-06-19 20:20:35 -04:00
Mario Klingemann
eee815ec99
Update sd1_clip.py ( #3684 )
...
Made token instance check more flexible so it also works with integers from numpy arrays or long tensors
2024-06-19 16:42:41 -04:00
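(The change above loosens the token type check so token ids arriving as numpy integer scalars or single-element long tensors are accepted alongside plain ints. A hypothetical helper along those lines:)

```python
import numbers
import torch

def as_token_id(t):
    # Plain ints and numpy integer scalars both register as numbers.Integral.
    if isinstance(t, numbers.Integral):
        return int(t)
    # Single-element long tensors are accepted too.
    if torch.is_tensor(t) and t.numel() == 1:
        return int(t.item())
    raise TypeError(f"unsupported token type: {type(t)!r}")
```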
comfyanonymous
e11052afcf
Add ipndm sampler.
2024-06-19 16:32:30 -04:00
comfyanonymous
3914d5a2ae
Support full SD3 loras.
2024-06-19 10:13:33 -04:00
comfyanonymous
a45df69570
Basic tiled decoding for audio VAE.
2024-06-17 22:48:23 -04:00
Janek Mann
b7c473d1ab
Fix lora keys for SimpleTuner ( #3759 )
2024-06-17 07:55:06 -04:00
comfyanonymous
6425252c4f
Use fp16 as the default vae dtype for the audio VAE.
2024-06-16 13:12:54 -04:00
comfyanonymous
8ddc151a4c
Squash deprecation warning on new pytorch.
2024-06-16 13:06:23 -04:00
comfyanonymous
ca9d300a80
Better estimation for memory usage during audio VAE encoding/decoding.
2024-06-16 11:47:32 -04:00
comfyanonymous
746a0410d4
Fix VAEEncode with taesd3.
2024-06-16 03:10:04 -04:00
comfyanonymous
04e8798c37
Improvements to the TAESD3 implementation.
2024-06-16 02:04:24 -04:00
Dr.Lt.Data
df7db0e027
support TAESD3 ( #3738 )
2024-06-16 02:03:53 -04:00
comfyanonymous
bb1969cab7
Initial support for the stable audio open model.
2024-06-15 12:14:56 -04:00
comfyanonymous
1281f933c1
Small optimization.
2024-06-15 02:44:38 -04:00
comfyanonymous
f2e844e054
Optimize some unneeded if conditions in the sampling code.
2024-06-15 02:26:19 -04:00
comfyanonymous
0ec513d877
Add a --force-channels-last flag to run inference models in channels-last mode.
2024-06-15 01:08:12 -04:00
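(In plain PyTorch terms, a channels-last flag like this amounts to moving model weights and inputs to the channels_last memory format; the flag wiring below is an assumption, not the actual ComfyUI code.)

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--force-channels-last", action="store_true")
args = parser.parse_args(["--force-channels-last"])  # simulate passing the flag

model = torch.nn.Conv2d(3, 8, 3)
x = torch.randn(1, 3, 64, 64)

if args.force_channels_last:
    # NHWC memory layout; can speed up convolutions on some hardware.
    model = model.to(memory_format=torch.channels_last)
    x = x.to(memory_format=torch.channels_last)

out = model(x)
```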
comfyanonymous
0e06b370db
Print key names for easier debugging.
2024-06-14 18:18:53 -04:00
Simon Lui
5eb98f0092
Exempt IPEX from non_blocking previews, fixing segmentation faults. ( #3708 )
2024-06-13 18:51:14 -04:00
comfyanonymous
ac151ac169
Support SD3 diffusers lora.
2024-06-13 18:26:10 -04:00
comfyanonymous
37a08a41b3
Support setting weight offsets in weight patcher.
2024-06-13 17:21:26 -04:00
comfyanonymous
605e64f6d3
Fix lowvram issue.
2024-06-12 10:39:33 -04:00
comfyanonymous
1ddf512fdc
Don't auto convert clip and vae weights to fp16 when saving checkpoint.
2024-06-12 01:07:58 -04:00
comfyanonymous
694e0b48e0
SD3 better memory usage estimation.
2024-06-12 00:49:00 -04:00
comfyanonymous
69c8d6d8a6
Single and dual clip loader nodes support SD3.
...
For sd3 you can use the CLIPLoader to load only the t5xxl, or the
DualCLIPLoader to load only CLIP-L and CLIP-G.
2024-06-11 23:27:39 -04:00
comfyanonymous
0e49211a11
Load the SD3 T5xxl model in the same dtype as stored in the checkpoint.
2024-06-11 17:03:26 -04:00
comfyanonymous
5889b7ca0a
Support multiple text encoder configurations on SD3.
2024-06-11 13:14:43 -04:00
comfyanonymous
9424522ead
Reuse code.
2024-06-11 07:20:26 -04:00
Dango233
73ce178021
Remove redundancy in mmdit.py ( #3685 )
2024-06-11 06:30:25 -04:00
comfyanonymous
a82fae2375
Fix bug with cosxl edit model.
2024-06-10 16:00:03 -04:00
comfyanonymous
8c4a9befa7
SD3 Support.
2024-06-10 14:06:23 -04:00
comfyanonymous
a5e6a632f9
Support sampling non-2D latents.
2024-06-10 01:31:09 -04:00
comfyanonymous
742d5720d1
Support zeroing out text embeddings with the attention mask.
2024-06-09 16:51:58 -04:00
comfyanonymous
6cd8ffc465
Reshape the empty latent image to the right number of channels if needed.
2024-06-08 02:35:08 -04:00
comfyanonymous
56333d4850
Use the end token for the text encoder attention mask.
2024-06-07 03:05:23 -04:00
comfyanonymous
104fcea0c8
Add function to get the list of currently loaded models.
2024-06-05 23:25:16 -04:00
comfyanonymous
b1fd26fe9e
pytorch xpu should use flash or mem-efficient attention?
2024-06-04 17:44:14 -04:00
comfyanonymous
809cc85a8e
Remove useless code.
2024-06-02 19:23:37 -04:00
comfyanonymous
b249862080
Add an annoying print to a function I want to remove.
2024-06-01 12:47:31 -04:00
comfyanonymous
bf3e334d46
Disable non_blocking when --deterministic or directml.
2024-05-30 11:07:38 -04:00
JettHu
b26da2245f
Fix UnetParams annotation typo ( #3589 )
2024-05-27 19:30:35 -04:00
comfyanonymous
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
comfyanonymous
ffc4b7c30e
Fix DORA strength.
...
This is a different version of #3298 with more correct behavior.
2024-05-25 02:50:11 -04:00
comfyanonymous
efa5a711b2
Reduce memory usage when applying DORA: #3557
2024-05-24 23:36:48 -04:00
comfyanonymous
6c23854f54
Fix OSX latent2rgb previews.
2024-05-22 13:56:28 -04:00
Chenlei Hu
7718ada4ed
Add type annotation UnetWrapperFunction ( #3531 )
...
* Add type annotation UnetWrapperFunction
* nit
* Add types.py
2024-05-22 02:07:27 -04:00
comfyanonymous
8508df2569
Work around black image bug on Mac 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
comfyanonymous
83d969e397
Disable xformers when tracing model.
2024-05-21 13:55:49 -04:00
comfyanonymous
1900e5119f
Fix potential issue.
2024-05-20 08:19:54 -04:00
comfyanonymous
09e069ae6c
Log the pytorch version.
2024-05-20 06:22:29 -04:00
comfyanonymous
11a2ad5110
Fix controlnet not upcasting on models that have it enabled.
2024-05-19 17:58:03 -04:00
comfyanonymous
0bdc2b15c7
Cleanup.
2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
comfyanonymous
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
2024-05-17 00:31:32 -04:00
comfyanonymous
46daf0a9a7
Add debug options to force on and off attention upcasting.
2024-05-16 04:09:41 -04:00
comfyanonymous
2d41642716
Fix lowvram dora issue.
2024-05-15 02:47:40 -04:00
comfyanonymous
ec6f16adb6
Fix SAG.
2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
2024-05-14 17:00:50 -04:00
comfyanonymous
b0ab31d06c
Refactor attention upcasting code part 1.
2024-05-14 12:47:31 -04:00
Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and documentation update. ( #3459 )
...
* Change calculation of memory total to be more accurate, since allocated is actually smaller than reserved.
* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
...
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous
49c20cdc70
No longer necessary.
2024-05-12 05:34:43 -04:00
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
2024-05-11 21:55:20 -04:00
comfyanonymous
93e876a3be
Remove warnings that confuse people.
2024-05-09 05:29:42 -04:00
comfyanonymous
cd07340d96
Typo fix.
2024-05-08 18:36:56 -04:00
comfyanonymous
c61eadf69a
Make the load checkpoint with config function call the regular one.
...
I was going to completely remove this function because it is unmaintainable
but I think this is the best compromise.
The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
2024-05-06 20:04:39 -04:00
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. ( #3388 )
2024-05-02 03:26:50 -04:00
comfyanonymous
f81a6fade8
Fix some edge cases with samplers and arrays with a single sigma.
2024-05-01 17:05:30 -04:00
comfyanonymous
2aed53c4ac
Workaround xformers bug.
2024-04-30 21:23:40 -04:00
Garrett Sutula
bacce529fb
Add TLS Support ( #3312 )
...
* Add TLS Support
* Add to readme
* Add guidance for windows users on generating certificates
* Add guidance for windows users on generating certificates
* Fix typo
2024-04-30 20:17:02 -04:00
Jedrzej Kosinski
7990ae18c1
Fix error when more cond masks passed in than batch size ( #3353 )
2024-04-26 12:51:12 -04:00
comfyanonymous
8dc19e40d1
Don't init a VAE model when there are no VAE weights.
2024-04-24 09:20:31 -04:00
comfyanonymous
c59fe9f254
Support VAE without quant_conv.
2024-04-18 21:05:33 -04:00
comfyanonymous
719fb2c81d
Add basic PAG node.
2024-04-14 23:49:50 -04:00
comfyanonymous
258dbc06c3
Fix some memory related issues.
2024-04-14 12:08:58 -04:00
comfyanonymous
58812ab8ca
Support SDXS 512 model.
2024-04-12 22:12:35 -04:00
comfyanonymous
831511a1ee
Fix issue with sampling_settings persisting across models.
2024-04-09 23:20:43 -04:00
comfyanonymous
30abc324c2
Support properly saving CosXL checkpoints.
2024-04-08 00:36:22 -04:00
comfyanonymous
0a03009808
Fix issue with controlnet models getting loaded multiple times.
2024-04-06 18:38:39 -04:00
kk-89
38ed2da2dd
Fix typo in lowvram patcher ( #3209 )
2024-04-05 12:02:13 -04:00
comfyanonymous
1088d1850f
Support for CosXL models.
2024-04-05 10:53:41 -04:00
comfyanonymous
41ed7e85ea
Fix object_patches_backup not being the same object across clones.
2024-04-05 00:22:44 -04:00
comfyanonymous
0f5768e038
Fix missing arguments in cfg_function.
2024-04-04 23:38:57 -04:00
comfyanonymous
1f4fc9ea0c
Fix issue with get_model_object on patched model.
2024-04-04 23:01:02 -04:00
comfyanonymous
1a0486bb96
Fix model needing to be loaded on GPU to generate the sigmas.
2024-04-04 22:08:49 -04:00
comfyanonymous
c6bd456c45
Make zero denoise a NOP.
2024-04-04 11:41:27 -04:00
comfyanonymous
fcfd2bdf8a
Small cleanup.
2024-04-04 11:16:49 -04:00
comfyanonymous
0542088ef8
Refactor sampler code for more advanced sampler nodes part 2.
2024-04-04 01:26:41 -04:00
comfyanonymous
57753c964a
Refactor sampling code for more advanced sampler nodes.
2024-04-03 22:09:51 -04:00
comfyanonymous
6c6a39251f
Fix saving text encoder in fp8.
2024-04-02 11:46:34 -04:00
comfyanonymous
e6482fbbfc
Refactor calc_cond_uncond_batch into calc_cond_batch.
...
calc_cond_batch can take an arbitrary number of cond inputs.
Added a calc_cond_uncond_batch wrapper with a warning so custom nodes
won't break.
2024-04-01 18:07:47 -04:00
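(The compatibility wrapper described above follows the usual deprecation pattern; a simplified sketch, with signatures reduced for illustration:)

```python
import logging

def calc_cond_batch(model, conds, x_in, timestep, model_options):
    # Takes an arbitrary number of cond inputs, returns one output per cond.
    return [model(x_in, timestep, c, model_options) for c in conds]

def calc_cond_uncond_batch(model, cond, uncond, x_in, timestep, model_options):
    # Deprecated wrapper kept so existing custom nodes keep working.
    logging.warning("calc_cond_uncond_batch is deprecated, use calc_cond_batch")
    out = calc_cond_batch(model, [cond, uncond], x_in, timestep, model_options)
    return out[0], out[1]
```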
comfyanonymous
575acb69e4
IP2P model loading support.
...
This is the code to load the model and run inference on it with only a
text prompt. This commit does not contain the nodes to properly use it with an
image input.
This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
2024-03-31 03:10:28 -04:00
comfyanonymous
94a5a67c32
Cleanup to support different types of inpaint models.
2024-03-29 14:44:13 -04:00
comfyanonymous
5d8898c056
Fix some performance issues with weight loading and unloading.
...
Lower peak memory usage when changing model.
Fix case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
comfyanonymous
327ca1313d
Support SDXS 0.9
2024-03-27 23:58:58 -04:00
comfyanonymous
ae77590b4e
Add dora_scale support for lora files.
2024-03-25 18:09:23 -04:00
comfyanonymous
c6de09b02e
Optimize memory unload strategy for better performance.
2024-03-24 02:36:30 -04:00
comfyanonymous
0624838237
Add inverse noise scaling function.
2024-03-21 14:49:11 -04:00
comfyanonymous
5d875d77fe
Fix regression with lcm not working with batches.
2024-03-20 20:48:54 -04:00
comfyanonymous
4b9005e949
Fix regression with model merging.
2024-03-20 13:56:12 -04:00
comfyanonymous
c18a203a8a
Don't unload model weights for non weight patches.
2024-03-20 02:27:58 -04:00
comfyanonymous
150a3e946f
Make LCM sampler use the model noise scaling function.
2024-03-20 01:35:59 -04:00
comfyanonymous
40e124c6be
SV3D support.
2024-03-18 16:54:13 -04:00
comfyanonymous
cacb022c4a
Make saved SD1 checkpoints match the official ones more closely.
2024-03-18 00:26:23 -04:00
comfyanonymous
d7897fff2c
Move cascade scale factor from stage_a to latent_formats.py
2024-03-16 14:49:35 -04:00
comfyanonymous
f2fe635c9f
SamplerDPMAdaptative node to test the different options.
2024-03-15 22:36:10 -04:00
comfyanonymous
448d9263a2
Fix control loras breaking.
2024-03-14 09:30:21 -04:00
comfyanonymous
db8b59ecff
Lower memory usage for loras in lowvram mode at the cost of perf.
2024-03-13 20:07:27 -04:00
comfyanonymous
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
comfyanonymous
0ed72befe1
Change log levels.
...
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
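(The described behavior maps directly onto Python's logging module: info level by default, debug when --verbose is given. A minimal sketch:)

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
args = parser.parse_args([])  # empty list keeps the sketch self-contained

logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
logging.info("shown by default")
logging.debug("only shown with --verbose")
```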
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
2024-03-10 12:14:23 -04:00
comfyanonymous
5f60ee246e
Support loading the sr cascade controlnet.
2024-03-07 01:22:48 -05:00
comfyanonymous
03e6e81629
Set upscale algorithm to bilinear for stable cascade controlnet.
2024-03-06 02:59:40 -05:00
comfyanonymous
03e83bb5d0
Support stable cascade canny controlnet.
2024-03-06 02:25:42 -05:00
comfyanonymous
10860bcd28
Add compression_ratio to controlnet code.
2024-03-05 15:15:20 -05:00
comfyanonymous
478f71a249
Remove useless check.
2024-03-04 08:51:25 -05:00
comfyanonymous
12c1080ebc
Simplify differential diffusion code.
2024-03-03 15:34:42 -05:00
Shiimizu
727021bdea
Implement Differential Diffusion ( #2876 )
...
* Implement Differential Diffusion
* Cleanup.
* Fix.
* Masks should be applied at full strength.
* Fix colors.
* Register the node.
* Cleaner code.
* Fix issue with getting unipc sampler.
* Adjust thresholds.
* Switch to linear thresholds.
* Only calculate nearest_idx on valid thresholds.
2024-03-03 15:34:13 -05:00
comfyanonymous
1abf8374ec
utils.set_attr can now be used to set any attribute.
...
The old set_attr has been renamed to set_attr_param.
2024-03-02 17:27:23 -05:00
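(A generic set_attr of this kind typically walks a dotted attribute path; a sketch of the pattern, with set_attr_param as the Parameter-wrapping variant. Details are assumptions, not the exact ComfyUI implementation.)

```python
import torch

def set_attr(obj, attr, value):
    # Walk a dotted path such as "model.out.weight" and set the leaf attribute.
    *parents, leaf = attr.split(".")
    for name in parents:
        obj = getattr(obj, name)
    prev = getattr(obj, leaf, None)
    setattr(obj, leaf, value)
    return prev  # previous value, useful for backup/restore

def set_attr_param(obj, attr, value):
    # Old set_attr behavior: wrap the value in a Parameter before assigning.
    return set_attr(obj, attr, torch.nn.Parameter(value, requires_grad=False))
```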
comfyanonymous
dce3555339
Add some tesla pascal GPUs to the list of GPUs where fp16 works but is slower.
2024-03-02 17:16:31 -05:00
comfyanonymous
51df846598
Let conditioning specify custom concat conds.
2024-03-02 11:44:06 -05:00
comfyanonymous
9f71e4b62d
Let model patches patch sub objects.
2024-03-02 11:43:27 -05:00
comfyanonymous
00425563c0
Cleanup: Use sampling noise scaling function for inpainting.
2024-03-01 14:24:41 -05:00
comfyanonymous
c62e836167
Move noise scaling to object with sampling math.
2024-03-01 12:54:38 -05:00
comfyanonymous
cb7c3a2921
Allow image_only_indicator to be None.
2024-02-29 13:11:30 -05:00
comfyanonymous
b3e97fc714
Koala 700M and 1B support.
...
Use the UNET Loader node to load the unet file to use them.
2024-02-28 12:10:11 -05:00
comfyanonymous
37a86e4618
Remove duplicate text_projection key from some saved models.
2024-02-28 03:57:41 -05:00
comfyanonymous
8daedc5bf2
Auto detect playground v2.5 model.
2024-02-27 18:03:03 -05:00
comfyanonymous
d46583ecec
Playground V2.5 support with ModelSamplingContinuousEDM node.
...
Use ModelSamplingContinuousEDM with edm_playground_v2.5 selected.
2024-02-27 15:12:33 -05:00
comfyanonymous
1e0fcc9a65
Make XL checkpoints save in a more standard format.
2024-02-27 02:07:40 -05:00
comfyanonymous
b416be7d78
Make the text projection saved in the checkpoint the right format.
2024-02-27 01:52:23 -05:00
comfyanonymous
03c47fc0f2
Add a min_length property to tokenizer class.
2024-02-26 21:36:37 -05:00
comfyanonymous
8ac69f62e5
Make return_projected_pooled settable from the __init__
2024-02-25 14:49:13 -05:00
comfyanonymous
ca7c310a0e
Support loading old CLIP models saved with CLIPSave.
2024-02-25 08:29:12 -05:00
comfyanonymous
c2cb8e889b
Always return unprojected pooled output for gligen.
2024-02-25 07:33:13 -05:00
comfyanonymous
1cb3f6a83b
Move text projection into the CLIP model code.
...
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
comfyanonymous
6533b172c1
Support text encoder text_projection in lora.
2024-02-24 23:50:46 -05:00
comfyanonymous
1e5f0f66be
Support lora keys with lora_prior_unet_ and lora_prior_te_
2024-02-23 12:21:20 -05:00
logtd
e1cb93c383
Fix model and cond transformer options merge
2024-02-23 01:19:43 -07:00
comfyanonymous
10847dfafe
Cleanup uni_pc inpainting.
...
This causes some small changes to the uni pc inpainting behavior but it
seems to improve results slightly.
2024-02-23 02:39:35 -05:00
comfyanonymous
18c151b3e3
Add some latent2rgb matrices for previews.
2024-02-20 10:57:24 -05:00
comfyanonymous
0d0fbabd1d
Pass pooled CLIP to stage b.
2024-02-20 04:24:45 -05:00
comfyanonymous
c6b7a157ed
Align simple scheduling more closely with the official stable cascade scheduler.
2024-02-20 04:24:39 -05:00
comfyanonymous
88f300401c
Enable fp16 by default on mps.
2024-02-19 12:00:48 -05:00
comfyanonymous
e93cdd0ad0
Remove print.
2024-02-19 11:47:26 -05:00
comfyanonymous
3711b31dff
Support Stable Cascade in checkpoint format.
2024-02-19 11:20:48 -05:00
comfyanonymous
d91f45ef28
Some cleanups to how the text encoders are loaded.
2024-02-19 10:46:30 -05:00
comfyanonymous
a7b5eaa7e3
Forgot to commit this.
2024-02-19 04:25:46 -05:00
comfyanonymous
3b2e579926
Support loading the Stable Cascade effnet and previewer as a VAE.
...
The effnet can be used to encode images for img2img with Stage C.
2024-02-19 04:10:01 -05:00
comfyanonymous
dccca1daa5
Fix gligen lowvram mode.
2024-02-18 02:20:23 -05:00
comfyanonymous
8b60d33bb7
Add ModelSamplingStableCascade to control the shift sampling parameter.
...
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
2024-02-18 00:55:23 -05:00
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
2024-02-17 16:15:18 -05:00
comfyanonymous
11e3221f1f
fp8 weight support for Stable Cascade.
2024-02-17 15:27:31 -05:00
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
2024-02-17 12:13:13 -05:00
comfyanonymous
5b40e7a5ed
Implement shift schedule for cascade stage C.
2024-02-17 11:38:47 -05:00
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
2024-02-17 09:01:17 -05:00
comfyanonymous
6c875d846b
Fix clip attention mask issues on some hardware.
2024-02-17 07:53:52 -05:00
comfyanonymous
805c36ac9c
Make Stable Cascade work on old pytorch 2.0
2024-02-17 00:42:30 -05:00
comfyanonymous
f2d1d16f4f
Support Stable Cascade Stage B lite.
2024-02-16 23:41:23 -05:00
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
2024-02-16 23:01:54 -05:00
comfyanonymous
97d03ae04a
StableCascade CLIP model support.
2024-02-16 13:29:04 -05:00
comfyanonymous
667c92814e
Stable Cascade Stage B.
2024-02-16 13:02:03 -05:00
comfyanonymous
f83109f09b
Stable Cascade Stage C.
2024-02-16 10:55:08 -05:00