comfyanonymous
b699a15062
Refactor inpaint/ip2p code.
2024-11-19 03:25:25 -05:00
comfyanonymous
d9f90965c8
Support block replace patches in auraflow.
2024-11-17 08:19:59 -05:00
comfyanonymous
41886af138
Add transformer_options block replace patch to mochi.
2024-11-16 20:48:14 -05:00
comfyanonymous
3b9a6cf2b1
Fix issue with 3d masks.
2024-11-13 07:18:30 -05:00
comfyanonymous
8ebf2d8831
Add block replace transformer_options to flux.
2024-11-12 08:00:39 -05:00
comfyanonymous
eb476e6ea9
Allow 1D masks for 1D latents.
2024-11-11 14:44:52 -05:00
comfyanonymous
8b275ce5be
Support auto-detecting some zsnr anime checkpoints.
2024-11-11 05:34:11 -05:00
comfyanonymous
2a18e98ccf
Refactor so that zsnr can be set in the sampling_settings.
2024-11-11 04:55:56 -05:00
comfyanonymous
bdeb1c171c
Fast previews for mochi.
2024-11-10 03:39:35 -05:00
comfyanonymous
8b90e50979
Properly handle and reshape masks when used on 3d latents.
2024-11-09 15:30:19 -05:00
comfyanonymous
2865f913f7
Free memory before doing tiled decode.
2024-11-07 04:01:24 -05:00
comfyanonymous
b49616f951
Make VAEDecodeTiled node work with video VAEs.
2024-11-07 03:47:12 -05:00
comfyanonymous
5e29e7a488
Remove scaled_fp8 key after reading it to silence warning.
2024-11-06 04:56:42 -05:00
comfyanonymous
8afb97cd3f
Fix unknown VAE being detected as the mochi VAE.
2024-11-05 03:43:27 -05:00
contentis
69694f40b3
Fix dynamic shape export (#5490)
2024-11-04 14:59:28 -05:00
comfyanonymous
6c9dbde7de
Fix mochi all-in-one checkpoint t5xxl key names.
2024-11-03 01:40:42 -05:00
comfyanonymous
fabf449feb
Mochi VAE encoder.
2024-11-01 17:33:09 -04:00
Aarni Koskela
1c8286a44b
Avoid SyntaxWarning in UniPC docstring (#5442)
2024-10-31 15:17:26 -04:00
comfyanonymous
1af4a47fd1
Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
comfyanonymous
daa1565b93
Fix diffusers flux controlnet regression.
2024-10-30 13:11:34 -04:00
comfyanonymous
09fdb2b269
Support SD3.5 medium diffusers format weights and loras.
2024-10-30 04:24:00 -04:00
comfyanonymous
30c0c81351
Add a way to patch blocks in SD3.
2024-10-29 00:48:32 -04:00
comfyanonymous
13b0ff8a6f
Update SD3 code.
2024-10-28 21:58:52 -04:00
comfyanonymous
c320801187
Remove useless line.
2024-10-28 17:41:12 -04:00
comfyanonymous
669d9e4c67
Set default shift on mochi to 6.0.
2024-10-27 22:21:04 -04:00
comfyanonymous
9ee0a6553a
float16 inference is a bit broken on mochi.
2024-10-27 04:56:40 -04:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
To use:
"Load CLIP" node with t5xxl + type mochi.
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" node with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
(A minimal API sketch of this workflow follows after this entry.)
2024-10-26 06:54:00 -04:00
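As a concrete illustration of the entry above, here is a minimal sketch that submits an equivalent graph to a locally running ComfyUI instance over its /prompt HTTP API. The node class names (CLIPLoader, UNETLoader, VAELoader, EmptyMochiLatentVideo, KSampler) mirror the nodes the commit message references; the model file names, resolution, frame count, prompt, and sampler settings are placeholder assumptions, not values from the commit.

```python
# Hypothetical file names and settings; adjust to the models you have locally.
import json
import urllib.request

graph = {
    "1": {"class_type": "CLIPLoader",  # "Load CLIP" node: t5xxl + type mochi
          "inputs": {"clip_name": "t5xxl_fp16.safetensors", "type": "mochi"}},
    "2": {"class_type": "UNETLoader",  # "Load Diffusion Model" node: mochi dit
          "inputs": {"unet_name": "mochi_dit.safetensors",
                     "weight_dtype": "default"}},
    "3": {"class_type": "VAELoader",   # "Load VAE" node: mochi vae
          "inputs": {"vae_name": "mochi_vae.safetensors"}},
    "4": {"class_type": "EmptyMochiLatentVideo",
          "inputs": {"width": 848, "height": 480, "length": 25,
                     "batch_size": 1}},
    "5": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a cat walking through tall grass",
                     "clip": ["1", 0]}},
    "6": {"class_type": "CLIPTextEncode",  # empty negative prompt
          "inputs": {"text": "", "clip": ["1", 0]}},
    "7": {"class_type": "KSampler",    # euler + linear_quadratic
          "inputs": {"model": ["2", 0], "seed": 0, "steps": 30, "cfg": 4.5,
                     "sampler_name": "euler", "scheduler": "linear_quadratic",
                     "positive": ["5", 0], "negative": ["6", 0],
                     "latent_image": ["4", 0], "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["3", 0]}},
}

# Queue the graph on a default local ComfyUI server.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```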
comfyanonymous
c3ffbae067
Make LatentUpscale nodes work on 3d latents.
2024-10-26 01:50:51 -04:00
comfyanonymous
d605677b33
Make euler_ancestral work on flow models (credit: Ashen).
2024-10-25 19:53:44 -04:00
PsychoLogicAu
af8cf79a2d
Support SimpleTuner LyCORIS LoRA for SD3 (#5340)
2024-10-24 01:18:32 -04:00
comfyanonymous
66b0961a46
Fix ControlLora issue with last commit.
2024-10-23 17:02:40 -04:00
comfyanonymous
754597c8a9
Clean up some controlnet code.
Remove self.device, which was useless.
2024-10-23 14:19:05 -04:00
comfyanonymous
915fdb5745
Fix lowvram edge case.
2024-10-22 16:34:50 -04:00
contentis
5a8a48931a
Remove attention abstraction (#5324)
2024-10-22 14:02:38 -04:00
comfyanonymous
8ce2a1052c
Optimizations to --fast and scaled fp8.
2024-10-22 02:12:28 -04:00
comfyanonymous
f82314fcfc
Fix duplicate sigmas on beta scheduler.
2024-10-21 20:19:45 -04:00
comfyanonymous
0075c6d096
Mixed precision diffusion models with scaled fp8.
This change adds support for diffusion models where all the linear layers are
scaled fp8 while the other weights stay in the original precision.
(A minimal sketch of the idea follows after this entry.)
2024-10-21 18:12:51 -04:00
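A minimal sketch of the mixed-precision idea above, assuming a recent PyTorch with float8_e4m3fn support. It illustrates the general technique (fp8 weight plus a dequantization scale, applied only to linear layers); it is not ComfyUI's actual implementation.

```python
import torch

class ScaledFp8Linear(torch.nn.Module):
    """Sketch: weight stored as float8 plus one dequantization scale."""
    def __init__(self, weight: torch.Tensor, bias: torch.Tensor | None = None):
        super().__init__()
        # Per-tensor scale so the largest weight fits the fp8 range.
        self.scale = weight.abs().max() / torch.finfo(torch.float8_e4m3fn).max
        self.weight_fp8 = (weight / self.scale).to(torch.float8_e4m3fn)
        self.bias = bias  # bias, like all non-linear weights, keeps its precision

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dequantize to the activation dtype just before the matmul.
        w = self.weight_fp8.to(x.dtype) * self.scale.to(x.dtype)
        return torch.nn.functional.linear(x, w, self.bias)

# Only the linear layers are quantized; everything else stays as loaded.
lin = torch.nn.Linear(64, 64, dtype=torch.float16)
q = ScaledFp8Linear(lin.weight.data, lin.bias.data)
out = q(torch.randn(1, 64, dtype=torch.float16))
```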
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
comfyanonymous
f9f9faface
Fix model merging issue with scaled fp8.
2024-10-20 06:24:31 -04:00
comfyanonymous
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
comfyanonymous
a68bbafddb
Support diffusion models with scaled fp8 weights.
2024-10-19 23:47:42 -04:00
comfyanonymous
73e3a9e676
Clamp output when rounding weight to prevent NaN.
2024-10-19 19:07:10 -04:00
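For the clamp-before-rounding fix above, a minimal sketch of the general idea, assuming the float8_e4m3fn dtype; this is illustrative, not the repo's exact code.

```python
import torch

def round_weight_to_fp8(w: torch.Tensor) -> torch.Tensor:
    # Values outside the representable float8 range would otherwise
    # overflow to inf/NaN on the cast; clamp first to keep them finite.
    f = torch.finfo(torch.float8_e4m3fn)
    return w.clamp(min=f.min, max=f.max).to(torch.float8_e4m3fn)
```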
comfyanonymous
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
comfyanonymous
0bedfb26af
Revert "Fix Transformers FutureWarning ( #5140 )"
This reverts commit 95b7cf9bbe.
2024-10-16 12:36:19 -04:00
comfyanonymous
f584758271
Clean up some useless lines.
2024-10-14 21:02:39 -04:00
svdc
95b7cf9bbe
Fix Transformers FutureWarning (#5140)
* Update sd1_clip.py: fix Transformers FutureWarning
* Update sd1_clip.py: fix comment
2024-10-14 20:12:20 -04:00
comfyanonymous
3c60ecd7a8
Fix fp8 ops staying enabled.
2024-10-12 14:10:13 -04:00
comfyanonymous
7ae6626723
Remove useless argument.
2024-10-12 07:16:21 -04:00
comfyanonymous
6632365e16
Make model_options consistent between functions.
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
Kadir Nar
ad07796777
🐛 Add device to variable c (#5210)
2024-10-11 20:37:50 -04:00