comfyanonymous
1e68002b87
Cap lowvram to half of free memory.
2024-08-03 14:50:20 -04:00
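A minimal sketch of the capping heuristic described above, using placeholder names rather than ComfyUI's internal variables:

```python
# Minimal sketch of the idea, not ComfyUI's actual code: the lowvram budget is
# capped so the partially loaded model never claims more than half of free VRAM,
# leaving headroom for activations and other allocations.
def cap_lowvram_budget(requested_bytes: int, free_vram_bytes: int) -> int:
    return min(requested_bytes, free_vram_bytes // 2)
```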
comfyanonymous
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
- The checkpoint contains weights in fp8.
- There isn't enough memory to load the diffusion model in GPU vram.
2024-08-03 13:45:19 -04:00
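A sketch of the decision described in this commit; the function and variable names are placeholders, not ComfyUI's model management API:

```python
import torch

FP8_TYPES = (torch.float8_e4m3fn, torch.float8_e5m2)

def pick_diffusion_dtype(checkpoint_dtype, model_size_bytes, free_vram_bytes):
    # Keep fp8 weights as fp8 if the checkpoint already ships them that way.
    if checkpoint_dtype in FP8_TYPES:
        return checkpoint_dtype
    # Otherwise fall back to fp8 only when the model would not fit in VRAM.
    if model_size_bytes > free_vram_bytes:
        return torch.float8_e4m3fn
    return torch.float16
```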
comfyanonymous
f123328b82
Load T5 in fp8 if it's in fp8 in the Flux checkpoint.
2024-08-03 12:39:33 -04:00
comfyanonymous
63a7e8edba
More aggressive batch splitting.
2024-08-03 11:53:30 -04:00
comfyanonymous
0eea47d580
Add ModelSamplingFlux to experiment with the shift value.
Default shift on Flux Schnell is 0.0
2024-08-03 03:54:38 -04:00
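For context, the reference Flux sampling code applies a time shift of the form below; with a shift of 0 it reduces to the identity, which matches the Schnell default noted above. This is a sketch of the formula, not ComfyUI's ModelSamplingFlux implementation.

```python
import math

def time_shift(mu: float, sigma: float, t: float) -> float:
    # Shifted timestep schedule in the style of the reference Flux sampler.
    # With mu = 0 this simplifies to t, i.e. no shift.
    return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)

assert abs(time_shift(0.0, 1.0, 0.25) - 0.25) < 1e-9  # shift 0 leaves t unchanged
```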
comfyanonymous
7cd0cdfce6
Add advanced model merge node for Flux model.
2024-08-02 23:20:53 -04:00
comfyanonymous
ea03c9dcd2
Better per model memory usage estimations.
2024-08-02 18:09:24 -04:00
comfyanonymous
3a9ee995cf
Tweak regular SD memory formula.
2024-08-02 17:34:30 -04:00
comfyanonymous
47da42d928
Better Flux vram estimation.
2024-08-02 17:02:35 -04:00
comfyanonymous
17bbd83176
Fix bug when loading a flac workflow that contains the = character.
2024-08-02 13:14:28 -04:00
fgdfgfthgr-fox
bfb52de866
Lower SAG scale step for finer control (#4158)
* Lower SAG step for finer control
Since the introduction of cfg++, which uses very low cfg values, a step of 0.1 in SAG might be too coarse for fine control. Even a SAG scale of 0.1 can be too high when cfg is only 0.6, so I changed the step to 0.01.
* Lower PAG step as well.
* Update nodes_sag.py
2024-08-02 10:29:03 -04:00
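In ComfyUI node definitions the UI step size is part of the FLOAT input spec; a hypothetical sketch of the change (the default/min/max values here are illustrative, only the step is the point):

```python
class SelfAttentionGuidanceSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "model": ("MODEL",),
            # A 0.01 step lets users pick small scales such as 0.05, which a
            # 0.1 step would skip over; this matters with very low cfg values.
            "scale": ("FLOAT", {"default": 0.5, "min": -2.0, "max": 5.0, "step": 0.01}),
        }}
```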
comfyanonymous
eca962c6da
Add FluxGuidance node.
This lets you adjust the guidance on the dev model, which is a parameter passed to the diffusion model.
2024-08-02 10:25:49 -04:00
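A sketch of what a node like this can look like: it copies the conditioning and attaches a guidance value for the diffusion model to read. The helper logic is inlined here rather than using ComfyUI's own conditioning utilities, and the default value is illustrative.

```python
class FluxGuidanceSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "conditioning": ("CONDITIONING",),
            "guidance": ("FLOAT", {"default": 3.5, "min": 0.0, "max": 100.0, "step": 0.1}),
        }}

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "append"

    def append(self, conditioning, guidance):
        out = []
        for embedding, options in conditioning:
            options = dict(options)
            options["guidance"] = guidance  # read by the model during sampling
            out.append([embedding, options])
        return (out,)
```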
Jairo Correa
c1696cd1b5
Add missing import (#4174)
2024-08-02 09:34:12 -04:00
comfyanonymous
369f459b20
Fix ComfyUI no longer working on old pytorch versions.
2024-08-01 22:20:24 -04:00
Alexander Brown
ce9ac2fe05
Fix clip_g/clip_l mixup (#4168)
2024-08-01 21:40:56 -04:00
comfyanonymous
e638f2858a
Hack to make all resolutions work on Flux models.
2024-08-01 21:39:18 -04:00
comfyanonymous
a531001cc7
Add CLIPTextEncodeFlux.
2024-08-01 18:53:25 -04:00
comfyanonymous
d420bc792a
Tweak the memory usage formulas for Flux and SD.
2024-08-01 17:53:45 -04:00
comfyanonymous
d965474aaa
Make ComfyUI split batches a higher priority than weight offload.
2024-08-01 16:39:59 -04:00
comfyanonymous
1c61361fd2
Fast preview support for Flux.
2024-08-01 16:28:11 -04:00
comfyanonymous
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
2024-08-01 16:18:44 -04:00
comfyanonymous
48eb1399c0
Try to fix mac issue.
2024-08-01 13:41:27 -04:00
comfyanonymous
b4f6ebb2e8
Rename UNETLoader node to "Load Diffusion Model".
2024-08-01 13:33:30 -04:00
comfyanonymous
d7430a1651
Add a way to load the diffusion model in fp8 with UNETLoader node.
2024-08-01 13:30:51 -04:00
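A sketch of how an fp8 option can be threaded through a loader; the option strings mirror torch's fp8 dtypes and are assumptions, so check the node's actual choices.

```python
import torch

# Illustrative mapping from a UI choice to a torch dtype; "default" means
# letting the loader pick fp16/bf16 as usual.
WEIGHT_DTYPES = {
    "default": None,
    "fp8_e4m3fn": torch.float8_e4m3fn,
    "fp8_e5m2": torch.float8_e5m2,
}

def resolve_unet_dtype(weight_dtype: str):
    return WEIGHT_DTYPES[weight_dtype]
```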
comfyanonymous
f2b80f95d2
Better Mac support on flux model.
2024-08-01 13:10:50 -04:00
comfyanonymous
1aa9cf3292
Make lowvram more aggressive on low memory machines.
2024-08-01 12:11:57 -04:00
comfyanonymous
2f88d19ef3
Add link to Flux examples to readme.
2024-08-01 11:48:19 -04:00
comfyanonymous
eb96c3bd82
Fix .sft file loading (they are safetensors files).
2024-08-01 11:32:58 -04:00
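For reference, a .sft checkpoint is an ordinary safetensors file and loads with the same API once the extension is accepted; the filename below is illustrative.

```python
import safetensors.torch

def load_sft(path: str):
    # .sft files use the safetensors format, so the normal loader works.
    return safetensors.torch.load_file(path)  # returns {tensor_name: tensor}

# state_dict = load_sft("flux1-schnell.sft")  # example filename
```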
comfyanonymous
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00
comfyanonymous
8d34211a7a
Fix old python versions no longer working.
2024-08-01 09:57:20 -04:00
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
2024-08-01 09:49:29 -04:00
comfyanonymous
7ad574bffd
Mac supports bf16; just make sure you are using the latest pytorch.
2024-08-01 09:42:17 -04:00
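A quick way to check whether your PyTorch build supports bf16 on the MPS backend (a hedged sketch; older builds may raise on the allocation):

```python
import torch

if torch.backends.mps.is_available():
    try:
        x = torch.ones(4, dtype=torch.bfloat16, device="mps")
        print("bf16 on MPS works:", x.dtype)
    except Exception as exc:
        print("bf16 on MPS failed, update PyTorch:", exc)
```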
comfyanonymous
e2382b6adb
Make lowvram less aggressive when there are large amounts of free memory.
2024-08-01 03:58:58 -04:00
comfyanonymous
c24f897352
Fix to get fp8 working on T5 base.
2024-07-31 02:00:19 -04:00
comfyanonymous
a5991a7aa6
Fix hunyuan dit text encoder weights always being in fp32.
2024-07-31 01:34:57 -04:00
comfyanonymous
2c038ccef0
Lower CLIP memory usage by a bit.
2024-07-31 01:32:35 -04:00
comfyanonymous
b85216a3c0
Lower T5 memory usage by a few hundred MB.
2024-07-31 00:52:34 -04:00
comfyanonymous
82cae45d44
Fix potential issue with non-CLIP text embeddings.
2024-07-30 14:41:13 -04:00
comfyanonymous
25853d0be8
Use common function for casting weights to input.
2024-07-30 10:49:14 -04:00
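The idea of casting weights to the input is to match the layer's weight and bias to the incoming tensor's device and dtype at call time; a minimal sketch of such a shared helper (not ComfyUI's actual comfy.ops code):

```python
import torch
import torch.nn.functional as F

def cast_weight_bias_to_input(layer: torch.nn.Linear, x: torch.Tensor):
    # Move/cast parameters to wherever the input lives, right before the op runs.
    weight = layer.weight.to(device=x.device, dtype=x.dtype)
    bias = layer.bias.to(device=x.device, dtype=x.dtype) if layer.bias is not None else None
    return weight, bias

def linear_with_cast(layer: torch.nn.Linear, x: torch.Tensor) -> torch.Tensor:
    weight, bias = cast_weight_bias_to_input(layer, x)
    return F.linear(x, weight, bias)
```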
comfyanonymous
79040635da
Remove unnecessary code.
2024-07-30 05:01:34 -04:00
comfyanonymous
66d35c07ce
Reduce artifacts on hydit, auraflow and SD3 at specific resolutions.
This breaks seeds for resolutions whose pixel dimensions are not a multiple of 16, because circular padding is now used instead of reflection padding, but it should lower the amount of artifacts when doing img2img at those resolutions.
2024-07-29 20:48:50 -04:00
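The two padding modes mentioned above are standard PyTorch options; a small comparison sketch:

```python
import torch
import torch.nn.functional as F

x = torch.arange(16.0).reshape(1, 1, 4, 4)  # toy NCHW feature map

# Reflection padding mirrors the edge rows/columns; circular padding wraps
# around, which reduces the edge artifacts described above but changes the
# values a fixed seed produces at affected resolutions.
reflected = F.pad(x, (1, 1, 1, 1), mode="reflect")
circular = F.pad(x, (1, 1, 1, 1), mode="circular")
print(reflected.shape, circular.shape)  # both torch.Size([1, 1, 6, 6])
```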
comfyanonymous
c75b50607b
Less confusing exception if the pillow() helper function fails.
2024-07-29 11:15:37 -04:00
comfyanonymous
4ba7fa0244
Refactor: Move sd2_clip.py to text_encoders folder.
2024-07-28 01:19:20 -04:00
bymyself
ab76abc767
Active workflow uses primary fg color (#4090)
2024-07-27 23:34:19 -04:00
Silver
9300058026
Add dpmpp_2s_ancestral as custom sampler (#4101)
Add dpmpp_2s_ancestral as a custom sampler node to enable its use with eta and s_noise when using custom sampling.
2024-07-27 16:19:50 -04:00
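A sketch of the node pattern used for samplers of this kind, exposing eta and s_noise; treat the exact comfy.samplers.ksampler signature as an assumption.

```python
import comfy.samplers

class SamplerDPMPP_2S_AncestralSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "eta": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0, "step": 0.01}),
            "s_noise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0, "step": 0.01}),
        }}

    RETURN_TYPES = ("SAMPLER",)
    FUNCTION = "get_sampler"
    CATEGORY = "sampling/custom_sampling/samplers"

    def get_sampler(self, eta, s_noise):
        # Wrap the k-diffusion sampler with the extra options.
        sampler = comfy.samplers.ksampler("dpmpp_2s_ancestral", {"eta": eta, "s_noise": s_noise})
        return (sampler,)
```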
comfyanonymous
f82d09c9b4
Update packaging workflow.
2024-07-27 04:48:19 -04:00
comfyanonymous
e6829e7ac5
Add a way to set custom dependencies in the release workflow.
2024-07-27 04:41:46 -04:00
comfyanonymous
07f6a1a685
Handle case in the updater when master branch is not in local repo.
2024-07-27 03:15:22 -04:00
comfyanonymous
e746965c50
Update nightly package workflow.
2024-07-27 01:20:18 -04:00
comfyanonymous
45a2842d7f
Set stable releases as a prerelease initially.
This should give time to test the standalone package before making it live.
2024-07-26 14:52:20 -04:00