comfyanonymous
6c5990f7db
Fix cfg being calculated more than once if sampler_cfg_function is set.
2023-12-13 20:28:04 -05:00
comfyanonymous
ba04a87d10
Refactor and improve the sag node.
...
Moved all the sag related code to comfy_extras/nodes_sag.py
2023-12-13 16:11:26 -05:00
Rafie Walker
6761233e9d
Implement Self-Attention Guidance (#2201)
...
* First SAG test
* need to put extra options on the model instead of patcher
* no errors and results seem not-broken
* Use @ashen-uncensored formula, which works better!!!
* Fix a crash when using weird resolutions. Remove an unnecessary UNet call
* Improve comments, optimize memory in blur routine
* SAG works with sampler_cfg_function
2023-12-13 15:52:11 -05:00
comfyanonymous
b454a67bb9
Support segmind vega model.
2023-12-12 19:09:53 -05:00
comfyanonymous
824e4935f5
Add dtype parameter to VAE object.
2023-12-12 12:03:29 -05:00
comfyanonymous
32b7e7e769
Add manual cast to controlnet.
2023-12-12 11:32:42 -05:00
comfyanonymous
3152023fbc
Use inference dtype for unet memory usage estimation.
2023-12-11 23:50:38 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
2023-12-11 18:36:29 -05:00
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
2023-12-11 18:24:44 -05:00
comfyanonymous
57926635e8
Switch text encoder to manual cast.
...
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
comfyanonymous
340177e6e8
Disable non-blocking on mps.
2023-12-10 01:30:35 -05:00
comfyanonymous
614b7e731f
Implement GLora.
2023-12-09 18:15:26 -05:00
comfyanonymous
cb63e230b4
Make lora code a bit cleaner.
2023-12-09 14:15:09 -05:00
comfyanonymous
174eba8e95
Use own clip vision model implementation.
2023-12-09 11:56:31 -05:00
comfyanonymous
97015b6b38
Cleanup.
2023-12-08 16:02:08 -05:00
comfyanonymous
a4ec54a40d
Add linear_start and linear_end to model_config.sampling_settings
2023-12-08 02:49:30 -05:00
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of cpu.
2023-12-08 02:35:45 -05:00
comfyanonymous
efb704c758
Support attention masking in CLIP implementation.
2023-12-07 02:51:02 -05:00
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to
implement using the transformers implementation.
2023-12-06 23:50:03 -05:00
comfyanonymous
2db86b4676
Slightly faster lora applying.
2023-12-06 05:13:14 -05:00
comfyanonymous
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
9b655d4fd7
Fix memory issue with control loras.
2023-12-04 21:55:19 -05:00
comfyanonymous
26b1c0a771
Fix control lora on fp8.
2023-12-04 13:47:41 -05:00
comfyanonymous
be3468ddd5
Less useless downcasting.
2023-12-04 12:53:46 -05:00
comfyanonymous
ca82ade765
Use .itemsize to get dtype size for fp8.
2023-12-04 11:52:06 -05:00
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
comfyanonymous
61a123a1e0
A different way of handling multiple images passed to SVD.
...
Previously, when a list of 3 images [0, 1, 2] was used for a 6-frame video,
they were concatenated like this:
[0, 1, 2, 0, 1, 2]
Now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
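The reordering described above can be sketched with plain lists (an illustrative sketch only, not ComfyUI's actual tensor code; the function names are hypothetical):

```python
def tile_images(images, num_frames):
    # old behaviour: [0, 1, 2] -> [0, 1, 2, 0, 1, 2]
    return [images[i % len(images)] for i in range(num_frames)]

def repeat_images(images, num_frames):
    # new behaviour: [0, 1, 2] -> [0, 0, 1, 1, 2, 2]
    # each image is held for num_frames // len(images) consecutive frames
    repeats = num_frames // len(images)
    return [img for img in images for _ in range(repeats)]
```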
comfyanonymous
c97be4db91
Support SD2.1 turbo checkpoint.
2023-11-30 19:27:03 -05:00
comfyanonymous
983ebc5792
Use smart model management for VAE to decrease latency.
2023-11-28 04:58:51 -05:00
comfyanonymous
c45d1b9b67
Add a function to load a unet from a state dict.
2023-11-27 17:41:29 -05:00
comfyanonymous
f30b992b18
.sigma and .timestep now return tensors on the same device as the input.
2023-11-27 16:41:33 -05:00
comfyanonymous
13fdee6abf
Try to free memory for both cond+uncond before inference.
2023-11-27 14:55:40 -05:00
comfyanonymous
be71bb5e13
Tweak memory inference calculations a bit.
2023-11-27 14:04:16 -05:00
comfyanonymous
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous
5d6dfce548
Fix importing diffusers unets.
2023-11-24 20:35:29 -05:00
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
410bf07771
Make VAE memory estimation take dtype into account.
2023-11-22 18:17:19 -05:00
comfyanonymous
32447f0c39
Add sampling_settings so models can specify specific sampling settings.
2023-11-22 17:24:00 -05:00
comfyanonymous
c3ae99a749
Allow controlling downscale and upscale methods in PatchModelAddDownscale.
2023-11-22 03:23:16 -05:00
comfyanonymous
72741105a6
Remove useless code.
2023-11-21 17:27:28 -05:00
comfyanonymous
6a491ebe27
Allow model config to preprocess the vae state dict on load.
2023-11-21 16:29:18 -05:00
comfyanonymous
cd4fc77d5f
Add taesd and taesdxl to VAELoader node.
...
They will show up if both the taesd_encoder and taesd_decoder (or taesdxl)
model files are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous
ce67dcbcda
Make it easy for models to process the unet state dict on load.
2023-11-20 23:17:53 -05:00
comfyanonymous
d9d8702d8d
percent_to_sigma now returns a float instead of a tensor.
2023-11-18 23:20:29 -05:00
comfyanonymous
0cf4e86939
Add some command line arguments to store text encoder weights in fp8.
...
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
comfyanonymous
107e78b1cb
Add support for loading SSD1B diffusers unet version.
...
Improve diffusers model detection.
2023-11-16 23:12:55 -05:00
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
2023-11-16 15:26:28 -05:00
comfyanonymous
9f00a18095
Fix potential issues.
2023-11-16 14:59:54 -05:00
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
2023-11-16 12:57:12 -05:00
comfyanonymous
dcec1047e6
Invert the start and end percentages in the code.
...
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.
percent_to_sigma now goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.
Percent 0 now returns an extremely large sigma and percent 1.0 returns a
sigma of zero to fix imprecision.
2023-11-16 04:23:44 -05:00
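The boundary behaviour described above can be sketched like this, assuming a simple monotone percent-to-sigma mapping in the middle (the real implementation looks sigmas up from the model's noise schedule):

```python
def percent_to_sigma(percent, sigma_max=14.6):
    # percent runs 0.0 -> 1.0; sigma runs from very large down to zero
    if percent <= 0.0:
        return 999999999.9  # "extremely large sigma" so the range always covers step 0
    if percent >= 1.0:
        return 0.0          # exact zero at the end to avoid imprecision
    return sigma_max * (1.0 - percent)  # placeholder for the schedule lookup
```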
comfyanonymous
57eea0efbb
heunpp2 sampler.
2023-11-14 23:50:55 -05:00
comfyanonymous
728613bb3e
Fix last pr.
2023-11-14 14:41:31 -05:00
comfyanonymous
ec3d0ab432
Merge branch 'master' of https://github.com/Jannchie/ComfyUI
2023-11-14 14:38:07 -05:00
comfyanonymous
c962884a5c
Make bislerp work on GPU.
2023-11-14 11:38:36 -05:00
comfyanonymous
420beeeb05
Clean up and refactor sampler code.
...
This should make it much easier to write custom nodes with kdiffusion type
samplers.
2023-11-14 00:39:34 -05:00
Jianqi Pan
f2e49b1d57
fix: adaptation to older versions of pytorch
2023-11-14 14:32:05 +09:00
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
2023-11-14 00:08:12 -05:00
comfyanonymous
7339479b10
Disable xformers when it can't load properly.
2023-11-13 12:31:10 -05:00
comfyanonymous
4781819a85
Make memory estimation aware of model dtype.
2023-11-12 04:28:26 -05:00
comfyanonymous
dd4ba68b6e
Allow different models to estimate memory usage differently.
2023-11-12 04:03:52 -05:00
comfyanonymous
2c9dba8dc0
sampling_function now has the model object as the argument.
2023-11-12 03:45:10 -05:00
comfyanonymous
8d80584f6a
Remove useless argument from uni_pc sampler.
2023-11-12 01:25:33 -05:00
comfyanonymous
248aa3e563
Fix bug.
2023-11-11 12:20:16 -05:00
comfyanonymous
4a8a839b40
Add option to use in place weight updating in ModelPatcher.
2023-11-11 01:11:12 -05:00
comfyanonymous
412d3ff57d
Refactor.
2023-11-11 01:11:06 -05:00
comfyanonymous
58d5d71a93
Working RescaleCFG node.
...
This was broken because of recent changes so I fixed it and moved it from
the experiments repo.
2023-11-10 20:52:10 -05:00
comfyanonymous
3e0033ef30
Fix model merge bug.
...
Unload models before getting weights for model patching.
2023-11-10 03:19:05 -05:00
comfyanonymous
002aefa382
Support lcm models.
...
Use the "lcm" sampler to sample them; you also have to use the
ModelSamplingDiscrete node to set them as lcm models for them to work properly.
2023-11-09 18:30:22 -05:00
comfyanonymous
ec12000136
Add support for full diff lora keys.
2023-11-08 22:05:31 -05:00
comfyanonymous
064d7583eb
Add a CONDConstant for passing non tensor conds to unet.
2023-11-08 01:59:09 -05:00
comfyanonymous
794dd2064d
Fix typo.
2023-11-07 23:41:55 -05:00
comfyanonymous
0a6fd49a3e
Print leftover keys when using the UNETLoader.
2023-11-07 22:15:55 -05:00
comfyanonymous
fe40109b57
Fix issue with object patches not being copied with patcher.
2023-11-07 22:15:15 -05:00
comfyanonymous
a527d0c795
Code refactor.
2023-11-07 19:33:40 -05:00
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
2023-11-07 04:30:37 -05:00
comfyanonymous
844dbf97a7
Add: advanced->model->ModelSamplingDiscrete node.
...
This allows changing the sampling parameters of the model (eps or vpred)
or setting the model to use zsnr.
2023-11-07 03:28:53 -05:00
comfyanonymous
656c0b5d90
CLIP code refactor and improvements.
...
A more generic clip model class that can be used on more types of text
encoders.
Don't apply the weighting algorithm when the weight is 1.0.
Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
comfyanonymous
b3fcd64c6c
Make SDTokenizer class work with more types of tokenizers.
2023-11-06 01:09:18 -05:00
gameltb
7e455adc07
fix unet_wrapper_function name in ModelPatcher
2023-11-05 17:11:44 +08:00
comfyanonymous
1ffa8858e7
Move model sampling code to comfy/model_sampling.py
2023-11-04 01:32:23 -04:00
comfyanonymous
ae2acfc21b
Don't convert NaN to zero.
...
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous
d2e27b48f1
sampler_cfg_function now gets the noisy output as argument again.
...
This should make things that use sampler_cfg_function behave like before.
Added an input argument for those that want the denoised output.
This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
2023-11-01 21:24:08 -04:00
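The x0 trick mentioned above can be sketched as follows; the args-dict keys are assumptions based on the commit description, not a verified API:

```python
def x0_prediction(args):
    # per the commit message: (input - cond) recovers the denoised prediction
    return args["input"] - args["cond"]

def cfg_function(args):
    # plain classifier-free guidance in the space the callback receives
    return args["uncond"] + (args["cond"] - args["uncond"]) * args["cond_scale"]
```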
comfyanonymous
2455aaed8a
Allow model or clip to be None in load_lora_for_models.
2023-11-01 20:27:20 -04:00
comfyanonymous
ecb80abb58
Allow ModelSamplingDiscrete to be instantiated without a model config.
2023-11-01 19:13:03 -04:00
comfyanonymous
e73ec8c4da
Not used anymore.
2023-11-01 00:01:30 -04:00
comfyanonymous
111f1b5255
Fix some issues with sampling precision.
2023-10-31 23:49:29 -04:00
comfyanonymous
7c0f255de1
Clean up percent start/end and make controlnets work with sigmas.
2023-10-31 22:14:32 -04:00
comfyanonymous
a268a574fa
Remove a bunch of useless code.
...
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.
I'm keeping it in because I'm sure people will complain if I remove it.
2023-10-31 18:11:29 -04:00
comfyanonymous
1777b54d02
Sampling code changes.
...
apply_model in model_base now returns the denoised output.
This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
comfyanonymous
a12cc05323
Add --max-upload-size argument, the default is 100MB.
2023-10-29 03:55:46 -04:00
comfyanonymous
2a134bfab9
Fix checkpoint loader with config.
2023-10-27 22:13:55 -04:00
comfyanonymous
e60ca6929a
SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one.
2023-10-27 15:54:04 -04:00
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
2023-10-27 14:45:15 -04:00
comfyanonymous
434ce25ec0
Restrict loading embeddings from embedding folders.
2023-10-27 02:54:13 -04:00
comfyanonymous
723847f6b3
Faster clip image processing.
2023-10-26 01:53:01 -04:00
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00
comfyanonymous
7fbb217d3a
Fix uni_pc returning noisy image when steps <= 3
2023-10-25 16:08:30 -04:00
Jedrzej Kosinski
3783cb8bfd
change 'c_adm' to 'y' in ControlNet.get_control
2023-10-25 08:24:32 -05:00
comfyanonymous
d1d2fea806
Pass extra conds directly to unet.
2023-10-25 00:07:53 -04:00
comfyanonymous
036f88c621
Refactor to make it easier to add custom conds to models.
2023-10-24 23:31:12 -04:00
comfyanonymous
3fce8881ca
Sampling code refactor to make it easier to add more conds.
2023-10-24 03:38:41 -04:00
comfyanonymous
8594c8be4d
Empty the cache when torch cache is more than 25% free mem.
2023-10-22 13:58:12 -04:00
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
2023-10-22 03:59:53 -04:00
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
2023-10-22 03:51:29 -04:00
comfyanonymous
a0690f9df9
Fix t2i adapter issue.
2023-10-21 20:31:24 -04:00
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
2023-10-21 13:23:03 -04:00
comfyanonymous
4185324a1d
Fix uni_pc sampler math. This changes the images this sampler produces.
2023-10-20 04:16:53 -04:00
comfyanonymous
e6962120c6
Make sure cond_concat is on the right device.
2023-10-19 01:14:25 -04:00
comfyanonymous
45c972aba8
Refactor cond_concat into conditioning.
2023-10-18 20:36:58 -04:00
comfyanonymous
430a8334c5
Fix some potential issues.
2023-10-18 19:48:36 -04:00
comfyanonymous
782a24fce6
Refactor cond_concat into model object.
2023-10-18 16:48:37 -04:00
comfyanonymous
0d45a565da
Fix memory issue related to control loras.
...
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
c8013f73e5
Add some Quadro cards to the list of cards with broken fp16.
2023-10-16 16:48:46 -04:00
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
2023-10-16 02:31:24 -04:00
comfyanonymous
fd4c5f07e7
Add a --bf16-unet to test running the unet in bf16.
2023-10-13 14:51:10 -04:00
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
2023-10-13 14:41:17 -04:00
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00
comfyanonymous
20d3852aa1
Pull some small changes from the other repo.
2023-10-11 20:38:48 -04:00
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
2023-10-11 20:38:48 -04:00
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
...
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
comfyanonymous
8cc75c64ff
Let unet wrapper functions have .to attributes.
2023-10-11 01:34:38 -04:00
comfyanonymous
5e885bd9c8
Cleanup.
2023-10-10 21:46:53 -04:00
comfyanonymous
851bb87ca9
Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI
2023-10-10 21:42:35 -04:00
Yukimasa Funaoka
9eb621c95a
Supports TAESD models in safetensors format
2023-10-10 13:21:44 +09:00
comfyanonymous
d1a0abd40b
Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI
2023-10-09 01:53:29 -04:00
comfyanonymous
72188dffc3
load_checkpoint_guess_config can now optionally output the model.
2023-10-06 13:48:18 -04:00
Jairo Correa
63e5fd1790
Option to input directory
2023-10-04 19:45:15 -03:00
City
9bfec2bdbf
Fix quality loss due to low precision
2023-10-04 15:40:59 +02:00
badayvedat
0f17993d05
fix: typo in extra sampler
2023-09-29 06:09:59 +03:00
comfyanonymous
66756de100
Add SamplerDPMPP_2M_SDE node.
2023-09-28 21:56:23 -04:00
comfyanonymous
71713888c4
Print missing VAE keys.
2023-09-28 00:54:57 -04:00
comfyanonymous
d234ca558a
Add missing samplers to KSamplerSelect.
2023-09-28 00:17:03 -04:00
comfyanonymous
1adcc4c3a2
Add a SamplerCustom Node.
...
This node takes a list of sigmas and a sampler object as input.
This lets people easily implement custom schedulers and samplers as nodes.
More nodes will be added to it in the future.
2023-09-27 22:21:18 -04:00
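A custom scheduler of the kind this node enables might look like the following sketch (the linear spacing and trailing zero are assumptions for illustration, not the node's required format):

```python
def linear_schedule(sigma_max, sigma_min, steps):
    # evenly spaced sigmas from sigma_max down to sigma_min
    if steps == 1:
        sigmas = [sigma_max]
    else:
        step = (sigma_max - sigma_min) / (steps - 1)
        sigmas = [sigma_max - i * step for i in range(steps)]
    return sigmas + [0.0]  # end on zero so the final step fully denoises
```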
comfyanonymous
bf3fc2f1b7
Refactor sampling related code.
2023-09-27 16:45:22 -04:00
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
2023-09-27 12:04:07 -04:00
comfyanonymous
1d6dd83184
Scheduler code refactor.
2023-09-26 17:07:07 -04:00
comfyanonymous
446caf711c
Sampling code refactor.
2023-09-26 13:45:15 -04:00
comfyanonymous
76cdc809bf
Support more controlnet models.
2023-09-23 18:47:46 -04:00
comfyanonymous
ae87543653
Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI
2023-09-23 00:57:17 -04:00
Simon Lui
eec449ca8e
Allow Intel GPUs to LoRA cast on GPU since it supports BF16 natively.
2023-09-22 21:11:27 -07:00
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
2023-09-22 20:26:47 -04:00
comfyanonymous
492db2de8d
Allow having a different pooled output for each image in a batch.
2023-09-21 01:14:42 -04:00
comfyanonymous
1cdfb3dba4
Only do the cast on the device if the device supports it.
2023-09-20 17:52:41 -04:00
comfyanonymous
7c9a92f552
Don't depend on torchvision.
2023-09-19 13:12:47 -04:00
MoonRide303
2b6b178173
Added support for lanczos scaling
2023-09-19 10:40:38 +02:00
comfyanonymous
b92bf8196e
Do lora cast on GPU instead of CPU for higher performance.
2023-09-18 23:04:49 -04:00
comfyanonymous
321c5fa295
Enable pytorch attention by default on xpu.
2023-09-17 04:09:19 -04:00
comfyanonymous
61b1f67734
Support models without previews.
2023-09-16 12:59:54 -04:00
comfyanonymous
43d4935a1d
Add cond_or_uncond array to transformer_options so hooks can check what is cond and what is uncond.
2023-09-15 22:21:14 -04:00
comfyanonymous
415abb275f
Add DDPM sampler.
2023-09-15 19:22:47 -04:00
comfyanonymous
94e4fe39d8
This isn't used anywhere.
2023-09-15 12:03:03 -04:00
comfyanonymous
44361f6344
Support for text encoder models that need attention_mask.
2023-09-15 02:02:05 -04:00
comfyanonymous
0d8f376446
Setting the last layer on SD2.x models now uses the proper indexes.
...
Before, I had made the last layer the penultimate layer because some
checkpoints don't have it, but that's not consistent with the other models.
TLDR: for SD2.x models only, CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
comfyanonymous
0966d3ce82
Don't run text encoders on xpu because there are issues.
2023-09-14 12:16:07 -04:00
comfyanonymous
3039b08eb1
Only parse command line args when main.py is called.
2023-09-13 11:38:20 -04:00
comfyanonymous
ed58730658
Don't leave very large hidden states in the clip vision output.
2023-09-12 15:09:10 -04:00
comfyanonymous
fb3b728203
Fix issue where autocast fp32 CLIP gave different results from regular.
2023-09-11 21:49:56 -04:00
comfyanonymous
7d401ed1d0
Add ldm format support to UNETLoader.
2023-09-11 16:36:50 -04:00
comfyanonymous
e85be36bd2
Add a penultimate_hidden_states to the clip vision output.
2023-09-08 14:06:58 -04:00
comfyanonymous
1e6b67101c
Support diffusers format t2i adapters.
2023-09-08 11:36:51 -04:00
comfyanonymous
326577d04c
Allow cancelling of everything with a progress bar.
2023-09-07 23:37:03 -04:00
comfyanonymous
f88f7f413a
Add a ConditioningSetAreaPercentage node.
2023-09-06 03:28:27 -04:00
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
comfyanonymous
7746bdf7b0
Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI
2023-09-04 00:43:11 -04:00
Simon Lui
2da73b7073
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
2023-09-02 20:07:52 -07:00
comfyanonymous
a74c5dbf37
Move some functions to utils.py
2023-09-02 22:33:37 -04:00
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
comfyanonymous
77a176f9e0
Use a common function to reshape batches.
2023-09-02 03:42:49 -04:00
comfyanonymous
7931ff0fd9
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
0e3b641172
Remove xformers related print.
2023-09-01 02:12:03 -04:00
comfyanonymous
5c363a9d86
Fix controlnet bug.
2023-09-01 02:01:08 -04:00
comfyanonymous
cfe1c54de8
Fix controlnet issue.
2023-08-31 15:16:58 -04:00
comfyanonymous
1c012d69af
It doesn't make sense for c_crossattn and c_concat to be lists.
2023-08-31 13:25:00 -04:00
comfyanonymous
7e941f9f24
Clean up DiffusersLoader node.
2023-08-30 12:57:07 -04:00
Simon Lui
18617967e5
Fix error message in model_patcher.py
...
Found while tinkering.
2023-08-30 00:25:04 -07:00
comfyanonymous
fe4c07400c
Fix "Load Checkpoint with config" node.
2023-08-29 23:58:32 -04:00
comfyanonymous
f2f5e5dcbb
Support SDXL t2i adapters with 3 channel input.
2023-08-29 16:44:57 -04:00
comfyanonymous
15adc3699f
Move beta_schedule to model_config and allow disabling unet creation.
2023-08-29 14:22:53 -04:00
comfyanonymous
bed116a1f9
Remove optimization that caused border.
2023-08-29 11:21:36 -04:00
comfyanonymous
65cae62c71
No need to check filename extensions to detect shuffle controlnet.
2023-08-28 16:49:06 -04:00
comfyanonymous
4e89b2c25a
Put clip vision outputs on the CPU.
2023-08-28 16:26:11 -04:00
comfyanonymous
a094b45c93
Load clipvision model to GPU for faster performance.
2023-08-28 15:29:27 -04:00
comfyanonymous
1300a1bb4c
Text encoder should initially load on the offload_device, not the regular device.
2023-08-28 15:08:45 -04:00
comfyanonymous
f92074b84f
Move ModelPatcher to model_patcher.py
2023-08-28 14:51:31 -04:00
comfyanonymous
4798cf5a62
Implement loras with norm keys.
2023-08-28 11:20:06 -04:00
comfyanonymous
b8c7c770d3
Enable bf16-vae by default on ampere and up.
2023-08-27 23:06:19 -04:00
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
a57b0c797b
Fix lowvram model merging.
2023-08-26 11:52:07 -04:00
comfyanonymous
f72780a7e3
The new smart memory management makes this unnecessary.
2023-08-25 18:02:15 -04:00
comfyanonymous
c77f02e1c6
Move controlnet code to comfy/controlnet.py
2023-08-25 17:33:04 -04:00
comfyanonymous
15a7716fa6
Move lora code to comfy/lora.py
2023-08-25 17:11:51 -04:00