Commit Graph

149 Commits

Author SHA1 Message Date
comfyanonymous dd4ba68b6e Allow different models to estimate memory usage differently. 2023-11-12 04:03:52 -05:00
comfyanonymous 2c9dba8dc0 sampling_function now has the model object as the argument. 2023-11-12 03:45:10 -05:00
comfyanonymous 8d80584f6a Remove useless argument from uni_pc sampler. 2023-11-12 01:25:33 -05:00
comfyanonymous 58d5d71a93 Working RescaleCFG node.
This was broken because of recent changes, so I fixed it and moved it from
the experiments repo.
2023-11-10 20:52:10 -05:00
comfyanonymous 002aefa382 Support lcm models.
Use the "lcm" sampler to sample them; you also have to use the
ModelSamplingDiscrete node to mark them as lcm models so they work properly.
2023-11-09 18:30:22 -05:00
comfyanonymous ae2acfc21b Don't convert NaN to zero.
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous d2e27b48f1 sampler_cfg_function now gets the noisy output as an argument again.
This should make things that use sampler_cfg_function behave like before.

Added an input argument for those that want the denoised output.

This means you can, for example, calculate the x0 prediction of the model by
computing (input - cond).
2023-11-01 21:24:08 -04:00
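
A minimal sketch of what a custom sampler_cfg_function could look like after this change. The args-dict keys ("cond", "uncond", "cond_scale", "input") follow the commit text above and are assumptions, not a verified API reference; the registration call at the end is likewise hypothetical.

    # Illustrative custom CFG function; key names are assumptions from the commit text.
    def my_cfg_function(args):
        cond = args["cond"]              # conditioned model output
        uncond = args["uncond"]          # unconditioned model output
        cond_scale = args["cond_scale"]
        x = args["input"]                # the noisy output restored by this commit

        x0_pred = x - cond               # the x0 prediction described above: (input - cond)
        return uncond + (cond - uncond) * cond_scale

    # Hypothetical registration on a patched model (method name is an assumption):
    # patched_model.set_model_sampler_cfg_function(my_cfg_function)
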
comfyanonymous e73ec8c4da Not used anymore. 2023-11-01 00:01:30 -04:00
comfyanonymous 111f1b5255 Fix some issues with sampling precision. 2023-10-31 23:49:29 -04:00
comfyanonymous 7c0f255de1 Clean up percent start/end and make controlnets work with sigmas. 2023-10-31 22:14:32 -04:00
comfyanonymous a268a574fa Remove a bunch of useless code.
DDIM is the same as euler except for a small difference in the inpaint code:
DDIM uses randn_like, but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
comfyanonymous 1777b54d02 Sampling code changes.
apply_model in model_base now returns the denoised output.

This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
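
For context on "denoised output" versus "model output": for an eps-prediction model the denoised estimate is recovered from the raw output as x0 = x - sigma * eps. The helper below is illustrative only, not code from the repo; other prediction types (e.g. v-prediction) need a different conversion.

    # Illustrative conversion from raw eps output to the denoised estimate.
    def denoised_from_eps(x, eps, sigma):
        return x - sigma * eps
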
comfyanonymous 036f88c621 Refactor to make it easier to add custom conds to models. 2023-10-24 23:31:12 -04:00
comfyanonymous 3fce8881ca Sampling code refactor to make it easier to add more conds. 2023-10-24 03:38:41 -04:00
comfyanonymous 4185324a1d Fix uni_pc sampler math. This changes the images this sampler produces. 2023-10-20 04:16:53 -04:00
comfyanonymous e6962120c6 Make sure cond_concat is on the right device. 2023-10-19 01:14:25 -04:00
comfyanonymous 45c972aba8 Refactor cond_concat into conditioning. 2023-10-18 20:36:58 -04:00
comfyanonymous 782a24fce6 Refactor cond_concat into model object. 2023-10-18 16:48:37 -04:00
comfyanonymous 66756de100 Add SamplerDPMPP_2M_SDE node. 2023-09-28 21:56:23 -04:00
comfyanonymous d234ca558a Add missing samplers to KSamplerSelect. 2023-09-28 00:17:03 -04:00
comfyanonymous 1d6dd83184 Scheduler code refactor. 2023-09-26 17:07:07 -04:00
comfyanonymous 446caf711c Sampling code refactor. 2023-09-26 13:45:15 -04:00
comfyanonymous 492db2de8d Allow having a different pooled output for each image in a batch. 2023-09-21 01:14:42 -04:00
comfyanonymous 43d4935a1d Add cond_or_uncond array to transformer_options so hooks can check what is
cond and what is uncond.
2023-09-15 22:21:14 -04:00
comfyanonymous 415abb275f Add DDPM sampler. 2023-09-15 19:22:47 -04:00
comfyanonymous 326577d04c Allow cancelling of everything with a progress bar. 2023-09-07 23:37:03 -04:00
comfyanonymous f88f7f413a Add a ConditioningSetAreaPercentage node. 2023-09-06 03:28:27 -04:00
comfyanonymous 1c012d69af It doesn't make sense for c_crossattn and c_concat to be lists. 2023-08-31 13:25:00 -04:00
comfyanonymous d6e4b342e6 Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.

This allows a much smaller memory footprint depending on the rank of the
matrices.

These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
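
A sketch of the low-rank reconstruction described above (illustrative, not the repo's exact code): the effective controlnet weight is the unet weight plus the product of the "up" and "down" matrices.

    import torch

    def control_lora_weight(unet_weight, lora_up, lora_down):
        # unet_weight: (out_features, in_features)
        # lora_up:     (out_features, rank)
        # lora_down:   (rank, in_features)
        return unet_weight + lora_up @ lora_down

    # With rank 16 on a 320x320 weight, only 2 * 320 * 16 values are stored
    # instead of 320 * 320, which is where the memory saving comes from.
    w = torch.randn(320, 320)
    patched = control_lora_weight(w, torch.randn(320, 16), torch.randn(16, 320))
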
comfyanonymous 89a0767abf Smarter memory management.
Try to keep models in VRAM when possible.

Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
comfyanonymous 0cb6dac943 Remove 3m from PR #1213 because of some small issues. 2023-08-14 00:48:45 -04:00
comfyanonymous e244b2df83 Add sgm_uniform scheduler that acts like the default one in sgm. 2023-08-14 00:29:03 -04:00
comfyanonymous 58c7da3665 GPU variant of dpmpp_3m_sde. Note: use 3m with exponential or karras. 2023-08-14 00:28:50 -04:00
FizzleDorf 3cfad03a68 dpmpp 3m + dpmpp 3m sde added 2023-08-13 22:29:04 -04:00
comfyanonymous cf10c5592c Disable calculating uncond when CFG is 1.0 2023-08-09 20:55:03 -04:00
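
Why the uncond pass can be skipped at CFG 1.0, as an illustrative sketch (not the repo's code): the CFG combination uncond + (cond - uncond) * scale reduces to exactly cond when scale is 1.0, so computing uncond is wasted work.

    def apply_cfg(cond, uncond, scale):
        if scale == 1.0:
            return cond          # the uncond pass contributes nothing at scale 1.0
        return uncond + (cond - uncond) * scale
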
asagi4 1ea4d84691 Fix timestep ranges when batch_size > 1 2023-07-27 21:14:09 +03:00
comfyanonymous 7ff14b62f8 ControlNetApplyAdvanced can now define when controlnet gets applied. 2023-07-24 17:50:49 -04:00
comfyanonymous d191c4f9ed Add a ControlNetApplyAdvanced node.
The controlnet can be applied to the positive or negative prompt only by
connecting it correctly.
2023-07-24 13:35:20 -04:00
comfyanonymous 0240946ecf Add a way to set which range of timesteps the cond gets applied to. 2023-07-24 09:25:02 -04:00
comfyanonymous 67be7eb81d Nodes can now patch the unet function. 2023-07-22 17:01:12 -04:00
comfyanonymous 3ded1a3a04 Refactor of sampler code to deal more easily with different model types. 2023-07-17 01:22:12 -04:00
comfyanonymous bb5fbd29e9 Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI 2023-07-07 01:52:25 -04:00
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 2023-07-05 21:58:29 -04:00
comfyanonymous e57cba4c61 Add GPU variations of the SDE samplers that are less deterministic
but faster.
2023-07-05 01:39:38 -04:00
mara c61a95f9f7 Fix size check for conditioning mask
The wrong dimensions were being checked: [1] and [2] are the image size,
not [2] and [3]. This results in an out-of-bounds error if one of them
actually matches.
2023-07-04 16:34:42 +02:00
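
A hypothetical illustration of the corrected check: for a mask shaped (batch, height, width), the spatial size lives in dims [1] and [2], not [2] and [3]. The function name and signature below are made up for this sketch.

    def mask_matches(mask, target_h, target_w):
        return mask.shape[1] == target_h and mask.shape[2] == target_w
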
comfyanonymous 4eab00e14b Set the seed in the SDE samplers to make them more reproducible. 2023-06-25 03:04:57 -04:00
comfyanonymous 8607c2d42d Move latent scale factor from VAE to model. 2023-06-23 02:33:31 -04:00
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous 036a22077c Fix k_diffusion math being off by a tiny bit during txt2img. 2023-06-19 15:28:54 -04:00
comfyanonymous 388567f20b sampler_cfg_function now uses a dict for the argument.
This means new arguments can be added later without breaking existing functions.
2023-06-13 16:10:36 -04:00
comfyanonymous c64ca8c0b2 Refactor unCLIP noise augment out of samplers.py 2023-06-11 04:01:18 -04:00
comfyanonymous de142eaad5 Simpler base model code. 2023-06-09 12:31:16 -04:00
comfyanonymous 069657fbf3 Add DPM-Solver++(2M) SDE and exponential scheduler.
The exponential scheduler is the one recommended with this sampler.
2023-05-21 01:46:03 -04:00
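
A rough sketch of an exponential sigma schedule: log-uniform spacing from sigma_max down to sigma_min with a trailing zero, in the spirit of k-diffusion's exponential schedule. This is not copied from the repo.

    import math
    import torch

    def sigmas_exponential(n, sigma_min, sigma_max):
        sigmas = torch.linspace(math.log(sigma_max), math.log(sigma_min), n).exp()
        return torch.cat([sigmas, sigmas.new_zeros([1])])
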
comfyanonymous ef815ba1e2 Switch default scheduler to normal. 2023-05-15 00:29:56 -04:00
comfyanonymous f7c0f75d1f Auto batching improvements.
Try batching with smart padding when cond sizes don't match.
2023-05-10 13:59:24 -04:00
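
One possible "smart padding" strategy, shown only as an assumption and not necessarily what the repo does: pad the shorter conditioning tensor along the token axis by repeating its last token, so differently sized conds can be batched together.

    import torch

    def pad_cond_to(cond, target_tokens):
        # cond: (batch, tokens, features)
        missing = target_tokens - cond.shape[1]
        if missing <= 0:
            return cond
        last = cond[:, -1:, :].repeat(1, missing, 1)   # repeat the final token
        return torch.cat([cond, last], dim=1)
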
comfyanonymous 314e526c5c Not needed anymore because sampling works with any latent size. 2023-05-09 12:18:18 -04:00
comfyanonymous 908dc1d5a8 Add a total_steps value to sampler callback. 2023-05-03 12:58:10 -04:00
comfyanonymous d3293c8339 Properly disable all progress bars when disable_pbar=True 2023-05-01 15:52:17 -04:00
BlenderNeko a2e18b1504 allow disabling of progress bar when sampling 2023-04-30 18:59:58 +02:00
comfyanonymous 071011aebe Mask strength should be separate from area strength. 2023-04-29 20:06:53 -04:00
Jacob Segal af02393c2a Default to sampling entire image
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.

I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error
2. Indices are returned as integers instead of floats so they can be
   used to index into tensors.
2023-04-29 00:16:58 -07:00
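
An illustrative bounding-box helper in the spirit of the change above: pure torch (no torchvision), integer indices, and a defined result for an all-zero mask. This is not the repo's exact implementation.

    import torch

    def mask_area_bounds(mask):
        ys, xs = torch.nonzero(mask, as_tuple=True)
        if ys.numel() == 0:
            return (0, 0, mask.shape[0], mask.shape[1])   # all-zero mask: whole image
        y0, x0 = int(ys.min()), int(xs.min())
        h = int(ys.max()) + 1 - y0
        w = int(xs.max()) + 1 - x0
        return (y0, x0, h, w)                             # (y, x, height, width)
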
Jacob Segal e214c917ae Add Condition by Mask node
This PR adds support for a Condition by Mask node. This node allows
conditioning to be limited to a non-rectangular area.
2023-04-27 20:03:27 -07:00
comfyanonymous 5a971cecdb Add callback to sampler function.
Callback format is: callback(step, x0, x)
2023-04-27 04:38:44 -04:00
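
A minimal callback sketch matching the format described above. The extra total_steps argument reflects the "total_steps" commit listed further up; the progress print is only an example.

    def my_callback(step, x0, x, total_steps):
        # step: current step, x0: current denoised estimate, x: current noisy latent
        print(f"step {step + 1}/{total_steps}")
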
comfyanonymous 7983b3a975 This is cleaner this way. 2023-04-24 22:45:35 -04:00
BlenderNeko 8d2de420d3 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-23 20:02:18 +02:00
BlenderNeko 2a09e2aa27 refactor/split various bits of code for sampling 2023-04-23 20:02:08 +02:00
comfyanonymous 5282f56434 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
comfyanonymous 907010e082 Remove some useless code. 2023-04-20 23:58:25 -04:00
comfyanonymous 96b57a9ad6 Don't pass adm to model when it doesn't support it. 2023-04-19 21:11:38 -04:00
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
comfyanonymous 884ea653c8 Add a way for nodes to set a custom CFG function. 2023-04-17 11:05:15 -04:00
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous 23524ad8c5 Remove print. 2023-04-03 22:58:54 -04:00
comfyanonymous f50b1fec69 Add noise augmentation setting to unCLIPConditioning. 2023-04-03 13:50:29 -04:00
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output (that node has been moved to the conditioning section).
2023-04-01 23:19:15 -04:00
comfyanonymous 18a6c1db33 Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
comfyanonymous 61ec3c9d5d Add a way to pass options to the transformers blocks. 2023-03-31 13:04:39 -04:00
comfyanonymous afd65d3819 Fix noise mask not working with > 1 batch size on ksamplers. 2023-03-30 03:50:12 -04:00
comfyanonymous 0d65cb17b7 Fix ddim_uniform crashing with 37 steps. 2023-03-28 16:29:35 -04:00
comfyanonymous 7f0fd99b5d Make ddim work with --cpu 2023-03-24 11:39:51 -04:00
comfyanonymous b4b21be707 Fix area composition feathering not working properly. 2023-03-19 02:00:52 -04:00
comfyanonymous afff30fc0a Add --cpu to use the cpu for inference. 2023-03-06 10:50:50 -05:00
comfyanonymous 69cc75fbf8 Add a way to interrupt current processing in the backend. 2023-03-02 14:42:03 -05:00
comfyanonymous 75fa162531 Remove sample_ from some sampler names.
Old workflows will still work.
2023-02-27 01:43:06 -05:00
comfyanonymous af3cc1b5fb Fixed issue when batched image was used as a controlnet input. 2023-02-25 14:57:28 -05:00
comfyanonymous f04dc2c2f4 Implement DDIM sampler. 2023-02-22 21:10:19 -05:00
comfyanonymous 2976c1ad28 Uni_PC: make max denoise behave more like other samplers.
On the KSamplers, a denoise of 1.0 is the same as txt2img, but there was a
small difference on UniPC.
2023-02-22 02:21:06 -05:00
comfyanonymous a7328e4945 Add uni_pc bh2 variant. 2023-02-21 16:11:48 -05:00
comfyanonymous 4efa67fa12 Add ControlNet support. 2023-02-16 10:38:08 -05:00
comfyanonymous bc69fb5245 Use inpaint models the proper way by using VAEEncodeForInpaint. 2023-02-15 20:44:51 -05:00
comfyanonymous cef2cc3cb0 Support for inpaint models. 2023-02-15 16:38:20 -05:00
comfyanonymous 07db00355f Add masks to samplers code for inpainting. 2023-02-15 13:16:38 -05:00
comfyanonymous 5489d5af04 Add uni_pc sampler to KSampler* nodes. 2023-02-11 03:34:09 -05:00
comfyanonymous 3fd87cbd21 Slightly smarter batching behaviour.
Try to keep batch sizes more consistent, which seems to improve performance on
AMD GPUs.
2023-02-08 17:28:43 -05:00
comfyanonymous bbdcf0b737 Use relative imports for k_diffusion. 2023-02-08 16:51:19 -05:00
comfyanonymous 853e96ada3 Increase it/s by batching together some stuff sent to unet. 2023-02-08 14:24:00 -05:00
comfyanonymous 69df7eba94 Add KSamplerAdvanced node.
This node exposes more sampling options and makes it possible, for example,
to sample the first few steps on the latent image, do some operations on it,
and then do the rest of the sampling steps. This can be achieved using the
start_at_step and end_at_step options.
2023-01-31 03:09:38 -05:00
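
A conceptual sketch of splitting one run into two passes with a step boundary: sample steps [0, split), modify the intermediate latent, then finish steps [split, total). The sample_range and edit_latent callables are hypothetical; only start_at_step / end_at_step come from the commit message.

    def split_sampling(latent, sample_range, edit_latent, total_steps=20, split=10):
        partial = sample_range(latent, steps=total_steps,
                               start_at_step=0, end_at_step=split)
        partial = edit_latent(partial)                     # e.g. upscale or mask the latent
        return sample_range(partial, steps=total_steps,
                            start_at_step=split, end_at_step=total_steps)
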
comfyanonymous c4b02059d0 Add ConditioningSetArea node,
to apply conditioning/prompts only to a specific area of the image.

Add ConditioningCombine node,
so that multiple conditioning/prompts can be applied to the image at the
same time.
2023-01-26 12:06:48 -05:00
comfyanonymous 220afe3310 Initial commit. 2023-01-16 22:37:14 -05:00