Commit Graph

2699 Commits

Author SHA1 Message Date
comfyanonymous c45d1b9b67 Add a function to load a unet from a state dict. 2023-11-27 17:41:29 -05:00
comfyanonymous f30b992b18 .sigma and .timestep now return tensors on the same device as the input. 2023-11-27 16:41:33 -05:00
comfyanonymous 488de0b4df ModelSamplingDiscreteLCM -> ModelSamplingDiscreteDistilled 2023-11-27 16:32:03 -05:00
comfyanonymous 13fdee6abf Try to free memory for both cond+uncond before inference. 2023-11-27 14:55:40 -05:00
comfyanonymous be71bb5e13 Tweak memory inference calculations a bit. 2023-11-27 14:04:16 -05:00
pythongosssss 9be0b30cf1 fix formatting 2023-11-27 14:02:50 +00:00
pythongosssss 34eccd863b Add simple undo redo history 2023-11-27 14:00:15 +00:00
comfyanonymous 96c2deeefb Merge branch 'path_error_fix' of https://github.com/jeske/ComfyUI 2023-11-27 02:06:08 -05:00
David Jeske edd6f75d3a better error for invalid output paths 2023-11-26 13:10:31 -07:00
Jack Bauer 6aa1bcd601 Remove hard coded max_items in history API 2023-11-26 17:23:11 +04:00
comfyanonymous 39e75862b2 Fix regression from last commit. 2023-11-26 03:43:02 -05:00
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
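A hypothetical sketch of the pattern this commit describes, i.e. everything in transformer_options being passed through into the extra_options dict that transformer patches receive; the function name and the extra per-call key are assumptions for illustration, not ComfyUI's actual API:

```python
# Hypothetical sketch only, not ComfyUI's code: the commit says everything in
# transformer_options now ends up in the extra_options dict handed to patches.
def build_extra_options(transformer_options: dict, block_index: int) -> dict:
    extra_options = dict(transformer_options)   # pass everything through
    extra_options["block_index"] = block_index  # illustrative per-call extra
    return extra_options
```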
comfyanonymous 5b37270d3a Add a lora loader node for models with no CLIP. 2023-11-25 02:26:50 -05:00
comfyanonymous 5d6dfce548 Fix importing diffusers unets. 2023-11-24 20:35:29 -05:00
comfyanonymous e020ab61f9 Fix output APNG not working with ffmpeg. 2023-11-24 18:24:19 -05:00
comfyanonymous 8ad5d494d5 Fix APNG not working in ffmpeg. 2023-11-24 18:14:17 -05:00
comfyanonymous 916e9c998c Use same default fps as webp node. 2023-11-24 11:19:23 -05:00
comfyanonymous eff24ea6aa Add a node to save animated PNG files. These work in ffmpeg unlike webp. 2023-11-24 11:12:10 -05:00
comfyanonymous 3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 2023-11-24 03:55:35 -05:00
comfyanonymous 982338b9bb Fix issue loading webp files in UI. 2023-11-24 02:08:08 -05:00
comfyanonymous c782cf3ea9 Add to Readme that Stable Video Diffusion is supported. 2023-11-24 00:27:08 -05:00
comfyanonymous 02ffbb2de3 Fix typo. 2023-11-23 23:20:07 -05:00
comfyanonymous 42dfae6331 Nodes to properly use the SVD img2vid checkpoint.
The img2vid model is conditioned only on CLIP vision output, which means
there is no CLIP text model; that is why I added an ImageOnlyCheckpointLoader
to load it. Note that the unCLIPCheckpointLoader can also load it because it
also has a CLIP_VISION output.

SVD_img2vid_Conditioning is the node used to pass the right conditioning
to the img2vid model.

VideoLinearCFGGuidance applies a linearly decreasing CFG scale across the
video frames, from the cfg set in the sampler node down to min_cfg.

SVD_img2vid_Conditioning can be found in conditioning->video_models
ImageOnlyCheckpointLoader can be found in loaders->video_models
VideoLinearCFGGuidance can be found in sampling->video_models
2023-11-23 19:48:49 -05:00
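As a rough sketch of the per-frame CFG schedule described in the commit above (a minimal illustration, not ComfyUI's actual VideoLinearCFGGuidance implementation; the function name and the assumption that the first tensor dimension indexes video frames are mine):

```python
import torch

def linear_cfg_per_frame(cond, uncond, cfg, min_cfg):
    """Blend conditional/unconditional predictions with a CFG scale that
    decreases linearly from `cfg` (first frame) to `min_cfg` (last frame).
    Assumes dim 0 of both tensors indexes the video frames."""
    frames = cond.shape[0]
    scale = torch.linspace(float(cfg), float(min_cfg), frames,
                           device=cond.device, dtype=cond.dtype)
    scale = scale.reshape(frames, *([1] * (cond.ndim - 1)))  # broadcast over C,H,W
    return uncond + scale * (cond - uncond)
```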
comfyanonymous 871cc20e13 Support SVD img2vid model. 2023-11-23 19:41:33 -05:00
Enrico Fasoli 1964bf1e78 fix: folder handling issues 2023-11-23 22:24:58 +01:00
comfyanonymous 022033a0e7 Fix SaveAnimatedWEBP not working when metadata is disabled. 2023-11-23 15:39:35 -05:00
pythongosssss 4d2437e681 Call widget onRemove to remove element 2023-11-23 19:43:55 +00:00
comfyanonymous a657f96c5c Add a node to save animated webp. 2023-11-23 14:28:41 -05:00
comfyanonymous 87031a1945 Update readme with link to LCM example page. 2023-11-23 11:59:11 -05:00
comfyanonymous d03d8aa2e3 Fix loading groups. 2023-11-23 01:09:15 -05:00
comfyanonymous 410bf07771 Make VAE memory estimation take dtype into account. 2023-11-22 18:17:19 -05:00
comfyanonymous 32447f0c39 Add sampling_settings so models can specify specific sampling settings. 2023-11-22 17:24:00 -05:00
pythongosssss 70d2ea0faa Control filter list (#2009)
* Add control_filter_list to filter items after queue

* fix regex

* backwards compatibility

* formatting

* revert

* Add and fix test
2023-11-22 12:52:20 -05:00
comfyanonymous 1ca4802e8c Merge branch 'hide-if-collapsed' of https://github.com/pythongosssss/ComfyUI 2023-11-22 11:46:21 -05:00
pythongosssss ab7d4f7848 Handle collapsing to hide element 2023-11-22 13:53:30 +00:00
comfyanonymous c3ae99a749 Allow controlling downscale and upscale methods in PatchModelAddDownscale. 2023-11-22 03:23:16 -05:00
comfyanonymous 72741105a6 Remove useless code. 2023-11-21 17:27:28 -05:00
comfyanonymous 6a491ebe27 Allow model config to preprocess the vae state dict on load. 2023-11-21 16:29:18 -05:00
comfyanonymous d66b631d74 Merge branch 'fix-collapsed-clip' of https://github.com/pythongosssss/ComfyUI 2023-11-21 13:26:26 -05:00
comfyanonymous cd4fc77d5f Add taesd and taesdxl to VAELoader node.
They will show up if both the taesd_encoder and taesd_decoder (or the
corresponding taesdxl) model files are present in the models/vae_approx
directory.
2023-11-21 12:54:19 -05:00
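A hedged sketch of the availability check the commit above describes: the taesd / taesdxl entries are only offered when matching encoder and decoder files exist under models/vae_approx. The path, filename prefixes, and function name are assumptions for illustration, not the actual VAELoader code.

```python
import os

def approx_vaes_available(vae_approx_dir="models/vae_approx"):
    # List which approximate VAEs have both an encoder and a decoder file
    # present, as the commit requires before showing them in the VAELoader.
    files = os.listdir(vae_approx_dir) if os.path.isdir(vae_approx_dir) else []
    found = []
    for name in ("taesd", "taesdxl"):
        has_enc = any(f.startswith(name + "_encoder") for f in files)
        has_dec = any(f.startswith(name + "_decoder") for f in files)
        if has_enc and has_dec:
            found.append(name)
    return found
```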
pythongosssss 89e31abc46 Fix clipping of collapsed nodes 2023-11-21 17:54:01 +00:00
pythongosssss 6ff06fa796 Animated image output support (#2008)
* Refactor multiline widget into generic DOM widget

* wip webp preview

* webp support

* fix check

* fix sizing

* show image when zoomed out

* Swap webp check to generic animated image flag

* remove duplicate

* Fix falsy check
2023-11-21 01:33:58 -05:00
comfyanonymous ce67dcbcda Make it easy for models to process the unet state dict on load. 2023-11-20 23:17:53 -05:00
comfyanonymous 2dd5b4dd78 Only show last 200 elements in the UI history tab. 2023-11-20 16:56:29 -05:00
comfyanonymous a03dde190e Cap maximum history size at 10000. Delete oldest entry when reached. 2023-11-20 16:38:39 -05:00
comfyanonymous 31c5ea7b2c Add LatentInterpolate to interpolate between latents. 2023-11-20 03:55:51 -05:00
comfyanonymous dba4f3b4fc Add a RepeatImageBatch node. 2023-11-19 06:09:01 -05:00
comfyanonymous d9d8702d8d percent_to_sigma now returns a float instead of a tensor. 2023-11-18 23:20:29 -05:00
comfyanonymous 8a451234b3 Add ImageCrop node. 2023-11-18 04:44:17 -05:00
comfyanonymous 0cf4e86939 Add some command line arguments to store text encoder weights in fp8.
PyTorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
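For context on what those flags mean in practice, a minimal illustration (not ComfyUI's code) of storing weights in PyTorch's two fp8 formats and upcasting before use; the tensor below is a made-up stand-in for text encoder weights and requires a PyTorch build with fp8 dtypes (2.1+):

```python
import torch

w = torch.randn(4096, 768)            # stand-in for a text encoder weight matrix
w_fp8 = w.to(torch.float8_e4m3fn)     # variant selected by --fp8_e4m3fn-text-enc
# w_fp8 = w.to(torch.float8_e5m2)     # variant selected by --fp8_e5m2-text-enc
w_compute = w_fp8.to(torch.float16)   # upcast to a compute dtype before the matmul
```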