Commit Graph

189 Commits

Author SHA1 Message Date
space-nuko 00646b0813 Bitwise operations for masks 2023-05-27 21:48:49 -05:00
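The entry above is terse, but combining masks bitwise is a standard operation. As a rough illustration only (not the repository's node code; the float-mask convention and function name are assumptions), thresholded masks can be combined with boolean operators and converted back to floats:

```python
import torch

def mask_bitwise(a: torch.Tensor, b: torch.Tensor, op: str) -> torch.Tensor:
    """Combine two masks (float tensors in [0, 1]) with a bitwise-style op."""
    a_bool = a > 0.5          # threshold to boolean
    b_bool = b > 0.5
    if op == "and":
        out = a_bool & b_bool
    elif op == "or":
        out = a_bool | b_bool
    elif op == "xor":
        out = a_bool ^ b_bool
    else:
        raise ValueError(f"unknown op: {op}")
    return out.float()        # back to the usual float mask format

# Example: intersect two random 64x64 masks.
m1 = torch.rand(64, 64)
m2 = torch.rand(64, 64)
print(mask_bitwise(m1, m2, "and").shape)  # torch.Size([64, 64])
```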
comfyanonymous 7310290f17 Pull in latest upscale model code from chainner. 2023-05-23 22:26:50 -04:00
comfyanonymous 71666f248f Fix padding in Blur. 2023-05-20 10:08:47 -04:00
BlenderNeko 36af98d755 improve sharpen and blur nodes 2023-05-20 15:23:28 +02:00
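The two blur/sharpen entries above concern kernel construction and padding. A minimal sketch of a Gaussian blur that pads before convolving so the output keeps the input's size — illustrative only; the function names and the [B, H, W, C] layout are assumptions, not the code from these commits:

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int, sigma: float) -> torch.Tensor:
    """Build a normalized 2D Gaussian kernel of shape [size, size]."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

def blur(image: torch.Tensor, size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Blur a [B, H, W, C] image, padding so the result keeps its size."""
    b, h, w, c = image.shape
    x = image.permute(0, 3, 1, 2)                        # -> [B, C, H, W]
    kernel = gaussian_kernel(size, sigma).repeat(c, 1, 1, 1)
    pad = size // 2
    x = F.pad(x, (pad, pad, pad, pad), mode="reflect")   # avoid darkened edges
    x = F.conv2d(x, kernel, groups=c)                    # per-channel blur
    return x.permute(0, 2, 3, 1)                         # back to [B, H, W, C]

print(blur(torch.rand(1, 64, 64, 3)).shape)  # torch.Size([1, 64, 64, 3])
```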
comfyanonymous 587f89fe5a Enable safe loading for upscale models. 2023-05-14 15:10:40 -04:00
BlenderNeko 1201d2eae5 Make nodes map over input lists (#579)
* allow nodes to map over lists

* make work with IS_CHANGED and VALIDATE_INPUTS

* give list outputs distinct socket shape

* add rebatch node

* add batch index logic

* add repeat latent batch

* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
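The pull request above lets node inputs that arrive as lists be processed element-wise. A conceptual sketch of that mapping behavior, with a plain Python callable standing in for a node — this illustrates the idea, it is not ComfyUI's executor code:

```python
def map_node_over_lists(node_fn, **inputs):
    """Call node_fn element-wise when any input is a list of values."""
    list_inputs = {k: v for k, v in inputs.items() if isinstance(v, list)}
    if not list_inputs:
        return [node_fn(**inputs)]          # plain call, single result

    length = max(len(v) for v in list_inputs.values())
    results = []
    for i in range(length):
        call = {}
        for k, v in inputs.items():
            if isinstance(v, list):
                call[k] = v[min(i, len(v) - 1)]   # shorter lists repeat their last item
            else:
                call[k] = v
        results.append(node_fn(**call))
    return results

# Example: a "node" that scales a value, mapped over a list input.
scale = lambda value, factor: value * factor
print(map_node_over_lists(scale, value=[1, 2, 3], factor=10))  # [10, 20, 30]
```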
comfyanonymous 51583164ef Make MaskToImage support masks with a batch size. 2023-05-10 10:03:30 -04:00
comfyanonymous 1a31020081 Support softsign hypernetwork. 2023-05-05 00:16:57 -04:00
comfyanonymous fcf513e0b6 Refactor. 2023-05-03 17:48:35 -04:00
pythongosssss 5eeecf3fd5 remove unused import 2023-05-03 18:21:23 +01:00
pythongosssss 8912623ea9 use comfy progress bar 2023-05-03 18:19:22 +01:00
pythongosssss fdf57325f4 Merge remote-tracking branch 'origin/master' into tiled-progress 2023-05-03 17:33:42 +01:00
pythongosssss 06ad35b493 added progress to encode + upscale 2023-05-02 19:18:07 +01:00
comfyanonymous 07194297fd Python 3.7 support. 2023-04-25 14:02:17 -04:00
comfyanonymous 463bde66a1 Add hypernetwork example link to readme.
Move hypernetwork loader node to loaders.
2023-04-24 03:08:51 -04:00
comfyanonymous 4e345b31f6 Support all known hypernetworks. 2023-04-24 02:36:06 -04:00
comfyanonymous 5282f56434 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
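For context on the hypernetwork entries above and below: in this ecosystem a hypernetwork is a small per-dimension network that perturbs the keys and values fed to cross-attention. A toy sketch of a "linear" module (no activation between the layers; variants such as softsign insert one) — purely illustrative, not the code added in this commit:

```python
import torch
import torch.nn as nn

class LinearHypernetworkModule(nn.Module):
    """Two linear layers that perturb an attention key/value tensor:
    out = x + net(x). 'Linear' means no nonlinearity between the layers;
    other hypernetwork types insert an activation (e.g. softsign)."""
    def __init__(self, dim: int, multiplier: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * multiplier),
            nn.Linear(dim * multiplier, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.net(x)

# A loader would attach one module pair (keys and values) per attention
# dimension; here we just run a single module on a dummy context tensor.
context = torch.rand(2, 77, 768)          # [batch, tokens, dim]
hyper_k = LinearHypernetworkModule(768)
print(hyper_k(context).shape)             # torch.Size([2, 77, 768])
```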
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous 476d543fe8 Fix for older python.
from: https://github.com/comfyanonymous/ComfyUI/discussions/476
2023-04-15 10:56:15 -04:00
comfyanonymous d98a4de9eb LatentCompositeMasked: negative x, y don't work. 2023-04-14 00:49:19 -04:00
comfyanonymous f48f0872e2 Refactor: move nodes_mask_convertion nodes to nodes_mask. 2023-04-14 00:21:01 -04:00
comfyanonymous e1db7a2038 Merge branch 'image-to-mask' of https://github.com/missionfloyd/ComfyUI
# Conflicts:
#	nodes.py
2023-04-14 00:15:48 -04:00
comfyanonymous 35a2c790b6 Update comfy_extras/nodes_mask.py
Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com>
2023-04-14 00:12:15 -04:00
missionfloyd 9371924e65 Move mask conversion to separate file 2023-04-13 03:11:17 -06:00
mligaintart 022a9f271b Adds masking to Latent Composite, and provides new masking utilities to
allow better compositing.
2023-04-06 15:18:20 -04:00
comfyanonymous 871a76b77b Rename and reorganize post processing nodes. 2023-04-04 22:54:33 -04:00
comfyanonymous af291e6f69 Convert line endings to unix. 2023-04-04 13:56:13 -04:00
EllangoK 56196ab0f7 use common_upscale in blend 2023-04-04 10:57:34 -04:00
EllangoK fa2febc062 blend supports any size, dither -> quantize 2023-04-03 09:52:04 -04:00
EllangoK 4c7a9dbcb6 adds Blend, Blur, Dither, Sharpen nodes 2023-04-02 18:44:27 -04:00
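The three EllangoK entries above add basic post-processing nodes. As an illustration of the blend idea only (the modes and signature here are assumptions, not the node's actual code), two images can be combined per-mode and then mixed by a blend factor:

```python
import torch

def blend_images(a: torch.Tensor, b: torch.Tensor, factor: float, mode: str) -> torch.Tensor:
    """Blend two [B, H, W, C] images in [0, 1] and mix the result by `factor`."""
    if mode == "normal":
        blended = b
    elif mode == "multiply":
        blended = a * b
    elif mode == "screen":
        blended = 1.0 - (1.0 - a) * (1.0 - b)
    else:
        raise ValueError(f"unsupported mode: {mode}")
    return torch.clamp(a * (1.0 - factor) + blended * factor, 0.0, 1.0)

img1 = torch.rand(1, 64, 64, 3)
img2 = torch.rand(1, 64, 64, 3)
print(blend_images(img1, img2, factor=0.5, mode="screen").shape)  # torch.Size([1, 64, 64, 3])
```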
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
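The unCLIP entry above describes attaching an image embedding (from a CLIP vision encode) to the existing text conditioning so the sampler can use both. A rough conceptual sketch of that append step — the entry layout, key names, and shapes here are assumptions for illustration, not ComfyUI's actual conditioning format:

```python
import torch

def add_image_cond(conditioning, image_embed, strength=1.0):
    """conditioning: list of (cond_tensor, options_dict) pairs."""
    out = []
    for cond, options in conditioning:
        options = dict(options)                      # don't mutate the input
        entries = list(options.get("image_conds", []))
        entries.append({"embed": image_embed, "strength": strength})
        options["image_conds"] = entries
        out.append((cond, options))
    return out

text_cond = [(torch.rand(1, 77, 1024), {})]          # SD2.x-sized text cond
image_embed = torch.rand(1, 1024)                    # CLIP vision embedding
print(add_image_cond(text_cond, image_embed)[0][1].keys())  # dict_keys(['image_conds'])
```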
comfyanonymous 2e73367f45 Merge T2IAdapterLoader and ControlNetLoader.
Workflows will be auto updated.
2023-03-17 18:17:59 -04:00
comfyanonymous e1a9e26968 Add folder_paths so models can be in multiple paths. 2023-03-17 18:01:11 -04:00
comfyanonymous 494cfe5444 Prevent model_management from being loaded twice. 2023-03-15 15:18:18 -04:00
comfyanonymous c8f1acc4eb Put image upscaling nodes in image/upscaling category. 2023-03-11 18:10:36 -05:00
comfyanonymous 9db2e97b47 Tiled upscaling with the upscale models. 2023-03-11 14:04:13 -05:00
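The tiled-upscaling entry above bounds memory by running the upscale model on overlapping crops and stitching the results. A rough sketch of the idea (illustrative only; it assumes H and W are at least `tile`, and the repository's actual code blends seams rather than overwriting them):

```python
import torch

@torch.no_grad()
def upscale_tiled(model, image, scale=4, tile=128, overlap=16):
    """Upscale a [B, C, H, W] image tile-by-tile to bound memory use.
    `model` maps a tile to a tile `scale` times larger."""
    b, c, h, w = image.shape
    out = torch.zeros(b, c, h * scale, w * scale)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y0, x0 = min(y, h - tile), min(x, w - tile)   # keep tiles in bounds
            patch = image[:, :, y0:y0 + tile, x0:x0 + tile]
            out[:, :, y0 * scale:(y0 + tile) * scale,
                      x0 * scale:(x0 + tile) * scale] = model(patch)
    return out

# Stand-in "model": nearest-neighbour 4x upscale.
fake_model = lambda t: t.repeat_interleave(4, dim=2).repeat_interleave(4, dim=3)
print(upscale_tiled(fake_model, torch.rand(1, 3, 256, 256)).shape)
# torch.Size([1, 3, 1024, 1024])
```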
comfyanonymous 905857edd8 Take some code from chainner to implement ESRGAN and other upscale models. 2023-03-11 13:09:28 -05:00
comfyanonymous 7ec1dd25a2 A tiny bit of reorganizing. 2023-03-06 01:30:17 -05:00
comfyanonymous 47acb3d73e Implement support for t2i style model.
It needs the CLIPVision model so I added CLIPVisionLoader and CLIPVisionEncode.

Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models

StyleModelLoader to load it, StyleModelApply to apply it
ConditioningAppend to append the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
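The entry above describes the flow: encode a reference image with the CLIP vision model, run the result through the style model, and append what comes out to the positive conditioning. A conceptual sketch of that chain — the tensor shapes, class, and append step are assumptions for illustration, not the nodes added in this commit:

```python
import torch
import torch.nn as nn

class ToyStyleModel(nn.Module):
    """Projects a CLIP vision embedding into a few extra conditioning tokens."""
    def __init__(self, vision_dim=1024, cond_dim=768, num_tokens=8):
        super().__init__()
        self.proj = nn.Linear(vision_dim, cond_dim * num_tokens)
        self.num_tokens, self.cond_dim = num_tokens, cond_dim

    def forward(self, image_embed):
        return self.proj(image_embed).view(-1, self.num_tokens, self.cond_dim)

def apply_style(text_cond, style_tokens):
    """Append style tokens to each positive conditioning tensor."""
    return [torch.cat([cond, style_tokens], dim=1) for cond in text_cond]

image_embed = torch.rand(1, 1024)                    # from a CLIP vision encode
style_tokens = ToyStyleModel()(image_embed)          # [1, 8, 768]
positive = [torch.rand(1, 77, 768)]                  # text conditioning tokens
print(apply_style(positive, style_tokens)[0].shape)  # torch.Size([1, 85, 768])
```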