Commit Graph

2717 Commits

Author SHA1 Message Date
Rafie Walker 6761233e9d
Implement Self-Attention Guidance (#2201)
* First SAG test

* Need to put extra options on the model instead of the patcher

* no errors and results seem not-broken

* Use the @ashen-uncensored formula, which works better!

* Fix a crash when using weird resolutions. Remove an unnecessary UNet call

* Improve comments, optimize memory in blur routine

* SAG works with sampler_cfg_function
2023-12-13 15:52:11 -05:00
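
For context, a minimal sketch of how a SAG term is typically layered on top of classifier-free guidance. The tensor names, the simple Gaussian blur used to build the "degraded" prediction, and the default scales are illustrative assumptions, not the exact code from this PR:

```python
import torch
import torch.nn.functional as F

def gaussian_blur(x: torch.Tensor, kernel_size: int = 9, sigma: float = 1.0) -> torch.Tensor:
    """Separable per-channel Gaussian blur used to degrade the latent."""
    coords = torch.arange(kernel_size, dtype=x.dtype, device=x.device) - kernel_size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, 1, -1)
    c = x.shape[1]
    x = F.conv2d(x, g.expand(c, 1, 1, kernel_size).contiguous(),
                 padding=(0, kernel_size // 2), groups=c)
    x = F.conv2d(x, g.view(1, 1, -1, 1).expand(c, 1, kernel_size, 1).contiguous(),
                 padding=(kernel_size // 2, 0), groups=c)
    return x

def sag_cfg(cond_pred, uncond_pred, degraded_pred, cfg_scale=7.5, sag_scale=0.75):
    # Standard classifier-free guidance...
    cfg = uncond_pred + cfg_scale * (cond_pred - uncond_pred)
    # ...plus the SAG correction: push the result away from the prediction
    # made on a blurred ("degraded") copy of the latent.
    return cfg + sag_scale * (uncond_pred - degraded_pred)
```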
pythongosssss 390078904c
Group node fixes (#2259)
* Prevent cleaning graph state on undo/redo

* Remove pause rendering due to LG bug

* Fix crash on disconnected internal reroutes

* Fix widget inputs being incorrect order and value

* Fix initial primitive values on connect

* Basic support for rerouted converted inputs

* Populate primitive to reroute input

* Don't crash on bad primitive links

* Fix convert to group changing control value

* Reduce restrictions

* Fix random crash in tests
2023-12-13 00:56:39 -05:00
comfyanonymous b454a67bb9 Support segmind vega model. 2023-12-12 19:09:53 -05:00
comfyanonymous 824e4935f5 Add dtype parameter to VAE object. 2023-12-12 12:03:29 -05:00
comfyanonymous 32b7e7e769 Add manual cast to controlnet. 2023-12-12 11:32:42 -05:00
comfyanonymous 3152023fbc Use inference dtype for unet memory usage estimation. 2023-12-11 23:50:38 -05:00
comfyanonymous 77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it clearer what these ops actually do.

Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
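
As a rough illustration of what a module named disable_weight_init does (assumed names, a sketch rather than the actual comfy.ops code): layer subclasses whose parameter initialization is a no-op, so constructing a model is cheap and the weights only get real values when a checkpoint is loaded.

```python
import torch.nn as nn

class Linear(nn.Linear):
    def reset_parameters(self):
        # Skip the usual kaiming/uniform init; the real weights will be
        # overwritten by the checkpoint, so initializing them is wasted work.
        return None

class Conv2d(nn.Conv2d):
    def reset_parameters(self):
        return None

# Model code then instantiates these instead of the torch classes,
# e.g. proj = Linear(320, 320, bias=True)
```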
comfyanonymous b0aab1e4ea Add an option --fp16-unet to force using fp16 for the unet. 2023-12-11 18:36:29 -05:00
comfyanonymous ba07cb748e Use faster manual cast for fp8 in unet. 2023-12-11 18:24:44 -05:00
pythongosssss ab93abd4b2
Prevent cleaning graph state on undo/redo (#2255)
* Prevent cleaning graph state on undo/redo

* Remove pause rendering due to LG bug
2023-12-11 12:33:35 -05:00
comfyanonymous 57926635e8 Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
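
A hedged sketch of the manual-cast pattern referenced here: weights stay in a small storage dtype (fp16 in this case, halving memory) and are cast to the compute dtype only inside forward(). The class below is an illustration, not the actual comfy.ops implementation:

```python
import torch
import torch.nn as nn

class ManualCastLinear(nn.Linear):
    compute_dtype = torch.float32  # what the layer actually computes in

    def forward(self, x):
        # Cast the stored low-precision weights up just for this call.
        weight = self.weight.to(dtype=self.compute_dtype, device=x.device)
        bias = self.bias.to(dtype=self.compute_dtype, device=x.device) if self.bias is not None else None
        return nn.functional.linear(x.to(self.compute_dtype), weight, bias)
```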
Dr.Lt.Data 69033081c5 Mask editor bugfix
- Address the issue where an unnecessary hidden panel disrupts drawing.
2023-12-11 00:24:28 +09:00
comfyanonymous 340177e6e8 Disable non blocking on mps. 2023-12-10 01:30:35 -05:00
comfyanonymous 614b7e731f Implement GLora. 2023-12-09 18:15:26 -05:00
comfyanonymous cb63e230b4 Make lora code a bit cleaner. 2023-12-09 14:15:09 -05:00
comfyanonymous 9e411073e9 Add instructions for those who have Python 3.12. 2023-12-09 13:41:30 -05:00
comfyanonymous eccc9e64a6 Merge branch 'group-reroute-fix' of https://github.com/pythongosssss/ComfyUI 2023-12-09 12:01:26 -05:00
comfyanonymous da74e3bbe3 Update pytorch nightly packaging workflow. 2023-12-09 12:01:17 -05:00
comfyanonymous 174eba8e95 Use own clip vision model implementation. 2023-12-09 11:56:31 -05:00
pythongosssss 080ef75c31 fix 2023-12-09 13:19:21 +00:00
pythongosssss 9aaf368a41 Fix internal reroutes connected to other groups 2023-12-09 13:04:35 +00:00
comfyanonymous 97015b6b38 Cleanup. 2023-12-08 16:02:08 -05:00
comfyanonymous a4ec54a40d Add linear_start and linear_end to model_config.sampling_settings 2023-12-08 02:49:30 -05:00
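
For reference, linear_start and linear_end are the endpoints of the scaled-linear beta schedule used by Stable-Diffusion-style models; a small hypothetical sketch of how such settings feed a schedule (the function name is illustrative):

```python
import torch

def make_beta_schedule(num_timesteps=1000, linear_start=0.00085, linear_end=0.012):
    # "scaled linear" schedule: interpolate between the sqrt endpoints, then square
    return torch.linspace(linear_start ** 0.5, linear_end ** 0.5, num_timesteps) ** 2
```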
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 2023-12-08 02:35:45 -05:00
comfyanonymous cdff081023 Fix hypertile. 2023-12-07 15:22:35 -05:00
comfyanonymous efb704c758 Support attention masking in CLIP implementation. 2023-12-07 02:51:02 -05:00
comfyanonymous 248d9125b0 Merge branch 'ht_deterministic' of https://github.com/asagi4/ComfyUI 2023-12-07 01:45:11 -05:00
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
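
A compact sketch of what a hand-rolled CLIP-style text encoder looks like (token + position embeddings, causal pre-norm transformer, final LayerNorm, pooled output at the end-of-text token), which is the kind of model that is easier to hook into than the transformers version. Dimensions roughly follow CLIP ViT-L's text tower; the class and its details are illustrative assumptions, not the new ComfyUI implementation:

```python
import torch
import torch.nn as nn

class SimpleCLIPTextModel(nn.Module):
    def __init__(self, vocab=49408, max_len=77, dim=768, heads=12, layers=12):
        super().__init__()
        self.token_emb = nn.Embedding(vocab, dim)
        self.pos_emb = nn.Parameter(torch.zeros(max_len, dim))
        block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4,
            activation="gelu", norm_first=True, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=layers)
        self.final_norm = nn.LayerNorm(dim)
        # Causal attention mask: -inf above the diagonal, 0 elsewhere.
        self.register_buffer("causal_mask",
                             torch.full((max_len, max_len), float("-inf")).triu(1))

    def forward(self, tokens):                    # tokens: (batch, 77) int64
        x = self.token_emb(tokens) + self.pos_emb
        x = self.blocks(x, mask=self.causal_mask)
        x = self.final_norm(x)
        # Pooled output = hidden state at the end-of-text token (highest token id).
        pooled = x[torch.arange(x.shape[0]), tokens.argmax(dim=-1)]
        return x, pooled
```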
asagi4 03eadbb53c Make HyperTile deterministic 2023-12-06 21:17:56 +02:00
comfyanonymous 2db86b4676 Slightly faster lora applying. 2023-12-06 05:13:14 -05:00
comfyanonymous e134547341 Merge branch 'reroute-converted-inputs' of https://github.com/pythongosssss/ComfyUI
# Conflicts:
#	web/extensions/core/widgetInputs.js
2023-12-06 03:01:35 -05:00
Dr.Lt.Data 8112a0d9fc
improve: Mask Editor (#2171)
* Renewed mask editor

* fix: ignore keydown when opened a second time
2023-12-06 01:56:03 -05:00
comfyanonymous ef29542030 Merge branch 'primitive-text-replacement' of https://github.com/pythongosssss/ComfyUI 2023-12-05 23:11:03 -05:00
pythongosssss 8de6f94f5c Allow widget placeholder replacement on primitives 2023-12-05 21:02:10 +00:00
pythongosssss bcc469a2c9 Try to stop the test from failing 2023-12-05 20:28:52 +00:00
pythongosssss a99da6667f reroute + primitive tests 2023-12-05 20:28:05 +00:00
pythongosssss 44265e0810 Allow connecting PrimitiveNode to reroutes 2023-12-05 20:27:13 +00:00
comfyanonymous 1bbd65ab30 Missed this one. 2023-12-05 12:48:41 -05:00
comfyanonymous 9b655d4fd7 Fix memory issue with control loras. 2023-12-04 21:55:19 -05:00
comfyanonymous 26b1c0a771 Fix control lora on fp8. 2023-12-04 13:47:41 -05:00
comfyanonymous be3468ddd5 Less useless downcasting. 2023-12-04 12:53:46 -05:00
comfyanonymous ca82ade765 Use .itemsize to get dtype size for fp8. 2023-12-04 11:52:06 -05:00
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet select the two fp8 formats
supported by PyTorch.
2023-12-04 11:10:00 -05:00
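
A minimal illustration of the storage idea behind these flags, assuming a recent PyTorch build that ships the fp8 dtypes: weights are kept in fp8 (1 byte each, half the size of fp16) and cast up to a compute dtype just before use, since general fp8 matmuls aren't available. The tensor names are illustrative:

```python
import torch

storage_dtype = torch.float8_e4m3fn   # or torch.float8_e5m2
compute_dtype = torch.float32

weight_fp16 = torch.randn(320, 320, dtype=torch.float16)
weight_fp8 = weight_fp16.to(storage_dtype)        # half the memory of fp16

x = torch.randn(1, 320, dtype=compute_dtype)
y = x @ weight_fp8.to(compute_dtype).t()          # cast back just for the matmul
```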
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 2023-12-04 03:12:18 -05:00
comfyanonymous 6efe561c2a Merge branch 'fix-template-sorting' of https://github.com/pythongosssss/ComfyUI 2023-12-03 22:51:23 -05:00
pythongosssss 77ab2c3f69 fix template sorting 2023-12-03 17:17:23 +00:00
pythongosssss 44d8abadf0 allow muting group node 2023-12-03 17:04:16 +00:00
pythongosssss 496de0891d Allow removing erroring embedded groups
Unregister group nodes on workflow change
2023-12-03 16:49:48 +00:00
comfyanonymous 61a123a1e0 A different way of handling multiple images passed to SVD.
Previously, when a list of 3 images [0, 1, 2] was used for a 6-frame video,
they were concatenated like this:
[0, 1, 2, 0, 1, 2]

Now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
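
The two orderings, illustrated with a tiny self-contained tensor example (variable names are illustrative):

```python
import torch

images = torch.arange(3).view(3, 1)   # stand-in for 3 conditioning images
frames = 6
repeats = frames // images.shape[0]

old = images.repeat(repeats, 1)                    # 0, 1, 2, 0, 1, 2
new = images.repeat_interleave(repeats, dim=0)     # 0, 0, 1, 1, 2, 2
print(old.flatten().tolist(), new.flatten().tolist())
```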
comfyanonymous b2517b4ceb Load api workflow if regular workflow isn't in loaded image. 2023-12-02 13:56:11 -05:00