Commit Graph

35 Commits

Author SHA1 Message Date
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
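The merge-metric change in the commit above (pairing tokens by q rather than x) can be illustrated with a toy, dependency-free sketch. Here `keys` stands in for whichever tensor drives the pairing and `tokens` are the values actually merged; names are hypothetical, and the real tomesd code is bipartite, batched, and tensor-based rather than this greedy single-pair version.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def merge_most_similar(tokens, keys):
    """Merge (average) the pair of tokens whose *keys* are most similar.

    `keys` is the metric used to pick the pair (q in the commit, x before);
    swapping the metric changes which tokens get merged without changing
    the merge itself.  Greedy single-pair sketch, not the real tomesd code.
    """
    best, bi, bj = -2.0, 0, 1
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            s = cosine(keys[i], keys[j])
            if s > best:
                best, bi, bj = s, i, j
    merged = [(a + b) / 2 for a, b in zip(tokens[bi], tokens[bj])]
    out = [t for k, t in enumerate(tokens) if k not in (bi, bj)]
    out.append(merged)
    return out
```

With this framing, "use q instead of x" is just passing a different `keys` argument while leaving `tokens` untouched.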
comfyanonymous fa28d7334b Remove useless code. 2023-06-23 12:35:26 -04:00
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous ae43f09ef7 All the unet weights should now be initialized with the right dtype. 2023-06-15 18:42:30 -04:00
comfyanonymous 7bf89ba923 Initialize more unet weights as the right dtype. 2023-06-15 15:00:10 -04:00
comfyanonymous e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 2023-06-15 14:29:26 -04:00
comfyanonymous 21f04fe632 Disable default weight values in unet conv2d for faster loading. 2023-06-14 19:46:08 -04:00
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2023-05-20 16:01:02 -04:00
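The commit above adds an OOM fallback around the fast attention path. A minimal sketch of that pattern, with hypothetical function names: in the real PyTorch code the fast path would be `F.scaled_dot_product_attention` and the caught exception `torch.cuda.OutOfMemoryError`, but `MemoryError` stands in here so the sketch stays dependency-free.

```python
def attention_with_oom_fallback(q, k, v, fast_attn, sliced_attn):
    """Try the fast attention path; fall back to sliced attention on OOM.

    Hypothetical names; in PyTorch the caught exception would be
    torch.cuda.OutOfMemoryError rather than MemoryError.
    """
    try:
        return fast_attn(q, k, v)
    except MemoryError:
        # Sliced attention trades speed for a much smaller peak allocation.
        return sliced_attn(q, k, v)
```

The point of the pattern is that the fallback only costs anything when the fast path actually fails, so typical runs keep full speed.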
comfyanonymous 797c4e8d3b Simplify and improve some vae attention code. 2023-05-20 15:07:21 -04:00
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2023-05-05 18:11:41 -04:00
comfyanonymous bae4fb4a9d Fix imports. 2023-05-04 18:10:29 -04:00
comfyanonymous ba8a4c3667 Change latent resolution step to 8. 2023-05-02 14:17:51 -04:00
comfyanonymous 66c8aa5c3e Make unet work with any input shape. 2023-05-02 13:31:43 -04:00
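The two commits above concern input shapes: the unet downsamples spatially by powers of two, so accepting "any input shape" generally means padding dimensions up to a multiple of the total downscale factor. A hypothetical helper sketching that idea (the actual repo may instead pad feature maps per level):

```python
import math

def pad_to_multiple(h, w, multiple=64):
    """Round spatial dims up so they divide evenly through all unet levels.

    Hypothetical helper; `multiple` is the product of all downsampling
    factors (e.g. 64 pixels = 8 latent units * 8x VAE downscale).
    """
    ph = math.ceil(h / multiple) * multiple
    pw = math.ceil(w / multiple) * multiple
    return ph, pw
```

After running the model on the padded input, the output would be cropped back to the requested size.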
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2023-04-04 22:22:02 -04:00
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
comfyanonymous 61ec3c9d5d Add a way to pass options to the transformers blocks. 2023-03-31 13:04:39 -04:00
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2023-03-22 14:49:00 -04:00
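Tiled decoding, used above as the OOM fallback for the regular VAE decode, processes the latent in overlapping tiles and blends the overlaps. A simplified 1-D stand-in (the real code works on 2-D latents and runs each tile through the VAE; `decode` here is assumed to map a list of floats to a list of the same length):

```python
def decode_tiled(latent, decode, tile=8, overlap=2):
    """Decode a 1-D 'latent' in overlapping tiles, averaging the overlaps.

    Simplified sketch: peak memory is bounded by the tile size instead of
    the full latent, at the cost of decoding overlap regions twice.
    """
    n = len(latent)
    out = [0.0] * n
    weight = [0.0] * n
    step = tile - overlap
    start = 0
    while start < n:
        end = min(start + tile, n)
        piece = decode(latent[start:end])
        for i, v in enumerate(piece):
            out[start + i] += v
            weight[start + i] += 1.0
        if end == n:
            break
        start += step
    return [o / w for o, w in zip(out, weight)]
```

The overlap-and-average step is what hides the seams that naive non-overlapping tiles would produce.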
comfyanonymous c692509c2b Try to improve VAEEncode memory usage a bit. 2023-03-22 02:45:18 -04:00
comfyanonymous 54dbfaf2ec Remove omegaconf dependency and some ci changes. 2023-03-13 14:49:18 -04:00
comfyanonymous 83f23f82b8 Add pytorch attention support to VAE. 2023-03-13 12:45:54 -04:00
comfyanonymous a256a2abde --disable-xformers should not even try to import xformers. 2023-03-13 11:36:48 -04:00
comfyanonymous 0f3ba7482f Xformers is now properly disabled when --cpu used.

Added --windows-standalone-build option; currently it only makes
ComfyUI open automatically in the browser.
2023-03-12 15:44:16 -04:00
comfyanonymous 1de86851b1 Try to fix memory issue. 2023-03-11 15:15:13 -05:00
comfyanonymous cc8baf1080 Make VAE use common function to get free memory. 2023-03-05 14:20:07 -05:00
comfyanonymous fcb25d37db Prepare for t2i adapter. 2023-02-24 23:36:17 -05:00
comfyanonymous 09f1d76ed8 Fix an OOM issue. 2023-02-17 16:21:01 -05:00
comfyanonymous 4efa67fa12 Add ControlNet support. 2023-02-16 10:38:08 -05:00
comfyanonymous 509c7dfc6d Use real softmax in split op to fix issue with some images. 2023-02-10 03:13:49 -05:00
comfyanonymous 1f6a467e92 Update ldm dir with latest upstream stable diffusion changes. 2023-02-09 13:47:36 -05:00
comfyanonymous 773cdabfce Apply the same split optimization to the other places where it's used. 2023-02-09 12:43:29 -05:00
comfyanonymous e8c499ddd4 Split optimization for VAE attention block. 2023-02-08 22:04:20 -05:00
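The split optimization above computes attention a block of query rows at a time, so only a slice of the full attention matrix exists at once. A plain-Python sketch (the real code is tensor-based; q, k, v here are lists of equal-length float vectors, and the numerically stable softmax below is the "real softmax" a later commit switches to):

```python
import math

def softmax(xs):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def sliced_attention(q, k, v, slice_size=2):
    """softmax(q . k^T) . v, computed `slice_size` query rows at a time.

    Sketch of the split optimization: peak memory is bounded by
    slice_size * len(k) scores instead of len(q) * len(k).
    """
    dim = len(v[0])
    out = []
    for start in range(0, len(q), slice_size):
        for row in q[start:start + slice_size]:
            scores = [sum(a * b for a, b in zip(row, krow)) for krow in k]
            probs = softmax(scores)
            out.append([sum(p * vr[d] for p, vr in zip(probs, v))
                        for d in range(dim)])
    return out
```

Because each output row depends only on its own query row, slicing changes memory use but not the result.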
comfyanonymous 5b4e312749 Use inplace operations for less OOM issues. 2023-02-08 22:04:13 -05:00
comfyanonymous 220afe3310 Initial commit. 2023-01-16 22:37:14 -05:00