* Update node_helpers.py to use a generic pillow wrapper to resolve multiple metadata-related issues.
Replaced the open_image function with a generic pillow function that takes PIL functions via dependency injection and applies the ImageFile.LOAD_TRUNCATED_IMAGES try/except fix to them.
This provides an extensible function that can wrap offending functions as related errors are discovered, without the need to repeat code.
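A minimal sketch of what such a wrapper might look like (the exact implementation in node_helpers.py may differ; the error types caught here are an assumption):

```python
from PIL import ImageFile, UnidentifiedImageError

def pillow(fn, arg):
    # Call the injected PIL function normally first; on metadata-related
    # failures, retry with LOAD_TRUNCATED_IMAGES enabled, then restore it.
    prev_value = None
    try:
        x = fn(arg)
    except (OSError, UnidentifiedImageError, ValueError):
        prev_value = ImageFile.LOAD_TRUNCATED_IMAGES
        ImageFile.LOAD_TRUNCATED_IMAGES = True
        x = fn(arg)
    finally:
        if prev_value is not None:
            ImageFile.LOAD_TRUNCATED_IMAGES = prev_value
    return x
```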
* Update a few PIL function calls to use the node_helpers.pillow wrapper
Updated PIL function calls in a few locations to use the generic node_helpers.pillow wrapper, which takes the function via dependency injection and applies the try/except fix with ImageFile.LOAD_TRUNCATED_IMAGES.
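For illustration, a call site changes roughly like this (image_path is a placeholder name, not taken from the repo):

```python
import node_helpers
from PIL import Image, ImageOps

image_path = "example.png"  # placeholder

# before: img = Image.open(image_path)
img = node_helpers.pillow(Image.open, image_path)
# before: img = ImageOps.exif_transpose(img)
img = node_helpers.pillow(ImageOps.exif_transpose, img)
```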
* Corrected the comment listing the issue #s fixed.
* Update node_helpers.py to remove import of Image from PIL
The import of Image is no longer required as the functions are injected.
* Fix issue with how PIL loads small PNG files in nodes.py
Added a flag to prevent "ValueError: Decompressed Data Too Large"
when loading PNG images with large metadata such as large embedded color profiles.
* Update LoadImage node to fix an error when loading PNGs in nodes.py
Fixed "ValueError: Decompressed Data Too Large" thrown by PIL when attempting to open PNG files with large embedded ICC colorspaces by setting the following flag to True when loading PNG images: ImageFile.LOAD_TRUNCATED_IMAGES = True
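A minimal sketch of the flag-based fix described in these two entries (where exactly the assignment lives in nodes.py is assumed):

```python
from PIL import Image, ImageFile

# Tolerate oversized/truncated metadata chunks (e.g. large embedded ICC
# profiles) instead of raising "Decompressed Data Too Large".
ImageFile.LOAD_TRUNCATED_IMAGES = True

image_path = "example.png"  # placeholder
img = Image.open(image_path)
```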
* Update node_helpers.py to include open_image helper function
open_image includes a try/except to catch the Pillow ValueErrors that occur when large ICC profiles are embedded in images.
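A possible shape for this helper, as a sketch only (the actual code in node_helpers.py may differ):

```python
from PIL import Image, ImageFile

def open_image(path):
    try:
        img = Image.open(path)
    except ValueError:
        # Large embedded ICC profiles can raise "Decompressed Data Too Large";
        # retry with truncated-metadata loading enabled.
        ImageFile.LOAD_TRUNCATED_IMAGES = True
        img = Image.open(path)
    return img
```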
* Update LoadImage node to use the open_image helper function in place of Image.open
The open_image helper function in node_helpers.py fixes a Pillow error when attempting to open images with large embedded ICC profiles by adding an exception handler that loads the image with truncated metadata if regular loading is not possible.
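Illustrative before/after inside the LoadImage node (variable names are placeholders):

```python
import node_helpers

image_path = "example.png"  # placeholder

# before:
#   i = Image.open(image_path)
# after:
i = node_helpers.open_image(image_path)
```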
This sampler is an LCM sampler that upscales the latent during sampling.
It can be used to generate at a higher resolution with an LCM model very
quickly.
To try it, use it with a basic 5-step LCM workflow with scale_ratio 1.5 or 2.0.
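A rough sketch of the idea, not the actual node implementation; the function and parameter names here (e.g. upscale_after_step) are hypothetical:

```python
import torch
import torch.nn.functional as F

def sample_lcm_upscale(model, x, sigmas, scale_ratio=1.5, upscale_after_step=2):
    # model(x, sigma) is assumed to return the denoised (clean) latent.
    for i in range(len(sigmas) - 1):
        denoised = model(x, sigmas[i])
        x = denoised
        if sigmas[i + 1] > 0:
            # LCM-style step: re-noise the prediction for the next step.
            x = x + torch.randn_like(x) * sigmas[i + 1]
        if i + 1 == upscale_after_step:
            # Upscale the latent mid-sampling so the remaining steps refine
            # it at the higher resolution.
            x = F.interpolate(x, scale_factor=scale_ratio, mode="nearest-exact")
    return x
```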
* Implement Differential Diffusion
* Cleanup.
* Fix.
* Masks should be applied at full strength.
* Fix colors.
* Register the node.
* Cleaner code.
* Fix issue with getting unipc sampler.
* Adjust thresholds.
* Switch to linear thresholds.
* Only calculate nearest_idx on valid thresholds.
* First SAG test
* need to put extra options on the model instead of patcher
* no errors and results seem not-broken
* Use @ashen-uncensored formula, which works better!!!
* Fix a crash when using weird resolutions. Remove an unnecessary UNet call
* Improve comments, optimize memory in blur routine
* SAG works with sampler_cfg_function
The img2vid model is conditioned on CLIP vision output only, which means
there's no CLIP model, which is why I added an ImageOnlyCheckpointLoader to
load it. Note that the unClipCheckpointLoader can also load it because it
also has a CLIP_VISION output.
SDV_img2vid_Conditioning is the node used to pass the right conditioning
to the img2vid model.
VideoLinearCFGGuidance applies a linearly decreasing CFG scale to each
video frame, from the cfg set in the sampler node down to min_cfg (a sketch
of the idea follows the node locations below).
SDV_img2vid_Conditioning can be found in conditioning->video_models
ImageOnlyCheckpointLoader can be found in loaders->video_models
VideoLinearCFGGuidance can be found in sampling->video_models
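A sketch of the linear per-frame CFG idea, assuming the batch dimension holds the video frames; this is not the exact implementation:

```python
import torch

def video_linear_cfg(cond, uncond, cond_scale, min_cfg):
    frames = cond.shape[0]
    # Per-frame CFG scale, decreasing linearly from the sampler's cfg on the
    # first frame down to min_cfg on the last frame.
    scale = torch.linspace(cond_scale, min_cfg, frames, device=cond.device)
    scale = scale.reshape((frames,) + (1,) * (cond.dim() - 1))
    return uncond + scale * (cond - uncond)
```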
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.
percent_to_sigma goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.
Make percent 0 return an extremely large sigma and percent 1.0 return a
sigma of zero to fix imprecision.
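Illustratively, the boundary behavior looks like this sketch (the interior mapping into the model's sigma schedule is simplified and hypothetical):

```python
def percent_to_sigma(percent, sigmas):
    # sigmas: the model's sigma schedule, ordered from sigma_max down to sigma_min.
    if percent <= 0.0:
        return 999999999.9   # percent 0 -> an extremely large sigma
    if percent >= 1.0:
        return 0.0           # percent 1.0 -> a sigma of zero
    idx = int(percent * (len(sigmas) - 1))
    return float(sigmas[idx])
```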
This node takes a list of sigmas and a sampler object as input.
This lets people easily implement custom schedulers and samplers as nodes.
More nodes will be added to it in the future.
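For example, a custom scheduler can be exposed as a node that just outputs a SIGMAS tensor; this is an illustrative sketch, not a node shipped with the repo:

```python
import torch

class LinearScheduler:
    # Illustrative custom scheduler node: returns a simple linear sigma schedule.
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"steps": ("INT", {"default": 10, "min": 1, "max": 1000}),
                             "sigma_max": ("FLOAT", {"default": 14.6, "min": 0.0, "max": 1000.0})}}

    RETURN_TYPES = ("SIGMAS",)
    FUNCTION = "get_sigmas"
    CATEGORY = "sampling/custom_sampling"

    def get_sigmas(self, steps, sigma_max):
        sigmas = torch.linspace(sigma_max, 0.0, steps + 1)
        return (sigmas,)
```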