fbdb14d4c4
Use a simple CLIP model implementation instead of the one from transformers. This will allow some interesting things that would be too hackish to implement using the transformers implementation.
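One kind of "interesting thing" a hand-rolled CLIP text model makes easy is intercepting the token-embedding lookup, for example to substitute custom learned vectors (as textual inversion does) without monkey-patching a third-party class. The sketch below is purely illustrative and hypothetical — the names `embed_tokens` and `overrides` are assumptions, not the actual code from this commit:

```python
# Hypothetical sketch: an embedding lookup that lets specific token ids
# be replaced with custom vectors (e.g. textual-inversion embeddings).
# None of these names come from the actual commit.

def embed_tokens(token_ids, embedding_table, overrides=None):
    """Look up each token's embedding, preferring an override if one
    is registered for that token id."""
    overrides = overrides or {}
    return [overrides.get(t, embedding_table[t]) for t in token_ids]

# Toy 2-dimensional embedding table for three tokens.
table = {0: [0.0, 0.0], 1: [1.0, 0.0], 2: [0.0, 1.0]}

# Token id 2 gets a custom learned vector instead of its table entry.
custom = {2: [0.5, 0.5]}

print(embed_tokens([0, 2, 1], table, custom))
# -> [[0.0, 0.0], [0.5, 0.5], [1.0, 0.0]]
```

With the transformers implementation, achieving the same effect requires hooking into `CLIPTextModel` internals; owning the forward pass makes this a one-line dictionary lookup.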
Changed paths:

diffusionmodules
distributions
encoders
attention.py
ema.py
sub_quadratic_attention.py
temporal_ae.py