12 July 2024 — StableDiffusion

AuraFlow is available in Diffusers

12 July 2024 at 13:02

Kudos to the Fal team for releasing the largest truly open-source text-to-image model -- AuraFlow!

It's available in `diffusers`. Just install it from source and enjoy 🤗
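
For anyone who wants to try it, a minimal sketch is below (the prompt and generation settings are illustrative, not from the post):

```python
# Install diffusers from source first:
#   pip install git+https://github.com/huggingface/diffusers

import torch
from diffusers import AuraFlowPipeline

# Load the checkpoint in half precision and move it to the GPU.
pipe = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow", torch_dtype=torch.float16
).to("cuda")

# Generate an image from a text prompt.
image = pipe(
    prompt="a photo of an astronaut riding a horse on the moon",
    num_inference_steps=50,
    guidance_scale=3.5,
).images[0]
image.save("auraflow_sample.png")
```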

Image taken from the model page of AuraFlow

Check out the model page for more info: https://huggingface.co/fal/AuraFlow.

Additionally, this PR allows you to run the AuraFlow transformer model in 15 GB of VRAM by using offloading at the modeling level: https://github.com/huggingface/diffusers/pull/8853.
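
The PR itself describes the exact mechanism; as a rough illustration of the same idea, `diffusers`' existing `enable_model_cpu_offload()` keeps components on the CPU and moves each one to the GPU only while it is running (whether the PR uses this exact API is an assumption, not confirmed by the post):

```python
import torch
from diffusers import AuraFlowPipeline

pipe = AuraFlowPipeline.from_pretrained("fal/AuraFlow", torch_dtype=torch.float16)

# Keep the text encoder, transformer, and VAE on the CPU and move each one to
# the GPU only while it is needed, lowering peak VRAM at some speed cost.
pipe.enable_model_cpu_offload()

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("auraflow_offloaded.png")
```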

submitted by /u/RepresentativeJob937
Earlier — StableDiffusion

Running DreamBooth LoRA fine-tuning with SD3 in a free-tier Colab

3 July 2024 at 12:27

We worked on a mini-project to show how to run SD3 DreamBooth LoRA fine-tuning on a free-tier Colab Notebook 🌸

The project is educational and is meant to serve as a template. Only good vibes here please 🫡
https://github.com/huggingface/diffusers/tree/main/examples/research_projects/sd3_lora_colab

Techniques used:

* We first pre-compute the text embeddings, since running all three of SD3's text encoders is undoubtedly the most memory-intensive part. Additionally, to keep the memory requirements manageable for the free-tier Colab, we use the 8-bit T5 (8-bit as in `llm.int8()`). This helped us reduce the memory requirements from 20 GB to 10 GB (see the sketch after this list).

* We then use a number of popular techniques to conduct the actual training, including 8-bit Adam, SDPA, and gradient checkpointing (a condensed sketch follows the next paragraph).
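
A minimal sketch of the 8-bit T5 part of the embedding pre-computation (SD3 also concatenates embeddings from two CLIP encoders, omitted here; the model ID and prompt are illustrative and the repo's scripts are the reference):

```python
import torch
from transformers import T5EncoderModel, T5TokenizerFast, BitsAndBytesConfig

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"

# Load the T5 tokenizer and the T5 encoder quantized to 8-bit (llm.int8()).
tokenizer = T5TokenizerFast.from_pretrained(model_id, subfolder="tokenizer_3")
text_encoder = T5EncoderModel.from_pretrained(
    model_id,
    subfolder="text_encoder_3",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# Encode the training prompt once and cache the result, so the encoder can be
# dropped from memory before the transformer is loaded for training.
prompt = "a photo of sks dog"  # hypothetical DreamBooth prompt
inputs = tokenizer(
    prompt, padding="max_length", max_length=256, truncation=True, return_tensors="pt"
)
with torch.no_grad():
    t5_embeds = text_encoder(inputs.input_ids.to(text_encoder.device))[0]
torch.save(t5_embeds, "t5_prompt_embeds.pt")
```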

Yes, none of these is new or groundbreaking. But it felt nice to be able to pull it off and put it together.
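
A condensed sketch of that training setup (the repo's training script is the reference; the model ID and hyperparameters here are illustrative, and the training loop itself is omitted):

```python
import torch
import bitsandbytes as bnb
from peft import LoraConfig
from diffusers import SD3Transformer2DModel

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"

# Load the SD3 transformer, freeze it, and attach a small LoRA adapter to the
# attention projections.
transformer = SD3Transformer2DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.float16
).to("cuda")
transformer.requires_grad_(False)
transformer.add_adapter(
    LoraConfig(
        r=4,
        lora_alpha=4,
        init_lora_weights="gaussian",
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    )
)

# Gradient checkpointing: recompute activations in the backward pass instead of
# storing them, trading extra compute for a lower memory footprint.
transformer.enable_gradient_checkpointing()

# 8-bit Adam from bitsandbytes keeps the optimizer state quantized.
trainable_params = [p for p in transformer.parameters() if p.requires_grad]
optimizer = bnb.optim.AdamW8bit(trainable_params, lr=1e-4)

# SDPA: with PyTorch 2.x, diffusers attention layers dispatch to
# torch.nn.functional.scaled_dot_product_attention by default, so nothing extra
# is needed here. The training loop and mixed-precision handling (the real
# script upcasts the trainable LoRA parameters to fp32) are omitted.
```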

https://preview.redd.it/shvv9hl8raad1.png?width=2320&format=png&auto=webp&s=8431ac02d4df75f13711df20113506ece0b37048

submitted by /u/RepresentativeJob937