
Running DreamBooth LoRA fine-tuning with SD3 in a free-tier Colab

3 Julio 2024 at 12:27

We worked on a mini-project to show how to run SD3 DreamBooth LoRA fine-tuning on a free-tier Colab Notebook 🌸

The project is educational and is meant to serve as a template. Only good vibes here please 🫡
https://github.com/huggingface/diffusers/tree/main/examples/research_projects/sd3_lora_colab

Techniques used:

* We first pre-compute the text embeddings, since that is undoubtedly the most memory-intensive part when all three text encoders of SD3 are used. Additionally, to keep the memory requirements manageable on the free-tier Colab, we load T5 in 8-bit (8-bit as in `llm.int8()`). This reduced the memory requirements from 20GB to 10GB.

* We then combine several popular memory-saving techniques for the actual training: 8-bit Adam, SDPA (scaled dot-product attention), and gradient checkpointing.
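The first bullet's caching pattern can be sketched on CPU with a toy encoder (the module, prompt, and shapes below are illustrative, not from the project; the real script caches the outputs of CLIP-L, CLIP-G, and an 8-bit T5-XXL):

```python
import torch

# Toy stand-in for a text encoder. In the real SD3 setup this would be the
# three text encoders, with T5 loaded in 8-bit via llm.int8().
class ToyTextEncoder(torch.nn.Module):
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab, dim)

    def forward(self, ids):
        return self.embed(ids)

# Hypothetical prompt and token ids, just to show the pattern.
prompts = {"a photo of sks dog": torch.tensor([[1, 2, 3]])}

# Pre-compute the embeddings once, with no grad, then free the encoder so it
# does not occupy memory during the training loop.
encoder = ToyTextEncoder()
with torch.no_grad():
    cache = {p: encoder(ids) for p, ids in prompts.items()}
del encoder

# Every training step now reads from the cache instead of running the encoder.
emb = cache["a photo of sks dog"]
print(emb.shape)
```

The point of the pattern is that the text encoders never coexist in memory with the transformer being fine-tuned.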

Yes, none of these techniques is new or groundbreaking, but it felt nice to pull them off and put them together.
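A rough, CPU-runnable sketch of how the three training-side techniques fit together (the tiny module and shapes are made up for illustration; the actual project would use bitsandbytes' `AdamW8bit` where the comment indicates):

```python
import torch
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint

# Tiny attention block using PyTorch's built-in SDPA kernel.
class TinyAttention(torch.nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.qkv = torch.nn.Linear(dim, dim * 3)

    def forward(self, x):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # scaled_dot_product_attention dispatches to a memory-efficient kernel.
        return F.scaled_dot_product_attention(q, k, v)

model = TinyAttention()

# 8-bit Adam: in the real script this would be
# `bnb.optim.AdamW8bit(model.parameters(), lr=1e-4)` from bitsandbytes;
# we fall back to the standard torch optimizer so the sketch runs anywhere.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(2, 8, 32, requires_grad=True)
# Gradient checkpointing: activations are recomputed during backward
# instead of being stored, trading compute for memory.
out = checkpoint(model, x, use_reentrant=False)
loss = out.mean()
loss.backward()
optimizer.step()
print(out.shape)
```

Each piece attacks a different part of the memory budget: the optimizer states (8-bit Adam), the attention activations (SDPA), and the intermediate activations overall (checkpointing).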

https://preview.redd.it/shvv9hl8raad1.png?width=2320&format=png&auto=webp&s=8431ac02d4df75f13711df20113506ece0b37048

submitted by /u/RepresentativeJob937