Running DreamBooth LoRA fine-tuning with SD3 in a free-tier Colab
We worked on a mini-project to show how to run SD3 DreamBooth LoRA fine-tuning on a free-tier Colab Notebook 🌸 The project is educational and is meant to serve as a template. Only good vibes here please 🫡

Techniques used:

* We first pre-compute the text embeddings, since running all three SD3 text encoders is by far the most memory-intensive part. To keep memory manageable on the free-tier Colab, we load the T5 encoder in 8-bit (8-bit as in `llm.int8()`). This reduced the memory requirements from 20GB to 10GB. A sketch of this step follows the list.
* We then use several popular techniques for the actual training: 8-bit Adam, SDPA, and gradient checkpointing (see the second sketch below). Yes, none of these is new or groundbreaking, but it felt nice to pull them off and put them together.
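To make the first step concrete, here is a minimal sketch of the embedding pre-computation, assuming `diffusers`, `transformers`, and `bitsandbytes` are installed. The model ID and the `encode_prompt` call follow the public `diffusers` SD3 API; the prompt and output path are placeholders, and the exact arguments in the actual training script may differ:

```python
import torch
from transformers import BitsAndBytesConfig, T5EncoderModel
from diffusers import StableDiffusion3Pipeline

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"

# Load only the T5 encoder in 8-bit (llm.int8()); it is by far the
# heaviest of SD3's three text encoders.
t5 = T5EncoderModel.from_pretrained(
    model_id,
    subfolder="text_encoder_3",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# Build the pipeline without the transformer or VAE so that only the
# text encoders occupy memory during this step.
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    text_encoder_3=t5,
    transformer=None,
    vae=None,
    device_map="balanced",
    torch_dtype=torch.float16,
)

# Encode the training prompt once and cache the result; the training
# loop can then drop all three text encoders entirely.
with torch.no_grad():
    prompt_embeds, _, pooled_embeds, _ = pipe.encode_prompt(
        prompt="a photo of sks dog", prompt_2=None, prompt_3=None
    )

torch.save(
    {
        "prompt_embeds": prompt_embeds.cpu(),
        "pooled_prompt_embeds": pooled_embeds.cpu(),
    },
    "precomputed_embeds.pt",
)
```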
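And a similarly hedged sketch of the memory-saving pieces on the training side. The `add_adapter` call and `LoraConfig` mirror what `diffusers` + `peft` expose; treat the hyperparameters (rank, target modules, learning rate) as illustrative placeholders rather than the exact values we used:

```python
import torch
import bitsandbytes as bnb
from diffusers import SD3Transformer2DModel
from peft import LoraConfig

transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    torch_dtype=torch.float16,
)

# Freeze the base weights and attach trainable LoRA adapters to the
# attention projections. (A real mixed-precision run would also upcast
# the trainable params to fp32.)
transformer.requires_grad_(False)
transformer.add_adapter(
    LoraConfig(
        r=16,
        lora_alpha=16,
        init_lora_weights="gaussian",
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    )
)

# Gradient checkpointing trades recomputation for activation memory.
transformer.enable_gradient_checkpointing()

# On PyTorch >= 2.0, diffusers dispatches attention through
# torch.nn.functional.scaled_dot_product_attention (SDPA) by default,
# so no explicit opt-in is needed here.

# 8-bit Adam keeps optimizer state in int8, shrinking its footprint
# compared to regular fp32 AdamW state.
trainable_params = [p for p in transformer.parameters() if p.requires_grad]
optimizer = bnb.optim.AdamW8bit(trainable_params, lr=1e-4, weight_decay=1e-4)
```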