
simpletuner v0.9.8.1 released with exceptional flux-dev finetuning quality

Release: https://github.com/bghira/SimpleTuner/releases/tag/v0.9.8.1

Demo LoRA: https://huggingface.co/ptx0/flux-dreambooth-lora-r16-dev-cfg1/blob/main/pytorch_lora_weights.safetensors

After Bunzero hinted to us that the magic trick to preserving Flux's distillation was to set `--flux_guidance_value=1`, I immediately went to update all of the default parameters and guides to give more information about this parameter and its impact.

Essentially, the earlier code from today was capable of tuning very good LoRAs, but they had the unfortunate side effect of requiring CFG nodes at inference time, which slowed them down and (so far) seems to reduce the quality of the model ever so slightly.

The new defaults will avoid this, ensuring broader compatibility with inference platforms like AUTOMATIC1111/stable-diffusion-webui, which might never receive these extra bits of logic.
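For reference, here's a minimal sketch of what that looks like on the inference side with diffusers; the repository and file name come from the demo LoRA link above, and everything else is an illustrative assumption rather than part of the release. A LoRA trained with `--flux_guidance_value=1` keeps Flux-dev's embedded guidance, so a single forward pass per step with `guidance_scale` is enough and no separate CFG node (which would run a second, negative-prompt pass) is needed.

```python
# Minimal sketch, assuming the diffusers FluxPipeline; not taken from the release notes.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Demo LoRA from the link above.
pipe.load_lora_weights(
    "ptx0/flux-dreambooth-lora-r16-dev-cfg1",
    weight_name="pytorch_lora_weights.safetensors",
)

# guidance_scale here is Flux-dev's embedded (distilled) guidance value,
# evaluated in one forward pass per step. It is not classifier-free guidance,
# which would need a second unconditional pass (the "CFG node" case).
image = pipe(
    "photo of the trained subject",
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("flux_lora_sample.png")
```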

Examples of dreamboothing two subjects into one LoRA at once:

it even gets her tattoo

houston, we've got proper freckles

River Phoenix standing next to a River in Phoenix

this model didn't know what a Juggalo was but, by God, we've made sure it does now

what's next

I'm going to be adding IP Adapter training support, but I'm also interested in exploring piecewise rectified flow, using a frozen, quantised Schnell model as a teacher for a trainable copy of itself as the student; this will almost undoubtedly reduce the creativity of Schnell down to about Dev's level... but it could also unlock the ability to make further-distilled, task-specific Schnell models, which would be viable commercially.
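The general shape of that kind of self-distillation step looks roughly like the sketch below. This is only an illustration of the concept, not SimpleTuner's actual piecewise rectified-flow code; `student` and `teacher` are hypothetical stand-ins for the trainable and frozen (quantised) copies of the same Schnell transformer.

```python
# Illustrative sketch only; assumes a generic PyTorch teacher/student setup.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, noisy_latents, timesteps, text_embeds, optimizer):
    # Frozen, quantised teacher predicts the flow/velocity target.
    with torch.no_grad():
        target = teacher(noisy_latents, timesteps, text_embeds)

    # Trainable student is regressed onto the teacher's prediction.
    pred = student(noisy_latents, timesteps, text_embeds)
    loss = F.mse_loss(pred.float(), target.float())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```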

submitted by /u/terminusresearchorg

Flux.1-Dev Compact version

Hi, Alienhaze just released compact versions of Flux. These compact versions come in various combinations of CLIP text encoders and models, so they can be loaded directly via nodes such as 'CheckpointLoaderSimple' and others, without the need for additional configuration.

"By reducing the final size by about 30%, these models are also accessible on less powerful machines, although I am not sure how this is possible. In my tests, I have not observed any differences in the generated outputs compared to the full versions."

Civitai link : https://civitai.com/models/637170?modelVersionId=712441
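If you want to check for yourself which components a given compact file actually bundles, a small inspection sketch along these lines works. The filename is a placeholder and the key prefixes are assumptions based on the usual ComfyUI single-file layout, not something stated in the post.

```python
# Sketch for inspecting a downloaded compact checkpoint; not from the post.
from safetensors import safe_open

with safe_open("flux1-dev-compact.safetensors", framework="pt") as f:
    keys = list(f.keys())

# Assumed prefixes for the diffusion model, text encoders, and VAE.
for prefix in ("model.", "text_encoders.clip_l.", "text_encoders.t5xxl.", "vae."):
    n = sum(k.startswith(prefix) for k in keys)
    print(f"{prefix:<24} {n} tensors")
```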

submitted by /u/julieroseoff

Flux Updates, Nvidia 'Cosmos' Project, and AI Video Game Strike | This Week in AI Art ✨

Greetings 👽, AI art enthusiasts. In an industry where innovation is constant, staying informed is crucial. Here's our weekly roundup of significant advancements in the Stable Diffusion community (or is it Flux, now?) and beyond.

Click here to read the full article with proper formatting, links, visuals, etc.

🛠️ Flux Advancements

  • SimpleTuner v0.9.8 released for efficient Flux training on various GPUs
  • New method to improve Flux's prompt adherence and introduce negative prompts
  • ControlNet (Canny) model released for FLUX.1-dev
  • X-Labs releases 6 new Flux LoRAs for style adaptation

🎮 AI Impact on Entertainment

  • SAG-AFTRA initiates strike against video game industry over AI-related worker protections
  • Main issue: Disagreement over protections for voice and movement performers
  • "Side letter six" clause may limit strike's impact on some ongoing game productions

🎥 Nvidia's 'Cosmos' AI Project

  • Nvidia working on massive video foundation model called "Cosmos"
  • Project involves scraping large amounts of video content from various platforms
  • Raises ethical and legal questions about data collection practices in AI development

📡 On Our Radar

  • Deep-Live-Cam: Real-time webcam face swapping tool
  • LLM Saga: AI D&D Game Engine
  • Apple's ml_mdm: Open-source image synthesis framework
  • CogVideoX-2B: Text-to-video model
  • ReSyncer: AI lip-sync system

Want updates emailed to you weekly? Subscribe.

submitted by /u/OkSpot3819