Fast and Furious but Handicapped
Credits - Instagram @ Paskoboy
https://huggingface.co/ostris/OpenFLUX.1
From the model description:
After numerous iterations and spending way too much of my own money on compute to train this, I think it is finally at the point I am happy to consider it a beta. I am still going to continue to train it, but the distillation has been mostly trained out of it at this point. So phase 1 is complete. Feel free to use it and fine tune it, but be aware that I will likely continue to update it.
This is a fine-tune of the FLUX.1-schnell model that has had the distillation trained out of it. Flux Schnell is licensed Apache 2.0, but it is a distilled model, meaning you cannot fine-tune it. However, it is an amazing model that can generate amazing images in 1-4 steps. This is an attempt to remove the distillation to create an open source, permissively licensed model that can be fine-tuned.
(I am only sharing this and am in no way associated with the creator of this model.)
submitted by /u/ninjasaid13
submitted by /u/Devajyoti1231
submitted by /u/xavier047
submitted by /u/ItsCreaa
Hey guys, I’m looking for the best (or any) method of converting an image into a 3D asset using ML.
Preferably an offline solution; I'm not too worried if it doesn’t generate “perfect” meshes.
I don't know if one already exists, but I just whipped it up quickly. It's pretty buggy at the moment. If there's interest, I'll clean it up and release a usable version.
Notice the detail on the iris, with just a low-power injection. V2 post - I had uploaded last night, but my screenshots were too small and I've since made some improvements to the workflow. Any regular noise-injection node I use with Flux errors out, so this is a workaround - use the blend value to adjust. In short, I use a blended latent from two ksamplers, one of them taken very early in the denoising process, to add extra noise before passing the result to a final ksampler to finish. All in the workflow above.
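The blending step described above is just a linear interpolation between the two ksampler latents. A rough illustration of the math (NumPy arrays standing in for real latent tensors; `blend_latents` is a hypothetical helper, not an actual ComfyUI node):

```python
import numpy as np

def blend_latents(latent_a: np.ndarray, latent_b: np.ndarray, blend: float) -> np.ndarray:
    """Linearly interpolate two latents: blend=0.0 keeps A, blend=1.0 keeps B."""
    return (1.0 - blend) * latent_a + blend * latent_b

# Toy stand-ins: a mostly-denoised latent and a noisy early-step latent
late = np.zeros((4, 64, 64))   # latent from late in the denoising schedule
early = np.ones((4, 64, 64))   # latent grabbed at a very early, noisy step
mixed = blend_latents(late, early, 0.25)  # 25% of the noisy latent mixed in
```

Raising the blend value mixes in more of the early latent, i.e. more residual noise for the final ksampler to work with.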
I recently encountered an interesting issue with Flux Loras that I thought I'd share, along with a simple solution that might help others facing similar problems.

The Problem: A Discord user reached out for help with a Lora they had trained on a messy oil painting style. They had spent considerable time and effort training the Lora, aiming for a distinct, textured look. However, when using it with Flux, the results weren't quite hitting the mark. Initially, the user thought they might have undertrained the Lora and considered increasing the training steps. This is a common assumption when Loras don't perform as expected, but in this case, more training wasn't the answer.

The Solution: After some experimentation, I found a straightforward fix that doesn't require retraining the Lora:

- Raise the max/base shift range. I typically set both max and base to 2.0. This allows Flux more freedom to deviate from its fine-tuned look.
- Adjust the CFG (Classifier-Free Guidance) value. A lower CFG puts less pressure on the Flux base model's style. I've found a value of around 1.7 works well.

Why This Works: Flux has a strong, pre-trained style that can sometimes overpower Lora inputs, especially for more stylized or "messy" aesthetics. By increasing the shift range and lowering the CFG, we're essentially giving the Lora more influence over the final output, allowing it to break away from Flux's default tendencies.

Important Note: While these adjustments can help achieve the desired style, they come with a trade-off: increasing the shift range may reduce prompt adherence. You'll need to experiment to find the right balance for your specific needs.

Example Settings:
- Max/Base Shift: 2.0
- CFG: 1.7

Has anyone else experimented with similar adjustments, particularly with heavily stylized Loras? What results have you seen? I'd love to hear about your experiences and any other tips you might have for working with Flux Loras!
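For intuition on why lowering CFG loosens the base model's grip: the standard classifier-free guidance step pushes the prediction away from the unconditional output by a factor of the CFG value, so smaller values stay closer to an "unguided" result. A toy sketch of that combine step (plain NumPy with toy vectors; not actual Flux sampler code):

```python
import numpy as np

def cfg_combine(uncond: np.ndarray, cond: np.ndarray, cfg: float) -> np.ndarray:
    """Standard classifier-free guidance combine: push the prediction away
    from the unconditional output by a factor of cfg. cfg=1.0 adds no push."""
    return uncond + cfg * (cond - uncond)

uncond = np.array([0.0, 0.0])  # toy unconditional prediction
cond = np.array([1.0, 2.0])    # toy conditional prediction
guided = cfg_combine(uncond, cond, 1.7)  # lower cfg -> closer to uncond
```

At cfg=1.0 the output equals the conditional prediction exactly; values above that amplify the difference, which is why dropping from higher settings toward ~1.7 softens the base model's stylistic pull.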
I'm a bit of a noob to AI, but from my experimentation two years ago I found that recreating character illustrations from different angles has been very hard to do.
Is there anything new that will help with this? Where would you suggest I start - i.e. what software/UI/'datasets' (?)?
Hi. I'm trying to use Flux (fluxgym) to train a LoRA. Once the training is finished I plug the Lora into Flux Forge, but I am getting multiple errors in my command window,
with TypeError: 'NoneType' object is not iterable
on the Forge UI.
What am I doing wrong?
I'm using the Flux dev bnb nf4 model for generating images.
No VAE.
I'm very new to Lora training; all help will be appreciated.
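For what it's worth, `TypeError: 'NoneType' object is not iterable` is a generic Python error: some code expected a list (or other iterable) but got `None` instead, which often means a file such as the Lora or VAE failed to load. A minimal reproduction of the error itself (hypothetical names; this is not Forge's actual code):

```python
def load_loras(path):
    """Hypothetical loader standing in for whatever happens internally.
    Returns None instead of a list when the file fails to load --
    a common way this exact TypeError appears."""
    return None

msg = ""
try:
    # Iterating over the None return value raises the TypeError
    for lora in load_loras("missing.safetensors"):
        pass
except TypeError as e:
    msg = str(e)

print(msg)  # 'NoneType' object is not iterable
```

So the error usually points at a missing or unreadable file rather than at the training itself; checking that the Lora file is in the folder Forge expects, and that a VAE is available, is a reasonable first step.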
I would like to know how many images are needed, whether I should make any changes to the images, whether I should separate them into folders, and how the datasets differ for characters, styles, and concepts.
Hi. Sorry, this is a pretty silly question from someone who's been using SD a lot but knows nothing about coding. I first started using SD on a computer that was in dark mode, and that's where I learnt all the ropes of A1111. I am now on a computer that is in light mode, and I don't want to change that, but I have gotten used to the A1111 interface in dark mode and prefer to use it that way. So every time I open A1111, I add "/?__theme=dark" at the end of the URL and reload (I learnt this trick searching online). But I was wondering if there is something I can change in webui.bat or somewhere else so that it does this automatically on its own? It's not a big deal, obviously, just thought it would be nice.
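A1111 has a `--theme` command-line flag that does the same thing as the URL trick. Assuming a standard Windows install, adding it to the `COMMANDLINE_ARGS` line in `webui-user.bat` (the file meant for user settings, rather than `webui.bat` itself) should make dark mode the default every launch:

```shell
rem webui-user.bat (keep any flags you already have on this line)
set COMMANDLINE_ARGS=--theme dark
```

On Linux/macOS the equivalent would go in `webui-user.sh` as `export COMMANDLINE_ARGS="--theme dark"`.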