CivitAI: "Our card processor pulled out a day early, without warning."
https://huggingface.co/QuantStack/MoviiGen1.1-VACE-GGUF
This is a GGUF version of MoviiGen1.1 with the VACE addon included, and it works in native workflows!
For those who don't know, MoviiGen is a Wan2.1 model fine-tuned on cinematic shots (720p and up).
And VACE lets you use control videos, much like ControlNets for image-generation models. These GGUFs are the combination of both.
A basic workflow is here:
https://huggingface.co/QuantStack/Wan2.1-VACE-14B-GGUF/blob/main/vace_v2v_example_workflow.json
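If you'd rather script the download than grab files by hand, here's a minimal huggingface_hub sketch. The filename below is a placeholder, so check the repo's file list for the quant you actually want (Q4_K_M, Q8_0, etc.):

```python
# Hedged sketch: pull one quant from the repo with huggingface_hub and drop it
# into ComfyUI's unet folder for the GGUF loader node. The filename below is a
# placeholder; browse the repo for the actual quant names.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuantStack/MoviiGen1.1-VACE-GGUF",
    filename="MoviiGen1.1-VACE-Q4_K_M.gguf",  # placeholder filename
    local_dir="ComfyUI/models/unet",
)
print(path)
```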
If you wanna see what VACE does, go here:
https://www.reddit.com/r/StableDiffusion/comments/1koefcg/new_wan21vace14bggufs/
and if you wanna see what MoviiGen does, go here:
https://www.reddit.com/r/StableDiffusion/comments/1kmuccc/new_moviigen11ggufs/
Hey, so I found this pixel-art animation and I wanted to generate something similar using Stable Diffusion and WAN 2.1, but I can't get it to look like this. How was this made? Am I using the wrong tools? I noticed that the pixels in these videos aren't even pixel-perfect; they even move diagonally. Maybe someone generated a pixel-art picture and then used something else to animate parts of it? There are AI tags in the corners, but they don't help much with finding out how this was made. Maybe someone more experienced here could point me in the right direction :) Thanks!
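Just a guess at one piece of such a pipeline, not how these clips were made: if you want the pixel-perfect grid the post notes is missing, you can snap each generated frame to a coarse grid in post. A minimal PIL sketch; the grid size and filenames are assumptions:

```python
# Hypothetical post-process: snap a generated frame to a coarse pixel grid by
# downscaling with nearest-neighbor and blowing it back up, so every logical
# pixel becomes a crisp square block. Grid size and filenames are made up.
from PIL import Image

def snap_to_pixel_grid(frame: Image.Image, grid: int = 128) -> Image.Image:
    small = frame.resize((grid, grid), Image.NEAREST)   # quantize to the grid
    return small.resize(frame.size, Image.NEAREST)      # re-expand, no blur

frame = Image.open("wan_frame.png")                     # hypothetical frame
snap_to_pixel_grid(frame).save("wan_frame_pixel.png")
```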
Recently the OnomaAI Research team released Illustrious 2 and Illustrious Lumina as well. Still, it seems they either don't perform well or the community doesn't want to move, since Illustrious 0.1 and its finetunes are doing a great job. But if that's the case, what's the benefit of getting a version 2 when it is not that good?
Does anybody here know or use the V2 of Illustrious? What do you think about it?
Asking this because I was expecting V2 to be a banger!
Hi dudes and dudettes...
I've just returned from some time without genning, and I hear those two are the current best models for gen? Is it true? If so, which is best?
This was generated with completely open-source local tools using ComfyUI.
I see many LoRAs, but most of them have unrealistic proportions, like plastic dolls or anime characters. I would like to train my own, but I can't seem to find a good guide without conflicting opinions.
Can anybody help with the following? (A hedged starting command is sketched at the end of this post.)

- How many training images?
- What should captions be?
- Should I remove the background on training images?
- Kohya settings?
- Which model to train on? (I've been using RealisticPony to generate images.)
- Should only one reference character be used? I have permission from my friend to use their art for training, and they have various characters with a similar shape but different sizes.
- Any other tips or advice?
I don't like the plastic-doll look most models generate; most generations produce shapes that look fake and round, plastic-looking, with no "sag" or gravity effect on the fat/weight. Everyone comes out looking like a swimsuit model, overweight, or a plastic doll.
Any tips would be greatly appreciated. For my next attempt I think I need to improve captions and background removal, and possibly train on a different model.
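For the Kohya settings question above, here's a hedged sketch of a kohya-ss sd-scripts launch for an SDXL/Pony base, wrapped in Python so the flags stay in one place. Every path is a placeholder, and the dim/alpha/step values are common community starting points, not settings from this thread:

```python
# Hedged sketch of a kohya-ss sd-scripts LoRA run on an SDXL/Pony base.
# All paths are placeholders; hyperparameters are starting points to tune.
import subprocess

subprocess.run([
    "accelerate", "launch", "sdxl_train_network.py",
    "--pretrained_model_name_or_path", "models/realisticPony.safetensors",
    "--train_data_dir", "dataset/",          # subfolders like "10_mychar"
    "--caption_extension", ".txt",           # one caption file per image
    "--resolution", "1024,1024",
    "--network_module", "networks.lora",
    "--network_dim", "32",                   # LoRA rank
    "--network_alpha", "16",                 # often dim/2 or equal to dim
    "--learning_rate", "1e-4",
    "--optimizer_type", "AdamW8bit",
    "--train_batch_size", "1",
    "--max_train_steps", "2000",
    "--mixed_precision", "fp16",
    "--save_model_as", "safetensors",
    "--output_dir", "output/",
], check=True)
```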
Workflow: https://pastebin.com/3BxTp9Ma. Solved the problem of CausVid killing the motion by using two samplers in series: the first three steps run without the CausVid LoRA, and the subsequent steps run with it.
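For anyone who wants the idea outside ComfyUI, here's a minimal diffusers-style sketch of the same trick, using a plain SD pipeline as a stand-in (the actual workflow is Wan2.1 video with two samplers chained). The step-3 switch comes from the post; the model, LoRA path, and the enable/disable-mid-run approach are assumptions:

```python
# Hedged sketch: run the first denoising steps without the LoRA, then switch
# it on. Mirrors the two-samplers-in-series trick from the linked workflow,
# but with a stand-in SD pipeline instead of Wan2.1 video.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # stand-in base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("causvid_lora.safetensors")  # placeholder path
pipe.disable_lora()                                 # early steps: no LoRA

SWITCH_STEP = 3  # the post enables the LoRA after three steps

def enable_lora_late(pipeline, step, timestep, callback_kwargs):
    # Flip the LoRA on once the third step has finished (steps are 0-indexed).
    if step == SWITCH_STEP - 1:
        pipeline.enable_lora()
    return callback_kwargs

image = pipe(
    "a cinematic shot of a city at dusk",
    num_inference_steps=20,
    callback_on_step_end=enable_lora_late,
).images[0]
image.save("out.png")
```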
Took a while, curious what y'all think! Raunchy but tasteful humor warning? More to come here!
The usual culprits are either luddites or feminist groups, but Pornhub met a similar fate when it was forced to delete all amateur videos. It was activist-driven and specifically targeted. I wouldn't be so sure Visa and Mastercard hate money for no reason. I'd bet money there was a brigade of people who were coached on the right buzzwords to put in an email or letter, and on who to send that email or letter to.
Complex scenes, or even a simple scene like a girl in a bikini holding a cup and straw, can sometimes be a little overwhelming for the AI, and it will still get multiple details wrong in the background or on the character itself.
Any reason for this? It feels like it should be capable of more detailed stuff by now.
I have a 7900 GRE, and I've already tried a simple search and a YouTube tutorial. Anyone have any tried-and-true methods?
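Assuming (this is a guess at the question) the goal is running Stable Diffusion locally on the AMD card: the usual Linux route is a ROCm build of PyTorch, and RDNA3 cards like the 7900 GRE sometimes need HSA_OVERRIDE_GFX_VERSION=11.0.0 exported first. A quick sanity check that the card is visible; ROCm builds report through the CUDA-named API:

```python
# Hedged sanity check for a ROCm PyTorch install; on ROCm the GPU is exposed
# through the CUDA-named API, so these calls work unchanged on AMD cards.
import torch

print(torch.cuda.is_available())           # True once the ROCm stack sees the card
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # should mention the RX 7900 GRE
```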
Hi all, I have a 1024 × 1024 GPT-image-1 render. I used philz1337x/clarity-upscaler via Replicate because I'd seen good references for it, but it hallucinated a bunch (see the attached picture). It's for a web service, so it has to be top-notch. Paid is fine, but I'd love something I can play with without paying a bunch up front. Which model/chain would you start with?
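One hedged starting point: Clarity Upscaler exposes knobs that trade invented detail against faithfulness, and turning creativity down is the usual first move against hallucination. A sketch with Replicate's Python client; the parameter names and values are assumptions from memory, so check the model page for the real schema and pin a version hash:

```python
# Hedged sketch: call philz1337x/clarity-upscaler via Replicate and bias it
# toward faithfulness. Parameter names/values are assumptions; verify them
# against the model's published input schema before relying on this.
import replicate

output = replicate.run(
    "philz1337x/clarity-upscaler",      # pin "owner/name:version" in production
    input={
        "image": open("render_1024.png", "rb"),
        "scale_factor": 2,
        "creativity": 0.2,              # assumed knob: lower = fewer invented details
        "resemblance": 0.8,             # assumed knob: higher = closer to input
    },
)
print(output)
```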