
New MoviiGen1.1-VACE-GGUFs 🚀🚀🚀

https://huggingface.co/QuantStack/MoviiGen1.1-VACE-GGUF

This is a GGUF version of MoviiGen1.1 with the VACE addon included, and it works in native workflows!

For those who don't know, MoviiGen is a Wan2.1 model that was fine-tuned on cinematic shots (720p and up).

And VACE lets you use control videos, much like ControlNets for image generation models. These GGUFs are the combination of both.

A basic workflow is here:

https://huggingface.co/QuantStack/Wan2.1-VACE-14B-GGUF/blob/main/vace_v2v_example_workflow.json
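If you want to grab one of the quants and peek at what's inside before wiring up the workflow, here is a minimal sketch using huggingface_hub and the gguf package (the filename below is a placeholder, check the repo's file list for the actual quant names):

```python
# Minimal sketch: download one quant from the repo and list its tensors.
# NOTE: the filename is a placeholder -- check the repo for the real GGUF names.
from huggingface_hub import hf_hub_download
from gguf import GGUFReader  # pip install gguf

path = hf_hub_download(
    repo_id="QuantStack/MoviiGen1.1-VACE-GGUF",
    filename="MoviiGen1.1-VACE-Q4_K_M.gguf",  # placeholder quant name
)

reader = GGUFReader(path)
for tensor in reader.tensors[:10]:
    # name, quantization type, and shape of each weight tensor
    print(tensor.name, tensor.tensor_type, tensor.shape)
```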

If you want to see what VACE does, go here:

https://www.reddit.com/r/StableDiffusion/comments/1koefcg/new_wan21vace14bggufs/

And if you want to see what MoviiGen does, go here:

https://www.reddit.com/r/StableDiffusion/comments/1kmuccc/new_moviigen11ggufs/

submitted by /u/Finanzamt_Endgegner
[link] [comments]

How to do flickerless pixel-art animations?

Hey, so I found this pixel-art animation and I wanted to generate something similar using Stable Diffusion and WAN 2.1, but I can't get it to look like this.
The buildings in the background always flicker, and nothing looks as consistent as the video I provided.

How was this made? Am I using the wrong tools? I noticed that the pixels in these videos aren't even pixel-perfect; they even move diagonally. Maybe someone generated a pixel-art picture and then used something else to animate parts of it?

There are AI tags in the corners, but they don't help much in figuring out how this was made.

Maybe someone more experienced here could help point me in the right direction :) Thanks!
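One piece that can at least be scripted after generation is snapping the frames to a fixed pixel grid, which removes sub-pixel motion and a lot of the flicker. A minimal sketch with Pillow, assuming the video has been exported to PNG frames (folder names, grid size, and palette size are made-up values to tune):

```python
# Minimal sketch: force every frame onto a coarse pixel grid so the "pixels"
# stay locked between frames instead of drifting and flickering.
from pathlib import Path
from PIL import Image

GRID = 8  # one "art pixel" covers an 8x8 block of real pixels (assumption, tune to taste)

Path("frames_snapped").mkdir(exist_ok=True)
for frame_path in sorted(Path("frames").glob("*.png")):
    img = Image.open(frame_path).convert("RGB")
    # downscale with nearest-neighbour so each art pixel becomes one real pixel
    small = img.resize((img.width // GRID, img.height // GRID), Image.NEAREST)
    # clamp to a small palette for a more retro look (optional)
    small = small.quantize(colors=32).convert("RGB")
    # scale back up, again nearest-neighbour, so pixels stay hard-edged
    snapped = small.resize(img.size, Image.NEAREST)
    snapped.save(Path("frames_snapped") / frame_path.name)
```

This won't fix temporal inconsistency in the generation itself (buildings changing shape between frames), but it does remove the shimmer that comes from pixels landing off-grid.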

submitted by /u/Old_Wealth_7013
[link] [comments]

Why is nobody interested in the new V2 Illustrious models?

Recently the OnomaAI Research team released Illustrious 2 and Illustrious Lumina as well. Still, it seems they either don't perform well or the community just doesn't want to move, since Illustrious 0.1 and its fine-tunes are doing a great job. But if that's the case, what is the benefit of a version 2 that isn't that good?

Does anybody here know about or use the V2 of Illustrious? What do you think about it?

I'm asking because I was expecting V2 to be a banger!

submitted by /u/krigeta1
[link] [comments]

Local Open Source is almost there!

This was generated with completely open-source local tools using ComfyUI:
1- Image: Ultra Real Finetune (a Flux 1 Dev fine-tune, available on CivitAI)
2- Animation: WAN 2.1 14B Fun Control with the DWPose estimator, no lipsync needed, using the official Comfy workflow
3- Voice changer: RVC on Pinokio; you can also use easyaivoice.com, a free online tool that does the same thing more easily
4- Interpolation and upscale: I used DaVinci Resolve (paid Studio version) to interpolate from 12 fps to 24 fps and upscale (x4), but that can also be done for free in ComfyUI (a quick ffmpeg stand-in is sketched below)
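For the last step, a minimal free stand-in using ffmpeg from Python (assumes ffmpeg is on PATH; file names are placeholders, and a proper AI upscaler in ComfyUI will look better than plain Lanczos):

```python
# Minimal sketch: motion-compensated interpolation from 12 fps to 24 fps plus a
# simple 4x Lanczos upscale -- a quick, free approximation of the DaVinci step.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "wan_output_12fps.mp4",   # placeholder input name
    "-vf", "minterpolate=fps=24:mi_mode=mci,scale=iw*4:ih*4:flags=lanczos",
    "-c:v", "libx264", "-crf", "18",
    "upscaled_24fps.mp4",                      # placeholder output name
], check=True)
```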

submitted by /u/younestft
[link] [comments]

Training a LoRA for body part shape

I see many LoRAs, but most of them have unrealistic proportions, like plastic dolls or anime characters. I would like to train my own, but I can't seem to find a good guide without conflicting opinions.

  • I used Kohya, trained on an SD 1.5 model with 200 images that I cropped to a width of 768 and a height of 1024
  • the images cropped out all faces and focused on the lower back and upper thighs
  • I used WD14 captioning and added some prefixes related to the shape of the butt
  • trained with 20 repeats and 3 epochs
  • tested the saved checkpoint at 6500 steps (the step math is sketched after this list)
  • no noticeable difference with or without the LoRA
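As a sanity check on those numbers, here is a minimal sketch of the step math Kohya-style trainers use (the batch size is an assumption, since it isn't stated above):

```python
# Minimal sketch: steps implied by 200 images, 20 repeats, 3 epochs.
images, repeats, epochs = 200, 20, 3
batch_size = 1  # assumption -- not stated in the settings above

steps_per_epoch = images * repeats // batch_size   # 4000
total_steps = steps_per_epoch * epochs             # 12000
print(f"{steps_per_epoch=} {total_steps=}")
# A checkpoint at 6500 steps is only a bit past the halfway point of 12000,
# which may partly explain why the effect is weak.
```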

Can anybody help with the following?

  • how many training images?
  • what should the captions be?
  • should I remove the background on the training images?
  • which Kohya settings?
  • which model to train on? (I've been using realisticpony to generate images)
  • should only 1 reference character be used? I have permission from my friend to use their art for training, and they have various characters with a similar shape but different sizes
  • any other tips or advice?

I don't like the plastic-doll look most models generate; most generations produce shapes that are fake and round, plastic-looking, with no "sag" or gravity effect on the fat/weight. Everyone comes out looking like either a swimsuit model, overweight, or a plastic doll.

Any tips would be greatly appreciated; for my next attempt I think I need to improve the captions, the background removal, and possibly train on a different model.

submitted by /u/Mental-Arrival6175
[link] [comments]

I wouldn't be surprised if the payment processors dumped Civitai because of activists

The usual culprits are either luddites or feminist groups; Pornhub met a similar fate when it was forced to delete all amateur videos. That was activist-driven and specifically targeted. I wouldn't be so sure Visa and Mastercard hate money for no reason. I'd bet there was a brigade of people who were coached on the right buzzwords to put in an email or letter, and on who to send that email or letter to.

submitted by /u/No-Issue-9136
[link] [comments]

Why is it that Stable Diffusion can handle realistic stuff almost perfectly but still struggles with detailed anime stuff?

Complex scenes, or even a simple scene like a girl in a bikini holding a cup and straw, can sometimes be a little overwhelming for the AI, and it will still create multiple wrong details in the background or in the character's features.

Any reason for this? Feels like it should be capable of doing more detailed stuff by now.

submitted by /u/mil0wCS
[link] [comments]

Upscaling a GPT-image-1 to Print-Ready?

Hi all, I have a 1024 × 1024 GPT-image-1 render.
Goal: Print-ready images, by API.

I used "philz1337x / clarity-upscaler" via Replicate because I'd seen good references for it, but it hallucinated a bunch of details [see attached picture]:

https://preview.redd.it/3rq0ax107j2f1.png?width=1080&format=png&auto=webp&s=c792dd07836444cc29e68cc9e79b8dbb3f64c2e5

It's for a web service, so it has to be top-notch. A paid option is fine, but I'd love something I can play with without paying a bunch up front.

Which model/chain would you start with?
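Before picking a chain, it may help to pin down how many pixels "print-ready" actually means and to keep a plain, hallucination-free resize around as a baseline to judge AI upscalers against. A minimal sketch, where the target print edge, DPI, and filenames are assumptions:

```python
# Minimal sketch: work out the upscale factor a given print size needs, then do a
# strictly faithful Lanczos resize as a no-hallucination baseline for comparison.
from PIL import Image

src_px = 1024          # GPT-image-1 render edge
target_inches = 12     # assumed print edge
dpi = 300              # common print resolution

needed_px = target_inches * dpi        # 3600 px per edge
factor = needed_px / src_px            # ~3.5x
print(f"need {needed_px}px per edge -> {factor:.1f}x upscale")

img = Image.open("gpt_image_1_render.png")           # placeholder filename
baseline = img.resize((needed_px, needed_px), Image.LANCZOS)
baseline.save("baseline_lanczos.png", dpi=(dpi, dpi))
```

Whatever model or chain you end up testing, comparing its output against this baseline makes the hallucinated details easy to spot.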

submitted by /u/PazGruberg
[link] [comments]