Mind-blowing development for open-source video models - STG instead of CFG - code published
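For readers wondering what the acronym swap means: classifier-free guidance (CFG) extrapolates the conditional prediction away from an unconditional one, while spatiotemporal skip guidance (STG) instead extrapolates away from a deliberately weakened prediction made by skipping some of the model's own layers, so no unconditional pass is needed. A minimal sketch of the two update rules, assuming a denoiser that exposes a layer-skipping knob (the `skip_layers` keyword here is hypothetical):

```python
def cfg_step(model, x, t, cond, null_cond, scale=6.0):
    # Classifier-free guidance: extrapolate the conditional prediction
    # away from an unconditional (null-prompt) prediction.
    pred_cond = model(x, t, cond)
    pred_uncond = model(x, t, null_cond)
    return pred_uncond + scale * (pred_cond - pred_uncond)


def stg_step(model, x, t, cond, scale=1.0, skip_layers=(10, 11)):
    # Skip guidance (sketch): the "weak" branch is the same conditional
    # forward pass with a few transformer blocks skipped, so no null
    # prompt or extra text encoding is required.
    pred_full = model(x, t, cond)
    pred_skip = model(x, t, cond, skip_layers=skip_layers)  # hypothetical kwarg
    return pred_full + scale * (pred_full - pred_skip)
```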
submitted by /u/CeFurkan [link] [comments]
Tried HunyuanVideo; it looks cool, but it took 20 minutes to generate one video (544x960)
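For context, here is roughly what a non-ComfyUI run looks like in diffusers, assuming the recently added HunyuanVideoPipeline and the community checkpoint name; CPU offload and VAE tiling are the usual levers for fitting the model in consumer VRAM, and they are also part of why generations take this long:

```python
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.utils import export_to_video

# Checkpoint name assumed; adjust to the mirror you actually use.
pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for VRAM headroom
pipe.vae.enable_tiling()         # decode the large video latents in tiles

frames = pipe(
    prompt="a cat walking through tall grass, cinematic",
    height=544, width=960,
    num_frames=61,               # HunyuanVideo expects 4k+1 frame counts
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "output.mp4", fps=15)
```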
submitted by /u/chain-77 [link] [comments]
Glamour Shots 1990 💄✨ - Flux LoRA for The Most Glamorous Portraits & More!
submitted by /u/an303042 [link] [comments]
My generative AI experiment has received a major update: Biomes, Regeneration, Map, Zoom, and more. We now have over 6,000 players who have collectively generated more than 40,000 locations in the shared world.
Free ComfyUI Workflow to Upscale & AI Enhance Your Images! Hope you enjoy clean workflows 🔍
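The workflow itself is a ComfyUI graph, so there is nothing to paste here; as a rough code analogue of the upscale-and-enhance idea (not the author's workflow), a diffusers sketch might look like this:

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

# The x4 upscaler works best on smallish inputs (roughly 512px a side or less).
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = load_image("input.png").convert("RGB")
upscaled = pipe(
    prompt="sharp, detailed photograph",  # a light "enhance" hint
    image=low_res,
    num_inference_steps=25,
).images[0]
upscaled.save("upscaled_4x.png")
```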
submitted by /u/blackmixture [link] [comments]
A new version of HelloMeme will be released soon
submitted by /u/songkey [link] [comments]
SANA, NVIDIA's image generation model, is finally out
submitted by /u/Caffdy [link] [comments]
AnimateDiff + Reference Image + Light Map to video
submitted by /u/kenvinams [link] [comments]
It's crazy how far we've come! Excited for 2025!
The 2022 video was actually my first-ever experiment with video-to-video using Disco Diffusion; here's a tutorial I made. The 2024 version uses AnimateDiff; I have a tutorial on the workflow, but it uses different video inputs. [link] [comments]
Storytelling prompts in Flux are a game changer. My results have gotten much better.
I knew in Flux you weren't supposed to use tags like 'young man, holding sword, snowy landscape' etc.
So I was writing prompts more like 'A photograph of a young man holding a sword in a snowy arctic environment...'
Recently I started writing scenes essentially as creative-writing prompts, and the results have been so much better, e.g. 'A strong gust of wind violently tosses a thin layer of dry snow into a swirling cloud. The barren landscape is inhospitable to flora and fauna alike, yet the arctic warrior must defy these impossible elements to achieve what he has set out to do....'
And I've gotten way better images this way. Before, I just kept adding details manually, and you reach a point of diminishing returns, e.g. 'he has a metallic chest plate that is embossed with ancient characters and layered with blah blah blah and scales down the arms'; you get the point. If you write in a more creative style, it seems to do a much better job of filling out the correct details that make sense in that context.
Hope this is helpful for some people!
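As a concrete illustration, here is a minimal diffusers sketch assuming FLUX.1-dev; the only thing that changes between the two approaches is the prompt style:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps on consumer VRAM

# Tag-style prompt (what the post advises against):
tag_prompt = "young man, holding sword, snowy landscape, armor, cinematic"

# Narrative, creative-writing prompt (what the post recommends):
story_prompt = (
    "A strong gust of wind violently tosses a thin layer of dry snow into a "
    "swirling cloud. The barren landscape is inhospitable to flora and fauna "
    "alike, yet the arctic warrior grips his sword and presses on across the ice."
)

image = pipe(
    prompt=story_prompt, height=1024, width=1024,
    num_inference_steps=28, guidance_scale=3.5,
).images[0]
image.save("arctic_warrior.png")
```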
[link] [comments]
ComfyUI wrapper for HunyuanVideo - kijai/ComfyUI-HunyuanVideoWrapper
submitted by /u/marcoc2 [link] [comments]
ComfyUI vid2vid
It's not perfect, but like most videos I do, it's meant to work toward a proof of method for turning live action into anime. Are we getting closer to anime? Also, I know I messed up the film grain; it's static, lol. [link] [comments]
Hello, I'm new here, could you help me?
I'm new to AI image generation. I currently use ComfyUI and SDXL 1.0 with TensorRT. Yesterday I started trying to use ReActor, but I've noticed that my images only look great before the upscale step that fits the face into the photo.
I've already used DreamBooth (Google Colab/Shivam Shrirao) and I know how wonderful it was to see the face proportional to the size of the head, beard, hair, and everything else. Can I do the same training with images in ComfyUI to use in image generation? Is my GPU (4070 Ti Super) enough for this local training? I remember that DreamBooth took a while even on Google Colab's GPUs dedicated to this, so I'm worried about whether my GPU will be able to handle it without taking too long.
If you think face training isn't strictly necessary (e.g., sending 5 to 10 photos of the same face in different positions for training), is there a better way?
Note: I can currently use ReActor, but I see some flaws, such as an incorrect proportion of the face relative to the head, greenish and blurry ears, and a blurry face. I tried changing the upscaler, face detector, and face restore; I got a good adjustment, but it didn't eliminate these deformations. That's why I believe that training as done in DreamBooth would be ideal for a perfect face, but that's just me and my inexperience talking, lol.
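Not the poster's setup, but for anyone weighing the same question: the common route today is DreamBooth-style LoRA training (e.g., with kohya_ss or diffusers' DreamBooth LoRA scripts), which generally fits on a 16 GB card like the 4070 Ti Super, and the result loads into a normal SDXL pipeline. A hedged sketch, with the file path and the 'sks' trigger token purely illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Illustrative path: point this at whatever your trainer actually produced.
pipe.load_lora_weights("path/to/my_face_lora.safetensors")

image = pipe(
    "photo of sks person, studio portrait, detailed face",  # 'sks' = trigger token
    num_inference_steps=30,
).images[0]
image.save("portrait.png")
```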
If you are interested in my generated images, check out my profile on Civitai.
[link] [comments]
Roast my Masterpiece
The biggest flaw, in my opinion, is that there are way too many people, and it turns into a blob. If I were able to remove 95% of the people, then this would be a masterpiece. [link] [comments]
Deleted Scene from Pulp Fiction - (LTX-Video i2v + LTXTricks)
submitted by /u/JackKerawock [link] [comments]
LTX Motion Trick, missing video pipeline in VideoHelperSuite, help?
submitted by /u/AnThonYMojO [link] [comments]
How to render a black man without facial hair?
Hi guys! I'm trying to create a simple image of a basketball player standing on a basketball court.
No matter what I put in the prompt, Flux always gives the man a goatee or a beard. I can't get a result without facial hair, even if I explicitly prompt against it with terms like "clean shaven", "beardless", etc.
This is really weird. Is the model overtrained on black people with facial hair or something? I'm slowly going mad here :D
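One plausible explanation, offered as an assumption rather than a diagnosis: Flux (dev/schnell) is guidance-distilled and isn't normally run with a negative prompt, and diffusion models handle negation poorly, so words like "beardless" can inject the beard concept instead of removing it. Describing the desired face positively often works better; a minimal sketch assuming FLUX.1-dev:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Describe the face you want; avoid mentioning 'beard' even as a negation,
# since the token itself can pull the sample toward facial hair.
prompt = (
    "A Black basketball player standing on an indoor basketball court, "
    "smooth clean-shaven face, short hair, athletic jersey, "
    "photorealistic, even studio lighting"
)
image = pipe(
    prompt=prompt, height=1024, width=1024,
    num_inference_steps=28, guidance_scale=3.5,
).images[0]
image.save("player.png")
```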
[link] [comments]
Trying to keep up with AI News
submitted by /u/Cbo305 [link] [comments]