Yesterday — 22 May 2025 · StableDiffusion

GrainScape UltraReal - Flux.dev LoRA

22 May 2025 at 00:54

This updated version was trained on a completely new dataset, built from scratch to push both fidelity and personality further.

Vertical banding on flat textures has been noticeably reduced—while not completely gone, it's now much rarer and less distracting. I also enhanced the grain structure and boosted color depth to make the output feel more vivid and alive. Don’t worry though—black-and-white generations still hold up beautifully and retain that moody, raw aesthetic. Also fixed "same face" issues.

Think of it as the same core style—just with a better eye for light, texture, and character.
Here you can take a look and test it yourself: https://civitai.com/models/1332651

submitted by /u/FortranUA
[link] [comments]

Destruction & Damage - Break your stuff! LoRA for Flux!

22 May 2025 at 14:22

Flux and other image models are really bad at creating destroyed or damaged things by default. My LoRA is quite an improvement. You also get a more photorealistic look than with the Flux Dev base model alone. Destruction & Damage - Break your stuff! - V1 | Flux LoRA | Civitai
Tutorial Knowledge:
https://www.youtube.com/watch?v=6_PEzbPKk4g

submitted by /u/Little-God1983
[link] [comments]

I bought a used GPU...

21 May 2025 at 23:45

I bought a (renewed) 3090 on Amazon for around 60% below the price of a new one. Then I was surprised that when I put it in, it had no output. The fans ran, lights worked, but no output. I called Nvidia who helped me diagnose that it was defective. I submitted a request for a return and was refunded, but the seller said I did not need to send it back. Can I do anything with this (defective) GPU? Can I do some studying on a YouTube channel and attempt a repair? Can I send it to a shop to get it fixed? Would anyone out there actually throw it in the trash? Just wondering.

submitted by /u/Gloomy_Astronaut8954
[link] [comments]

Anything speaking against an MSI GeForce RTX 5090 32G GAMING TRIO OC for Stable Diffusion?

22 May 2025 at 12:44

A friend bought this card, then decided to go with something else, and is offering it to me for 10% less than the shop price. Is this a good choice for Stable Diffusion and training LoRAs, or is there something speaking against it?

submitted by /u/MarinatedPickachu
[link] [comments]

Badge Bunny Episode 0

21 May 2025 at 17:34
Badge Bunny Episode 0

Here we are. The test episode is complete, made to try out some features of various engines, models, and apps for a fantasy/western/steampunk project.
Various info:
Images: created with MJ7 (the new omnireference is super useful)
Sound Design: I used both ElevenLabs (for voices and some sounds) and Kling (more for some effects, but it's much more expensive and offers more or less the same as ElevenLabs)
Motion: Kling 1.6 (yeah, I didn’t use version 2 because it’s super pricey — I wanted to see what I could get with the base 1.6 using 20 credits. I’d say it turned out pretty good)
Lipsync: and here comes the big discovery! The best lipsync engine by far, which also generates lipsynced video, is in my opinion Wan 2.1 Fantasy Speaking. Exceptional. Just watch when the sheriff says: "Try scamming someone who's carrying a gun." 😱
Final note: I didn’t upscale anything — everything is LD. I’m lazy. And I was more interested in testing other aspects!
Feedback is always welcome. 😍
PLEASE SUBSCRIBE IF YOU LIKE:
https://www.youtube.com/watch?v=m_qMt2fsgV4&ab_channel=CortexSoundCollective
for more Episodes!

submitted by /u/TheNocturnalista
[link] [comments]

How do you check for overfitting on a LoRA model?

22 May 2025 at 06:23

Basically what the title says. I've tested every epoch at full strength (LoRA:1.0), but each one shows distortion, so LoRA:0.75 is the highest strength I can use without it. Ideally I'd like to run at full LoRA:1.0 strength, but it distorts too much.

Trained on illustrious with civitai's trainer following this article's suggestion for training parameters: https://civitai.com/articles/10381/my-online-training-parameter-for-style-lora-on-illustrious-and-some-of-my-thoughts

I only had 32 images to work with (the above style is from my own digital artworks), so it was 3 repeats with a batch size of 3, for a total of 150 epochs.
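The training budget described above can be sanity-checked with quick arithmetic. This is a sketch: the repeat/batch semantics follow common kohya-style trainers and may differ slightly from Civitai's trainer.

```python
# Rough step count for the run described above:
# 32 images, 3 repeats, batch size 3, 150 epochs.
images = 32
repeats = 3
batch_size = 3
epochs = 150

# Each epoch sees every image `repeats` times, grouped into batches.
steps_per_epoch = (images * repeats) // batch_size  # 96 samples -> 32 steps
total_steps = steps_per_epoch * epochs

print(f"{steps_per_epoch} steps/epoch, {total_steps} total steps")
```

At roughly 4,800 total steps over only 32 images, overfitting well before the final epoch is plausible, which would be consistent with the LoRA only behaving at 0.75 strength; comparing much earlier epochs at full strength is a cheap first check.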

submitted by /u/HydroChromatic
[link] [comments]

Got A New GPU, What Should I Expect It To Do?

22 May 2025 at 14:14

So, I have been using the 3060 for a while. It was a good card, served me well with SDXL. I was quite content with it. But then someone offered me a 3090 for like $950, and I took it. So now I'm going to have a 3090. And that's 24gb of vram.

But aside from running faster, I don't actually know what this enables me to generate in terms of models. I assume it means I should be able to run Flux Dev without needing quants, probably? I guess what I'm really asking is: what sorts of things can you run on a 3090 that you can't on a 3060, or that run worse on the weaker card?

I want to make a list of things for me to try out when I install it into my tower.
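One way to frame what 24 GB buys you is a back-of-envelope estimate of the VRAM needed just to hold model weights. The parameter counts below are approximate public figures, and real usage is higher once text encoders, the VAE, and activations are loaded:

```python
# Rough VRAM footprint of model weights alone.
def weight_vram_gb(params_billion: float, bytes_per_param: int) -> float:
    """Estimate GiB needed to store the weights of a model."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# Approximate parameter counts; precision options shown for comparison.
models = {
    "SDXL (~3.5B, fp16)": weight_vram_gb(3.5, 2),
    "Flux Dev (~12B, fp16)": weight_vram_gb(12, 2),
    "Flux Dev (~12B, fp8)": weight_vram_gb(12, 1),
}
for name, gb in models.items():
    print(f"{name}: ~{gb:.1f} GB")
```

By this estimate, fp16 Flux Dev weights alone are around 22 GB, which is exactly the regime where a 24 GB 3090 can work unquantized while a 12 GB 3060 needs fp8 or other quantized variants.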

submitted by /u/ArmadstheDoom
[link] [comments]

Advice for doing canny with controlnet to replace a character in a pre-existing image?

22 May 2025 at 13:55

So, for example:

I have this image of kurumi here

but I'm trying to replace it with tionishia here

Any advice for getting better results? It still looks low quality despite my source images being high quality, like this.

I was also wondering how I can get her actual character in the shot and not an aged-down version of her. It just looks weird to me that it's trying to match kurumi 1:1, so it ages her down. Is there any way I can improve the image and background so it looks higher quality?

I'm really happy with what canny can do so far, but I just want to get better results so I can replace all my favorite images with astraea and tio.

submitted by /u/mil0wCS
[link] [comments]

One of the banes of this scene is when something new comes out

21 May 2025 at 16:01

I know we don't mention the paid services, but what just came out makes most of what is on here look like monkeys with crayons. I am deeply jealous, and tomorrow will be a day of therapy reminding myself why I stick to open source all the way. I love this community, but sometimes it's sad to see the corporate world blazing ahead with huge leaps, knowing they do not have our best interests at heart.

This is the only place that might understand the struggle. Most people seem very excited by the new release out there. I am just disheartened by it. The corporates, as always, control everything, and that sucks balls.

Rant over. Thanks for listening. I mean, it is an amazing leap that just took place, but I'm not sure how my PC is ever going to match it with offerings from the open source world, and that sucks.

submitted by /u/superstarbootlegs
[link] [comments]

I need help with Ai video and image

22 May 2025 at 14:32

Hey everyone! 🙏 I’m currently working on an Indian-style mythology web series and looking for an AI-based video editor (like Pika Labs, Runway, or similar) who can help me put together a short promo video (15–30 seconds).

The series has a mythological fantasy vibe—think reincarnation, curses, dramatic moments, and flower-filled scenes. I already have a concept and reference images for the promo. I'd love someone who can help create a visual-heavy, cinematic teaser using AI-generated images of the actors.

submitted by /u/AlphaMistress1
[link] [comments]

Similar repo to omni-zero

22 May 2025 at 13:50

Hello guys! Earlier I found a repo named omni-zero; its function is zero-shot stylized portrait creation. But I found out it needs over 20 GB of VRAM, which means I'd need an A100 or V100 on Colab. So I wonder, can someone recommend a repo with a similar function that can run on a GTX 2080 Ti with 16 GB of VRAM or less, or at least on a T4? Thanks.

submitted by /u/NoOne8141
[link] [comments]

SDXL workflow for inpainting, for a professional image shot in a studio?

22 May 2025 at 12:58

As a professional photographer, SDXL was quite mind-blowing when it first came out. I have never felt so cooked in my career. Over time, I've been learning to integrate it into my workflow, and now I want to primarily use it for editing instead of Photoshop. I would love a suggestion for a workflow that can separate my subject from the background and change it into something more dynamic and eye-catching. Please help🙏🏽
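On the subject-from-background part of the question: once any segmenter has produced a subject mask (rembg, SAM, or a segmentation node in ComfyUI are common options, named here as examples rather than a prescribed workflow), the background swap itself is plain alpha compositing. A minimal sketch with Pillow and NumPy, assuming you already have the mask:

```python
import numpy as np
from PIL import Image

def replace_background(subject: Image.Image,
                       mask: Image.Image,
                       background: Image.Image) -> Image.Image:
    """Composite `subject` over `background` using a grayscale mask
    (white = keep subject, black = show background)."""
    bg = background.resize(subject.size).convert("RGB")
    m = np.asarray(mask.convert("L"), dtype=np.float32)[..., None] / 255.0
    fg = np.asarray(subject.convert("RGB"), dtype=np.float32)
    out = fg * m + np.asarray(bg, dtype=np.float32) * (1.0 - m)
    return Image.fromarray(out.astype(np.uint8))

# Tiny synthetic demo: a white "subject" square composited onto red.
subject = Image.new("RGB", (64, 64), (255, 255, 255))
mask = Image.new("L", (64, 64), 0)
mask.paste(255, (16, 16, 48, 48))  # subject occupies the center
result = replace_background(subject, mask, Image.new("RGB", (64, 64), (255, 0, 0)))
```

The new background itself can then come from an SDXL text-to-image or outpainting pass; feathering the mask edge (e.g. a Gaussian blur on `mask`) usually gives a cleaner blend than a hard cutout.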

submitted by /u/ExpertBackground5214
[link] [comments]

Text-to-Panorama Generation: How do you commercialize this?

22 May 2025 at 12:45

Hi, I have seen a lot of research on text-to-panorama, but rarely any companies or startups doing it commercially. What do you think is the reason? Are you familiar with any startups doing this besides SkyboxAI?

I can see many good opportunities which haven't been explored yet.

For example, architectural firms could quickly create assets and check the environment around a location, or VFX production companies could make a panorama for script-to-storyboarding during the early filmmaking process. Granted, that is extremely niche, but it is a great use case of GenAI as a tool that saves man-hours rather than replacing someone's job.

Hell, you can also use it for VR, but that's obviously not so popular, so there's that.

What is your opinion? Do you think it's a viable idea to commercialize this?

submitted by /u/karenbaskins
[link] [comments]

Best model or setup for face swapping?

22 May 2025 at 08:28

What is the best model for doing face swaps? I'd like to create characters with consistent faces across different pictures that I can use for commercial purposes (which rules out Flux Redux and Fill).

I've got ComfyUI installed on my local machine but I'm still learning how it all works. Any help would be good.

submitted by /u/KiwiNFLFan
[link] [comments]