Huge update to the ComfyUI Inpaint Crop and Stitch nodes to inpaint only on masked area. (incl. workflow)
Hi folks, I've just published a huge update to the Inpaint Crop and Stitch nodes. "✂️ Inpaint Crop" crops the image around the masked area, taking care of pre-resizing the image if desired, extending it for outpainting, filling mask holes, growing or blurring the mask, cutting around a larger context area, and resizing the cropped area to a target resolution. The cropped image can be used in any standard workflow for sampling. Then, the "✂️ Inpaint Stitch" node stitches the inpainted image back into the original image without altering unmasked areas. The main advantages of inpainting only in a masked area with these nodes are:
What's New?
This update does not break old workflows, but it introduces new, improved versions of the nodes that you'd have to switch to: '✂️ Inpaint Crop (Improved)' and '✂️ Inpaint Stitch (Improved)'. The improvements are:
The Inpaint Crop and Stitch nodes can be downloaded using ComfyUI-Manager: just look for "Inpaint-CropAndStitch" and install the latest version. The GitHub repository is here.
Video Tutorial
There's a full video tutorial on YouTube: https://www.youtube.com/watch?v=mI0UWm7BNtQ . It is for the previous version of the nodes, but it's still useful to see how to plug in the node and use the context mask.
Examples
(drag and droppable png workflow) (drag and droppable png workflow)
Want to say thanks? Just share these nodes, use them in your workflows, and please star the GitHub repository. Enjoy!
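If you're curious what that crop-and-stitch round trip looks like in principle, here is a minimal NumPy sketch of the idea. It is not the nodes' actual code, and the 64-pixel context margin is an illustrative default; the real nodes additionally handle pre-resizing, mask growing/blurring, hole filling, and target-resolution scaling:

```python
import numpy as np

def crop_around_mask(image, mask, context=64):
    """Crop image and mask to the mask's bounding box plus a context margin.
    image: (H, W, 3) array; mask: (H, W) boolean array."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    y0, y1 = max(ys.min() - context, 0), min(ys.max() + 1 + context, h)
    x0, x1 = max(xs.min() - context, 0), min(xs.max() + 1 + context, w)
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1], (y0, y1, x0, x1)

def stitch_back(original, inpainted_crop, crop_mask, box):
    """Paste the sampled result back, touching only masked pixels.
    inpainted_crop must first be resized back to the crop's dimensions."""
    y0, y1, x0, x1 = box
    out = original.copy()
    keep = out[y0:y1, x0:x1]
    out[y0:y1, x0:x1] = np.where(crop_mask[..., None], inpainted_crop, keep)
    return out
```

A hard np.where paste like this leaves a visible seam; using a blurred (soft) mask as the blend weight, which is what the mask-blur option is for, makes the transition seamless.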
Any time you pay money to someone in this community, you are doing everyone a disservice. Aggressively pirate "paid" diffusion models for the good of the community and because it's the morally correct thing to do.
I have never charged a dime for any LoRA I have ever made, nor would I ever, because every AI model is trained on copyrighted images. This is supposed to be an open source/sharing community. I 100% fully encourage people to leak and pirate any diffusion model they want and to never pay a dime. When things are set to "generation only" on CivitAI, like Illustrious 2.0, and you have people like the makers of Illustrious holding back releases or offering "paid" downloads, they are trying to destroy what is so valuable about enthusiast/hobbyist AI: that it is all part of the open source community.
"But it costs money to train"
Yeah, no shit. I've rented H100s and H200s. I know it's very expensive. But the point is you do it for the love of the game, or you probably shouldn't do it at all. If you're after money, go join OpenAI or Meta. You don't deserve a dime for operating on top of a community that was literally designed to be open.
The point: AI is built upon pirated work. Whether you want to admit it or not, we're all pirates. Pirates who charge pirates should have their boat sunk via cannon fire. It's obscene and outrageous how people try to grift open-source-adjacent communities.
You created a model that was built on another person's model that was built on another person's model that was built using copyrighted material. You're never getting a dime from me. Release your model or STFU and wait for someone else to replace you. NEVER GIVE MONEY TO GRIFTERS.
As soon as someone makes a very popular model, they try to "cash out" and use hype/anticipation to delay releasing a model to start milking and squeezing people to buy "generations" on their website or to buy the "paid" or "pro" version of their model.
IF PEOPLE WANTED TO ENTRUST THEIR PRIVACY TO ONLINE GENERATORS THEY WOULDN'T BE INVESTING IN HARDWARE IN THE FIRST PLACE. NEVER FORGET WHAT AI DUNGEON DID. THE HEART OF THIS COMMUNITY HAS ALWAYS BEEN IN LOCAL GENERATION. GRIFTERS WHO TRY TO WOO YOU INTO SACRIFICING YOUR PRIVACY DESERVE NONE OF YOUR MONEY.
My Krita workflow (NoobAI + Illustrious)
I want to share my creative workflow in Krita. I don't use regions; I prefer to guide my generations with brushes and colors, then I prompt about it to help the checkpoint understand what it is seeing on the canvas. I often create a filter layer with some noise, which adds tons of detail when playing with opacity and graininess. The first pass is done with NoobAI, just because it has far more creative camera angles and is more dynamic than many other checkpoints, even though it's much less sharp. After this I do a second pass at a denoise of about 25% with another checkpoint and tons of LoRAs; as you can see, I used T-Illunai this time, with many wonderful LoRAs. I hope it was helpful and I hope you can unlock some creative ideas with my workflow :)
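That second low-denoise pass maps directly onto a standard img2img call. Below is a minimal diffusers sketch of the two-pass idea, using the ~25% denoise the author mentions; the model id is a placeholder, not the author's exact Krita setup:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

# Load the refinement checkpoint (placeholder id; substitute your own finetune).
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

first_pass = Image.open("noobai_output.png").convert("RGB")

# strength=0.25 keeps the composition of the first pass and only re-details it.
refined = pipe(
    prompt="masterpiece, detailed illustration",
    image=first_pass,
    strength=0.25,
    num_inference_steps=30,
).images[0]
refined.save("second_pass.png")
```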
Updated my Nunchaku workflow V2 to support ControlNets and batch upscaling, now with First Block Cache. 3.6 second Flux images!
It can make a 10-step 1024x1024 Flux image in 3.6 seconds (on an RTX 3090) with a First Block Cache threshold of 0.150.
Then upscale to 2048x2048 in 13.5 seconds.
My custom SVDQuant finetune is here: https://civitai.com/models/686814/jib-mix-flux
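The First Block Cache number (0.150) is a similarity threshold: on steps where the first transformer block's output barely changes from the previous step, the rest of the model is skipped and the cached result is reused. A rough, framework-agnostic sketch of that caching idea, assuming hypothetical first_block and remaining_blocks callables (not Nunchaku's actual implementation):

```python
import torch

class FirstBlockCache:
    """Skip the expensive tail of the model when the first block's output
    barely changed since the previous step (sketch, not Nunchaku's code)."""

    def __init__(self, first_block, remaining_blocks, threshold=0.150):
        self.first_block = first_block            # callable: hidden -> hidden
        self.remaining_blocks = remaining_blocks  # callable: hidden -> hidden
        self.threshold = threshold
        self.prev_first = None    # first-block output from the previous step
        self.cached_tail = None   # remaining-blocks output from the previous step

    @torch.no_grad()
    def __call__(self, hidden):
        first = self.first_block(hidden)
        if self.prev_first is not None and self.cached_tail is not None:
            # Relative L1 change of the first block's output between steps.
            rel = (first - self.prev_first).abs().mean() / self.prev_first.abs().mean()
            if rel < self.threshold:
                self.prev_first = first
                return self.cached_tail  # reuse cached computation for this step
        tail = self.remaining_blocks(first)
        self.prev_first, self.cached_tail = first, tail
        return tail
```

Raising the threshold skips more steps and is faster but drifts further from the uncached result, which is why it is exposed as a tunable value.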
Bladeborne Rider
Bladeborne Rider - by HailoKnight
"Forged in battle, bound by steel — she rides where legends are born."
Ride into battle with my latest Illustrious LoRA! These models never cease to amaze me with how far we can push creativity! And the best part is seeing what you guys can make with it! :O
Example prompt used:
Hope you can enjoy! You can find the LoRA here:
Do you edit your AI images after generation? Here's a before and after comparison
Hey everyone! This is my second post here — I’ve been experimenting a lot lately and just started editing my AI-generated images.
In the image I’m sharing, the right side is the raw output from Stable Diffusion. While it looks impressive at first, I feel like it has too much detail — to the point that it starts looking unnatural or even a bit absurd. That’s something I often notice with AI images: the extreme level of detail can feel artificial or inhuman.
On the left side, I edited the image using Forge and a bit of Krita. I mainly focused on removing weird artifacts, softening some overly sharp areas, and dialing back that “hyper-detailed” look to make it feel more natural and human.
I’d love to know:
Thanks for checking it out! I’m still learning, so any feedback is more than welcome 😊
My CivitAI: espadaz Creator Profile | Civitai
Wan 2.1 I2V (So this is the 2nd version with DaVinci 2x Upscaling)
Check it out
A1111 suddenly stopped working for me after 1 yr?
Hi, I've been using A1111 with SD 1.5 for over a year, but recently I get this error. Can I get some help? I also get prompted to log in to GitHub now, which didn't happen until recently...
Wan 2.1-Fun 1.3b Really doing some heavy lifting
Images created with Flux Dev. Animated with Wan 2.1-Fun 1.3b with keyframes at the beginning, middle, and end.
Prompt: The cosmic entity slowly emerges from the darkness. Its form, a nightmarish blend of organic and arcane, shifts subtly. Tentacles writhe behind its head, their crimson tips glowing faintly. Its eyes blink slowly, the pink iris reflecting the starlight. Golden, jagged horns gleam as they catch the cosmic starlight in outer space.
I read that 1% of TV static comes from radiation of the Big Bang. Any way to use TV static as latent noise to generate images with Stable Diffusion?
See static? You're seeing the last remnants of the Big Bang.
One percent of your old TV's static comes from CMBR (Cosmic Microwave Background Radiation). CMBR is the electromagnetic radiation left over from the Big Bang. We humans, 13.8 billion years later, are still seeing the leftover energy from that event.
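In principle yes: the sampler only expects the initial latent to be (roughly) standard Gaussian noise, so you could photograph static, cut it down to the latent grid, and standardize it. A hedged diffusers sketch; the file name and model id are placeholders, and since real static isn't perfectly Gaussian or independent across channels, results may drift from what the model was trained on:

```python
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder id
).to("cuda")

# Load a photo of TV static and cut four 64x64 patches, one per latent channel
# (assumes the photo is at least 256x64 pixels).
static = np.asarray(Image.open("tv_static.png").convert("L"), dtype=np.float32)
patches = [static[i * 64:(i + 1) * 64, :64] for i in range(4)]
noise = np.stack(patches)[None]  # shape (1, 4, 64, 64)

# Standardize to zero mean and unit variance, which is what the sampler expects.
noise = (noise - noise.mean()) / (noise.std() + 1e-8)
latents = torch.from_numpy(noise).half().to("cuda")

# A 512x512 output corresponds to the 64x64 latent grid in SD 1.5.
image = pipe("an astronaut riding a horse", latents=latents).images[0]
image.save("from_static.png")
```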
Looking for an extension but forgot the name
I stopped using Stable Diffusion for over a year and did a clean install, but now I forget a useful extension I had. It lets you delete checkpoints/LoRAs easily and gives you the prompts for the LoRA you're using.
Changed Drive Letter, now getting "Fatal error in launcher: Unable to create process using"
Can anyone make sense of what's going on? My next step is to scrap it and start from scratch, but if there's a simple fix, that would be great too!
--------------------------
F:\SD-JAN2025\venv\Scripts>activate.bat
(venv) F:\SD-JAN2025\venv\Scripts>pip3 uninstall torch
Fatal error in launcher: Unable to create process using '"G:\SD-JAN2025\venv\Scripts\python.exe" "F:\SD-JAN2025\venv\Scripts\pip3.exe" uninstall torch': The system cannot find the file specified.
(venv) F:\SD-JAN2025\venv\Scripts>pip uninstall torch
Fatal error in launcher: Unable to create process using '"G:\SD-JAN2025\venv\Scripts\python.exe" "F:\SD-JAN2025\venv\Scripts\pip.exe" uninstall torch': The system cannot find the file specified.
(venv) F:\SD-JAN2025\venv\Scripts>py pip uninstall torch
C:\Users\*user*\AppData\Local\Programs\Python\Python312\python.exe: can't open file 'F:\\SD-JAN2025\\venv\\Scripts\\pip': [Errno 2] No such file or directory
(venv) F:\SD-JAN2025\venv\Scripts>pip uninstall torch
Fatal error in launcher: Unable to create process using '"G:\SD-JAN2025\venv\Scripts\python.exe" "F:\SD-JAN2025\venv\Scripts\pip.exe" uninstall torch': The system cannot find the file specified.
(venv) F:\SD-JAN2025\venv\Scripts>where python
F:\SD-JAN2025\venv\Scripts\python.exe
C:\Users\*user*\AppData\Local\Programs\Python\Python310\python.exe
C:\Users\*user*\AppData\Local\Programs\Python\Python312\python.exe
C:\Users\*user*\AppData\Local\Microsoft\WindowsApps\python.exe
(venv) F:\SD-JAN2025\venv\Scripts>deactivate.bat
F:\SD-JAN2025\venv\Scripts>where python
F:\SD-JAN2025\venv\Scripts\python.exe
C:\Users\*user*\AppData\Local\Programs\Python\Python310\python.exe
C:\Users\*user*\AppData\Local\Programs\Python\Python312\python.exe
C:\Users\*user*\AppData\Local\Microsoft\WindowsApps\python.exe
-------------------------------------
Fatal error in launcher: Unable to create process using '"G:
It still points to my old drive letter, G.
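For what it's worth: pip.exe and pip3.exe are tiny launcher executables that hard-code the absolute path of the venv's python.exe at install time, which is why they still reference G:\ after the drive letter changed. Since the venv's python.exe itself still runs (as the `where python` output above shows), one likely fix is to bypass the broken launchers and let pip regenerate them in place:

F:\SD-JAN2025\venv\Scripts>python -m pip install --force-reinstall pip

If other scripts under Scripts\ (such as activate.bat, which also embeds the old path) keep misbehaving afterwards, deleting the venv folder and letting the webui recreate it on the next launch is the clean fallback.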
Is there any way to improve the Trellis model?
Hi everyone,
Recently, I’ve been digging deeper into how Trellis works to see if there are ways to improve the output quality. Specifically, I’m exploring ways to evaluate and enhance rendered images from 360-degree angles, aiming for sharper and more consistent results. (Previously, I mainly focused on improving image quality by using better image generation models like Flux-Pro 1.1 or optimizing evaluation metrics.)
I also came across Hunyuan3D V2, which looks promising—but unfortunately, it doesn’t support exporting to Gaussian Splatting format.
Has anyone here tried improving Trellis, or has any idea how to enhance the 3D generation pipeline? Maybe we can brainstorm together for the benefit of the community.
Example (Trellis + Flux-Pro 1.1), prompt: 3D butterfly with colourful wings
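On the evaluation side, one cheap objective score for those 360-degree renders is the variance of the Laplacian, an edge-energy proxy for sharpness, which makes pipeline tweaks comparable across runs. A minimal OpenCV sketch; the renders/*.png layout of per-angle frames is an assumed convention:

```python
import glob
import cv2

def sharpness(path):
    """Variance of the Laplacian: higher means sharper edges."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Score every rendered view and report the per-frame and mean sharpness.
scores = {p: sharpness(p) for p in sorted(glob.glob("renders/*.png"))}
for path, score in scores.items():
    print(f"{path}: {score:.1f}")
print("mean sharpness:", sum(scores.values()) / len(scores))
```

The absolute numbers are meaningless on their own; they only become useful when comparing the same camera angles across two versions of the pipeline.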