The Differential Diffusion node is a default node in ComfyUI (if updated to the most recent version). It is placed in the Model link between the Loader and the Sampler, and it really shines when inpainting with a (blurred) mask, producing high quality inpainting results.
Workflows GDrive folder:
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
The workflow shown is called 'Inpaint.json'.
The 'create grouped node' option in ComfyUI has been updated with the option to rearrange the order of the included nodes and to make inputs, widgets and outputs invisible. This greatly improves the grouped node's usability.
The new 'Default Grouped' workflow includes image ratio selection, prompt styler, Kohya Deep Shrink, Self Attention Guidance, latent upscale, face detailer, image upscale, 5 post-processing nodes and a JPG save. It can be found in the Workflows GDrive folder:
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
The IPAdapter custom node for ComfyUI AI image generation has a new composition option. There is also a new combined Composition plus Style Transfer node.
Download link for the used ComfyUI Workflows (look for the ones with IPAdapter in the name):
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
IPAdapter Github:
https://github.com/cubiq/ComfyUI_IPAdapter_plus
ComfyUI Stable Diffusion IPAdapter has been upgraded to v2. This video shows a new function, Style Transfer. We also investigate a Composition model for IPAdapter from another source.
Download link for the used ComfyUI Workflows (look for the ones with IPAdapter in the name):
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
IPAdapter Github:
https://github.com/cubiq/ComfyUI_IPAdapter_plus
Composition Model:
https://huggingface.co/ostris/ip-composition-adapter/tree/main
A person named 'Diva' created 78 styles for the ComfyUI SDXL Prompt Styler that is used in all my workflows. They work quite nicely. This video shows all styles with three prompts: house, scenery, woman. The checkpoint used is Dreamshaper XL Turbo v2.1, seed 22.
Workflows:
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
Diva styles:
https://civitai.com/models/135307/comfyui-diva-styles-workflow
Keywords: Stable Diffusion, ComfyUI, SDXL Prompt Styler, styles, Diva
With Stable Diffusion AI image generation, the Dreamshaper Turbo checkpoint has been my model of choice since its release. It needs just 6 sampling steps, which makes it fast, and I prefer its images even over those of non-Turbo checkpoints. A new trend is 'Lightning' checkpoints, which promise nice images in just 4 steps. In this video I compare 6 checkpoints:
- Dreamshaper Turbo 2.1
- RealitiesEdge Turbo 7
- Dreamshaper Lightning
- RealitiesEdge Lightning 7
- Juggernaut Lightning 9
- Realvis Lightning 4
Of course we can edit a Stable Diffusion image with an external image editor, but quite a few modifications can already take place inside ComfyUI, via a collection of post-processing nodes, like the WAS, Pro-Post, and Post Processing node collections.
Several options are available to place certain elements at a specific spot in the image to create the composition or scene you intend. My personal favorite way is to use Controlnet with either the Segments or the Depth model.
Turns out that the Segments model from CivitAI gives an error. This is the one that was used in the video: https://huggingface.co/SargeZT/sdxl-controlnet-seg/tree/main
The folder that contains the workflows plus a README:
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
The workflow used in the video is called Controlnet (Turbo).
High quality images can be made with a combination of Self-Attention Guidance and Kohya Deep Shrink plus Upscaling. Lol, in my enthusiasm on the intro screen and in the video I mixed up the term ... it is called Self-Attention Guidance ... not Self-Awareness.
With that out of the way ... some updated Stable Diffusion ComfyUI workflows are available for download. They have a lot of options, yet are very simple to operate.
- Size selector
- Style selector
- LoRA slots
- Kohya Deep Shrink
- Self-Attention Guidance
- Latent Upscale
- Face Detailer
- Image Upscale
The workflow from the video is named Default (Turbo).
The folder that contains the workflows plus a README:
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
Upscalers: https://openmodeldb.info/
Impact Pack: https://github.com/ltdrdata/ComfyUI-Impact-Pack
Kohya Deep Shrink: https://youtu.be/E3Ss9-QZ7Cw
Self-Attention Guidance: https://youtu.be/isYR4Fy0jm0
14 keywords for better images: https://youtu.be/RUHQbbMA4_c
The Kohya Deep Shrink node, which currently resides in the ComfyUI 'for testing' folder, makes it possible to generate images more than twice the default size, without upscaling. The images show more detail and look more photorealistic, even at default size, and the bonus is a speed increase of 15-20%.
Link to workflows:
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
The December 2023 update of ComfyUI contained the Self-Attention Guidance node. If you add this node in the Model link between the Loader and the Sampler, nine out of ten images will look sharper, more detailed and have more color and contrast. It comes at the cost of a little bit of speed. With Ctrl-Q the node can simply be bypassed.
There has been a study (7 months ago, in AI terms that is 'old' by now :) ) which concluded that there are 14 keywords that will in many cases improve your image when they are added to the prompt.
These keywords are: cinematic, colorful background, concept art, 8k, dramatic lighting, high detail, highly detailed, hyper realistic, intricate, intricate sharp details, octane render, smooth, studio lighting, trending on artstation.
The effect may have been more dramatic with the checkpoints that were out a year ago ... current checkpoints may be less susceptible to terms like 'octane render, 8k, trending on artstation', and they also need less, or even no, negative prompt. However, tests I conducted with Dreamshaper Turbo v2, released in February 2024, showed that the rendered images still often look (subtly) nicer with the keywords added than without.
The study: https://arxiv.org/pdf/2209.11711.pdf
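As a side note, adding the keywords can also be automated outside of ComfyUI. Below is a small Python sketch (the `enrich_prompt` helper is my own invention, not part of any workflow or the study) that appends the 14 keywords to a prompt while skipping any that are already present:

```python
# The 14 keywords from the study (arXiv:2209.11711), as listed above.
QUALITY_KEYWORDS = [
    "cinematic", "colorful background", "concept art", "8k",
    "dramatic lighting", "high detail", "highly detailed", "hyper realistic",
    "intricate", "intricate sharp details", "octane render", "smooth",
    "studio lighting", "trending on artstation",
]

def enrich_prompt(prompt, keywords=QUALITY_KEYWORDS):
    """Append the keywords to a prompt, skipping any already present
    (case-insensitive), so nothing is duplicated."""
    extra = [k for k in keywords if k.lower() not in prompt.lower()]
    return ", ".join([prompt] + extra)

print(enrich_prompt("a cozy cabin in the woods"))
```

Pasting the result into the positive prompt field gives the same effect as typing the keywords by hand.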
Outpainting can be used in ComfyUI to add a part to an image outside of the original canvas, growing it larger. With outpainting we can create multiple images, with say 1/3 overlap area, that can then be stitched together in an image editor to create a large panorama image.
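The stitching step can also be done in code instead of an image editor. Below is a minimal Python/NumPy sketch of the idea (the function name and the linear alpha ramp are my own assumptions; the video uses an image editor): it blends two overlapping renders across their shared columns.

```python
import numpy as np

def stitch_pair(left, right, overlap):
    """Stitch two H x W x C image arrays that share `overlap` columns,
    blending the shared region with a linear alpha ramp (left fades out,
    right fades in) to hide the seam."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 across overlap
    blended = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    # Non-overlapping part of left + blended seam + non-overlapping part of right.
    return np.concatenate([left[:, :-overlap], blended, right[:, overlap:]], axis=1)
```

For more than two images, the same function can be applied left to right, stitching each new render onto the growing panorama.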
Link to the workflows from the video:
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
If there's trouble with the Latent Upscale node, try this one:
https://github.com/city96/SD-Latent-Upscaler
Link to video on ComfyUI Manager:
https://youtu.be/5AHBRVaJf9k?si=nGrBiBxhw0dGDJUl
Large collection of upscalers:
https://openmodeldb.info/
Integrated Nodes can be used to create neat and tidy workflows. It is a custom node extension that can be used as an alternative to ComfyUI's built in Grouped Nodes functionality and it has the following advantages over ComfyUI Grouped Nodes:
- Integrated Nodes can be used in any workflow, whereas ComfyUI Grouped Nodes only reside in the workflow where they were created
- With Integrated Nodes we can choose which inputs, outputs and widgets we want to see or to hide, which makes the new node compact and less cluttered.
Mentioning an art style or an artist in your prompt is one of the strongest influencers of the outcome of txt2img generation in Stable Diffusion. Art styles are easy to pick via the 'Prompt Styler' node, but of course styles and/or artist names can also be added to the prompt yourself (positive or negative). Art styles and artists can be found via several websites; see the links below.
The ComfyUI workflow used in the video is the one called Default Turbo and is downloadable via the links below.
Prompt Styler node
https://stable-diffusion-art.com/sdxl-styles/
https://openaijourney.com/stable-diffusion-styles/
Art styles:
https://supagruen.github.io/StableDiffusion-CheatSheet/ (click on NOTES)
https://weirdwonderfulai.art/resources/stable-diffusion-xl-sdxl-art-medium/
Artist names:
https://supagruen.github.io/StableDiffusion-CheatSheet/
https://stablediffusion.fr/artist-style Lists 4000 artists.
https://sdxl.parrotzone.art/ Quick filter on style
https://huggingface.co/spaces/mattthew/SDXL-artists-browser Multiple filter options
https://aipromptguide.com/ artist info
Link to the workflow from the video (and more):
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
If there's trouble with the Latent Upscale node, try this one:
https://github.com/city96/SD-Latent-Upscaler
Link to a large collection of PNG image workflows:
https://drive.google.com/drive/folders/1GqKYuXdIUjYiC52aUVnx0c-lelGmO17l?usp=drive_link
Link to video on ComfyUI Manager:
https://youtu.be/5AHBRVaJf9k?si=nGrBiBxhw0dGDJUl
Large collection of upscalers:
https://openmodeldb.info
I'm very enthusiastic about Dreamshaper XL Turbo, which works in both ComfyUI and A1111, and probably all other AI image generators based on Stable Diffusion.
It is fast: it takes only 6 steps to render a high quality image. It is versatile: it supports many image styles. It does not need extensive prompting; simply tell it what you would like to see.
Link to the workflows from the video:
https://drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
If there's trouble with the Latent Upscale node, try this one:
https://github.com/city96/SD-Latent-Upscaler
Link to a large collection of PNG image workflows:
https://drive.google.com/drive/folders/1GqKYuXdIUjYiC52aUVnx0c-lelGmO17l?usp=drive_link
Link to video on ComfyUI Manager:
https://youtu.be/5AHBRVaJf9k?si=nGrBiBxhw0dGDJUl
Large collection of upscalers:
https://openmodeldb.info/
Pythongossss remains active with updates of his ComfyUI custom nodes. Recent updates added a new option to conveniently change the workflow via a dropdown list. Also, a generated image can, via the right-click menu, be transferred to another workflow that has an image load node (e.g. an img2img or inpaint workflow), after which that workflow opens with the image already loaded.
The Pythongossss Scripts can be installed via the ComfyUI manager or via Github:
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
Video on ComfyUI Manager:
https://youtu.be/5AHBRVaJf9k?si=nGrBiBxhw0dGDJUl
Previous video on Pythongossss scripts:
https://youtu.be/E5Np3Hvcj58
Every time you place a new node in a ComfyUI workflow it has round corners. What if you would like the "Box" shape with the square corners to be the default? This video shows how:
1. Download this JavaScript file
https://drive.google.com/file/d/1A7ycg0K7C7wR3FOzPRqE_WAwQc9YcqAw/view?usp=sharing
2. Move it to the ...\ComfyUI\web\extensions folder
3. Close and restart ComfyUI