
The new ControlNet++ union model does NOT require control type information

All of the following examples were created without running the condition transformer or the control encoder. I modified the current ControlNet implementation in A1111 to get it working with those components missing. The images on the left are literally the input to the union model, with no preprocessors applied; the images on the right are the txt2img results. You can even feed in raw images and it still works. I think it's kind of wild:

OpenPose + Background image

Canny + Background image

OpenPose (it works even without control type information)

Simple 3D render, no textures


The architecture I'm using:

https://preview.redd.it/n0rn7vtdd8cd1.png?width=678&format=png&auto=webp&s=d3a4b56b38df7f6a3acc550e840bfd763502a023
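For anyone wondering what running the union model "with a leg missing" might look like in code, here is a minimal, hypothetical PyTorch sketch. The module names, shapes, and layers are made up for illustration and are not the actual ControlNet++ or A1111 code: the idea is simply that the condition transformer and control encoder are skipped, the raw control image is embedded directly, and the control-type id is accepted but ignored.

```python
import torch
import torch.nn as nn

class BareUnionControlNet(nn.Module):
    """Hypothetical sketch: union-style ControlNet forward pass with the
    condition transformer and control encoder bypassed entirely."""

    def __init__(self, channels: int = 320):
        super().__init__()
        # stand-in for the conditioning embedding that maps the raw,
        # unpreprocessed control image into the latent feature space
        self.cond_embedding = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # stand-ins for the ControlNet down blocks that produce residuals
        self.down_blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1) for _ in range(3)
        )

    def forward(self, latent, control_image, control_type=None):
        # control_type is deliberately unused: the point of the experiment
        # is that the union model still responds to the raw image alone.
        hidden = latent + self.cond_embedding(control_image)
        residuals = []
        for block in self.down_blocks:
            hidden = block(hidden)
            residuals.append(hidden)  # later added to the UNet skip connections
        return residuals

# Usage with dummy tensors (shapes illustrative only)
latent = torch.randn(1, 320, 64, 64)
control_image = torch.randn(1, 3, 64, 64)  # e.g. an OpenPose or canny image, unprocessed
residuals = BareUnionControlNet()(latent, control_image)
```

In the real union model the residuals would feed into the UNet's skip connections exactly as with a normal ControlNet; the only change in this experiment is that the extra conditioning machinery in front of the down blocks is bypassed.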

submitted by /u/Suspicious_Bag3527

Am I wasting time with AUTOMATIC1111?

I've been using A1111 for a while now and I can do good generations, but I see people doing incredible stuff with ComfyUI, and it seems to me that the technology evolves much faster there than in A1111.

The problem is that it seems very complicated and tough to use for a guy like me who doesn't have much time to try things out, since I rent a GPU on vast.ai.

Is it worth learning ComfyUI? What do you guys think? What are the advantages over A1111?

submitted by /u/Bass-Upbeat