Having trouble with image cutouts? Try this free AI solution
Previously, creating an image with a transparent background meant hours of meticulous work in Photoshop, and even then a small slip could leave the result far short of expectations, which was deeply frustrating. AI background removal helps, but on complex images it can still struggle to cut out every element cleanly. The arrival of LayerDiffusion has changed the game: with just a few simple prompts, it quickly generates a transparent-background image that meets our expectations, sweeping away those old troubles.
This tool is the latest work from the creators of ControlNet. But how exactly does LayerDiffusion achieve this, and how can we use it effectively? Let's dig in.
1. Technical Principles
What sets LayerDiffusion apart is its ability to capture an image's transparency information precisely while keeping the overall appearance of the image intact. Compared with traditional cutout techniques that work directly on pixel colors, this latent diffusion approach makes the cutout process both easier and more accurate.
Building on this latent representation of transparency, the tool trains a foundational model for generating transparent images, and it can also train multi-layer models that generate several layers at once. When training the basic diffusion model (Scenario a), the weights of the entire model are trainable; when training the multi-layer model (Scenario b), only two LoRAs (a foreground LoRA and a background LoRA) are trainable.
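To make the idea more concrete, here is a toy sketch of how transparency can be folded into an existing RGB latent as a small offset, so that a standard latent diffusion model can still work with it. This is not the official implementation; the shapes, scale factor, and function names are all illustrative.

```python
# Toy sketch of the "latent transparency" idea: fold the alpha channel into a
# small, zero-centred offset on the RGB latent, so a standard latent diffusion
# model can carry transparency without its latent distribution being disturbed.
# NOT the official implementation; shapes, scale, and names are illustrative.
import numpy as np

LATENT_SCALE = 0.05  # illustrative magnitude of the transparency offset

def encode_latent_transparency(rgb_latent: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Stand-in for a learned transparency encoder."""
    alpha_small = alpha[::8, ::8]                            # downsample to latent resolution
    offset = LATENT_SCALE * (alpha_small[None, ...] - 0.5)   # tiny zero-centred perturbation
    return rgb_latent + offset                               # "adjusted" transparent latent

def decode_alpha(adjusted_latent: np.ndarray, rgb_latent: np.ndarray) -> np.ndarray:
    """Stand-in for a learned transparency decoder."""
    offset = (adjusted_latent - rgb_latent).mean(axis=0)
    return np.clip(offset / LATENT_SCALE + 0.5, 0.0, 1.0)

rgb_latent = np.random.randn(4, 64, 64).astype(np.float32)    # fake SD latent (4x64x64)
alpha = (np.random.rand(512, 512) > 0.5).astype(np.float32)   # fake 512x512 alpha mask
adjusted = encode_latent_transparency(rgb_latent, alpha)
recovered = decode_alpha(adjusted, rgb_latent)
print(np.abs(recovered - alpha[::8, ::8]).max())              # ~0: alpha survives the round trip
```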
2. Installation
Firstly, you'll need to download the ComfyUI LayerDiffuse extension. If you're not sure how to install it, you can follow this tutorial: How to install ComfyUI extensions?
You can also download the workflow the extension author has prepared on GitHub and import it into ComfyUI or Comflowy. After importing the workflow into Comflowy, you'll see a prompt about missing plugins; just click the install button to install them in one click.
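If you prefer queuing the workflow from a script rather than the UI, ComfyUI also exposes a local HTTP API (by default at http://127.0.0.1:8188). A minimal sketch, assuming a running ComfyUI server and a workflow exported with "Save (API Format)"; the file name below is just an example:

```python
# Minimal sketch: queue an exported workflow via ComfyUI's local HTTP API.
# Assumes ComfyUI is running on the default port and the workflow was saved
# with "Save (API Format)"; the file name is illustrative.
import json
import urllib.request

with open("layerdiffuse_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # returns a prompt_id if the job was queued
```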
LayerDiffusion supports the SDXL and SD 1.5 base models; the checkpoint I downloaded here is DreamShaper XL Lightning. If you want to try more high-quality models, you can find and download them on the Model page.
3. Effect Demonstration
This is a workflow for generating transparent assets, where I used a simple prompt to generate a dog. The cutout dog's outline is very precise, and even the reflections along the edges are captured accurately.
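Once the image is generated, you can quickly confirm that the PNG really carries an alpha channel, or drop the cutout onto any background you like. A small sketch using Pillow; the file names are placeholders:

```python
# Small sketch using Pillow (pip install pillow); file names are illustrative.
from PIL import Image

cutout = Image.open("dog_transparent.png").convert("RGBA")
print(cutout.mode, cutout.getextrema()[3])   # alpha channel (min, max), e.g. (0, 255)

background = Image.new("RGBA", cutout.size, (255, 255, 255, 255))  # plain white backdrop
composited = Image.alpha_composite(background, cutout)
composited.convert("RGB").save("dog_on_white.jpg", quality=95)
```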
Of course, you can also download your favorite LoRA model to control the style of the final output. Here I used a LoRA that generates anime sticker-style images, and as you can see, the resulting puppy is very cute, with a real cartoon-sticker feel.
The LoRA model I used here is Stickers.Redmond - Stickers Lora for SD XL. It generates sticker images with great color and texture; if you're interested, give it a try.
In our previous issue AI-Weekly-010, we introduced the Face-to-sticker tool that converts face photos into stickers online. Now, you can achieve this effect through this workflow.
Prompt: Taylor Swift, portrait, sticker, 8k, high quality
Prompt: Justin Bieber, portrait, sticker, 8k, high quality
Prompt: Anne Hathaway, portrait, sticker, 8k, high quality