Create Consistent, Editable AI Characters & Backgrounds for your Projects! (ComfyUI Tutorial)
TLDR: This tutorial showcases a workflow for creating AI-generated characters and backgrounds with Stable Diffusion 1.5 and SDXL. It teaches how to generate multiple character views, integrate them into various styles, and control emotions with prompts. The guide includes a free pose sheet of character bones, a step-by-step installation guide, and tips for character customization. The video also explores using the workflow to create AI influencers, such as a cheese influencer, and to place characters into backgrounds with expressions and poses.
Takeaways
- 😀 The video tutorial teaches how to create AI characters and integrate them into backgrounds for projects like children's books, AI movies, or AI influencers.
- 🎨 The workflow is compatible with Stable Diffusion 1.5 and SDXL, allowing for any style of character and background generation.
- 📈 A pose sheet is introduced, which depicts a character's bones from different angles and can be used to generate multiple views of a character in one image.
- 🔧 The tutorial provides a step-by-step guide for installing and setting up the workflow in ComfyUI, including where to find and place the models.
- 👤 The presenter demonstrates creating an AI influencer character, starting with a basic prompt and refining it to fit a specific niche, such as a cheese influencer.
- 🧑🎨 Tips are given for improving character generation, such as adding descriptive prompts and adjusting the seed in the sampler for better consistency.
- ✨ The face detailer tool is utilized to enhance the quality of faces in the generated images, making them more consistent and realistic.
- 😜 The process includes generating different expressions for the character by adjusting the prompt and using the face detailer with specific settings.
- 🖼️ The workflow allows saving different poses as separate images and then combining them into a full character sheet.
- 🌄 The final part of the workflow involves placing the character into various locations and giving them props, like cheese, to present to the audience.
- 🔗 Additional tools and resources are mentioned, such as training a character model with generated images or using Midjourney's character reference tool for placing the character in different scenes.
Q & A
What is the main purpose of the video tutorial?
-The main purpose of the video tutorial is to demonstrate how to create consistent, AI-generated characters, pose them automatically, integrate them into backgrounds, and control their emotions using simple prompts with a workflow compatible with Stable Diffusion 1.5 and SDXL.
What is the significance of the pose sheet in the workflow?
-The pose sheet is significant because it depicts a character's bones from different angles in the OpenPose format, which allows ControlNet to generate characters based on these bones, enabling the creation of multiple views in a single image.
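For orientation, here is a minimal sketch of that wiring in ComfyUI's API (JSON) format. The node class names are standard ComfyUI built-ins, but the checkpoint, ControlNet, and pose-sheet file names, prompts, and sampler values are placeholders to swap for your own:

```python
# Minimal ComfyUI API-format graph: condition generation on the pose sheet
# via an OpenPose ControlNet. File names and prompt text are placeholders.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "your_model.safetensors"}},
    "2": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "openpose_controlnet.safetensors"}},
    "3": {"class_type": "LoadImage",
          "inputs": {"image": "pose_sheet.png"}},   # the multi-angle bone sheet
    "4": {"class_type": "CLIPTextEncode",           # positive prompt
          "inputs": {"text": "character sheet of a cheese influencer", "clip": ["1", 1]}},
    "5": {"class_type": "CLIPTextEncode",           # negative prompt
          "inputs": {"text": "blurry, deformed", "clip": ["1", 1]}},
    "6": {"class_type": "ControlNetApply",          # the bones constrain every pose
          "inputs": {"conditioning": ["4", 0], "control_net": ["2", 0],
                     "image": ["3", 0], "strength": 1.0}},
    "7": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "8": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "dpmpp_2m", "scheduler": "karras",
                     "positive": ["6", 0], "negative": ["5", 0],
                     "latent_image": ["7", 0], "denoise": 1.0}},
    "9": {"class_type": "VAEDecode",
          "inputs": {"samples": ["8", 0], "vae": ["1", 2]}},
    "10": {"class_type": "SaveImage",
           "inputs": {"images": ["9", 0], "filename_prefix": "character_sheet"}},
}
```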
How can I access the pose sheet mentioned in the video?
-The pose sheet can be downloaded for free from the creator's Patreon page.
What are the recommended steps to set up the workflow in ComfyUI?
-The recommended steps include importing the pose sheet into ComfyUI, choosing a model, matching the KSampler settings to the model's recommendations, and using the custom workflow with the step-by-step guide provided by the creator.
Can I use any model with the workflow?
-Yes, any model can be used as long as the KSampler settings are matched to the ones recommended for that model. For SD 1.5 models, however, the ControlNet should be switched to an OpenPose ControlNet compatible with SD 1.5.
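As a rough guide, these are common community starting points per model family; they are assumptions, not the tutorial's exact numbers, so always defer to the model card:

```python
# Rough KSampler starting points per model family (assumed defaults) --
# always match the recommendations on the model's own card.
KSAMPLER_PRESETS = {
    "sd15":       {"steps": 25, "cfg": 7.0, "sampler_name": "dpmpp_2m",  "scheduler": "karras"},
    "sdxl":       {"steps": 30, "cfg": 6.0, "sampler_name": "dpmpp_2m",  "scheduler": "karras"},
    # Turbo checkpoints are distilled for very few steps at low CFG:
    "sdxl_turbo": {"steps": 6,  "cfg": 2.0, "sampler_name": "dpmpp_sde", "scheduler": "karras"},
}
```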
What is the role of the 'face detailer' in the workflow?
-The 'face detailer' automatically detects all the faces in the image and re-diffuses them to improve consistency and quality, especially small or broken faces in the preview image.
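The FaceDetailer node ships with the ComfyUI Impact Pack custom nodes; a hedged sketch of the inputs that matter most here, with illustrative values rather than the tutorial's exact settings:

```python
# Key FaceDetailer (ComfyUI Impact Pack) settings -- illustrative values only.
face_detailer_settings = {
    # In the real graph the detector is a connected loader node; this is the
    # commonly used face-detection model it loads:
    "bbox_detector": "bbox/face_yolov8m.pt",
    "guide_size": 384,  # small faces are upscaled to this size before re-diffusion
    "denoise": 0.5,     # how strongly each detected face is re-rendered
    "feather": 5,       # soft edge when blending the fixed face back in
}
```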
How can I save different poses of the character as separate images?
-The workflow includes a step that allows saving different poses as separate images by cutting them out and saving them in the subsequent steps.
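The workflow handles this with image-crop nodes, but a hypothetical Pillow sketch shows the idea, assuming the poses sit in a regular grid (here 1 row by 5 columns; adjust to your sheet's layout):

```python
# Slice a generated character sheet into individual pose images,
# assuming the poses are laid out in a regular grid.
from PIL import Image

def split_pose_sheet(path: str, rows: int = 1, cols: int = 5) -> list[Image.Image]:
    sheet = Image.open(path)
    cell_w, cell_h = sheet.width // cols, sheet.height // rows
    poses = []
    for r in range(rows):
        for c in range(cols):
            box = (c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h)
            poses.append(sheet.crop(box))
    return poses

for i, pose in enumerate(split_pose_sheet("character_sheet.png")):
    pose.save(f"pose_{i:02d}.png")
```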
What is the purpose of adding 'Pixar character' as a prompt in the face detailer settings?
-Adding 'Pixar character' as a prompt helps the face detailer generate expressions in line with the style of Pixar characters, enhancing the realism and consistency of the character's face.
How can I generate expressions for the character in the workflow?
-Expressions for the character can be generated by running the face detailer with additional descriptive prompts and adjusting the denoise strength to control the intensity of the new expressions.
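A hedged sketch of automating this: queue one pass per expression through ComfyUI's HTTP API (POST /prompt on the default local server), using a graph exported via "Save (API Format)". The node ids "4" (positive prompt) and "8" (denoise) are hypothetical; check your own export:

```python
# Queue one render per expression via ComfyUI's /prompt endpoint.
import copy, json, urllib.request
from pathlib import Path

workflow = json.loads(Path("workflow_api.json").read_text())  # exported API-format graph
EXPRESSIONS = ["happy, big smile", "sad, teary eyes", "angry, furrowed brows"]

for strength, expression in zip((0.45, 0.50, 0.55), EXPRESSIONS):
    wf = copy.deepcopy(workflow)
    wf["4"]["inputs"]["text"] = f"Pixar character, {expression}"
    wf["8"]["inputs"]["denoise"] = strength  # higher = more intense new expression
    body = json.dumps({"prompt": wf}).encode()
    urllib.request.urlopen(urllib.request.Request("http://127.0.0.1:8188/prompt", data=body))
```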
What are the different ways to integrate the character into the background in the workflow?
-The workflow offers three ways to integrate the character into the background: using a latent noise mask to fix seams, denoising the full background to match the character, and using a blur node to address different focal planes and create a more cinematic look.
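The tutorial does all three inside ComfyUI; this Pillow sketch only illustrates the focal-plane idea behind the blur step, assuming a character image with an alpha channel:

```python
# Blur the background onto a different focal plane, then composite the
# sharp character back on top using its alpha channel as the paste mask.
from PIL import Image, ImageFilter

background = Image.open("background.png").convert("RGB")
character = Image.open("character_rgba.png")  # assumed to carry an alpha channel

blurred = background.filter(ImageFilter.GaussianBlur(radius=4))
blurred.paste(character, (0, 0), mask=character.split()[-1])
blurred.save("composited.png")
```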
How can I train my own model based on the generated character images?
-To train your own model, you can save out all the different images of the character's faces using a save image node in the workflow, and then use these images for training with the appropriate software or tools.
What is the final step in the workflow for creating a full character sheet?
-The final step in the workflow is to add all the different expressions together, upscale them, and also upscale the single image of the face to complete the full character sheet.
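A hypothetical stand-in for that assembly step: tile the saved expression crops into one sheet and upscale with a plain Lanczos resize (the workflow itself uses an upscale node; filenames and grid size are assumptions):

```python
# Tile saved expression images into a 3x2 sheet, then upscale 2x.
from PIL import Image

files = [f"expression_{i:02d}.png" for i in range(6)]  # assumed filenames
tiles = [Image.open(f) for f in files]
w, h = tiles[0].size
sheet = Image.new("RGB", (w * 3, h * 2))
for i, tile in enumerate(tiles):
    sheet.paste(tile, ((i % 3) * w, (i // 3) * h))
sheet = sheet.resize((sheet.width * 2, sheet.height * 2), Image.LANCZOS)
sheet.save("expression_sheet_2x.png")
```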
Outlines
🎨 AI Character Creation and Emotion Control
This paragraph introduces a video tutorial on creating consistent AI-generated characters using Stable Diffusion 1.5 and SDXL. The workflow allows for automatic posing, integration into backgrounds, and control over character emotions through simple prompts. A key feature is generating multiple character views in a single image using a downloadable pose sheet that depicts character bones from various angles. The video demonstrates setting up the workflow in ComfyUI, choosing models, and adjusting settings for optimal character generation. It also covers troubleshooting generation issues and using the face detailer to improve character consistency. The process concludes with saving different character poses and expressions, creating a comprehensive character sheet.
🖼️ Integrating AI Characters into Backgrounds
The second paragraph delves into integrating AI-generated characters into various backgrounds. It discusses using the character reference tool in Midjourney to place characters in different settings and the challenges of achieving specific poses. The speaker then introduces a controllable character workflow with three steps: posing the character, generating a fitting background, and integrating the character into the background. The workflow involves using an IP Adapter to maintain character likeness, creating poses with OpenPose, and adjusting settings to fix seams and lighting inconsistencies. The paragraph also covers methods to enhance the character-background integration, such as denoising and blurring, and concludes with adding elements like cheese to the character's pose.
🧀 Customizing AI Influencers with the ComfyUI Workflow
The final paragraph focuses on customizing AI influencers using the ComfyUI workflow. It suggests reducing ControlNet weights to allow characters more freedom in their poses and using auto-queue to generate numerous images of characters in various poses and locations. The speaker encourages viewers to explore and personalize the workflow, offering exclusive files and resources for Patreon supporters. The video concludes with a humorous offer for the cheese industry to book the AI character 'Huns Schle' for presentations, highlighting the creative potential of the workflow for diverse applications.
Keywords
💡Stable Diffusion 1.5
💡ControlNet
💡Pose Sheet
💡AI Influencer
💡ComfyUI
💡WildCard XL Turbo Model
💡Face Detailer
💡Expressions
💡IP Adapter
💡OpenPose AI
💡Latent Noise Mask
💡Auto-Queue
Highlights
Tutorial on creating consistent, editable AI characters and backgrounds using ComfyUI.
Workflow compatible with Stable Diffusion 1.5 and SDXL, allowing any style.
Generate multiple views of a character in a single image using a pose sheet.
Free download of the pose sheet on Patreon for character generation.
Use ControlNet to generate characters from different angles.
Custom workflow in ComfyUI for automatic character generation.
Step-by-step guide provided for installing and setting up workflows.
Importance of matching KSampler settings to the model.
Creating an AI influencer with a unique cheese niche.
Adding descriptive prompts to improve character generation consistency.
Using face detailer to enhance character facial features.
Saving different character poses as separate images.
Generating expressions with a Pixar character prompt for a unique look.
Adjusting the denoise strength to control expression intensity.
Combining expressions and upscaling the final character sheet.
Training a custom model using generated character images.
Using Midjourney's character reference tool for placing characters in different locations.
Integrating characters into backgrounds with a custom workflow.
Fixing seams and focal planes to integrate character with background.
Creating poses for characters using openpose.ai.
Adjusting character and background to match with denoising techniques.
Adding objects like cheese to the character's pose in the prompt.
Generating hundreds of images with different poses and locations automatically.
Access to exclusive example files and additional resources on Patreon.