【Midjourney】How to Use the New 'Consistent Character' Feature (Creating the Same Character)
TLDR
An introduction to Midjourney's new 'Consistent Character' feature, which generates the same character repeatedly while keeping its appearance consistent. The `--cref` option supplies the reference image for style and consistency, and `--cw` adjusts how strongly that reference is applied. The generated images show subtle differences, but overall consistency is maintained. The feature also copes with changes in pose and action, which makes creating manga and anime far easier. Detailed tutorials are planned for future videos, and the feature can fairly be said to lay the groundwork for manga creation.
Takeaways
- 😀 Midjourney has released a new feature called 'Consistent Character', which makes it possible to generate images featuring the same character.
- 🎨 The `--cref` option points at a reference image so that new generations keep its style and character, and the strength of the reference can be adjusted with `--cw` on a scale from 0 to 100 (see the example prompt after this list).
- 🔄 The default setting for `--cw` is 100, which ensures that the face, hair, and clothes are consistent with the reference image.
- 👤 If `--cw` is set to 0, only the face will be referenced, leading to significant changes in hair and clothing styles.
- 📸 The feature is not suitable for real images, but the speaker suggests trying it out to see how it performs.
- 🖼️ The speaker demonstrates the feature using original images, including 2D whole bodies, upper bodies, semi-real, and real images, and notes the differences in style and consistency.
- 🔍 The details such as buttons on clothes may not be perfect, and the speaker acknowledges that future improvements are expected in these areas.
- 🎭 The feature can maintain character consistency even when changing positions, which is beneficial for creating manga and anime.
- 🏃‍♂️ When adding actions like walking or running, the character's consistency is well maintained, suggesting the feature's potential for dynamic scenes.
- 🤽‍♀️ The feature can handle changes in framing (how much of the character is visible) and in actions, but there are limitations, such as difficulty generating close-up images from a full-body reference.
- 🛠️ Image processing tools can be used to edit details like the face in zoomed-in images or to adjust colors and patterns of clothes.
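As a rough illustration of how these options fit into a prompt (the prompt text and image URL below are placeholders, not examples taken from the video):

```
/imagine prompt: a girl in a school uniform standing in a park, anime style --niji 6 --cref https://example.com/my-character.png --cw 100
```

The URL given to `--cref` should point to an image of the character you want to reuse; per the official announcement, the feature works best with characters generated by Midjourney itself rather than photos of real people.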
Q & A
What is the new feature 'Consistent Character' in Midjourney?
-The 'Consistent Character' feature in Midjourney is a function that allows users to generate images of the same character with style and consistency. It was previously not a formal function and had limitations depending on the painting style, but now it has been released as an official feature.
How does the `--cref` option work in the new feature?
-The `--cref` option takes a reference image and is used to generate new images that keep that character's style and appearance consistent.
What is the purpose of the '--cw' setting in the feature?
-The '--cw' setting allows users to set the strength of the reference image, with a range from 0 to 100. It determines the level of consistency in the character's face, hair, and clothes based on the reference image.
Why are the old methods of creating consistent characters considered obsolete?
-The old methods are considered obsolete because they were not formal functions and had various limitations. The new feature provides a more reliable and consistent way to generate the same character across different images.
What does the default setting of 100 for '--cw' imply?
-A default setting of 100 for '--cw' implies that the character's face, hair, and clothes will strictly adhere to the reference image, ensuring maximum consistency.
What happens when the '--cw' setting is set to 0?
-When the '--cw' setting is set to 0, only the face of the character will be referenced from the image, allowing for more variation in hair and clothing styles.
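For example, the same reference could be run with both weight settings to compare the results (hypothetical prompts, not taken from the video):

```
/imagine prompt: the same character sitting in a cafe --cref https://example.com/my-character.png --cw 100
/imagine prompt: the same character sitting in a cafe, wearing a summer dress --cref https://example.com/my-character.png --cw 0
```

The first prompt should reproduce the face, hair, and clothes of the reference, while the second keeps only the face and lets the prompt dictate the new outfit.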
Is the new feature suitable for real images?
-According to the official announcement, the new feature is not specifically designed for real images. However, the speaker suggests trying it out to see how well it performs with real images.
Can the new feature be used to create images of characters in different positions?
-Yes, the feature can maintain character consistency even when changing positions, such as side and back views, which is beneficial for creating manga and anime.
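In practice this just means describing the new angle or pose in the prompt while keeping the same `--cref` reference, along the lines of (a hypothetical prompt, not from the video):

```
/imagine prompt: the same character, side view, walking down a street --cref https://example.com/my-character.png --cw 100
```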
What are some limitations the speaker noticed regarding details like buttons on clothes?
-The speaker noticed that details such as buttons on clothes may not be perfect in the generated images, indicating that there is room for improvement in the feature's ability to handle small details.
How can users adjust how much of the character is visible in the generated images?
-Users can use the ZoomOut function to create full-body images from upper-body images, or use image processing tools to touch up details such as the face in zoomed-in images.
What is the speaker's final assessment of the new feature's ability to maintain character consistency?
-The speaker concludes that the new feature performs exceptionally well in maintaining character consistency, is easy to use compared to traditional methods, and offers high accuracy, making it accessible even for amateurs to achieve a certain level of quality in character creation.
Outlines
😲 Introduction to Midjourney's Character Consistency Feature
The speaker introduces a new feature on Midjourney that allows for the consistent generation of the same character, which was previously attempted with limited success. The old methods are now considered obsolete as the new feature offers a more formal and reliable approach. The function uses the --cref option to create images with style and consistency, and the --cw option to adjust the strength of the reference image. The speaker demonstrates the feature with various images, noting that while the details may not be perfect, the overall consistency is impressive. They also discuss the limitations of using real images and suggest that further experimentation is needed to fully understand the feature's capabilities.
🔍 Exploring Character Consistency Across Different Scenarios
The speaker proceeds to test the character consistency feature in various scenarios, including different views (side and back) and actions (walking, running, sitting, sleeping, drinking, swimming, and sword fighting). They find that the feature maintains character consistency well, even when changing the visible range and actions. The speaker notes that while some details like clothing buttons may not be perfect, the overall results are within an acceptable range. They also mention that the feature's ability to handle semi-realistic and real images is promising, and they express excitement about the potential for creating manga and anime with this tool.
🎨 Conclusion on Midjourney's Character Consistency and Future Tutorials
In conclusion, the speaker reflects on the ease and high accuracy of the character consistency feature compared to traditional methods. They acknowledge slight variations in clothing details and patterns but consider these changes acceptable. The speaker expresses that this feature enables even amateurs to achieve a certain level of quality in their creations, laying a solid foundation for manga creation. They announce plans to introduce tutorials for creating manga using the new features and encourage viewers to subscribe to the channel and rate the content before signing off.
Keywords
💡Character Consistency
💡--cref option
💡--cw
💡Style
💡Niji
💡Inpaint
💡Semi-realistic and Real
💡ZoomOut function
💡Manga and Anime
💡Weights
Highlights
Midjourney introduces a new feature called 'Consistent Character' that allows for the creation of the same character in different images.
This is now an official function, replacing earlier informal workarounds that had limitations depending on the painting style.
The --cref option is used to generate images with style and consistency, referencing a specified image.
The strength of the reference image can be adjusted with the --cw option, ranging from 0 to 100.
At 100, the face, hair, and clothes will closely match the reference image; at 0, only the face is referenced.
The feature is not recommended for real images, but the speaker suggests trying it out and judging the results for yourself.
The speaker demonstrates the feature using original images, including 2D whole bodies and upper bodies, as well as semi-real and real images.
The feature maintains character consistency even without the 'niji' option, although the style of the images changes.
Details such as buttons on clothes are not perfect, and the speaker acknowledges the need for future improvements.
The feature's ability to generate consistent characters in different poses, such as side and back views, is tested and found successful.
The speaker discusses the potential of the feature for creating manga and anime, noting its ease of use and high accuracy.
Adjusting the weight of the reference image can change the character's appearance, allowing for partial changes if desired.
The feature's ability to handle different actions, such as walking, running, sitting, and sleeping, is demonstrated.
The speaker explores the feature's limitations with visible range changes, such as zooming in or out, and suggests workarounds.
The feature's performance with semi-realistic and real images is tested, showing subtle differences but overall consistency.
The speaker concludes that the feature is a significant advancement for AI character creation, making it accessible to amateurs and professionals alike.
The video ends with a teaser for future tutorials on creating manga using the new feature.