Midjourney Unveils a New Feature: Consistent Characters in AI Images

John March 12, 2024 7:01 AM

Midjourney, the widely recognized AI image-generation service, has launched a new feature that allows the recreation of consistent characters across multiple images, a capability long sought after by its user base. This innovation marks a significant shift in the field of AI imagery, addressing the persistent challenge of erratic character generation.

Consistent character generation: A major breakthrough

The new feature allows the same character to be reproduced across a series of images, a highly anticipated capability whose absence has been a major stumbling block in AI image generation. Its introduction marks a considerable step toward overcoming the inherent inconsistencies of AI image generators.

Inconsistency: The main hurdle in AI image generators

The primary issue hampering AI image generators is the inconsistency of their output. Even when a prompt is repeated verbatim or key words are reused, these applications generate something different each time. This is a significant problem when creating narratives that require a character to move through different scenes, settings, and situations while maintaining their distinctive traits.

The introduction of the '--cref' tag

Midjourney is tackling the issue of inconsistency by introducing a new tag known as “--cref”, short for “character reference”. Users can add this tag at the end of their text prompts, and the AI will attempt to match the character’s facial features, body type, and clothing based on a URL provided by the user. As the feature improves, it could elevate Midjourney from a trendy toy to a more professional tool for generative AI imagery.

Applying the '--cref' tag in practice

The newly introduced tag is designed to work best with images previously generated by Midjourney. The process involves retrieving the URL of an already generated character, typing in a new prompt, and appending “--cref [URL]” to the end; Midjourney will then attempt to reproduce that character in the new setting. While the results do not yet match the original character exactly, it is a step in the right direction.
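As a sketch, a prompt using the tag might look like the following. The scene description and the image URL here are placeholders for illustration, not real examples from Midjourney:

```
/imagine prompt: a knight walking through a rain-soaked market at night --cref https://example.com/my-character.png
```

In practice, the URL would point to an image of the character previously generated in Midjourney, since the feature is tuned to its own output.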

In addition to the “--cref” tag, users can control how closely the new image matches the original character by using the “--cw” (character weight) tag followed by a number between 0 and 100. Lower “--cw” values allow more variance in the resulting image, while higher values make the new image follow the original character more closely.
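Combining the two tags, a lower-weight prompt might look like this (again with a placeholder scene and URL):

```
/imagine prompt: the same knight resting by a campfire --cref https://example.com/my-character.png --cw 30
```

With a value like 30, the character's core likeness should be preserved while details such as pose or outfit are freer to vary; a value near 100 would hew more closely to the reference image.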

Although still in its early stages, the feature has already been launched and is being tested by creators and users alike. It has shown promising results so far, which is encouraging for the future of AI-generated imagery, and the early release gives users an excellent opportunity to familiarize themselves with the new functionality and explore the possibilities it offers.
