Google’s StyleDrop Draws the Mona Lisa in 3 Minutes

Google Research has developed an innovative tool called StyleDrop, which allows users to create personalized images in a chosen artistic style. StyleDrop is built on the text-to-image generator Muse, a generative vision transformer that Google presented earlier this year. Muse was trained with 3 billion parameters, providing high-quality image generation.

With StyleDrop, users can provide a reference image and specify the desired artistic style, such as “melting golden 3D rendering”, “wooden sculpture”, or “cartoon line drawing”. StyleDrop then generates impressive images of objects in the requested style, along with typography that matches the stylistic character of the images.

StyleDrop can efficiently learn a new style by fine-tuning a very small number of trainable parameters (less than 1% of the model’s total) and improves quality through iterative training with human or automated feedback. It delivers impressive results even when the user provides only a single image of the desired style.
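To illustrate the idea of tuning under 1% of a model’s parameters, here is a minimal sketch, not Google’s actual code: the backbone weights are frozen and only a small low-rank adapter is trained. All layer sizes below are assumptions chosen purely for illustration.

```python
# Hypothetical sketch of adapter-style fine-tuning arithmetic:
# freeze the backbone, train only a tiny adapter, and check that the
# trainable fraction stays well under 1% of all parameters.

def count_params(shapes):
    """Total number of parameters across a list of (rows, cols) weight shapes."""
    return sum(rows * cols for rows, cols in shapes)

# Toy shapes standing in for a large transformer (assumed, for illustration).
base_shapes = [(4096, 4096)] * 24          # frozen backbone weight matrices
adapter_shapes = [(4096, 16), (16, 4096)]  # small low-rank adapter, trainable

base = count_params(base_shapes)
adapter = count_params(adapter_shapes)
fraction = adapter / (base + adapter)

print(f"trainable fraction: {fraction:.4%}")  # well under 1%
```

With these toy sizes the adapter accounts for only a few hundredths of a percent of the parameters, which is why such tuning is cheap enough to learn a style from a single reference image.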

The tool, which has not yet been released to the public, is expected to be a valuable assistant for art directors and graphic designers. They can create photorealistic images of given products, or images including text, that share the same colors, texture, and style.

According to the study, StyleDrop’s style tuning on Muse convincingly outperforms other methods, including DreamBooth on Imagen and on Stable Diffusion.

StyleDrop is available at https://styledrop.github.io/.
