Apple made an AI image tool that lets you make edits by describing them
Apple researchers released a new model that lets users describe in plain language what they want to change in a photo without ever touching photo editing software.
The MGIE model, which Apple worked on with the University of California, Santa Barbara, can crop, resize, flip, and add filters to images all through text prompts.
MGIE, which stands for MLLM-Guided Image Editing, handles both simple and more complex editing tasks, such as modifying specific objects in a photo to change their shape or make them brighter. The model combines two uses of multimodal large language models (MLLMs). First, it learns how to interpret user prompts. Then it "imagines" what the edit would look like (asking for a bluer sky in a photo becomes an instruction to bump up the brightness of the sky portion of an image, for example).
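The two-stage flow can be sketched conceptually in a few lines of Python. This is an illustrative toy, not Apple's actual code or API: the function names and the tiny rule table standing in for the MLLM's learned reasoning are assumptions made for the example.

```python
# Conceptual sketch of MGIE's two-stage flow (illustrative only, not the
# real Apple/UCSB implementation). Stage 1 turns a terse user prompt into
# an explicit edit instruction; Stage 2 applies that instruction.

def derive_expressive_instruction(prompt: str) -> str:
    """Stage 1 (toy): expand a vague prompt into an explicit instruction,
    standing in for what the MLLM does with learned visual reasoning."""
    rules = {  # hypothetical lookup table; a real MLLM generates this text
        "make it more healthy": "add vegetable toppings to the pizza",
        "add more contrast to simulate more light": "raise contrast and brightness",
    }
    return rules.get(prompt.lower(), prompt)  # fall back to the raw prompt


def edit_image(image: dict, prompt: str) -> dict:
    """Stage 2 (toy): a real system would condition an image editor on the
    derived instruction; here we just record it on a dict for clarity."""
    instruction = derive_expressive_instruction(prompt)
    edited = dict(image)
    edited.setdefault("applied_edits", []).append(instruction)
    return edited


photo = {"subject": "pepperoni pizza"}
result = edit_image(photo, "make it more healthy")
print(result["applied_edits"])  # ['add vegetable toppings to the pizza']
```

The point of the sketch is the separation of concerns: interpreting the prompt is handled before any pixels are touched, which is what lets a terse request like "make it more healthy" become a concrete, executable edit.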
When editing a photo with MGIE, users just have to type out what they want to change about the picture. The paper used the example of editing an image of a pepperoni pizza. Typing the prompt “make it more healthy” adds vegetable toppings. A photo of tigers in the Sahara looks dark, but after telling the model to “add more contrast to simulate more light,” the picture appears brighter.
Posted on: 2/14/2024 7:00:18 AM