Adobe MAX 🤝 AI: Key takeaways from the 2023 event

Generative AI took center stage at this year’s Adobe MAX event, featuring prominently in the opening keynote with more than a dozen mentions and a thrilling preview of over 100 new features coming to the Creative Cloud software suite. Here’s what creatives are talking about after this year’s event.

Firefly Image Model 2

Currently in its beta phase, Firefly Image Model 2 comes loaded with advanced new AI features. The model gives users more control over prompts and generates highly detailed images at resolutions four times higher than the original model’s. It also lets you upload artwork or reference images so the model can match their art style, producing more realistic and creative results. Creatives can also rejoice because Firefly Image Model 2, just like the original Firefly model, was trained on Adobe Stock images, openly licensed content, and public domain images. While controversy clouds other image generation tools due to the way their image datasets were sourced, Firefly is designed to generate content that’s safe to use in your commercial work.

Text-to-vector Generative AI

The introduction of innovative, text-based generative AI capabilities in Photoshop left everyone in awe (at least that’s how our team felt). The ability to perform photo manipulations, expand backgrounds, and add objects with just a few words was truly astonishing. Now, these remarkable features are making their way to Illustrator. ‘Text-to-vector’ has entered the chat.

With these generative AI tools, Illustrator empowers users to create vector-based illustrations, subjects, scenes, and icons by entering text prompts. While text-to-vector is still in its beta phase, we’re excited about its potential.

One standout feature, known as ‘Retype,’ enables you to identify fonts by scanning images. Essentially, it transforms static text within an image into editable text. This tool can even help you pinpoint the exact font used in the image or suggest similar fonts.

Text-based video editing in Premiere Pro

Premiere Pro’s latest update focuses on text-based video editing and … you guessed it … AI-powered enhancements. Premiere Pro can now generate a transcript of interview content, and editing the video is as simple as highlighting the portions of the transcript that you want to use. Notable features include a filler word detection tool for quick removal of filler words, such as “ums” and “uhs,” and an enhanced speech function that uses AI to improve audio quality by eliminating background noise and distortions.
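To get a feel for what filler word detection involves under the hood, here’s a minimal sketch of the general idea (our own illustration, not Adobe’s implementation): a simple pattern-matching pass that strips “ums” and “uhs” from a line of transcript text.

```python
import re

# Illustrative only -- Premiere Pro's actual filler word detection is far
# more sophisticated. This regex catches common fillers ("um", "uh", "er")
# along with any trailing punctuation and whitespace.
FILLER_PATTERN = re.compile(r"\b(um+|uh+|er+)\b[,.]?\s*", flags=re.IGNORECASE)

def remove_fillers(line: str) -> str:
    """Strip common filler words from one line of a transcript."""
    return FILLER_PATTERN.sub("", line).strip()

print(remove_fillers("So, um, we wanted to, uh, talk about the new release."))
# -> So, we wanted to, talk about the new release.
```

In a real transcript-driven editor, each word carries a timestamp, so deleting a filler from the text can map directly to a cut in the video timeline.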

Text-to-template AI tools in Adobe Express

Adobe Express, the company’s desktop and mobile design platform for making template-based content such as social posts, got a lot of love at this year’s event. Express is getting some notable new features that make it more useful for users with no formal design training who need an easy way to create content.

The first feature, Generative Fill, harnesses the same technology found in Adobe Photoshop, enabling users to effortlessly add, remove, or replace objects within images right from Adobe Express.

The second feature, Text-to-Template, is an addition that empowers users to design editable templates for various purposes, such as greeting cards, flyers, wedding invitations, and posters, with just a few simple prompts.

Adobe Express now offers a content scheduling tool for planning, creating, scheduling, and publishing entire social media campaigns, even providing direct publishing support for platforms like TikTok.

Lightroom’s Lens Blur Enhanced by AI

Lightroom users were not left in the dark: they’re getting a set of exciting AI-powered tools, including an enhanced editing experience in the Lightroom mobile app. One standout addition is the HDR Optimization feature, which takes photo enhancement to an entirely new level.

The star among these new Lightroom features is the AI-powered Lens Blur, enabling users to effortlessly add a stylish DSLR-style blur effect to their photos with just a single tap.

Inevitably, creatives have mixed feelings about the rapid rise of AI design tools. On the one hand, these tools can eliminate much of the drudge work designers find themselves doing. On the other, do they ultimately eliminate the designers themselves?

As one might expect, Adobe put a positive spin on the AI design revolution. Talking to Hypebeast, Scott Belsky, Adobe’s current Chief Strategy Officer and EVP, Design & Emerging Products, said, “These tools are only as good as the ideas you have and how to use them. What’s changed now is that there’s less friction to get something from your head into something that’s visual.”

What’s in store for future creatives?

As Adobe’s fast-evolving product suite becomes easier to master, Belsky suggests that creatives who previously specialized in one or two tools will be able to learn more, and move between them with greater ease. Tools like Adobe Express lower the barrier to entry for creating effective content assets, but he’d like us to believe that “the ceiling has gone up” too.

Increasingly, the tools exist to produce professional-looking design assets quickly, whether in the hands of a professional designer or a jill-of-all-trades social media manager. Belsky argues that this should make designers more impactful in their work. Yes, your agency designer may be using AI tools to produce things more quickly, but the time saved can go toward big-picture thinking, strategic storytelling, and better brand building.

Effective creativity, he suggests, is less about mastering tools than it is about concepting things that move the audience. “Something that captures your imagination and pulls at your heart, that’s effective creativity, [and it’s] going to continue to come from humans because it has to be governed by story and craft and meaning and emotion.”