How IBM is using Adobe Firefly to reshape design

By Matthew Finnegan

With the rollout of Adobe Firefly last year, IBM made generative AI (genAI) image creation available to thousands of designers.

It didn’t take long for the tool to make an impression, said Billy Seabrook, global chief design officer for IBM’s consulting arm.

Firefly was used, for example, to create 200 visual assets for IBM’s “Let’s Create” marketing campaign last year, which involved rendering a question mark in a variety of textures and colors. Firefly generated many variations — from a question mark made of flower petals to one made of chain links — that designers could then edit and refine into a final product.

With the tool, what could have been a months-long process was dramatically sped up. “Very little resource was needed, it was done really quickly, and, funnily enough, it was one of — if not the — most high-performing marketing campaign of recent time,” said Seabrook. “It did exceptionally well.”

The use of Firefly for the external marketing campaign was a one-off, said Seabrook, but the aim is to do the same on similar projects as the genAI software matures. Beyond the assets created, the company hopes to use Firefly to automate design tasks and, ultimately, increase productivity.

“The biggest challenge I think everybody faces in the industry right now is delivering really great experiences and really high-quality content with more efficiency, and [at a] faster pace,” said Seabrook.

He cited the adage — “faster, better, cheaper: pick two” — and said: “We really are under the pressure, like everybody in the world, to deliver all three: great content, faster and cheaper."

Having previously held senior positions at Citi and eBay, Seabrook now leads design for IBM’s consulting arm, where he's responsible for a team of 1,600 or so designers who focus on content design, visual UX, and more. That’s just one part of IBM’s overall design workforce, which counts more than 3,500 designers across the organization.

IBM is a long-time Adobe customer and partner, and provides consulting services to clients on using tools such as Firefly, which made it a natural early adopter of the technology in-house.

IBM’s Firefly deployment began with a pilot when Adobe made the product available in beta. Since then, after the general launch last September, the tool has been rolled out to roughly 20,000 IBM staffers with Adobe Creative Cloud licenses. That gives them access to Firefly in Adobe apps such as Photoshop and Express, though details on employee uptake were unavailable.

IBM designers get to grips with Firefly

Aside from creating visual assets for the customer-facing “Let’s Create” campaign, Firefly has been used internally by the IBM marketing team to generate concept images and internal presentation decks, for example.

For IBM’s consulting division, designers use Firefly for two main purposes, said Seabrook. One is to support the design-thinking approach that involves generation of ideas — often called ideation — prototyping, and iteration based on customer interactions.

“Typically, that is quite a manual process,” Seabrook explained, because it involves in-person, workshop-like sessions over the course of multiple days or even weeks, and plenty of back and forth with sketches.

Firefly is often used to streamline visualizations, he said. When sketching a rough concept, staff can prompt the tool to create anything from purposely fantastical images (“they're conceptual in nature, but it telegraphs the idea,” said Seabrook) all the way through to “more realistic type of frames that could be used in a storyboard,” he said.

That's saved significant time for staffers, said Seabrook, cutting a two-week process to a couple of days in some cases.

After the ideation phase, Firefly has been used in Adobe Photoshop and Illustrator for retouching, resizing, and other “last mile” production work. Firefly can quickly fill in gaps in an image: adding legs to a photo that was cropped at the waist, for instance.

That would typically require a long process of compositing multiple stock or original images and retouching to make it appear seamless. “You can use generative AI now to fill in that void dynamically, instantly,” he said. “And the quality of that is improving on a day-to-day basis. So that's a huge time saver for us.”

Localization is another advantage: backgrounds can be swapped out to represent a different country, “especially if the background is not going to be any risk of potential offense or branded material that would be a legal challenge,” he said. “If it's simply changing the time of day, or expanding an image to include more foliage, something that's low risk, it's a great use case for the technology today.”

GenAI to reshape visual designer roles

Generative AI's ability to complete certain tasks opens the door to more jobs being automated. Along those lines, IBM attracted attention last year with plans to pause hiring for thousands of roles that could be replaced by AI.

When it comes to the impact of genAI tools on the design profession more broadly, Seabrook sees a potential shift to smaller teams while simultaneously placing greater importance on designer expertise. He pointed to a survey of around 1,500 business executives, creative managers, and practitioners, conducted by IBM’s internal research team, that examined the impact of genAI on design professionals. The study points to a more strategic role for designers, said Seabrook.

“They can be curators, prompt engineers, they can…be the decision maker and point out things that a professional would know best, in terms of originality or the different levels of quality of the output,” he said. “So there's actually an expectation that there's more demand for those resources. They're in high demand, yet [there are] fewer seats — that is the interesting contradiction that we're seeing.”

The result could be that designers become more generalist, with just one person using genAI able to do the job of two or three workers. In that case, teams could get smaller, with the “positive spin” being that designers will be freed to take on new workloads.

“Many more projects can now be accomplished, so those people can move on to different engagements that weren't previously possible," Seabrook said. "So [this leads to] an increase in volume of work, but the individual projects will be managed by smaller teams.

“...That's the interesting dynamic that we're seeing now.”

GenAI models for image creation continue to advance

While IBM has seen advantages from using Firefly internally, Seabrook also highlighted some limitations at this relatively early stage in its development.

For instance, images created by the Firefly models tend to have a particular style that suits some uses better than others. There can be an illustrative sheen to outputs, with a level of saturation that is indicative of AI-generated visuals. “It's a characteristic of the medium,” said Seabrook.

That might be a good fit for some scenarios — sci-fi style imagery for example — and genAI has opened a new genre of creative concept, he said. It can generate fantastical images that would otherwise take a long time to produce manually.

“As the quality improves and it gets a little bit more realistic and indecipherable from a regular photograph, it opens up a whole different type of use case in terms of how you would use that imagery,” said Seabrook.

He also anticipates future capabilities where reference material can be used to filter content generated by Firefly to adhere to brand style guides. “The whole industry is looking forward to that type of customization, because that's when the output will be really differentiated for your brand,” he said.

As with other areas of genAI development, the capabilities of the models underpinning Firefly are advancing quickly. The Firefly 2.0 model was a “big step improvement” over 1.0.

As the technology matures, and outputs become more reliable and predictable, Seabrook envisions a wider range of uses, such as “real-time personalization at scale” in marketing campaigns for clients.

“...The technology is going to continue to improve," he said. "We're going to keep our eye on making sure that it's trustworthy content, it's clearly credentialized in the right way so people know that it's generated by AI, and all those other important aspects of ethics and bias are baked into everything that we do.

"Assuming all those checkout, then then we will start to deploy it in many more use cases for clients,” he said.

Seabrook sees a lot of potential for Firefly to be used more widely within IBM, as well. “It's a good bet, and we're really looking forward to the next generation that's coming out soon.”

© Computerworld