Generative AI has exploded in popularity. New text‑to‑image, video, and audio models appear every month, each claiming to be the best. For many designers, photographers, and filmmakers, this flood of choice creates uncertainty. Should you commit to a single AI provider? Can one model handle all your projects? What if you want to try several without juggling multiple accounts? Adobe Firefly answers these questions with a surprising strategy: it brings leading third‑party models into its own platform. With a Firefly subscription, you can experiment across multiple providers without leaving your creative workflow. This guide explains what third‑party models in Adobe Firefly are, why they matter, and how to make the most of them.

What are third‑party models in Adobe Firefly?

When Adobe launched Firefly, it focused on its own generative AI. Those models were trained on licensed and public domain content, making them safe for commercial use. As the market matured, Adobe recognised that no single AI model satisfies every creative need. To give customers more flexibility, it now integrates partner models directly into Firefly. These third‑party models include image and video generators from OpenAI, Google, Luma AI, Black Forest Labs, Runway, Pika, Ideogram, and others. They live alongside Adobe’s proprietary tools and can be selected via a simple dropdown in the Firefly interface. Whether you’re working with the Text to Image tool, creating videos, or building a moodboard in Firefly Boards, you can switch between models with one click.

The partner list has grown quickly. Early in 2025, Adobe announced support for OpenAI's GPT Image model, Google's Imagen 3 and Veo 2, and Black Forest Labs' Flux 1.1 Pro. Announcements of further models from Pika, Ideogram, fal.ai, Luma AI, and Runway followed. By late summer, the catalogue included Google's Gemini 2.5 Flash Image (nicknamed "Nano Banana"), Black Forest Labs' Flux 1.1 and FLUX.1 Kontext, Luma AI's Ray 3, OpenAI's GPT Image, and several video models such as Runway's Aleph and Moonvalley's Marey. Firefly Boards now supports both image and video models from these partners, giving you a broad palette of tools within a single workspace.

Why did Adobe embrace third‑party models?

From the start, Adobe marketed Firefly as "commercially safe" because its models are trained on vetted datasets. While this reassured brands, many creators still wanted to explore models trained on broader internet datasets. Adobe observed that some users were experimenting with other providers during the ideation phase, only to return to Firefly for final production. Rather than lose those customers to competing platforms, the company built a mechanism to host external models.

Adobe also frames the integration as a way to empower creators. Executives describe Firefly as a one-stop shop for AI-assisted creativity, and they argue that offering multiple models gives users ultimate choice. Industry observers note that integrating competitor models into Photoshop, Illustrator, and Firefly marks a first for Creative Cloud. Critics counter that inviting rival models into its flagship platform could signal uncertainty about Adobe's own technology, but there is no denying the appeal of having so many options under one roof.

How to use third‑party models within Adobe Firefly

Selecting a model: a seamless workflow

The simplest way to access partner models is through the Model dropdown. In Firefly's web app and the integrated versions within Adobe Express and Photoshop (beta), the General Settings panel includes a list of available models. You can choose between Adobe's own engines and partner options. Switching models regenerates the output with the newly selected engine. Because the interface remains the same, you can compare results from different models without changing your prompt or leaving the application. The same dropdown appears when using Firefly Boards or generating text effects, so the process is consistent across tools.
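For readers who think in code, the sketch below captures the same compare-one-prompt-across-models idea outside the UI. It is a minimal, hypothetical TypeScript example: the endpoint URL, request shape, and model identifiers are illustrative assumptions, not Adobe's documented Firefly API.

```typescript
// Hypothetical sketch: send the same prompt to several models and compare.
// The endpoint, auth scheme, and model IDs are illustrative assumptions,
// not Adobe's documented Firefly API.

interface GenerationResult {
  model: string;
  imageUrl: string;
}

const MODELS = ["firefly-image", "gpt-image", "imagen-3", "flux-1.1"]; // assumed IDs

async function generateWithModel(
  prompt: string,
  model: string,
  apiKey: string
): Promise<GenerationResult> {
  const response = await fetch("https://example.com/v1/images/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // hypothetical auth scheme
    },
    body: JSON.stringify({ prompt, model }),
  });
  if (!response.ok) {
    throw new Error(`Generation failed for ${model}: ${response.status}`);
  }
  const data = await response.json();
  return { model, imageUrl: data.imageUrl }; // assumed response shape
}

// One prompt, several engines: the essence of the Model dropdown workflow.
async function compareModels(prompt: string, apiKey: string): Promise<void> {
  const results = await Promise.all(
    MODELS.map((model) => generateWithModel(prompt, model, apiKey))
  );
  for (const { model, imageUrl } of results) {
    console.log(`${model}: ${imageUrl}`);
  }
}
```

In practice, the dropdown does this orchestration for you; the point of the sketch is simply that the prompt stays constant while only the engine changes.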

Firefly Boards: collaborative ideation with multiple engines

Firefly Boards is more than a moodboard; it's an AI-first environment for collecting inspiration, generating assets, and refining concepts. Boards includes partner models for both images and video. Features like Presets let you generate images in different styles with a single click. Generative Text Edit (in beta) allows you to swap or update text directly within a visual without leaving Boards. Describe Image analyses an existing image and produces a ready-to-edit prompt, saving you from guessing the right keywords.

Boards integrates a growing number of models: Luma AI's Ray 3 and Ray 2 for cinematic quality, Google's Gemini 2.5 Flash Image and Veo 3, Pika 2.2 for quick video loops, FLUX.1 Kontext, and Runway's Gen-4 and Aleph alongside Moonvalley's Marey for video. You can combine these models in multi-modal workflows, using one to draft a storyboard and another to refine details. All content created in Boards synchronises with your Creative Cloud account, so your experiments are ready for further editing.
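To make the multi-modal idea concrete, here is a small, hypothetical TypeScript sketch of a two-stage pipeline: one model drafts a storyboard frame, a second animates it. Again, the endpoints, payload fields, and model identifiers ("flux-1.1", "ray-3") are illustrative assumptions rather than a documented API.

```typescript
// Hypothetical two-stage pipeline: draft a still with an image model,
// then hand the result to a video model. All endpoints, payload fields,
// and model IDs are illustrative assumptions, not a documented API.

async function callModel(
  endpoint: string,
  payload: object,
  apiKey: string
): Promise<any> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // hypothetical auth scheme
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

async function storyboardToClip(prompt: string, apiKey: string): Promise<string> {
  // Stage 1: draft a storyboard frame with an image model.
  const frame = await callModel(
    "https://example.com/v1/images/generate", // assumed endpoint
    { prompt, model: "flux-1.1" },            // assumed model ID
    apiKey
  );

  // Stage 2: animate the drafted frame with a video model.
  const clip = await callModel(
    "https://example.com/v1/videos/generate", // assumed endpoint
    { imageUrl: frame.imageUrl, model: "ray-3", durationSeconds: 5 },
    apiKey
  );

  return clip.videoUrl; // assumed response shape
}
```

The design point is the hand-off: each stage is a plain request, so any image model can feed any video model, which is exactly the flexibility Boards exposes through its model pickers.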

Photoshop and Creative Cloud: deeper integration

Adobe has begun embedding partner models directly into its flagship applications. Photoshop now offers Gemini 2.5 Flash Image and FLUX.1 Kontext Pro within its generative fill feature. These models expand what generative fill can do: you can add, remove, or transform elements in an image by typing a prompt. Gemini 2.5 is strong at stylised details and subtle edits, while FLUX.1 Kontext excels at maintaining perspective and environmental harmony. Photoshop automatically records the prompt and model used on a separate layer, making it easy to refine edits, apply masks, or revert changes later.

Key third‑party models available today

OpenAI’s GPT Image model

GPT Image brings OpenAI’s advanced text understanding and photorealistic rendering to Firefly. It excels at semantic accuracy, ensuring that complex prompts are interpreted correctly and that multiple elements appear coherent. If you ask for “a cyberpunk street at night with neon reflections on wet pavement”, GPT Image will align the lighting, reflections, and architecture in a believable way. The model is available in the Text to Image tool and Firefly Boards.

Google’s Imagen and Veo models

Google contributes both image and video models. Imagen 3 is an image generator known for high‑quality rendering and flexible composition. Veo 2 and Veo 3 are video models that turn text prompts or images into short clips. They are useful for motion graphics, storyboards, and mood videos. These models can be selected directly from the Model dropdown in Firefly or within Firefly Boards.

Black Forest Labs’ FLUX models

Black Forest Labs offers Flux 1.1 and FLUX.1 Kontext. Flux 1.1 is a high‑resolution image model available in Text to Image and Firefly Boards. FLUX.1 Kontext focuses on contextual accuracy and perspective. It is integrated into Photoshop, where it helps with complex edits such as changing backgrounds or adjusting viewpoints while keeping the rest of the scene intact.

Luma AI’s Ray series and Moonvalley Marey

Luma AI specialises in neural rendering and 3D cinematography. Ray 3 and Ray 2 provide cinematic quality for image‑to‑video transformations and video generation. They handle dynamic lighting and depth well, making them suitable for storyboards or motion‑driven sequences. Moonvalley’s Marey model generates dynamic motion clips, turning still images into fluid animations. It works particularly well for fashion or product demonstrations.

Pika, Ideogram, Runway, and more

Pika's models focus on quick video generation and animated transitions, ideal for social media or marketing content. Ideogram specialises in typography and text-image integration, useful for posters and book covers. Runway offers multiple models: Aleph for creating storyboards and rough cuts, and Gen-4 for combined text and image generation. Models from fal.ai and other up-and-coming labs have been announced and are expected to join the catalogue soon. Adobe has stated that it will continue to add more third-party models as they become available.

Benefits and challenges of using third‑party models

The biggest advantage of Adobe’s multi‑model approach is flexibility. Instead of being locked into one provider, you can explore different styles and capabilities without managing multiple subscriptions. Photorealism, stylised illustration, cinematic video, and precise typography are all available within a single interface. This variety can spark new ideas and reduce the time spent switching between apps.

Integration also streamlines your workflow. Because partner models are embedded in Creative Cloud, you can move assets directly into Photoshop, Illustrator, or Adobe Express. Photoshop’s generative fill automatically adds non‑destructive layers that record which model was used, allowing you to refine or revert edits easily. Billing is unified through Firefly credits, so there is no need to juggle payment systems for different providers.

There are, however, some challenges. Not every partner model is trained on licensed content, so the outputs may not be commercially safe. Adobe labels its own models as safe for production but encourages users to treat partner models as experimental when working on client projects. Creators must assess the licensing and data‑training practices of each model before using the output commercially. With many options available, decision fatigue is another risk. To avoid becoming overwhelmed, define your goals and choose models that align with your desired outcome.

The future of third‑party models in Adobe Firefly

Adobe has signalled that it will continue to expand its partner ecosystem. Upcoming models from fal.ai and additional releases from OpenAI, Google, and other labs are expected. Firefly Boards has graduated from beta to a global release with subscription tiers ranging from Standard to Premium. The inclusion of advanced video models such as Runway Aleph and Moonvalley Marey suggests that Firefly will evolve into a comprehensive multi‑modal hub. Industry analysts describe Firefly’s transformation as a shift from a single product to a platform that aggregates models from across the AI landscape. This direction could allow Adobe to negotiate exclusive access to cutting‑edge technology and set standards for responsible AI. Yet it also raises questions about whether Adobe’s own models will remain central as the ecosystem grows.

As a creative professional, I find the platform strategy both liberating and challenging. On the one hand, the ability to test state‑of‑the‑art models without leaving the Adobe environment is inspiring. On the other hand, the sheer breadth of options can obscure the distinctive qualities of each model. In the future, the success of Firefly may hinge less on any individual model and more on how well the platform integrates diverse tools, manages licensing, and offers a cohesive user experience.


One tool, many models

For anyone unsure which AI model or provider is best, third‑party models in Adobe Firefly offer a compelling answer: you don’t have to choose just one. By inviting leading models from OpenAI, Google, Luma AI, Black Forest Labs, Runway, Pika, Ideogram, and more into its interface, Firefly becomes an all‑in‑one creative toolbox. You can generate photorealistic images with GPT Image, craft cinematic videos with Veo, refine perspective using FLUX, and experiment with dynamic motion through Marey—all without leaving the Adobe ecosystem. Firefly Boards extends this flexibility into collaborative moodboarding, while integrations with Photoshop and other Creative Cloud apps make it easy to move from ideation to final production.

The key to using these tools effectively is understanding what each model does well and applying it thoughtfully. Consider the licensing implications, manage your generative credits wisely, and resist the temptation to experiment endlessly. With a clear brief and an open mind, the multi‑model environment in Adobe Firefly can be a powerful ally in your creative practice. Whether you’re a designer, photographer, filmmaker, or marketer, this all‑in‑one platform lets you explore the best of generative AI without getting lost in the technology.

Learn more about AI for creative professionals here at WE AND THE COLOR.