All versions since v6.11.0

v6.11.0

InvokeAI v6.11.0

This is a feature release of InvokeAI which provides support for the new FLUX.2 Klein image generation and editing models, as well as a few small improvements and bug fixes. Before we get to the details, consider taking our 2026 User Engagement Survey. We want to know who you are, how you use InvokeAI, and what new features we can add to make the software even better.

Support for FLUX.2 Klein models

The FLUX.2 Klein family of models (F2K) comprises fast, high-quality image generation and editing models. Invoke provides support for multiple versions, including both the fast-but-less-precise 4 billion-parameter (4B) and the slower-but-more-accurate 9 billion-parameter (9B) models, as well as quantized versions of these models suited to systems with limited VRAM. These models are small and fast; the fastest can render images in seconds with just four steps.

In addition to the usual features (txt2img, img2img, inpainting, outpainting), F2K offers a unique image editing feature which allows you to make targeted modifications to an image or set of images using prompts like “Change the goblet in the king’s right hand from silver to gold,” or “Transfer the style from image 1 to image 2”.

Suggested hardware requirements are:

FLUX.2 Klein 4B - 1024×1024

  • GPU: Nvidia 30xx series or later, 12GB+ VRAM (e.g. RTX 3090, RTX 4070). FP8 version works with 8GB+ VRAM.
  • Memory: At least 16GB RAM.
  • Disk: 10GB for base installation plus 20GB for models (Diffusers format with encoder).

FLUX.2 Klein 9B - 1024×1024

  • GPU: Nvidia 40xx series, 24GB+ VRAM (e.g. RTX 4090). FP8 version works with 12GB+ VRAM.
  • Memory: At least 32GB RAM.
  • Disk: 10GB for base installation plus 40GB for models (Diffusers format with encoder).

Getting Started with F2K

After updating InvokeAI, you will find a new FLUX.2 Klein starter pack in the Starter Models section of the Model Manager. This will download three files: the Q4-quantized version of F2K 4B (suitable for low-end hardware), the FLUX.2 VAE, and a quantized version of the FLUX.2 Qwen3 text encoder.

After installing the bundle, select the “FLUX.2 Klein 4B (GGUF Q4)” model in the Generation section of Invoke’s left panel. Also go to the Advanced section at the bottom of the panel and select the F2K VAE and text encoder models that were installed with the starter bundle. (If you don’t select these, the first generation will display a warning message telling you to do so.) Recommended generation settings are:

  • Steps: 4-6
  • CFG: 1-2

Modestly increasing the number of steps may improve accuracy. If you work with the Base versions of F2K (available from HuggingFace), increase the steps to >20 and the CFG to 3.5-5.0.

Text2img, img2img, inpainting and outpainting will all work as usual. InvokeAI does not currently support F2K LoRAs or ControlNets (there have not been many published so far). In addition, only the Euler sampler is currently available. Support for LoRAs and additional schedulers will be added in a future release.

Prompting with FLUX.2

Like Z-Image Turbo (ZiT), F2K’s text encoder works best when you provide it with long prose prompts that follow the framework Subject + Setting + Details + Lighting + Atmosphere. For example: “An elderly king is standing on a low dais in front of a crowded and chaotic banquet hall bursting with courtiers and noblemen. He is shown in profile, facing his noblemen, holding high a jeweled chalice of wine to toast the unification of his fiefdoms. This is a cinematic shot that conveys historical grandeur and a medieval vibe.”

F2K does not perform any form of prompt enhancement, so what you write is what the model sees. See FLUX.2 Prompting Guide for more guidance.
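Because the model sees exactly what you write, it can help to assemble prompts mechanically from the framework's five slots. The helper below is an illustrative sketch of our own (the function and its field names are not part of InvokeAI):

```python
def build_prompt(subject, setting, details="", lighting="", atmosphere=""):
    """Compose a long prose prompt following the
    Subject + Setting + Details + Lighting + Atmosphere framework."""
    parts = [subject, setting, details, lighting, atmosphere]
    # Join only the non-empty parts, normalizing each into a sentence.
    return " ".join(p.strip().rstrip(".") + "." for p in parts if p.strip())

prompt = build_prompt(
    subject="An elderly king stands on a low dais",
    setting="in front of a crowded banquet hall bursting with courtiers",
    details="He holds high a jeweled chalice of wine, shown in profile",
    lighting="Warm torchlight rakes across the stone walls",
    atmosphere="A cinematic shot conveying historical grandeur and a medieval vibe",
)
```

Filling each slot, even briefly, tends to produce the long descriptive prose this family of text encoders favors.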

Image Editing

F2K provides an image editing mode that works like a souped-up version of Image Prompt (IP) Adapters. Drag-and-drop or upload an image to the Reference Image section of the Prompt panel. Then instruct the model on modifications you wish to make using active verbs. You may issue multiple instructions in the same prompt.

  • Change the king’s chalice from silver to gold. Give him a crown, and grow him a salt-and-pepper beard.
  • Change the image style to a scifi/fantasy vibe.
  • Use an anime style and give the noblemen and courtiers brightly-colored robes.

F2K editing supports multiple reference images, letting you transfer visual elements (subjects, style and background) from one to another. When prompting over multiple images, refer to them in order as “image 1,” “image 2,” and so forth.

  • Give the king in image 1 the crown that appears in image 2.
  • Transfer the style of image 1 to image 2.

Dealing with multiple reference images is tricky. There is no way to adjust the weightings of each image, and so you will have to be explicit in the prompt about which visual elements you are combining. If you cannot get the effect you are looking for by modifying the prompt, you may find success by changing the order of images.

Also be aware that each image significantly increases the model’s VRAM usage. If you run into memory errors, use a smaller (quantized) model, or reduce the number and size of the reference images.

Other Versions of F2K Available in the Model Manager

To find additional supported versions of F2K, type “FLUX.2” into the Starter Models search box. This will show you the following types of files:

  • FLUX.2 Klein 4B/9B (Diffusers) These are the full-size all-in-one diffusers versions of F2K which come bundled with the VAE and text encoder.
  • FLUX.2 Klein 4B/9B These are standalone versions of the full-size F2K which require installation of separate VAE and text encoders. Note that the 4B and 9B models require different text encoders, “FLUX.2 Klein Qwen3 4B Encoder” and “FLUX.2 Klein Qwen3 8B Encoder” respectively. (Not a misprint: use the 9B F2K model with the 8B text encoder!)
  • FLUX.2 Klein 4B/9B (FP8) These are the standalone versions quantized to 8 bits. The 4B model will run comfortably on machines with 8GB VRAM, while the 9B model will run on machines with 12GB or higher. As with all quantized versions, there is minor loss of generation accuracy.
  • FLUX.2 Klein 4B/9B (Q4) These are standalone versions that have been quantized to 4 bits, resulting in very small and fast models that can run on cards with 6-8 GB VRAM.

There is only one F2K VAE, and it happens to be the same as the one used by FLUX.1 and Z-Image Turbo. However, there are several text encoder options:

  • FLUX.2 Klein Qwen3 4B Encoder Use this encoder with the F2K 4B versions. It also works with Z-Image Turbo.
  • Z-Image Qwen3 Text Encoder (quantized) This is a Q6-quantized version of the text encoder, that works with both F2K and ZiT. You may use this on smaller memory systems to reduce swapping of models in and out of VRAM.
  • FLUX.2 Klein Qwen3 8B Encoder Use this encoder with the F2K 9B versions. It is not compatible with ZiT.
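The pairings above can be summarized as a small lookup table. This is an illustrative sketch of ours, not InvokeAI code, and the quantized encoder's pairings follow our reading of the list above:

```python
# Which text encoders each model family accepts, per the list above.
COMPATIBLE_ENCODERS = {
    "F2K 4B": {"FLUX.2 Klein Qwen3 4B Encoder",
               "Z-Image Qwen3 Text Encoder (quantized)"},
    "F2K 9B": {"FLUX.2 Klein Qwen3 8B Encoder"},  # the 9B model uses the 8B encoder
    "ZiT":    {"FLUX.2 Klein Qwen3 4B Encoder",
               "Z-Image Qwen3 Text Encoder (quantized)"},
}

def encoder_ok(model, encoder):
    """Return True if the chosen encoder is compatible with the model."""
    return encoder in COMPATIBLE_ENCODERS.get(model, set())
```

A check like this makes the one surprising pairing explicit: the 9B main model goes with the 8B encoder, and the 8B encoder does not work with ZiT.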

You will find additional F2K models on HuggingFace and other model repositories, including the base models intended for fine-tuning and LoRA training. We have not exhaustively tested InvokeAI compatibility with all the available variants. Please report any incompatible models to InvokeAI Issues.

Many thanks to @Pfannkuchensack for contributing F2K support.

Other Features in this Release

The other features in this release include:

Z-Image Turbo Variance Enhancer

ZiT tends to produce very similar images for a given prompt. To increase image diversity, @Pfannkuchensack contributed a Seed Variance Enhancer node which adds calibrated amounts of noise to the prompt conditioning prior to generation. You will find this feature in the Generation panel under Advanced Options. When activated, you will see two sliders, one for Variance Strength and the other for Randomize Percent. The first slider controls how much noise will be added to the conditioned prompt, and the second controls what proportion of the conditioning’s weights will be altered. Using the default randomization of 50% of the values, a variance strength of 0.1 will produce subtle variations, while a strength of 0.5 will produce very marked deviation from the prompt. Increasing the percentage of weights modified will also increase the level of variation.
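Conceptually, the enhancer perturbs a fraction of the conditioning values with scaled noise before denoising begins. The NumPy sketch below illustrates the idea only; it is not InvokeAI's node implementation, and the function name and Gaussian noise model are our assumptions:

```python
import numpy as np

def add_seed_variance(conditioning, strength=0.1, randomize_pct=0.5, seed=None):
    """Add noise to a random subset of conditioning values.

    strength      -- scale of the added noise (cf. the Variance Strength slider)
    randomize_pct -- fraction of values to perturb (cf. Randomize Percent)
    """
    rng = np.random.default_rng(seed)
    # Boolean mask selecting roughly `randomize_pct` of the values.
    mask = rng.random(conditioning.shape) < randomize_pct
    noise = rng.normal(0.0, strength, size=conditioning.shape)
    # Perturb masked values; leave the rest untouched.
    return np.where(mask, conditioning + noise, conditioning)

cond = np.zeros((4, 8))
varied = add_seed_variance(cond, strength=0.1, randomize_pct=0.5, seed=0)
```

This matches the described behavior: raising `strength` makes each perturbed value deviate further, while raising `randomize_pct` spreads the perturbation across more of the conditioning.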

Improved Support for High-Resolution FLUX.1 Images

A new denoising tuning algorithm, introduced by @Pfannkuchensack, increases the accuracy of FLUX.1 generations at high resolutions. When a FLUX.1 model is selected, a new DyPE option will appear in the Generation panel. Its settings are Off (the default) to disable the algorithm, Auto to automatically activate DyPE when rendering images greater than 1536 pixels in either dimension, and 4K Optimized to activate the algorithm with parameters that are tuned for 4K images. Note that if you do not have sufficient VRAM to generate 4K images, this feature will not help you generate them. Instead, generate a smaller image and use Invoke’s Upscaling feature.
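The Auto behavior described above amounts to a simple size check. A hedged sketch of the decision logic (the 1536-pixel threshold comes from the text; the function and mode names are our own shorthand for Off / Auto / 4K Optimized):

```python
def dype_enabled(mode, width, height, threshold=1536):
    """Decide whether DyPE should be active for a FLUX.1 generation."""
    if mode == "off":
        return False
    if mode == "4k":
        return True  # always on, with parameters tuned for 4K images
    # Auto: activate only when either dimension exceeds the threshold.
    return width > threshold or height > threshold
```

For example, a 2048×1024 generation in Auto mode activates the algorithm, while a standard 1024×1024 generation leaves it off.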

Canvas high level transform smoothing

Another improvement contributed by @DustyShoe: The Canvas raster layer transform operation now supports multiple types of smoothing, thereby reducing the number of artifacts when an area is upscaled.
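The benefit of smoothing during an upscale can be seen in one-dimensional resampling: nearest-neighbor repetition produces blocky stair-steps, while interpolation yields a gradual ramp. A toy NumPy illustration (the Canvas's actual filters are not shown here):

```python
import numpy as np

def upscale_nearest(row, factor):
    """Nearest-neighbor upscale: each sample is simply repeated."""
    return np.repeat(row, factor)

def upscale_linear(row, factor):
    """Linear upscale: interpolate between neighboring samples."""
    x_old = np.arange(len(row))
    x_new = np.linspace(0, len(row) - 1, len(row) * factor)
    return np.interp(x_new, x_old, row)

edge = np.array([0.0, 1.0])        # a hard edge between two pixels
blocky = upscale_nearest(edge, 4)  # abrupt step: visible artifact when scaled up
smooth = upscale_linear(edge, 4)   # gradual ramp from 0 to 1
```

The same principle, applied in two dimensions with higher-order filters, is what reduces the artifacts when a transformed raster area is enlarged.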

Text Search and Highlighting in the Image Metadata Tab

The Image Viewer’s info (🛈) tab now has a search field that allows you to rapidly search and highlight text in image metadata, details, workflow and generation graph. In addition, the left margin of the metadata display has been widened to make the display more readable.

Thanks to @DustyShoe for this improvement.

Bugfixes

Several bugs were caught and fixed in this release and are listed in the detailed changelog below. Thanks to first-time contributors @kyhavlov and @aleyan for the bugs they caught and fixed.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

Translation Credits

Many thanks to the following language translators who contributed to this release: @Harvester62 (Italian) and @DustyShoe (Russian).

Also many thanks to Weblate for granting InvokeAI a free Open Source subscription to use its translation management service.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.10.0...v6.11.0

v6.11.1

InvokeAI 6.11.1

This is a bugfix release that corrects several image generation and user interface glitches:

  • Fix FLUX.2 Klein image generation quality (@Pfannkuchensack)
    • At higher step values and larger images, the FLUX.2 Klein models were generating image artifacts characterized by diagonals, cross-hatching and dust. This bug is now corrected.
  • Restore denoising strength for outpaint mode (@Pfannkuchensack)
    • Previously, when outpainting, the denoising strength was pinned at 1.0 rather than observing the value set by the user.
  • Only show FLUX.1 VAEs when a FLUX.1 main model is selected (@Pfannkuchensack)
    • This fix prevents the user from inadvertently selecting a FLUX.2 VAE when generating with FLUX.1.
  • Reset ZiT seed variance toggle when recalling images without that metadata (@Pfannkuchensack)
    • When remixing an image generated by Z-Image Turbo, the setting of the seed variance toggle (which increases image diversity) is now correctly restored.
  • Improve DyPE area calculation (@JPPhoto)
    • DyPE increases the quality of FLUX.1 models at higher resolutions. This fix improves how the algorithm’s parameters are automatically adjusted for image size.
  • Remove duplicate DyPE preset dropdown in generation settings (@Pfannkuchensack)
    • The DyPE dropdown in generation settings is no longer duplicated in the generation UI.

In addition to these bug fixes, new Russian translations were added by @DustyShoe.

Check out the roadmap

To see what the development team has planned for forthcoming releases, check out the InvokeAI roadmap. Feature releases will be issued roughly monthly.

Take the user survey

And don’t forget to tell us who you are, what features you use, and what features you most want to see included in future releases. Take the InvokeAI 2026 User Engagement Survey and share your thoughts!

Credits

In addition to the authors of these bug fixes, many thanks to @blessedcoolant, @skunkworxdark, and @mickr777 for their time and patience testing and reviewing the code.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.11.0...v6.11.1

v6.12.0

InvokeAI v6.12.0

This is a feature release of InvokeAI which provides support for multiple accounts on the same InvokeAI backend, enhanced support for the Z-Image and FLUX.2 models, multiple user interface enhancements, and new utilities for managing models.

Multi-User Mode (Experimental)

Have you ever wished you could share your InvokeAI instance with your friends, family or coworkers, but didn’t want to share your galleries or give everyone the ability to add and delete models? Now you can. InvokeAI 6.12 introduces an experimental multi-user mode that allows you to create separate user accounts with login names and passwords. Each account’s image boards, images, canvas state and UI preferences are separate from the others. Users with administrative privileges are allowed to perform system-wide tasks such as adding and configuring models and managing the session queue, while ordinary users are prevented from making this type of change.

InvokeAI Log-In Screen

See the Multi-User Mode User’s Guide for information on setting up and using this mode.

Multi-User mode was contributed by @lstein.

Enhanced Support for Z-Image and FLUX.2 Models

Z-Image Base — This version of InvokeAI adds support for the Z-Image Base model family. This is an undistilled version of Z-Image suitable for fine-tuning and LoRA training. It also provides a high level of image diversity while preserving excellent image quality.

FLUX.2 LoRAs — InvokeAI now supports a variety of FLUX.2 Klein LoRA formats.

Thanks to @Pfannkuchensack for his work on these enhancements.

Gallery Improvements

Paged Gallery Browsing — Paged gallery browsing is back. Go to image board settings and select “Use Paged Gallery View” to replace infinite gallery scrolling with page-by-page navigation.

Arrow Key Navigation — The arrow keys now work correctly when browsing a gallery. When the Viewer is in focus, the right and left arrow keys will navigate through the currently selected gallery. When the gallery thumbnails are in focus, the right/left/up/down arrows navigate among them.

@DustyShoe contributed these enhancements.

New Canvas Features

The Canvas gains several new features, all contributed by @DustyShoe.

Text Tool — The Canvas now features a Text tool that allows you to insert text in a variety of fonts, sizes and styles, move it around the canvas, and commit it to the raster layer.

Linear and radial gradient tools — These new tools add radial and linear gradients to the Canvas. The gradients use color transparency and the foreground/background colors to draw gradients in the direction of the mouse movement.

Invert Button for Regional Guidance Layers — You can now select any Regional Guidance region and select the “invert” button to exchange painted regions with unpainted ones and vice versa. As an added bonus, the invert button also works with Inpaint Masks.

Layer Controls Moved — The controls for creating, duplicating and deleting canvas layers have been moved from the top of the layers list to the bottom, which is more consistent with how other graphics packages position their layer controls and, we think, more intuitive. Long-term Canvas users may need to adjust to the new positioning.

Model and Database Maintenance

A few improvements contributed by @lstein make it easier to maintain the model and image databases.

Remove Orphaned Models — Over time InvokeAI may accumulate unused “orphan” models in its models directory: files that, for one reason or another, have no entries in the models database and therefore take up disk space without being usable. A new “Sync Models” button in the Model Manager detects such orphaned models and offers to delete them. Developers and other users who have access to the source code repository will also find a script, located in scripts/remove_orphaned_models.py, that does the same thing from the command line.

Remove Dangling Models — The converse problem occurs when a model directory, or one of its files, was removed or renamed externally, causing it to be referenced in the models database but not be usable. There is now a “Missing Files” filter option in the Model Manager that will identify models that are damaged or deleted. You can then select the models you wish to delete and remove them from the database. In addition, the model selection menus will no longer display models that are missing or broken.
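Both checks reduce to a set comparison between the files on disk and the paths recorded in the database. A simplified sketch (the real Model Manager works against InvokeAI's database schema; the function below is our illustration):

```python
from pathlib import Path

def audit_models(models_dir, registered_paths):
    """Compare a models directory against database-registered paths.

    Returns (orphaned, dangling):
      orphaned -- files on disk with no database entry (wasting space)
      dangling -- database entries whose files are missing (unusable)
    """
    on_disk = {p for p in Path(models_dir).rglob("*") if p.is_file()}
    registered = {Path(p) for p in registered_paths}
    orphaned = sorted(on_disk - registered)
    dangling = sorted(registered - on_disk)
    return orphaned, dangling
```

The "Sync Models" button addresses the first set, and the "Missing Files" filter addresses the second.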

Gallery Maintenance Script — For users with access to the source code repository, the scripts/gallery_maintenance.py Python script will clean up dangling and orphaned gallery images. Dangling images are those that appear in the Invoke gallery database but whose files have been deleted from disk. Orphaned images are those that have files on disk but are missing from the database. A related database maintenance tool with more bells and whistles can also be found in @Pfannkuchensack’s GitHub at https://github.com/Pfannkuchensack/sqlite_invokeai_db_tool.

Workflow Iterator Improvements

@JPPhoto fixed the way that workflow collections work. Previously when you created a Collection and passed it to an iterator, the items in the collection would be passed to downstream nodes in an unpredictable order. Now, the order of items in the collection is preserved, making complex workflows more predictable and reproducible.

Remote Controlling Invoke’s Generation Parameters

It is now possible to programmatically set Invoke’s generation parameters using a new REST endpoint. This allows a script or other external program to select the model, image size, seed, steps, LoRAs, reference images, and all the other parameters that go into a generation. See the feature’s documentation for details.
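In outline, an external script builds a JSON body of parameter overrides and POSTs it to the backend. In the sketch below, the endpoint path and field names are placeholders of ours, not the documented API; consult the feature's documentation for the real route and schema:

```python
import json
import urllib.request

def set_generation_params(base_url, params):
    """POST a dictionary of generation parameters to an InvokeAI backend.

    NOTE: the endpoint path below is a placeholder, not the real API route.
    """
    body = json.dumps(params).encode("utf-8")
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/v1/generation/params",  # placeholder path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example payload (field names are illustrative): model, size, sampling settings.
params = {
    "model": "FLUX.2 Klein 4B (GGUF Q4)",
    "width": 1024,
    "height": 1024,
    "steps": 4,
    "cfg_scale": 1.5,
    "seed": 42,
}
```

A scheduler or batch-rendering script could call such a function before enqueueing each generation.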

@lstein added this feature.

Translations

Thanks to @Harvester62 for providing the Italian translations for this release.


Installing and Updating

See the Installing and Updating section under v6.11.0 above; the same instructions and GPU compatibility notes apply.

Behind-the-Scenes Improvements

This release contains a number of bug fixes and performance enhancements.

  • Optimize cache locking in Klein text encoder — (@girlyoulookthebest) This addresses a race condition in the Model Cache which prematurely removed the FLUX.2 Klein encoder from memory.
  • Run Text Encoder on CPU — (@lstein) This is an option available in the details panel of the Model Manager that allows you to force large text encoder models to run on CPU rather than GPU. This preserves VRAM for use by the denoiser steps and in some cases improves performance. Thanks to @girlyoulookthebest who found and fixed a bug in this feature.
  • Fix IP Adapters losing their model path — (@Pfannkuchensack) Fixes the Model Manager’s “reidentify” function when run on IP Adapter models.
  • Kill the server with a single ^C — (@lstein) When previous versions of Invoke were launched from a command-line terminal, two keyboard interrupts (control-C) were required to shut the server down completely. This is now fixed.
  • Persist the selected board and image across browser sessions — (@lstein) The last image board selected is now restored when you exit a browser session and restart it.

Detailed Change Log

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.11.0...v6.12.0
