
All versions since InvokeAI Version 2.2.0

InvokeAI Version 2.2.0

With InvokeAI 2.2, this project now provides enthusiasts and professionals with a robust workflow solution for creating AI-generated and human-facilitated compositions. Additional enhancements improve safety, ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).

You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what’s new for InvokeAI.

Update 1 December 2022 -

  • The Unified Canvas: The Web UI now features a fully fitted infinite canvas that is capable of outpainting, inpainting, img2img and txt2img so you can streamline and extend your creative workflow. The canvas was rewritten to improve performance greatly and bring support for a variety of features like Paint Brushing, Unlimited History, Real-Time Progress displays and more.

  • Embedding Management: Easily pull from the top embeddings on Huggingface directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!

  • Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening the images in your external file explorer, even with large upscaled images!

  • 1-Click Installer Launch: With our official 1-click installation launch, using our tool has never been easier. Our OS-specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Click and get going - It’s now simple to get started with InvokeAI. See Installation.

  • Model Safety: A checkpoint scanner (picklescan) has been added to the initialization process for new models, helping guard against maliciously crafted pickle files.

  • DPM++2 Experimental Samplers: New samplers have been added! Please note that these are experimental, and are subject to change in the future as we continue to enhance our backend system.

First-time Installation

For those installing InvokeAI for the first time, please use this recipe: For automated installation, open up the “Assets” section below and download one of the InvokeAI-*.zip files. The instructions in the Installation section of the InvokeAI docs will provide you with a guide to which file to download and what to do with it when you get it.

For manual installation, download one of the “Source Code” archive files located in the Assets below. Unpack the file, and enter the InvokeAI directory that it creates. Alternatively, you may clone the source code repository using the command git clone https://github.com/invoke-ai/InvokeAI and follow the instructions in Manual Installation.

Upgrading

For those wishing to upgrade from an earlier version, please use this recipe: Download one of the “Source Code” archive files located in the Assets below. Unpack the file, and enter the InvokeAI directory that it creates. Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running the commands git checkout main and git pull. Next, select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in the new environments-and-requirements directory:

environment-lin-amd.yml   # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml  # Linux with an NVIDIA CUDA GPU
environment-mac.yml       # Macintoshes with MPS acceleration
environment-win-cuda.yml  # Windows with an NVIDIA CUDA GPU

Important Step that developers tend to miss! Either copy this environment file to the root directory with the name environment.yml, or make a symbolic link from environment.yml to the selected environment file:

Macintosh and Linux using a symbolic link: ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml

Replace xxx and yyy with the appropriate OS and GPU codes.
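
For example, on a Linux machine with an NVIDIA CUDA GPU, the command would be the following sketch; substitute the environment file that matches your own hardware:

# Linux + NVIDIA CUDA example
ln -sf environments-and-requirements/environment-lin-cuda.yml environment.yml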

Windows: copy environments-and-requirements\environment-win-cuda.yml environment.yml

When this is done, confirm that a file environment.yml has been created in the InvokeAI root directory and that it points to the correct file in the environments-and-requirements directory. Now run the following commands in the InvokeAI directory.

conda env update
conda activate invokeai
python scripts/preload_models.py

Additional installation information, including recipes for installing without Conda, can be found in Manual Installation

Known Bugs

  1. If you use the binary installer, the autocomplete function will not work on the command line client due to limitations of the version of python that the installer uses. However, all other functions of the command line client, and all features of the web UI will function perfectly well.
  2. The PyPatchMatch module, which provides excellent outpainting and inpainting results, does not currently work on Macintoshes. It will work on Linux after a support library is added to the system. See Installing PyPatchMatch.
  3. InvokeAI 2.2.0 does not support the Stable Diffusion 2.0 model at the current time, but is expected to provide full support in the near future.
  4. The GTX 1650 and 1660 Ti GPU cards only run in full-precision mode, which greatly limits the size of the models you can load and the images you can generate with InvokeAI.

Contributing

Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live. Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how to contribute to GitHub projects, here is a Getting Started Guide. Unlike previous versions of InvokeAI, we have now moved all development to the main branch, so please make your pull requests against this branch.

Support

For support, please use this repository’s GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.

InvokeAI Version 2.2.2 - A Stable Diffusion Toolkit

The remaining notes for this release (feature highlights, first-time installation, upgrading, known bugs, contributing, and support) are identical to those for InvokeAI Version 2.2.0 above.

InvokeAI 2.2.3

Note: This point release removes references to the binary installer from the installation guide. The binary installer is not stable at the current time. First-time users are encouraged to use the “source” installer as described in Installing InvokeAI with the Source Installer.

The remaining notes for this release (feature highlights, first-time installation, upgrading, known bugs, contributing, and support) are identical to those for InvokeAI Version 2.2.0 above.

InvokeAI Version 2.2.4 - A Stable Diffusion Toolkit

With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).

You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what’s new for InvokeAI.

Version 2.2.4 is a bugfix release. The major user-visible change is that we have overhauled the installation experience to make it faster and more stable. Please see Installation Overview for instructions on using the new installer, and see the .zip files in the Assets section below for the installer for your preferred platform. Note that you will need to install Python 3.9 or 3.10 to use the new installation method.

The new installers are located here. They have been updated 13 December in order to prevent a segfault crash on certain Macintosh systems.

There are a number of installation-related changes that previous InvokeAI users should be aware of:

Everything now lives in the invokeai directory.

Previously there were two directories to worry about: the directory that contained the InvokeAI source code and the launcher scripts, and the invokeai directory that contained the model files, embeddings, configuration and outputs. With the 2.2.4 release, this dual system is done away with, and everything, including the invoke.bat and invoke.sh launcher scripts, now lives in a directory named invokeai. By default this directory is located in your home directory (e.g. \Users\yourname on Windows), but you can select where it goes at install time.

InvokeAI-installer-2.2.4-p5-linux.zip
InvokeAI-installer-2.2.4-p5-mac.zip
InvokeAI-installer-2.2.4-p5-windows.zip

After installation, you can delete the install directory (the one that the zip file creates when it unpacks). Do not delete or move the invokeai directory!

The .invokeai initialization file has been renamed invokeai/invokeai.init

You can place frequently-used startup options in this file, such as the default number of steps or your preferred sampler. To keep everything in one place, this file has now been moved into the invokeai directory and is named invokeai.init.
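
As a sketch, invokeai.init holds the same switches you would otherwise pass on the command line. The two flags below are illustrative placeholders only, not a verified list of supported options:

# invokeai/invokeai.init, hypothetical contents; flag names are illustrative
--steps 20
--sampler_name k_lms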

To update from Version 2.2.3

The easiest route is to download and unpack one of the 2.2.4 installer files. When it asks you for the location of the invokeai runtime directory, respond with the path to the directory that contains your 2.2.3 invokeai. That is, if invokeai lives at C:\Users\fred\invokeai, then answer with C:\Users\fred and answer “Y” when asked if you want to reuse the directory.

The update.sh (update.bat) script that came with the 2.2.3 source installer does not know about the new directory layout and won’t be fully functional.

To update to 2.2.5 (and beyond) there’s now an update path.

As they become available, you can update to more recent versions of InvokeAI using the update.sh (update.bat) script located in the invokeai directory. Running it without any arguments will install the most recent version of InvokeAI. Alternatively, you can install a specific release by running the update.sh script with an argument in the command shell. The argument is the URL of the desired release’s zip file, which you can find by clicking on the green “Code” button on this repository’s home page. Here are some examples:

# 2.2.4 release
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.4.zip
# 2.2.5 release (don't try; it doesn't exist yet!)
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.5.zip
# current development version
update.sh https://github.com/invoke-ai/InvokeAI/archive/main.zip
# feature branch 3d-movies (don't try; it doesn't exist yet!)
update.sh https://github.com/invoke-ai/InvokeAI/archive/3d-movies.zip

Other 2.2.4 Improvements

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.2.3...v2.2.4

InvokeAI Version 2.2.5 - A Stable Diffusion Toolkit

We are pleased to announce a features and bugfix update to InvokeAI with the release of version 2.2.5.

What’s New in 2.2.5

WebUI

  • The WebGUI now features a Model Manager that lets you load and edit models interactively. It also allows you to pick a folder to scan for new .ckpt files to import. @blessedcoolant
  • Add Unified Canvas Alternate UI Beta: We added a new alternative UI to the Unified Canvas that mimics traditional photo editing applications you might be familiar with. You can switch to this new UI in the Settings menu by activating the new toggle option. @blessedcoolant
  • Restore and Upscale hotkeys have been changed from ‘R’ and ‘U’ to ‘Shift+R’ and ‘Shift+U’ respectively. This was done to avoid accidental keystrokes triggering these operations. @blessedcoolant
  • Added localization. Support has been added for Russian, Italian, Portuguese (Brazilian), German, Polish, and Spanish. @blessedcoolant Translators:
    • Russian: @netsvetaev
    • Italian: @Harvester62
    • Portuguese (Brazilian): @M-art-ucci
    • German: cofter
    • Polish: pejotr
    • Spanish: dreglad

If you are interested in translating InvokeAI to your language, please feel free to reach out to us on Discord.

CLI

  • Add the --karras_max option to the command line. @lstein
  • Add the --version option to get the version of the app. @lstein
  • Remove the requirement for a Hugging Face token, now that it is no longer needed. @ebr

Docker

  • Optimize dockerfile. @mauwii
  • Allow usage of GPUs in Docker. @xrd

Bug Fixes & Updates

  • Fix not being able to load the model while inpainting when using the free_gpu_mem option. @rmagur1203
  • Various installer improvements. @lstein
  • Fix segfault error on MacOS when using homebrew. @ebr
  • Fix a None type error when nsfw_checker was turned on. @limonspb
  • Cap the number of tokens at 75 and handle blends accordingly. @damian0815
  • [CLI] Fix the time step not displaying correctly during img2img. @wfng92
  • [WebUI] Fix the initial theme setting not displaying correctly in the selector after reload. @kasbah
  • [WebUI] Fix Hires Fix on the Img2Img tab. @hipsterusername
  • Fix embeddings not working correctly. @blessedcoolant
  • Fix an issue where the --config launch argument was not being recognized. @blessedcoolant
  • Retrieve threshold from an image even if it is 0. @JPPhoto
  • Add --root_dir as an alternate arg for --root during launch.
  • Relax HuggingFace login requirements during setup. @ebr
  • Fixed an issue where the --no-patchmatch argument would not work. @lstein
  • Fixed a crash in img2img. @lstein
  • Documentation, updates, typos and fixes. @limonspb, @lstein, @hipsterusername, @mauwii

Developer

  • Add concurrency to Github actions. @mauwii
  • Github action to lint python files with pyflakes @keturn
  • Fix circular dependencies on the frontend @kasbah
  • Add Github action for linting the frontend. @kasbah
  • Fix all linting warnings on the frontend. @kasbah
  • Add auto formatting for the frontend. @kasbah

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.2.4...latest

Installation

To install InvokeAI 2.2.5 on a new system, please download the zip file below, unzip it, and run the script install.sh (Macintosh, Linux) or install.bat (Windows). A walkthrough can be found at Installation Overview.

InvokeAI-installer-v2.2.5p2-linux.zip
InvokeAI-installer-v2.2.5p2-mac.zip
InvokeAI-installer-v2.2.5p2-windows.zip
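
From a Linux command line, the installation might look like the sketch below; the name of the unpacked directory is an assumption and may differ on your system:

# Hypothetical install session on Linux
unzip InvokeAI-installer-v2.2.5p2-linux.zip
cd InvokeAI-Installer    # directory name may differ
./install.sh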

Upgrading

If you have InvokeAI 2.2.4 installed, you can upgrade it quickly using an update script. Download the zip file below, and unpack it. Place the file update.bat (Windows) or update.sh (Linux/Mac) into your invokeai folder, replacing the update script that was previously there. Then launch the new update script from the command line or by double-clicking.

InvokeAI-updater-v2.2.5p2.zip

Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

InvokeAI Version 2.3.0

We are pleased to announce a features and performance update to InvokeAI with the release of version 2.3.0.

What’s New in 2.3.0

There are multiple internal and external changes in this version of InvokeAI which greatly enhance the developer and user experiences respectively.

Migration to Stable Diffusion diffusers models

Previous versions of InvokeAI supported the original model file format introduced with Stable Diffusion 1.4. In this format, known variously as “checkpoint” or “legacy” format, there is a single large weights file ending with .ckpt or .safetensors. Though this format has served the community well, it has a number of disadvantages, including large file size, slow loading times, and a variety of non-standard variants that require special-case code to handle. In addition, because checkpoint files are actually a bundle of multiple machine learning sub-models, it is hard to swap different sub-models in and out, or to share common sub-models.

A new format, introduced by StabilityAI in collaboration with HuggingFace, is called diffusers and consists of a directory of individual sub-models. The most immediate benefit of diffusers models is that they load from disk very quickly. A longer-term benefit is that in the near future diffusers models will be able to share common sub-models, dramatically reducing disk space when you have multiple fine-tuned models derived from the same base.

When you perform a new install of version 2.3.0, you will be offered the option to install the diffusers versions of a number of popular SD models, including Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of 2.1). These will act and work just like the checkpoint versions. Do not be concerned if you already have a lot of “.ckpt” or “.safetensors” models on disk! InvokeAI 2.3.0 can still load these and generate images from them without any extra intervention on your part.

To take advantage of the optimized loading times of diffusers models, InvokeAI offers options to convert legacy checkpoint models into optimized diffusers models. If you use the invokeai command line interface, the relevant commands (illustrated by the example session after this list) are:

  • !convert_model — Take the path to a local checkpoint file or a URL pointing to one, convert it into a diffusers model, and import it into InvokeAI’s models registry file.
  • !optimize_model — If you already have a checkpoint model in your InvokeAI models file, this command will accept its short name and convert it into a like-named diffusers model, optionally deleting the original checkpoint file.
  • !import_model — Take the local path of either a checkpoint file or a diffusers model directory and import it into InvokeAI’s registry file. You may also provide the ID of any diffusers model that has been published on the HuggingFace models repository and it will be downloaded and installed automatically.
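
A minimal sketch of these commands at the invokeai prompt; the file path, model name, and HuggingFace repository ID are placeholders:

# Hypothetical invokeai CLI session
invoke> !convert_model /path/to/some-model.ckpt
invoke> !optimize_model some-model
invoke> !import_model stabilityai/stable-diffusion-2-1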

The WebGUI offers similar functionality for model management.

For advanced users, new command-line options provide additional functionality. Launching invokeai with the argument --autoconvert <path to directory> takes the path to a directory of checkpoint files, automatically converts them into diffusers models and imports them. Each time the script is launched, the directory will be scanned for new checkpoint files to be loaded. Alternatively, the --ckpt_convert argument will cause any checkpoint or safetensors model that is already registered with InvokeAI to be converted into a diffusers model on the fly, allowing you to take advantage of future diffusers-only features without explicitly converting the model and saving it to disk.
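
For example (the directory path is a placeholder):

# Scan a folder of checkpoints at every launch and convert new ones to diffusers
invokeai --autoconvert ~/my-checkpoints

# Convert registered checkpoint models to diffusers on the fly, without writing them to disk
invokeai --ckpt_convert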

Please see INSTALLING MODELS for more information on model management in both the command-line and Web interfaces.

Support for the XFormers Memory-Efficient Cross-Attention Package

On CUDA (Nvidia) systems, version 2.3.0 supports the XFormers library. Once installed, the xformers package dramatically reduces the memory footprint of loaded Stable Diffusion model files and modestly increases image generation speed. xformers will be installed and activated automatically if you specify a CUDA system at install time.

The caveat with using xformers is that it introduces slightly non-deterministic behavior, and images generated using the same seed and other settings will be subtly different between invocations. Generally the changes are unnoticeable unless you rapidly shift back and forth between images, but to disable xformers and restore fully deterministic behavior, you may launch InvokeAI using the --no-xformers option. This is most conveniently done by opening the file invokeai/invokeai.init with a text editor, and adding the line --no-xformers at the bottom.
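
The tail of the initialization file would then look like this excerpt; any options you already had remain unchanged:

# invokeai/invokeai.init (excerpt)
--no-xformers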

A Negative Prompt Box in the WebUI

There is now a separate text input box for negative prompts in the WebUI. This is convenient for stashing frequently-used negative prompts (“mangled limbs, bad anatomy”). The [negative prompt] syntax continues to work in the main prompt box as well.

To see exactly how your prompts are being parsed, launch invokeai with the --log_tokenization option. The console window will then display the tokenization process for both positive and negative prompts.

Model Merging

Version 2.3.0 offers an intuitive user interface for merging up to three Stable Diffusion models. Model merging allows you to mix the behavior of models to achieve very interesting effects. To use it, each of the models must already be imported into InvokeAI and saved in diffusers format. Launch the merger using a new menu item in the InvokeAI launcher script (invoke.sh, invoke.bat) or directly from the command line with invokeai-merge --gui. You will be prompted to select the models to merge, the proportions in which to mix them, and the mixing algorithm. The script will create a new merged diffusers model and import it into InvokeAI for your use.

See MODEL MERGING for more details.

Textual Inversion Training

Textual Inversion (TI) is a technique for training a Stable Diffusion model to emit a particular subject or style when triggered by a keyword phrase. You can perform TI training by placing a small number of images of the subject or style in a directory, and choosing a distinctive trigger phrase, such as “pointillist-style”. After successful training, the subject or style will be activated by including <pointillist-style> in your prompt.
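
As a sketch, the training input is just a folder of example images; the directory and file names below are hypothetical:

# Hypothetical training data layout for a “pointillist-style” trigger
ti-training/pointillist-style/img01.png
ti-training/pointillist-style/img02.png
ti-training/pointillist-style/img03.png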

Previous versions of InvokeAI were able to perform TI, but it required using a command-line script with dozens of obscure command-line arguments. Version 2.3.0 features an intuitive TI frontend that will build a TI model on top of any diffusers model. To access training you can launch from a new item in the launcher script or from the command line using invokeai-ti --gui.

See TEXTUAL INVERSION for further details.

A New Installer Experience

The InvokeAI installer has been upgraded in order to provide a smoother and hopefully more glitch-free experience. In addition, InvokeAI is now packaged as a PyPi project, allowing developers and power-users to install InvokeAI with the command pip install InvokeAI --use-pep517. Please see Installation for details.

Developers should be aware that the pip installation procedure has been simplified and that the conda method is no longer supported at all. Accordingly, the environments_and_requirements directory has been deleted from the repository.

Installation

To install or upgrade to InvokeAI 2.3.0, please download the zip file below, unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.0.zip

If you are upgrading from an earlier version of InvokeAI, all you have to do is to run the installer for your platform. When the installer asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.1. You can see which versions are available by going to The PyPI InvokeAI Project Page.
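
Putting that together, a version-pinned upgrade from within the activated environment looks like this sketch (the version number is illustrative):

# Upgrade to a specific release from within the InvokeAI environment
pip install --use-pep517 --upgrade "InvokeAI==2.3.1"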

Command-line name changes

All of InvokeAI’s functionality, including the WebUI, command-line interface, textual inversion training and model merging, can be accessed from the invoke.sh and invoke.bat launcher scripts. The menu of options has been expanded to add the new functionality. For the convenience of developers and power users, we have normalized the names of the InvokeAI command-line scripts:

  • invokeai — Command-line client
  • invokeai --web — Web GUI
  • invokeai-merge --gui — Model merging script with graphical front end
  • invokeai-ti --gui — Textual inversion script with graphical front end
  • invokeai-configure — Configuration tool for initializing the invokeai directory and selecting popular starter models.

For backward compatibility, the old command names are also recognized, including invoke.py and configure-invokeai.py. However, these are deprecated and will eventually be removed.

Developers should be aware that the locations of the scripts’ source code have changed. The new locations are:

  • invokeai => ldm/invoke/CLI.py
  • invokeai-configure => ldm/invoke/config/configure_invokeai.py
  • invokeai-ti => ldm/invoke/training/textual_inversion.py
  • invokeai-merge => ldm/invoke/merge_diffusers

Developers are strongly encouraged to perform an “editable” install of InvokeAI using pip install -e . --use-pep517 in the Git repository, and then to call the scripts using their 2.3.0 names, rather than executing the scripts directly. Developers should also be aware that several important data files have been relocated into a new directory named invokeai. This includes the WebGUI’s frontend and backend directories, and the INITIAL_MODELS.yaml files used by the installer to select starter models. Eventually all InvokeAI modules will be in subdirectories of invokeai.

Known Bugs in 2.3.0

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected. Support will be added in the next diffusers library release.
  2. Metadata will not at first be retrieved when uploading an image into the WebGUI. The metadata will appear when the WebUI is reset and the page reloaded.
  3. The noise and threshold values are not loaded into the WebUI when the “*” (reuse all values) button is selected.
  4. The k_heun and k_dpm_2 schedulers will appear to perform twice as many sampling steps as were requested. This is an artifact of the fact that these schedulers perform two samplings per step, and it is a cosmetic issue only.
  5. When launching the Textual Inversion console-based GUI, if the command window is too small, the GUI will crash with an obscure error message. Make the window larger and relaunch.

Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Contributors

InvokeAI is the product of the loving attention of a large number of Contributors. For this release in particular, we’d like to recognize the combined efforts of Kevin Turner (@keturn), who got the diffusers port past the finish line, Eugene Brodsky (@ebr), for his work on the new installer, and Matthias Wild (@mauwii), for his many significant improvements to the testing pipeline and for setting up the system that uploads releases to the PyPi Python module repository.

We’d also like to call out Jonathon Pollack (@JPPhoto) for tirelessly testing each release candidate, @blessedcoolant and @psychedelicious for their work on the Web UI Model Manager and other UI features, and Kent Keirsey (@hipsterusername) for his amazing videos, outreach and team management.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.2.4...v2.3.0-rc5

Acknowledgements:

  • This release was supported in part by compute capacity provided by Mahdi Chaker’s “GPU Garden” cluster
  • A thousand thanks to @gogurtenjoyer and @whosawhatsis for their tireless work supporting users on the Discord server, as well as their contributions to bug finding and fixing.

InvokeAI Version 2.3.1.post2

We are pleased to announce a bugfix and quality of life update to InvokeAI with the release of version 2.3.1.

What’s New in 2.3.1

This is primarily a bugfix release, but it does provide several new features that will improve the user experience.

Enhanced support for model management

InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.

There are three ways of accessing the model management features:

  1. From the WebUI, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.

  2. Using the Model Installer App

Choose option (5) download and install models from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or to import checkpoint, safetensors, and diffusers models from a local disk or the internet. For example, you can import checkpoint URLs from popular SD sites or a HuggingFace diffusers model using its Repository ID, and you can designate a folder to be scanned at startup time for new models to import.

Command-line users can start this app using the command invokeai-model-install.

  3. Using the Command Line Client (CLI)

The !install_model and !convert_model commands have been enhanced to accept URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second converts them into the faster diffusers format before installation.
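
A short sketch of these commands at the CLI prompt; the URL and directory are placeholders:

# Hypothetical invokeai CLI session
invoke> !install_model https://example.com/models/my-model.safetensors
invoke> !convert_model /path/to/checkpoint-folder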

Internally, InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include “inpaint” in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into diffusers models the first time you use them.

Please see INSTALLING MODELS for more information on model management.

An Improved Installer Experience

The installer now launches a console-based UI for setting and changing commonly-used startup options.

After selecting the desired options, the installer installs several support models needed by InvokeAI’s face reconstruction and upscaling features, and then launches the model selection and installation interface described earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) change InvokeAI startup options.

Command-line users can launch the new configure app using invokeai-configure.

This release also comes with an overhauled updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) update InvokeAI. This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.

Command-line users can run the updater by typing invokeai-update.

Image Symmetry Options

There are now features to generate horizontal and vertical symmetry during generation. These work by waiting until a selected step in the generation process and then turning on a mirror-image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options --h_symmetry_time_pct and --v_symmetry_time_pct (these can be abbreviated to --h_sym and --v_sym like all other options).
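
A sketch of what the CLI version might look like; the prompt text and the 0.3 value (assuming the option takes a 0 to 1 fraction of the generation steps) are illustrative:

# Hypothetical CLI invocation: turn on horizontal mirroring partway through generation
invoke> a symmetrical art deco palace --h_symmetry_time_pct 0.3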

A New Unified Canvas Look

This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout.

Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself.

Model conversion and merging within the WebUI

The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.

An easier way to contribute translations to the WebUI

We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project’s translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.

Numerous internal bugfixes and performance improvements

This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing. See the Detailed Change Log for a detailed list of bugs caught and squished.

Summary of InvokeAI command line scripts (all accessible via the launcher menu)

  • invokeai — Command line interface
  • invokeai --web — Web interface
  • invokeai-model-install — Model installer with console forms-based front end
  • invokeai-ti --gui — Textual inversion, with a console forms-based front end
  • invokeai-merge --gui — Model merging, with a console forms-based front end
  • invokeai-configure — Startup configuration; can also be used to reinstall support models
  • invokeai-update — InvokeAI software updater

Installation

To install or upgrade to InvokeAI 2.3.1, please download the zip file below, unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.1.post2.zip

If you are upgrading from an earlier version of InvokeAI, run the installer and when it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.1. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page.

Last Feature Release on the 2.3.x Branch

This will be the last feature release on the 2.3.x branch. The development team is migrating to a new software architecture called Nodes, which will provide enhanced workflow management features as well as a much easier way for community developers to contribute to the project. We anticipate the transition taking 4-8 weeks (spring 2023). Until that time, we will be releasing bugfixes and other minor updates only.

Known Bugs in 2.3.1

These are known bugs in the release.

  1. MacOS users generating 768x768 pixel images or greater using diffusers models may experience a hard crash with the assertion NDArray > 2**32. This appears to be an issue in an upstream library, and currently the only workaround is to install and use legacy .ckpt/.safetensors models instead of the diffusers models. For more information on this bug, see this Issue.
  2. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected. Support will be added in the next diffusers library release.
  3. Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.
  4. InvokeAI’s memory requirements have increased modestly due to a variety of factors. For help debugging and mitigating out of memory issues, see the Troubleshooting section of the installation guide.
  5. FIXED IN 2.3.1.post1 — model merging fixed
  6. FIXED in 2.3.1.post2 — during installation, output and embeddings directories with spaces in their path names are now handled correctly.

Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Contributors

InvokeAI is the product of the loving attention of a large number of Contributors. For this release in particular, we’d like to recognize the combined efforts of @blessedcoolant, who worked tirelessly on the model management interface despite multiple changes in the backend, and Jonathon Pollack (@JPPhoto) for working deep in the bowels of memory management and image generation. Kudos to @damian0815 and Kevin Turner (@keturn) for their improvements on model memory management and prompt parsing, respectively, and many thanks to Matthias Wild (@mauwii) and Eugene Brodsky (@ebr) for their work on package management and installation.

Last but not least, we acknowledge the tireless efforts of Kent Keirsey (@hipsterusername) for his amazing videos, outreach and team management.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.0...v2.3.1-rc1

InvokeAI Version 2.3.2

We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.2.

What’s New in 2.3.2

This is a bugfix and minor feature release.

Bugfixes

Since version 2.3.1 the following bugs have been fixed:

  1. Black images appearing for potential NSFW images when generating with legacy checkpoint models and both --no-nsfw_checker and --ckpt_convert turned on.
  2. Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a “base” model (512 pixels) or the 768-pixel SD-2.1 model.
  3. The “Use All” button was not restoring the Hi-Res Fix setting on the WebUI
  4. When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
  5. Crashes that occurred during model merging.
  6. Restore previous naming of Stable Diffusion base and 768 models.
  7. Upgraded to latest versions of diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the assertion NDArray > 2**32 issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.

As part of the upgrade to diffusers, the location of the diffusers-based models has changed from models/diffusers to models/hub. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your models/diffusers directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.

New “invokeai-batch” script

2.3.2 introduces a new command-line only script called invokeai-batch that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:

a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...

If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system’s resources efficiently (make sure you have good GPU cooling).

To try invokeai-batch out, launch the “developer’s console” using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, give the command invokeai-batch --help in order to learn how the script works and create your first template file for dynamic prompt generation.
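
The expansion itself is plain combinatorics. The shell loop below is only a conceptual illustration of how the cross-product of template variables yields the list above; it is not the actual invokeai-batch template format:

# Conceptual illustration of combinatorial prompt expansion (not the template format)
for place in "in the mountains" "in the desert"; do
  for subject in "a shack" "a chalet"; do
    for style in "photograph" "watercolor" "oil painting"; do
      echo "$subject $place, $style"
    done
  done
done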

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.2 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.2.post1.zip

To update from 2.3.1 you may use the “update” option (choice 6) in the invoke.sh/invoke.bat launcher script. Alternatively, you may use the installer. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.2. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page.

Known Bugs in 2.3.2

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.1...v2.3.2

Acknowledgements

Many thanks to @mauwii (Matthias Wilde), @psychedelicious, @blessedcoolant (Vic), @blhook (Pull Shark), and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.

InvokeAI Version 2.3.3 - A Stable Diffusion Toolkit

We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.3.

What’s New in 2.3.3

This is a bugfix and minor feature release.

Bugfixes

Since version 2.3.2 the following bugs have been fixed:

Bugs

  1. When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
  2. Textual inversion will select an appropriate batch size based on whether xformers is active, and will default to xformers enabled if the library is detected.
  3. The batch script log file names have been fixed to be compatible with Windows.
  4. Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
  5. Support loading of legacy config files that have no personalization (textual inversion) section.
  6. An infinite loop when opening the developer’s console from within the invoke.sh script has been corrected.
  7. Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.

Enhancements

  1. It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested “Illuminati” model.
  2. The “NegativePrompts” embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
  3. If no --model is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
  4. On Linux systems, the invoke.sh launcher now uses a prettier console-based interface. To take advantage of it, install the dialog package using your package manager (e.g. sudo apt install dialog).
  5. When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model, following this example:
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.3 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.3.zip

To update from 2.3.1 or 2.3.2 you may use the “update” option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.3.

Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.3. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page.

Known Bugs in 2.3.3

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.2.post1...v2.3.3-rc1

Acknowledgements

Many thanks to @psychedelicious, @blessedcoolant (Vic), @JPPhoto (Jonathan Pollack), @ebr (Eugene Brodsky), @JoshuaKimsey, @EgoringKosmos, and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.2.post1...v2.3.3

InvokeAI Version 2.3.4.post1 - A Stable Diffusion Toolkit

We are pleased to announce a features update to InvokeAI with the release of version 2.3.4.

Update: 13 April 2023 - 2.3.4.post1 is a hotfix that corrects an installer crash resulting from an update to the upstream diffusers library. If you have recently tried to install 2.3.4 and experienced a crash relating to “crossattention,” this release will fix the issue.

What’s New in 2.3.4

This features release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.

LoRA and LyCORIS Support

LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix .safetensors. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)

To use LoRA/LyCORIS models in InvokeAI:

  1. Download the .safetensors files of your choice and place them in /path/to/invokeai/loras. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.

  2. Add withLora(lora-file,weight) to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named loras/sushi.safetensors is present:

family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)

Multiple withLora() prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA’s influence stronger. Negative weights are also allowed, which can lead to some interesting effects.

  3. Generate as you usually would! If you find that the image is too “crisp” try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you’ll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA’s training. Don’t try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa. This will trigger a non-fatal error message and generation will not proceed.

  4. You can change the location of the loras directory by passing the --lora_directory option to invokeai.
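
For example (the directory path shown is illustrative):

invokeai --lora_directory /path/to/my/loras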

New WebUI LoRA and Textual Inversion Buttons

This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.

[Screenshot: old-sea-captain-annotated]

Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.

Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.

By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on “Show Textual Inversions from HF Concepts Library.” When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.

Minor features and fixes

This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.4 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.4.post1.zip

To update from versions 2.3.1 or higher, select the “update” option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.4. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.4. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page. (Pre-release note: this will only work after the official release.)

Known Bugs in 2.3.4

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Change Log

New Contributors and Acknowledgements

Many thanks to these individuals, as well as @blessedcoolant and @damian0815 for their contributions to this release.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.3...v2.3.4rc1

InvokeAI 2.3.5

We are pleased to announce a features update to InvokeAI with the release of version 2.3.5. This is currently a pre-release for community testing and bug reporting.

What’s New in 2.3.5

This release expands support for additional LoRA and LyCORIS models, upgrades diffusers to 0.15.1, and fixes a few bugs.

LoRA and LyCORIS Support Improvement

  • A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
  • Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
  • Support for the newer LoKR LyCORIS files has been added.

Diffusers 0.15.1

  • This version updates the diffusers module to version 0.15.1 and is no longer compatible with 0.14. This provides a number of performance improvements and bug fixes.

Performance Improvements

  • When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.

Bug Fixes

  • The “import models from directory” and “import from URL” functionality in the console-based model installer has now been fixed.

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.5 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.5.zip

To update from versions 2.3.1 or higher, select the “update” option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.5. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.5. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page.

Known Bugs in 2.3.5

These are known bugs in the release.

  1. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
  2. If the xformers memory-efficient attention module is used, each image generated with the same prompt and settings will be slightly different. xformers 0.0.19 reduces or eliminates this problem, but hasn’t been extensively tested with InvokeAI. If you wish to upgrade, you may do so by entering the InvokeAI “developer’s console” and giving the command pip install xformers==0.0.19. You may see a message about InvokeAI being incompatible with this version, which you can safely ignore. Be sure to report any unexpected behavior to the Issues pages.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Development Roadmap

This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (late April, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of “nodes”.

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.

Change Log

New Contributors and Acknowledgements

  • @AbdBarho contributed the checksum performance improvements
  • @StAlKeR7779 (Sergey Borisov) contributed the LoKR support, did the diffusers 0.15 port, and cleaned up the code in multiple places.

Many thanks to these individuals, as well as @damian0815 for his contribution to this release.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.4.post1...v2.3.5-rc1

InvokeAI Version 2.3.5.post1

We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post1.

What’s New in 2.3.5.post1

The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.

Here are the new library versions:

Library     Version
Torch       2.0.0
Diffusers   0.16.1
Xformers    0.0.19
Compel      1.1.5
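
To confirm which versions are installed in your environment, you can run the following from the developer’s console (a quick check, not an official upgrade step):

pip show torch diffusers xformers compel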

Other Improvements

When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running (thanks to @pedantic79 for this).

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.5.post1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.5.post1.zip

If you are using the Xformers library, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:

  1. Start the launcher script and select option # 8 - Developer’s console.
  2. Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade

If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the “[xformers]” part.
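
That is:

pip install invokeai --use-pep517 --upgrade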

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.5.post1. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page.

Known Bugs in 2.3.5.post1

These are known bugs in the release.

  1. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Development Roadmap

This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (mid-May, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of “nodes”.

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.


What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v2.3.5.post1

InvokeAI 2.3.5.post2

We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post2.

What’s New in 2.3.5.post2

This is a bugfix release. In previous versions, the built-in update script did not update the Xformers library when the torch library was upgraded, leaving users with a version that ran on CPU only. Install this version to fix the issue so that it does not recur when updating to future versions, such as InvokeAI 3.0.0.

As a bonus, this version allows you to apply a checkpoint VAE, such as vae-ft-mse-840000-ema-pruned.ckpt, to a diffusers model without worrying about finding the diffusers version of the VAE. From within the web Model Manager, choose the diffusers model you wish to change, press the edit button, and enter the Location of the VAE file of your choice. The field will now accept either a .ckpt file or a diffusers directory.

Installation / Upgrading

To install 2.3.5.post2 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.5.post2.zip

If you are using the Xformers library, and running v2.3.5.post1 or earlier, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:

  1. Start the launcher script and select option # 8 - Developer’s console.
  2. Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade

If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the “[xformers]” part. From v2.3.5.post2 onward, the updater script will work properly with Xformers installed.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==2.3.5.post2

To upgrade to an xformers version if you are not currently using xformers, use:

pip install --use-pep517 --upgrade InvokeAI[xformers]

You can see which versions are available by going to The PyPI InvokeAI Project Page.

Known Bugs in 2.3.5.post2

These are known bugs in the release.

  1. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Development Roadmap

This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (mid-May, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of “nodes”.

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v2.3.5.post2

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5.post1...v2.3.5.post2

InvokeAI 3.0.0

InvokeAI Version 3.0.0

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

InvokeAI version 3.0.0 represents a major advance in functionality and ease compared with the last official release, 2.3.5.

Please use the 3.0.0 release discussion thread for comments on this version, including feature requests, enhancement suggestions and other non-critical issues. Report bugs to InvokeAI Issues. For interactive support with the development team, contributors and user community, you are invited to join the InvokeAI Discord Server.

To learn more about InvokeAI, please see our Documentation Pages.

What’s New in v3.0.0

Quite a lot has changed, both internally and externally.

Web User Interface:

  • A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
  • A Dynamic Prompts interface that lets you generate combinations of prompt elements.
  • Preliminary support for Stable Diffusion XL, the latest iteration of Stability AI’s image generation models.
  • A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
  • The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
  • An experimental Nodes Editor that lets you design and execute complex image generation operations using a point-and-click interface. To activate this, please use the settings icon at the upper right of the Web UI.
  • Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM required.
  • Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
  • Long prompt support (>77 tokens).
  • Memory and speed improvements.

The WebUI can now be launched from the command line using either invokeai-web (preferred new way) or invokeai --web (deprecated old way).

Command Line Tool

The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli that allows you to experiment with InvokeAI nodes.

Installer

The console-based model installer, invokeai-model-install has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.

Internal

Internally the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as “nodes”, which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.


Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.0 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

InvokeAI-installer-v3.0.0.zip

Upgrading in place

All users can upgrade from the 3.0 beta releases using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select “Manually enter the tag name for the version you wish to update to” option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the “Developer’s console” option [8]
  4. Type the following command:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0.zip" --use-pep517 --upgrade

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the “*.orig” files and directories to their original names (without the .orig), run the update script again, and select [1] “Update to the latest official release”.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
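
For example (the directory paths shown are illustrative):

invokeai-migrate3 --from /home/user/invokeai --to /home/user/invokeai-3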

Upgrading using pip

Once 3.0.0 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==3.0.0

To upgrade to an xformers version if you are not currently using xformers, use:

pip install --use-pep517 --upgrade InvokeAI[xformers]

You can see which versions are available by going to The PyPI InvokeAI Project Page.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. As of the current time (July 2023) SDXL has not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, which allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature as soon as SDXL 1.0 is officially released.

SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.

To experiment with SDXL, you’ll need the “base” and “refiner” models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen). Alternatively, select launcher option [6] “Change InvokeAI startup options” and paste the HF token into the indicated field.
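
That is, from the developer’s console:

huggingface-cli login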

Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command invokeai-model-install. In the STARTER MODELS section, select the checkboxes for stable-diffusion-xl-base-0-9 and stable-diffusion-xl-refiner-0-9. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.

Alternatively you can install these models from the Web UI Model Manager (cube at the bottom of the left-hand panel) and navigate to Import Models. In the field labeled Location type in the repo id of the base model, which is stabilityai/stable-diffusion-xl-base-0.9. Press Add Model and wait for the model to download and install. After receiving confirmation that the model installed, repeat with stabilityai/stable-diffusion-xl-refiner-0.9.

Note that these are large models (12 GB each) so be prepared to wait a while.

To use the installed models you will need to activate the Node Editor, an advanced feature of InvokeAI. Go to the Settings (gear) icon on the upper right of the Web interface, and activate “Enable Nodes Editor”. After reloading the page, an inverted “Y” will appear on the left-hand panel. This is the Node Editor.

Enter the Node Editor and click the Upload button to upload either the SDXL base-only or SDXL base+refiner pipelines (right click to save these .json files to disk). This will load and display a flow diagram showing the (many complex) steps in generating an SDXL image.

Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style (“bluebird in a sakura tree” and “chinese classical painting”) and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will eventually be generated and added to the image gallery. Unlike standard rendering, intermediate images are not (yet) displayed during rendering.

Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, be aware that certain settings have a disproportionate effect on image quality. In particular, the latents-decode (VAE) step must be run at fp32 precision (using a slider at the bottom of the VAE node), and images will change dramatically as the denoising threshold used by the refiner is adjusted.

Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0

Known Bugs in 3.0

This is a list of known bugs in 3.0 as well as features that are planned for inclusion in later releases:

  • On Macintoshes with MPS, Stable Diffusion 2 models will not render properly. This will be corrected in the next point release.
  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • Face restoration is no longer needed due to the improvement in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
  • There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.
  • The NSFW checker (blurs explicit images) is currently disabled but will be reenabled in time for the next release.

Getting Help

For support, please use this repository’s GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.

What’s Changed Since 2.3.5

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v3.0.0rc2

InvokeAI Version 3.0.1

InvokeAI Version 3.0.1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.

To learn more about InvokeAI, please see our Documentation Pages.

What’s New in v3.0.1

  • Stable Diffusion XL support in the Text2Image and Image2Image panels (but not the Unified Canvas).
  • Can install and run both diffusers-style and .safetensors-style SDXL models.
  • Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
  • Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
  • The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
  • During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several “starter” main models.
  • User interface cleanup to reduce visual clutter and increase usability.

Recent Changes

Since RC3, the following has changed:

  • Fixed crash on Macintosh M1 machines when rendering SDXL images
  • Fixed black images when generating on Macintoshes using the Unipc scheduler (falls back to CPU; slow)

Since RC2, the following has changed:

  • Added compatibility with Python 3.11
  • Updated diffusers to 0.19.0
  • Cleaned up console logging - can now change logging level as described in the docs
  • Added download of an updated SDXL VAE “sdxl-vae-fix” that may correct certain image artifacts in SDXL-1.0 models
  • Prevent web crashes during certain resize operations

Developer changes:

  • Reformatted the whole code base with the “black” tool for a consistent coding style
  • Add pre-commit hooks to reformat committed code on the fly

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

InvokeAI-installer-v3.0.1.zip

Upgrading in place

All users can upgrade from 3.0.0 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select “Manually enter the tag name for the version you wish to update to” option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the “Developer’s console” option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the “*.orig” files and directories to their original names (without the .orig), run the update script again, and select [1] “Update to the latest official release”.

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==3.0.1
invokeai-configure --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of “starter” models.
  2. Use the Web’s Model Manager to select “Import Models” and when prompted provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0 (note that these are preliminary IDs - these notes are being written before the SDXL release)
  3. Download the models manually and cut and paste their paths into the Location field in “Import Models”

Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0

Users with 12 GB or more VRAM can reduce the time waiting for the image to start generating by setting max_vram_cache_size to 6 GB or higher.
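
For example, a 12 GB card might use the following illustrative settings:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 6.0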


Known Bugs in 3.0

This is a list of known bugs in 3.0.1 as well as features that are planned for inclusion in later releases:

  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • Face restoration is no longer needed due to the improvement in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
  • There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.

Getting Help

For support, please use this repository’s GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.0...v3.0.1

Source code and previous installer files

The files below include the InvokeAI installer zip file, the full source code, and previous release candidates for 3.0.1

InvokeAI 3.0.1 (hotfix 3)

InvokeAI Version 3.0.1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.

To learn more about InvokeAI, please see our Documentation Pages.

What’s New in v3.0.1

  • Stable Diffusion XL support in the Text2Image and Image2Image panels (but not the Unified Canvas).
  • Can install and run both diffusers-style and .safetensors-style SDXL models.
  • Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
  • Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
  • The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
  • During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several “starter” main models.
  • User interface cleanup to reduce visual clutter and increase usability.

v3.0.1post3 Hotfixes

This release contains a proposed hotfix for the Windows install OSError crashes that began appearing in 3.0.1. In addition, the following bugs have been addressed:

  • Corrected an issue where some SD-1 safetensors models could not be loaded or converted
  • The models_dir configuration variable used to customize the location of the models directory is now working properly
  • Fixed crashes of the text-based installer when the number of installed LoRAs and other models exceeded 72
  • SDXL metadata is now set and retrieved properly
  • Corrected post1’s crash when running configure with the --yes flag
  • Corrected crashes in the CLI model installer

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.

InvokeAI-installer-v3.0.1post3.zip

Upgrading in place

All users can upgrade from 3.0.0 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select “Manually enter the tag name for the version you wish to update to” option [3]
  5. Select option [1] to upgrade to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the “Developer’s console” option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1post3.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the “*.orig” files and directories to their original names (without the .orig), run the update script again, and select [1] “Update to the latest official release”.

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==3.0.1post3
invokeai-configure --yes --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of “starter” models.
  2. Use the Web’s Model Manager to select “Import Models” and when prompted provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and cut and paste their paths into the Location field in “Import Models”

Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5

Known Bugs in 3.0

This is a list of known bugs in 3.0.1post3 as well as features that are planned for inclusion in later releases:

  • The merge script isn’t working, and crashes during startup (will be fixed soon)
  • Inpainting models generated using the A1111 merge module are not loading properly (will be fixed soon)
  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • Face restoration is no longer needed due to the improvement in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
  • There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.

Getting Help

For support, please use this repository’s GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.1...v3.0.1post1

InvokeAI Version 3.0.2

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation Pages.

What’s New in v3.0.2

  • LoRA support for SDXL is now available
  • Multi-select actions are now supported in the Gallery
  • Images are automatically sent to the board that is selected at invocation
  • Images from previous versions of InvokeAI can be imported with the invokeai-import-images command
  • Inpainting models imported from A1111 will now work with InvokeAI (see upgrading note)
  • Model merging functionality has been fixed
  • Improved Model Manager UI/UX
  • InvokeAI 3.0 can be served via HTTPS
  • Execution statistics are visible in the terminal after each invocation
  • ONNX models are now supported for use with Text2Image
  • Pydantic errors when upgrading in place have been resolved
  • Code formatting is now part of the CI/CD pipeline
  • …and lots more! You can view the full change log here

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.2 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
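If you prefer the command line, the script can be run directly from a terminal. As a sketch, assuming the zip unpacks into an InvokeAI-Installer folder as in recent releases:

cd InvokeAI-Installer
./install.sh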

In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.

InvokeAI-installer-v3.0.2.zip

Upgrading in place

All users can upgrade from 3.0.1 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select “Manually enter the tag name for the version you wish to update to” option [3]
  5. Alternatively, select option [1] to upgrade directly to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the “Developer’s console” option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] “Update to the latest official release”.
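As a sketch, on Linux or Macintosh the renames might look like this when run from the root directory (adjust the list to the backups you actually have):

mv invokeai.init.orig invokeai.init
mv models.orig models
mv configs/models.yaml.orig configs/models.yaml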

Note:

  • If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your models/.cache folder before proceeding.
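On Linux or Macintosh this can be done from the root directory with a command along these lines (Windows users can delete the folder in the file explorer):

rm -rf models/.cache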

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, then select the base model and, optionally, the refiner from the checkbox list of “starter” models.
  2. Use the Web UI’s Model Manager to select “Import Models” and, when prompted, provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and copy and paste their paths into the Location field in “Import Models”.

Also be aware that SDXL requires 6-8 GB of VRAM to render 1024x1024 images, and a minimum of 16 GB of system RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5

Known Issues in 3.0

This is a list of known bugs in 3.0.2 as well as features that are planned for inclusion in later releases:

  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.

Getting Help

For support, please use this repository’s GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!

Detailed Change Log

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.1rc3...v3.0.2

InvokeAI v3.0.2post1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation Pages.

What’s New in v3.0.2post1

  • Support for LoRA models in diffusers format
  • Warn instead of crashing when a corrupted model is detected
  • Bug fix for auto-adding to a board

What’s New in v3.0.2

  • LoRA support for SDXL is now available
  • Multi-select actions are now supported in the Gallery
  • Images are automatically sent to the board that is selected at invocation
  • Images from previous versions of InvokeAI can be imported with the invokeai-import-images command
  • Inpainting models imported from A1111 will now work with InvokeAI (see upgrading note)
  • Model merging functionality has been fixed
  • Improved Model Manager UI/UX
  • InvokeAI 3.0 can be served via HTTPS
  • Execution statistics are visible in the terminal after each invocation
  • ONNX models are now supported for use with Text2Image
  • Pydantic errors when upgrading in place have been resolved
  • Code formatting is now part of the CI/CD pipeline
  • …and lots more! You can view the full change log here

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install 3.0.2post1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.

InvokeAI-installer-v3.0.2post1.zip

Upgrading in place

All users can upgrade from 3.0.1 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select “Manually enter the tag name for the version you wish to update to” option [3]
  5. Alternatively, select option [1] to upgrade directly to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the “Developer’s console” option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] “Update to the latest official release”.

Note:

  • If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your models/.cache folder before proceeding.

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.

Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, then select the base model and, optionally, the refiner from the checkbox list of “starter” models.
  2. Use the Web UI’s Model Manager to select “Import Models” and, when prompted, provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and copy and paste their paths into the Location field in “Import Models”.

Also be aware that SDXL requires 6-8 GB of VRAM to render 1024x1024 images, and a minimum of 16 GB of system RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5

Known Issues in 3.0

This is a list of known bugs in 3.0.2post1 as well as features that are planned for inclusion in later releases:

  • Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
  • Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
  • High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.

Getting Help

For support, please use this repository’s GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!

Detailed Change Log since 3.0.2

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.2...v3.0.2post1

InvokeAI 3.1.0

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation, and check out the 3.1 Release Landing Page for the Community Edition!

Download the installer: InvokeAI-installer-v3.1.0.zip

What’s New in v3.1.0

Workflows

InvokeAI 3.1.0 introduces the Workflow Builder, a powerful new tool to aid the image generation process. Workflows combine the power of node-based software with the ease of use of a GUI to deliver the best of both worlds.

The Node Editor allows you to build the custom image generation workflows you need, as well as enables you to create and use custom nodes, making InvokeAI a fully extensible platform.

To get started with nodes in InvokeAI, take a look at our example workflows, or some of the custom Community Nodes.

A zip file of example workflows can be found at the bottom of this page under Assets.

Other New Features

  • Expanded SDXL support across all areas of InvokeAI.
  • Enhanced In-painting & Out-painting capabilities.
  • Improved Control Asset Usage, including from the Unified Canvas.
  • Newly added nodes for better functionality.
  • Seamless Tiling is back, with SDXL support!
  • Generation statistics can be viewed from the command line after generation
  • Hot-reloading is now available for Python files in the application
  • LoRAs are sorted alphabetically
  • Symbolic links to directories in the autoimport folder are now supported
  • UI/UX Improvements
  • Interactively configure image generation options, the attention system, and the VRAM cache
  • …and so much more! You can view the full change log here

Installation / Upgrading

Installing using the InvokeAI zip file installer

To install v3.1.0 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you have InvokeAI 2.3.5 or older installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.

In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.

InvokeAI-installer-v3.1.0.zip

Upgrading in place

All users can upgrade from 3.0.2 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.1 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):

  1. Enter the root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the upgrade menu option [9]
  4. Select “Manually enter the tag name for the version you wish to update to” option [3]
  5. Alternatively, select option [1] to upgrade directly to the latest version.
  6. When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]

Windows users can instead follow this recipe:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the “Developer’s console” option [8]
  4. Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.1.0.zip" --use-pep517 --upgrade
invokeai-configure --root .

This will produce a working 3.1.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.

After you have confirmed everything is working, you may remove the following backup directories and files:

  • invokeai.init.orig
  • models.orig
  • configs/models.yaml.orig
  • embeddings
  • loras

To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] “Update to the latest official release”.

Note:

  • If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your models/.cache folder before proceeding.

What to do if problems occur during the install

Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.

In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.

Migrating images from a 2.3 InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-import-images, which will copy images from any previous version of InvokeAI to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:

invokeai-import-images

This will prompt you to select the destination and source directories, and allow you to select which image gallery board to import into.

Migrating models and settings from an old InvokeAI root directory to a 3.0 directory

We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:

invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>

Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)

Upgrading using pip

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:

pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights

You may specify a particular version by adding the version number to the command, as in:

pip install --use-pep517 --upgrade InvokeAI==3.1.0
invokeai-configure --yes --skip-sd-weights

Important: After doing the pip install, it is necessary to run invokeai-configure in order to download the new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.


Getting Started with SDXL

Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.

To download the base and refiner SDXL models, you have several options:

  1. Select option [5] from the invoke.bat launcher script, then select the base model and, optionally, the refiner from the checkbox list of “starter” models.
  2. Use the Web UI’s Model Manager to select “Import Models” and, when prompted, provide the HuggingFace repo_ids for the two models:
    • stabilityai/stable-diffusion-xl-base-1.0
    • stabilityai/stable-diffusion-xl-refiner-1.0
  3. Download the models manually and copy and paste their paths into the Location field in “Import Models”.

Also be aware that SDXL requires 6-8 GB of VRAM to render 1024x1024 images, and a minimum of 16 GB of system RAM. For best performance, we recommend the following settings in invokeai.yaml:

precision: float16
ram: 12.0
vram: 0.5

Known Issues in 3.1

This is a list of known issues in 3.1.0 as well as features that are planned for inclusion in later releases:

  • The max_vram_cache_size and max_cache_size settings in invokeai.yaml have been deprecated and renamed to vram and ram respectively (see the sketch after this list). To adjust the cache sizes, we recommend using the configure script (option [6] in the launcher).
  • Variation generation was not fully functional and did not make it into the release.
  • High-res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find high-res optimization workflows attached, and more in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000, for use with the Workflow tool.
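For reference, here is how the deprecated cache keys map onto the new ones in invokeai.yaml; the values shown are simply the ones recommended for SDXL above:

# 3.0.x names (deprecated)
max_cache_size: 12.0
max_vram_cache_size: 0.5

# 3.1.0 names
ram: 12.0
vram: 0.5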

Getting Help

For support, please use this repository’s GitHub Issues tracking service, or join our Discord.


Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Thank you to all of the new and existing contributors to InvokeAI. We appreciate your efforts and contributions!

Detailed Change Log since 3.0.2

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.2post1...v3.1.0rc1

InvokeAI 3.1.1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation, and check out the 3.1 Release Landing Page for the Community Edition!

What’s New in 3.1.1:

  • Node versioning
  • Nodes now support polymorphic inputs (inputs that accept either a single value of a given type or a list of that type, e.g. Union[str, list[str]])
  • SDXL Inpainting Model is now supported
  • Inpainting & Outpainting Improvements
  • Workflow Editor UI Improvements
  • Model Manager Improvements
  • Fixed configuration script trying to set VRAM on macOS

Things to Know:

  • You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn’t have a version. If your workflow runs, you may safely ignore this, and we will add functionality to “upgrade” the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and re-add the nodes.

Installation and Upgrading

To install v3.1.1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

Download the installer: InvokeAI-installer-v3.1.1.zip

Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.0...v3.1.1

InvokeAI v3.2.0

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please see our Documentation or join the Discord!

What’s New in 3.2.0:

  • Queueing
    • This is a powerful new feature that allows you to queue multiple image generations, create batches, manage the queue, and gain insight into generations.
  • IP-Adapter is now supported
    • Instructions on getting started with IP-Adapter are located in the “Things to Know” section below
  • TAESD is now supported. You can download TAESD or TAESDXL through the model manager UI
  • LoRAs and ControlNets can now be recalled with the “Use All” function
  • New nodes! Load prompts from a file, string manipulation, and expanded math functions
  • Node caching - improve performance by using previously cached generation values
  • V-prediction for SD1.5 is now supported
  • Importing images from previous versions of InvokeAI has been fixed
  • Database maintenance script can be run with invokeai-db-maintenance
  • View image metadata with the invokeai-metadata command (see the example after this list)
  • Workflow Editor UI/UX improvements
  • Unified Canvas improvements & bug fixes
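As an illustration of the metadata command mentioned above, it is typically pointed at one or more image files; the path here is hypothetical, and invokeai-metadata --help lists the actual arguments:

invokeai-metadata outputs/images/my-image.png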

Things to Know:

  • If you experience the server error TypeError: Invoker.create_execution_state() got an unexpected keyword argument 'queue_id', try clearing your local browser cache or resetting the InvokeAI UI (Settings -> Reset UI) before running a generation.

  • You might see a red alert icon on your nodes after loading a workflow. This indicates that the node in the workflow is from an older version of InvokeAI, or the node doesn’t have a version. If your workflow runs, you may safely ignore this, and we will add functionality to “upgrade” the un-versioned nodes in a future update. If the workflow does not work, you will need to delete and re-add the nodes.

  • To get started with IP-Adapter, you’ll need to download the image encoder and IP-Adapter for the desired base model. Once the models are installed, IP-Adapter can be used under the “Control Adapters” options.

    Image Encoders:

    • InvokeAI/ip_adapter_sd_image_encoder
    • InvokeAI/ip_adapter_sdxl_image_encoder

    IP-Adapter Models:

    • InvokeAI/ip_adapter_sd15
    • InvokeAI/ip_adapter_plus_sd15
    • InvokeAI/ip_adapter_plus_face_sd15
    • InvokeAI/ip_adapter_sdxl

    These can be installed from the Model Manager by choosing “Import Models” and pasting in the repo IDs of the desired models. Remember to install both the model and the matching image encoder! For example, to get started with IP-Adapter for SD1.5, these are the repo IDs:

    • InvokeAI/ip_adapter_plus_sd15
    • InvokeAI/ip_adapter_sd_image_encoder

    or from the command line by starting the “Developer’s Console” from the invoke.bat launcher and pasting this command:

    invokeai-model-install --add InvokeAI/ip_adapter_sd_image_encoder InvokeAI/ip_adapter_sdxl_image_encoder InvokeAI/ip_adapter_sd15 InvokeAI/ip_adapter_plus_sd15 InvokeAI/ip_adapter_plus_face_sd15 InvokeAI/ip_adapter_sdxl

Installation and Upgrading:

To install v3.2.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI v3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

Download the installer: InvokeAI-installer-v3.2.0.zip

Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please see How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.1.1...v3.2.0

InvokeAI 3.3.0post1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

🌟 What’s New in 3.3.0:

  • T2I-Adapter is now supported
    • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Expanded translations (for many languages!)
  • Unified Canvas improvements and bug fixes

‼️ Things to Know:

  • Future updates will bring a couple of major changes:
    • Starting with 3.3, InvokeAI will only be supported for Python 3.10 and newer versions. Please begin preparing to upgrade your Python environment.
    • Community Nodes will need to update their import structure. InvokeAI internal services are being reorganized to better support Community Nodes and future development efforts.
  • T2I-Adapter and ControlNet cannot currently be used at the same time. This is prevented in the regular UI, but users will find that errors occur if they do not follow this guidance in Workflow development.
  • T2I-Adapters currently require an image output size that is a multiple of 64. This is enforced in the regular UI, but again, you will need to adhere to this constraint in workflow development

💿 Installation and Upgrading:

To install version 3.3.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

Download the installer: InvokeAI-installer-v3.3.0post1.zip

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors:

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.2.0...v3.3.0

InvokeAI 3.3.0post2

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

Post2 Update

3.3.0post2 is a minor hotfix that corrects incompatibility issues when installing xformers, updates translations, and fixes incompatibilities with systems running glibc versions older than 2.3.3.

🌟 What’s New in 3.3.0:

  • T2I-Adapter is now supported
    • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Expanded translations (for many languages!)
  • Unified Canvas improvements and bug fixes

‼️ Things to Know:

  • Future updates will bring a couple of major changes:
    • Starting with 3.3, InvokeAI will only be supported for Python 3.10 and newer versions. Please begin preparing to upgrade your Python environment.
    • Community Nodes will need to update their import structure. InvokeAI internal services are being reorganized to better support Community Nodes and future development efforts.
  • T2I-Adapter and ControlNet cannot currently be used at the same time. This is prevented in the regular UI, but users will find that errors occur if they do not follow this guidance in Workflow development.
  • T2I-Adapters currently require an image output size that is a multiple of 64. This is enforced in the regular UI, but again, you will need to adhere to this constraint in workflow development

💿 Installation and Upgrading:

To install version 3.3.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

Download the installer: InvokeAI-installer-v3.3.0post2.zip

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors:

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.2.0...v3.3.0post2

InvokeAI 3.3.0post3

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

Post3 Update

3.3.0post3 is a hotfix release that fixes canvas locking issues, addresses a breaking change from v5.10 of python-socketio, and adds translations.

🌟 What’s New in 3.3.0:

  • T2I-Adapter is now supported
    • Models can be downloaded through the Model Manager or the model download function in the launcher script.
  • Multi IP-Adapter Support!
  • New nodes for working with faces
  • Improved model load times from disk
  • Hotkey fixes
  • Expanded translations (for many languages!)
  • Unified Canvas improvements and bug fixes

‼️ Things to Know:

  • Future updates will bring a couple of major changes:
    • Starting with 3.3, InvokeAI will only be supported for Python 3.10 and newer versions. Please begin preparing to upgrade your Python environment.
    • Community Nodes will need to update their import structure. InvokeAI internal services are being reorganized to better support Community Nodes and future development efforts.
  • T2I-Adapter and ControlNet cannot currently be used at the same time. This is prevented in the regular UI, but users will find that errors occur if they do not follow this guidance in Workflow development.
  • T2I-Adapters currently require an image output size that is a multiple of 64. This is enforced in the regular UI, but again, you will need to adhere to this constraint in workflow development

💿 Installation and Upgrading:

To install version 3.3.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

Download the installer: InvokeAI-installer-v3.3.0post3.zip

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.3.0post2...v3.3.0post3

InvokeAI v3.4.0

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

🌟 What’s New in 3.4.0:

  • LCM-LoRAs are now natively supported in InvokeAI. See the note in Things to Know below.
  • Community Nodes can now be installed by adding them to the nodes folder of the InvokeAI installation
  • Core Nodes can be automatically updated via the Workflow Editor
  • Large performance improvements: reduced LoRA & text encoder loading times, improved token handling
  • HiRes Fix has returned!
  • FreeU is supported for workflows
  • ControlNets & T2I-Adapters can now be used together
  • Multi-Image IP-Adapter is now available in Nodes Workflows (Instant LoRA!)
  • Intermediate images are no longer saved to disk
  • ControlNets in .safetensors format can now be used (SD1.5 & SD2 only). See the note in Things to Know below.
  • VAE can now be recalled with “Use All”
  • Color Picker Improvements
  • Expanded translations (Dutch, Italian and Chinese are almost entirely complete!)
  • InvokeAI now uses Pydantic2 and the latest FastAPI, making certain functions (like Iterate nodes) much more efficient.

‼️ Things to Know:

  • InvokeAI is only supported on Python 3.10 and newer versions. Please upgrade your Python environment if you are using an older version.
  • Community nodes that were previously installed in the .venv will need to be moved to the nodes folder at the root level of the InvokeAI installation.
  • LCM-LoRAs in diffusers format are natively supported and can be downloaded through the model manager using the HuggingFace RepoID. LCMs are supported through a custom node.
  • To support .safetensors ControlNets for SD1.5 & SD2, select option [6] from the launcher to “Re-run the configure script to fix a broken install or to complete a major upgrade”.

💿 Installation and Upgrading:

To install version 3.4.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.4.0.zip

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.3.0post3...v3.4.0

InvokeAI 3.4.0post1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

Post Release

  • This post release fixes a bug that prevented Image to Image generations from running successfully.

🌟 What’s New in 3.4.0:

  • LCM-LoRAs are now natively supported in InvokeAI. See the note in Things to Know below.
  • Community Nodes can now be installed by adding them to the nodes folder of the InvokeAI installation
  • Core Nodes can be automatically updated via the Workflow Editor
  • Large performance improvements: reduced LoRA & text encoder loading times, improved token handling
  • HiRes Fix has returned!
  • FreeU is supported for workflows
  • ControlNets & T2I-Adapters can now be used together
  • Multi-Image IP-Adapter is now available in Nodes Workflows (Instant LoRA!)
  • Intermediate images are no longer saved to disk
  • ControlNets in .safetensors format can now be used (SD1.5 & SD2 only). See the note in Things to Know below.
  • VAE can now be recalled with “Use All”
  • Color Picker Improvements
  • Expanded translations (Dutch, Italian and Chinese are almost entirely complete!)
  • InvokeAI now uses Pydantic2 and the latest FastAPI, making certain functions (like Iterate nodes) much more efficient.

‼️ Things to Know:

  • InvokeAI is only supported on Python 3.10 and newer versions. Please upgrade your Python environment if you are using an older version.
  • Some users are having issues with PyTorch updates when updating to 3.4. If you encounter an error starting with WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions, open the developer console (option 7) from the launcher and run pip install --force-reinstall torch==2.1.0 --index-url https://download.pytorch.org/whl/cu121
  • Community nodes that were previously installed in the .venv will need to be moved to the nodes folder at the root level of the InvokeAI installation.
  • LCM-LoRAs in diffusers format are natively supported and can be downloaded through the model manager using the HuggingFace RepoID. LCMs are supported through a custom node and can be added manually.
  • To support .safetensors ControlNets for SD1.5 & SD2, select option [6] from the launcher to “Re-run the configure script to fix a broken install or to complete a major upgrade”.

💿 Installation and Upgrading:

To install version 3.4.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.4.0post1.zip

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.3.0post3...v3.4.0post1

InvokeAI v3.4.0post2

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

Post Release Notes

  • Fixed LoRAs being applied twice (3.4.0post2)
  • Fixed a bug that prevented Image to Image generations from running successfully. (3.4.0post1)

🌟 What’s New in 3.4.0:

  • LCM-LoRAs are now natively supported in InvokeAI. See the note in Things to Know below.
  • Community Nodes can now be installed by adding them to the nodes folder of the InvokeAI installation
  • Core Nodes can be automatically updated via the Workflow Editor
  • Large performance improvements: reduced LoRA & text encoder loading times, improved token handling
  • HiRes Fix has returned!
  • FreeU is supported for workflows
  • ControlNets & T2I-Adapters can now be used together
  • Multi-Image IP-Adapter is now available in Nodes Workflows (Instant LoRA!)
  • Intermediate images are no longer saved to disk
  • ControlNets in .safetensors format can now be used (SD1.5 & SD2 only). See the note in Things to Know below.
  • VAE can now be recalled with “Use All”
  • Color Picker Improvements
  • Expanded translations (Dutch, Italian and Chinese are almost entirely complete!)
  • InvokeAI now uses Pydantic2 and the latest FastAPI, making certain functions (like Iterate nodes) much more efficient.

‼️ Things to Know:

  • InvokeAI only supports Python 3.10 and 3.11. Earlier versions are not supported, and 3.12 is not supported at the current time. Please upgrade your Python environment if you are using an older version.
  • Some users are having issues with PyTorch updates when updating to 3.4. If you encounter an error starting with WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions, open the developer console (option 7) from the launcher and run pip install --force-reinstall torch==2.1.0 --index-url https://download.pytorch.org/whl/cu121
  • Community nodes that were previously installed in the .venv will need to be moved to the nodes folder at the root level of the InvokeAI installation.
  • LCM-LoRAs in diffusers format are natively supported and can be downloaded through the model manager using the HuggingFace RepoID. LCMs are supported through a custom node and can be added manually.
  • To support .safetensors ControlNets for SD1.5 & SD2, select option [6] from the launcher to “Re-run the configure script to fix a broken install or to complete a major upgrade”.

💿 Installation and Upgrading:

To install version 3.4.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting option [9] to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.4.0post2.zip

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.4.0post1...v3.4.0post2

InvokeAI v3.5.0

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

🌟 What’s New in 3.5.0:

Workflow Library

Until now, a workflow could only be associated with an image, or be downloaded as JSON. The Workflow Library allows workflows to be saved independently to the database. The UI provides sorting and filtering options to manage them.

With the Workflow Library, we can now ship default workflows directly in the app. You’ll see a couple on the Default tab. As the InvokeAI application evolves we will keep these workflows up-to-date, and regularly add more.

Other Enhancements

  • More capable node updating
  • Better errors when your workflow doesn’t match your installed nodes
  • Community node packs auto-report their name, so if your workflow needs nodes you don’t have installed, you’ll see what’s missing
  • Custom field types for nodes
  • Tiled upscaling nodes (BETA)
  • Added many missing translation strings
  • Gallery auto-scroll

‼️ Things to Know:

Invoke might revert to CPU (NVIDIA GPU only)

  • Some users have experienced torch reverting to the CPU rather than their GPU. To fix this, follow these steps:
  1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
  2. Run: pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
  3. If you run into an error with typing_extensions, run: pip install -U typing-extensions
  4. If there is an error with fsspec, run pip install -U fsspec==2023.5.0

Database Migrations

As the app evolves and our database usage gets a bit more complex, we need a way to safely update it. This release introduces a database migration utility and versions the database.

The first time you run v3.5.0, the migrator will set up the database versioning and, for this particular release, update the images table to flag whether an image has a workflow embedded. You’ll see a progress bar as it checks each image; it should be pretty quick.

The migration utility rolls back changes if anything goes wrong, is covered by our test suite and has proved itself with manual testing.

Custom Field Types in Nodes

Previously, node authors had to use built-in field types for inputs and outputs of their nodes. While this covered many use-cases, we recognized the need for “custom” field types. This is now fully supported, and any pydantic model can be used as a field type.

💿 Installation and Upgrading:

To install version 3.5.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting “Update InvokeAI” to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.5.0.zip

💻 Developer Changes

There are a number of important changes for contributors in this release.

Frontend/UI

The biggest change is that the frontend build is no longer included in main. If you run the app off a clone of the repo, you’ll need to build the frontend to use the UI. See the “Impact to Contributors” section on this PR https://github.com/invoke-ai/InvokeAI/pull/5253.

Other changes:

  • Moved from yarn to pnpm for package management
  • Updated many packages
  • Refactored all workflow schemas and types
  • Workflow migration logic implemented
  • Changes to release process

Backend Changes

This release includes feature-flagged changes to the model manager and a new database migration utility.

Model Manager

The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.4.0post2...v3.5.0

InvokeAI v3.5.1

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

To learn more about InvokeAI, please visit our Documentation or join our Discord server!

🌟 What’s New in 3.5.1

  • Fixed bug with multiple embeddings
  • Added Tiled Upscaling to Default Workflows (Beta)
  • Respect use of torch-sdp from config.yaml

3.5.0 Changes

Workflow Library

Until now, a workflow could only be associated with an image, or be downloaded as JSON. The Workflow Library allows workflows to be saved independently to the database. The UI provides sorting and filtering options to manage them.

With the Workflow Library, we can now ship default workflows directly in the app. You’ll see a couple on the Default tab. As the InvokeAI application evolves we will keep these workflows up-to-date, and regularly add more.

Other Enhancements

  • More capable node updating
  • Better errors when your workflow doesn’t match your installed nodes
  • Community node packs auto-report their name, so if your workflow needs nodes you don’t have installed, you’ll see what’s missing
  • Custom field types for nodes
  • Tiled upscaling nodes (BETA)
  • Added many missing translation strings
  • Gallery auto-scroll

‼️ Things to Know:

Invoke might revert to CPU (NVIDIA GPU only)

  • Some users have experienced torch reverting to the CPU rather than their GPU. To fix this, follow these steps:
  1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
  2. Run: pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
  3. If you run into an error with typing_extensions, run: pip install -U typing-extensions
  4. If there is an error with fsspec, run pip install -U fsspec==2023.5.0

Database Migrations

As the app evolves and our database usage gets a bit more complex, we need a way to safely update it. This release introduces a database migration utility and versions the database.

The first time you run v3.5.1, the migrator will set up the database versioning and, for this particular release, update the images table to flag whether an image has a workflow embedded. You’ll see a progress bar as it checks each image - it should be pretty quick.

The migration utility rolls back changes if anything goes wrong, is covered by our test suite and has proved itself with manual testing.

Custom Field Types in Nodes

Previously, node authors had to use built-in field types for inputs and outputs of their nodes. While this covered many use-cases, we recognized the need for “custom” field types. This is now fully supported, and any pydantic model can be used as a field type.

💿 Installation and Upgrading:

To install version 3.5.1, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting “Update InvokeAI” to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.5.1.zip

💻 Developer Changes

There are a number of important changes for contributors in this release.

Frontend/UI

The biggest change is that the frontend build is no longer included in main. If you run the app off a clone of the repo, you’ll need to build the frontend to use the UI. See the “Impact to Contributors” section on this PR https://github.com/invoke-ai/InvokeAI/pull/5253.

Other changes:

  • Moved from yarn to pnpm for package management
  • Updated many packages
  • Refactored all workflow schemas and types
  • Workflow migration logic implemented
  • Changes to release process

Backend Changes

This release includes feature-flagged changes to the model manager and a new database migration utility.

Model Manager

The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.5.0...v3.5.1

Invoke 3.6.0

Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

You can learn more about Invoke and our mission by visiting https://www.invoke.com/about, or joining our Discord server!

🌟 What’s New in 3.6.0

  • UI/UX Overhaul
    • We’re overhauling our brand as we continue to grow into serving businesses & enterprises by solving their generative AI deployment challenges. At our core, we’re the same - same mission, same team, same commitment to OSS. See “Things to Know” below when upgrading from 3.4.

‼️ Things to Know:

  • When upgrading from 3.4 via the updater script, the UI will not render and will display an error: {"detail":"Not Found"}. To fix this error, open the developer console from the invoke.bat / invoke.sh menu and run: pip install --use-pep517 --upgrade --force-reinstall InvokeAI==v3.6.0
  • bfloat16 can now be used with Invoke. Using bfloat16 will produce generation results different from previous versions of Invoke.
  • Currently known issues:
    • Scan for Models is currently disabled due to the Model Manager refactor

Invoke might revert to CPU (NVIDIA GPU only)

  • Some users have experienced torch reverting to the CPU rather than their GPU. To fix this, follow these steps:
  1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
  2. Run: pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
  3. If you run into an error with typing_extensions, run: pip install -U typing-extensions
  4. If there is an error with fsspec, run pip install -U fsspec==2023.5.0

💿 Installation and Upgrading:

To install version 3.6.0, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have InvokeAI version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting “Update InvokeAI” to upgrade, or you can download and run the installer in your existing InvokeAI installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.6.0.zip

💻 Developer Changes

There are a number of important changes for contributors in this release.

Frontend/UI

The biggest change is that the frontend build is no longer included in main. If you run the app off a clone of the repo, you’ll need to build the frontend to use the UI. See the “Impact to Contributors” section on this PR https://github.com/invoke-ai/InvokeAI/pull/5253.

Model Manager

The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

What’s Changed

See Commits

  • feat(ui): UX improvements & updated design by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5270
  • ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/5365
  • feat(ui): redesign followups by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5368
  • Release/v3.6.0rc1 by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5372
  • ui: redesign followups 2 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5374
  • Sisco/docker allow relative paths for invokeai data by @dsisco11 in https://github.com/invoke-ai/InvokeAI/pull/5344
  • define tooltip color, optional new logo by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5375
  • Updater suggest db backup when installing RC by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5381
  • ui: edesign followups 3 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5385
  • Release: v3.6.0rc2 by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5386
  • ui: redesign followups 4 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5387
  • ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/5388
  • ui: redesign followups 5 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5396
  • feat: Remove Header & Other Minor UI Updates by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/5392
  • custom components for nav, gallery header, and app info by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5400
  • fix default panel width by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5403
  • replace gear instead of adding below if custom nav component by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5404
  • logo override by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5406
  • ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/5401
  • fix(ui): fix panel resize bug by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5407
  • Update Readme w/ New Brand Images by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/5402
  • {release} v3.6.0rc3 by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5408
  • [Optimization] Use torch.bfloat16 on cuda systems by @lstein in https://github.com/invoke-ai/InvokeAI/pull/5410
  • ui: slightly reposition floating bars. by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/5409
  • fix(ui): perf improvements by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5411
  • ui: redesign followups 6 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5414
  • only GET intermediates if that setting is an option by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5416
  • fix text color for lora card by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5417
  • Improve speed of applying TI embeddings by @RyanJDick in https://github.com/invoke-ai/InvokeAI/pull/5422
  • ui: redesign followups 7 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5423
  • {release} v3.6.0rc4 by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5424
  • up and down button support in gallery navigation by @rohinish404 in https://github.com/invoke-ai/InvokeAI/pull/5389
  • ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/5428
  • [feat] Bake in sdxl-vae-fp16-fix model on checkpoint conversion by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4468
  • fix(ui): fix gallery nav math by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5432
  • Update diffusers to the lastest version by @Malrama in https://github.com/invoke-ai/InvokeAI/pull/5431
  • feat(ui): use config for all numerical params by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5433
  • fix(ui): fix add node autoconnect by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5434
  • Updated icons + Minor UI Tweaks by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/5427
  • ui: redesign followups 8 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5445
  • {release} v3.5.0rc5 by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5446
  • do not show toast if 403 is triggered by forbidden image by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/5447
  • Enable correct probing of LoRA latent-consistency/lcm-lora-sdxl by @lstein in https://github.com/invoke-ai/InvokeAI/pull/5449
  • fix(ui): use less brutally strict workflow validation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5463
  • fix(ui): use memoized selector for workflow watcher by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5464
  • feat: pin deps by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5465
  • ui: redesign followups 9 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5460
  • Report ci disk space + minor docker fixes by @ebr in https://github.com/invoke-ai/InvokeAI/pull/5461
  • ui: redesign followups 10 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5466
  • {release} v3.6.0rc6 by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5467
  • ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/5430
  • fix(ui): reduce reconnect requests by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5451
  • Bugfix textual inversion crash by @lstein in https://github.com/invoke-ai/InvokeAI/pull/5450
  • Enhancement: Workflow Library Styling by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/5470
  • 3.6 Docs updates by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5412
  • feat(ui): more context in storage errors by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5471
  • feat(ui): update assets by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5472
  • feat(ui): misc ui by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/5474
  • ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/5481
  • Allow bfloat16 to be configurable in invoke.yaml by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5469
  • Release/v3.6.0 by @Millu in https://github.com/invoke-ai/InvokeAI/pull/5485

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.5.1...v3.6.0

Invoke 3.6.1

Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

You can learn more about Invoke and our mission by visiting https://www.invoke.com/about, or joining our Discord server!

🌟 What’s New in 3.6.1

  • UI/UX Overhaul Improvements
    • Based on community feedback, updates have been made to the new UI/UX. See “Things to Know” below when upgrading from Invoke 3.4.
  • Depth-Anything is now supported and is the default depth processor in Invoke
  • Remix image - similar to Use All, but creates a new image by recalling all parameters except the Seed
  • The “About” menu can be found in Settings and displays Invoke & dependency versions
  • Ideal Size node is now a default node
  • Fixed LoRA renaming bug

‼️ Things to Know:

  • When upgrading from 3.4 via the updater script, the UI will not render and will display an error: {"detail":"Not Found"}. To fix this error, open the developer console from the invoke.bat / invoke.sh menu and run: pip install --use-pep517 --upgrade --force-reinstall InvokeAI==v3.6.1
  • Currently known issues:
    • Scan for Models is currently disabled due to the Model Manager refactor

Invoke might revert to CPU (NVIDIA GPU only)

  • Some users have experienced torch reverting to the CPU rather than their GPU. To fix this, follow these steps:
  1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
  2. Run: pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
  3. If you run into an error with typing_extensions, run: pip install -U typing-extensions
  4. If there is an error with fsspec, run pip install -U fsspec==2023.5.0

💿 Installation and Upgrading:

To install version 3.6.1, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have Invoke version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting “Update Invoke” to upgrade, or you can download and run the installer in your existing Invoke installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.6.1.zip

💻 Developer Changes

There are a number of important changes for contributors to be aware of.

Frontend/UI

The biggest change is that the frontend build is no longer included in main. If you run the app off a clone of the repo, you’ll need to build the frontend to use the UI. See the “Impact to Contributors” section on this PR https://github.com/invoke-ai/InvokeAI/pull/5253.

Model Manager

The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.6.0...v3.6.1

Invoke 3.6.2

Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

You can learn more about Invoke and our mission by visiting https://www.invoke.com/about, or joining our Discord server!

🌟 What’s New in 3.6.2

  • UI/UX Overhaul Improvements
    • Based on community feedback, updates have been made to the new UI/UX. See “Things to Know” below when upgrading from Invoke 3.4.
  • Depth-Anything is now supported and is the default depth processor in Invoke
  • Remix image - similar to Use All, but creates a new image by recalling all parameters except the Seed
  • The “About” menu can be found in Settings and displays Invoke & dependency versions
  • Ideal Size node is now a default node
  • Fixed LoRA renaming bug
  • Updated Workflow saving behavior

‼️ Things to Know:

  • When upgrading from 3.4 via the updater script, the UI will not render and will display an error: {"detail":"Not Found"}. To fix this error, open the developer console from the invoke.bat / invoke.sh menu and run: pip install --use-pep517 --upgrade --force-reinstall InvokeAI==v3.6.2
  • Currently known issues:
    • Scan for Models is currently disabled due to the Model Manager refactor

Invoke might revert to CPU (NVIDIA GPU only)

  • Some users have experienced torch reverting to the CPU rather than their GPU. To fix this, follow these steps:
  1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
  2. Run: pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
  3. If you run into an error with typing_extensions, run: pip install -U typing-extensions
  4. If there is an error with fsspec, run pip install -U fsspec==2023.5.0

💿 Installation and Upgrading:

To install version 3.6.2, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have Invoke version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting “Update Invoke” to upgrade, or you can download and run the installer in your existing Invoke installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.6.2.zip

💻 Developer Changes

There are a number of important changes for contributors to be aware of.

Frontend/UI

The biggest change is that the frontend build is no longer included in main. If you run the app off a clone of the repo, you’ll need to build the frontend to use the UI. See the “Impact to Contributors” section on this PR https://github.com/invoke-ai/InvokeAI/pull/5253.

Model Manager

The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.6.0...v3.6.2

Invoke 3.6.3

Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

You can learn more about Invoke and our mission by visiting https://www.invoke.com/about, or joining our Discord server!

🌟 What’s New in 3.6.3

  • Significantly improved generation speeds
  • Workflow Library improvements
  • New Unified Canvas Hotkeys - Ctrl + Mouse Scroll can now change the brush size!
  • Installer & Updater improvements
  • Model Manager updates to model conversion and saving
  • Faster image saving - see “Other” in Things to Know

‼️ Things to Know:

Possible Update Issues

  • When upgrading from 3.4 via the updater script, the UI will not render and will display an error: {"detail":"Not Found"}. To fix this error, download the installer and re-run it in the same location as your existing installation.

Invoke might revert to CPU (NVIDIA GPU only)

  • Some users have experienced torch reverting to the CPU rather than their GPU. To fix this, follow these steps:
  1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
  2. Run: pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
  3. If you run into an error with typing_extensions, run: pip install -U typing-extensions
  4. If there is an error with fsspec, run pip install -U fsspec==2023.5.0

Other

  • To take advantage of the image saving speed increase for existing installations, set your png_compress_level to 1 in your invoke.yaml file (see the snippet after this list).
  • Graph data was not being used and is no longer saved in the database. You may experience an unusual pause during updating as this data is deleted.
  • Currently known issues:
    • Scan for Models is currently disabled due to the Model Manager refactor
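
About the png_compress_level item above: PNG compression effort is adjustable, and a low level writes much faster at the cost of somewhat larger files. Pillow exposes the same trade-off; a small sketch (file names are placeholders):

    from PIL import Image

    im = Image.open("sample.png")
    # compress_level ranges 0-9: lower is faster to write but larger on disk.
    # Level 1 still produces a valid PNG while saving much faster than default.
    im.save("fast.png", compress_level=1)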

💿 Installation and Upgrading:

To install version 3.6.3, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have Invoke version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting “Update Invoke” to upgrade, or you can download and run the installer in your existing Invoke installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.6.3.zip

💻 Developer Changes

There are a number of important changes for contributors to be aware of.

Model Manager

The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.6.2...3.6.3

Invoke 3.7.0

Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web interface and also serves as the foundation for multiple commercial products.

You can learn more about Invoke and our mission by visiting https://www.invoke.com/about, or joining our Discord server!

🌟 What’s New in 3.7.0

Workflow Editor Improvements

  • Workflow Linear View - Workflows can now be used in a sleek Linear View interface that hides the workflow graph and focuses on the image being generated! To enable this, from a workflow, click the “Use in Linear View” button next to the model name in the left sidebar.
  • Workflow Linear View inputs can now be re-ordered by dragging and dropping.

Other Changes

  • DWPose is now the default OpenPose processor in Invoke - see Things to Know
  • Improved Seamless Tiling! Now even more seamless (see the sketch after this list)
  • Update diffusers version to 0.26.3
  • Various bug fixes
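
About the seamless tiling item above: in diffusion UIs this effect is usually achieved by switching the model’s convolutions to circular padding, so outputs wrap at the edges. A toy sketch of that general trick (not Invoke’s exact implementation):

    import torch.nn as nn

    # Toy sketch: make every Conv2d wrap around at the borders, the usual
    # mechanism behind "seamless" generation.
    def make_seamless(model: nn.Module) -> None:
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                module.padding_mode = "circular"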

‼️ Things to Know:

Possible Update Issues

  • When upgrading from 3.4 via the updater script, the UI will not render and will display an error: {"detail":"Not Found"}. To fix this error, download the installer and re-run it in the same location as your existing installation.

Invoke might revert to CPU (NVIDIA GPU only)

  • Some users have experienced torch reverting to the CPU rather than their GPU. To fix this, follow these steps:
  1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
  2. Run: pip install --force-reinstall torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
  3. If you run into an error with typing_extensions, run: pip install -U typing-extensions
  4. If there is an error with fsspec, run pip install -U fsspec==2023.5.0

Other

  • In some cases, the OpenPose processor might not automatically switch to DWPose. To choose DWPose, use the “Show Advanced” caret to open the processor settings and choose DW Openpose as the processor.
  • Currently known issues:
    • Scan for Models is currently disabled due to the Model Manager refactor

💿 Installation and Upgrading:

To install version 3.7, please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

If you already have Invoke version 3.x installed, you can update by running invoke.sh / invoke.bat and selecting “Update Invoke” to upgrade, or you can download and run the installer in your existing Invoke installation location.

🚨 Please ensure your generation queue has no pending items before upgrading. Pending generations may fail after an upgrade. 🚨

Download the installer: InvokeAI-installer-v3.7.0.zip

💻 Developer Changes

There are a number of important changes for contributors to be aware of.

Model Manager

The Model Manager is partway through a redesign, to make it more capable and maintainable. The redesign will support a much better user experience for downloading, installing and managing models. The changes are in the repo, but implemented separately from the user-facing app.

⚙️ Contributing:

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out to imic on Discord!

New Contributors

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/3.6.3...3.7.0

v4.0.0

🚨 4.0.0 has some major changes. Please read the patch notes. 🚨

🚨 🚨 🚨 Yes - Those patch notes 👇 🚨 🚨 🚨

🎉 What’s New in 4.0.0

💻 Simplified Installation, Updating and Configuration

We’ve simplified and streamlined installation, making it much faster and more reliable.

💖 New Model Manager

The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:

  • All model installation happens via the UI (no configure script)
  • Queued model downloads
  • Per-model preview images
  • Per-model default settings - choose a model’s default VAE, Scheduler, CFG Scale, etc
  • User-defined trigger phrases for concepts/LoRAs and models - access by typing the < key in any prompt box
  • API key support for model marketplaces
  • 🚨 Autoimport removed - use Scan Folder instead

#️⃣ Model Hashing

When you first run v4.0.0, it may take a few minutes to start up as it does a one-time hash of all of your model files.

Do not panic.

Hashes provide a stable identifier for a model that is the same across every platform.

🚨 If you don’t care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting hashing_algorithm: random in invokeai.yaml.
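
Conceptually, the identifier is just a content hash of the model file, which is why it is identical on every OS and filesystem. A sketch (SHA-256 here for illustration; Invoke’s default algorithm may differ):

    import hashlib

    # Hash a model file in chunks so multi-GB checkpoints never need to fit
    # in memory. The digest is the same on every platform.
    def model_hash(path: str, chunk_size: int = 2**20) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()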

🎨 Canvas Improvements

The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple “passes”, greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.

The compositing settings on canvas allow for control over the gradient denoising process.

Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.

🐛 Known Issue

🚨 Inpainting models on Canvas sometimes kinda give up and output mush. We have a fix en-route, but it will need to wait for 4.1.0.

📈 Fixes and Enhancements

Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out just a few notable goodies from the community:

  • Bulk downloads (download a selection of images or a whole board) @StefanTobler
  • Canvas Brush Size Scroll can now be inverted @joshistoast
  • Images in the Canvas Staging Area can now be discarded individually @joshistoast
  • Numerous fixes and UI enhancements @joshistoast
  • Numerous greybeard node things @dunkeroni
  • Iterate nodes now iterate in order @cgi-joe
  • Sane workflow sorting @clsn
  • Image dimensions overlay in the gallery @rohinish404
  • Localization fixes @rohinish404
  • New translations B N, @Harvester62, @Pfannkuchensack, @Bethanielle, @Vasyanator, @GGSSKK, & @Sufi2425
  • Updated torch and diffusers deps @Malrama
  • Docs updates @skunkworxdark, @gogurtenjoyer
  • LoRA probe fix @skunkworxdark

🎁 Bonus: Invoke Training (Beta)

As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:

  • Textual Inversion Training
  • LoRA Training
  • Dreambooth Training
  • Pivotal Tuning Training

Learn more on the Invoke Training repo, or check out our YT video on getting started.

💾 Installation and Upgrading

🚨 To install or upgrade to version 4.0, download the zip file from the release notes (“Assets” section), unzip it, and follow the installation instructions. For upgrades, select the same installation location.

Download Installer

🤓 Developer Changes

v4.0.0 is versioned as a major release due to breaking changes:

  • The internal nodes API has been refactored to provide a stable public API. 🚨 Node authors should review the migration guide.
  • The internal graph execution engine is drastically simplified, resulting in more efficient and performant processing. This carries on from the changes in v3.6.0 in which graphs are no longer stored in the database.

🤝 Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!

📝 What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/3.7.0...v4.0.0

v4.0.1

🚨 4.0.0 has some major changes. Please read the patch notes. 🚨

🚨 🚨 🚨 Yes - Those patch notes 👇 🚨 🚨 🚨

🎉 What’s New in 4.0

💻 Simplified Installation, Updating and Configuration

We’ve simplified and streamlined installation, making it much faster and more reliable.

💖 New Model Manager

The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:

  • All model installation happens via the UI (no configure script)
  • Queued model downloads
  • Per-model preview images
  • Per-model default settings - choose a model’s default VAE, Scheduler, CFG Scale, etc
  • User-defined trigger phrases for concepts/LoRAs and models - access by typing the < key in any prompt box
  • API key support for model marketplaces
  • 🚨 Autoimport removed - use Scan Folder instead

#️⃣ Model Hashing

When you first run v4.0.0, it may take a few minutes to start up as it does a one-time hash of all of your model files.

Do not panic.

Hashes provide a stable identifier for a model that is the same across every platform.

🚨 If you don’t care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting hashing_algorithm: random in invokeai.yaml.

🎨 Canvas Improvements

The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple “passes”, greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.

The compositing settings on canvas allow for control over the gradient denoising process.

Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.

🐛 Known Issue

🚨 Inpainting models on Canvas sometimes kinda give up and output mush. We have a fix en-route, but it will need to wait for 4.1.0.

📈 Fixes and Enhancements

Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out just a few notable goodies from the community:

  • Bulk downloads (download a selection of images or a whole board) @StefanTobler
  • Canvas Brush Size Scroll can now be inverted @joshistoast
  • Images in the Canvas Staging Area can now be discarded individually @joshistoast
  • Numerous fixes and UI enhancements @joshistoast
  • Numerous greybeard node things @dunkeroni
  • Iterate nodes now iterate in order @cgi-joe
  • Sane workflow sorting @clsn
  • Image dimensions overlay in the gallery @rohinish404
  • Localization fixes @rohinish404
  • New translations B N, @Harvester62, @Pfannkuchensack, @Bethanielle, @Vasyanator, @GGSSKK, & @Sufi2425
  • Updated torch and diffusers deps @Malrama
  • Docs updates @skunkworxdark, @gogurtenjoyer
  • LoRA probe fix @skunkworxdark

4.0.1 Fixes

  • Minor updates that resolve performance issues on the canvas.
  • Some installation/updating fixes to improve experience.

🎁 Bonus: Invoke Training (Beta)

As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:

  • Textual Inversion Training
  • LoRA Training
  • Dreambooth Training
  • Pivotal Tuning Training

Learn more on the Invoke Training repo, or check out our YT video on getting started.

💾 Installation and Upgrading

🚨 To install or upgrade to version 4.0, download the zip file from the release notes (“Assets” section), unzip it, and follow the installation instructions. For upgrades, select the same installation location.

Download Installer

Models don’t show up after upgrading

Follow these steps. If you are still missing some models, please create an issue on GitHub or ask for help on Discord.

🤓 Developer Changes

v4.0.0 is versioned as a major release due to breaking changes:

  • The internal nodes API has been refactored to provide a stable public API. 🚨 Node authors should review the migration guide.
  • The internal graph execution engine is drastically simplified, resulting in more efficient and performant processing. This carries on from the changes in v3.6.0 in which graphs are no longer stored in the database.

🤝 Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/3.7.0...v4.0.1

v4.0.2

🚨 v4 has some major changes. Please read the patch notes. 🚨

🚨 🚨 🚨 Yes - Those patch notes 👇 🚨 🚨 🚨

🎉 What’s New in 4.0.2

This patch release includes these changes:

  • Fix errors related to character encodings during install and startup
  • UI error on first launch of v4, requiring reset of UI
  • Cancel batch button not working
  • Improvements to Scan Folder
  • FAQ to fix some models not migrating to v4
  • Removed unused or wonky GPU options in installer
  • Root dir detection via venv path
  • Handful of cosmetic UI fixes

It also includes one notable feature:

  • IP Adapter safetensor support

💻 Simplified Installation, Updating and Configuration

We’ve simplified and streamlined installation, making it much faster and more reliable.

💖 New Model Manager

The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:

  • All model installation happens via the UI (no configure script)
  • Queued model downloads
  • Per-model preview images
  • Per-model default settings - choose a model’s default VAE, Scheduler, CFG Scale, etc
  • User-defined trigger phrases for concepts/LoRAs and models - access by typing the < key in any prompt box
  • API key support for model marketplaces
  • 🚨 Autoimport removed - use Scan Folder instead

#️⃣ Model Hashing

When you first run v4, it may take a few minutes to start up as it does a one-time hash of all of your model files.

Do not panic.

Hashes provide a stable identifier for a model that is the same across every platform.

🚨 If you don’t care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting hashing_algorithm: random in invokeai.yaml.

🎨 Canvas Improvements

The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple “passes”, greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.

The compositing settings on canvas allow for control over the gradient denoising process.

Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.

🐛 Known Issue

🚨 Inpainting models on Canvas sometimes kinda give up and output mush. We have a fix en-route, but it will need to wait for 4.1.0.

📈 Fixes and Enhancements

4.0.2

Fixes
  • Fix errors related to character encodings during install and startup
  • UI error on first launch of v4, requiring reset of UI
  • Cancel batch button not working
  • Improvements to Scan Folder
  • FAQ to fix some models not migrating to v4
  • Removed unused or wonky GPU options in installer
  • Root dir detection via venv path
  • Handful of cosmetic UI fixes
Features
  • IP Adapter safetensor support

4.0.1

Fixes
  • Minor updates that resolve performance issues on the canvas.
  • Some installation/updating fixes to improve experience.

4.0.0

Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out just a few notable goodies from the community:

  • Bulk downloads (download a selection of images or a whole board) @StefanTobler
  • Canvas Brush Size Scroll can now be inverted @joshistoast
  • Images in the Canvas Staging Area can now be discarded individually @joshistoast
  • Numerous fixes and UI enhancements @joshistoast
  • Numerous greybeard node things @dunkeroni
  • Iterate nodes now iterate in order @cgi-joe
  • Sane workflow sorting @clsn
  • Image dimensions overlay in the gallery @rohinish404
  • Localization fixes @rohinish404
  • New translations B N, @Harvester62, @Pfannkuchensack, @Bethanielle, @Vasyanator, @GGSSKK, & @Sufi2425
  • Updated torch and diffusers deps @Malrama
  • Docs updates @skunkworxdark, @gogurtenjoyer
  • LoRA probe fix @skunkworxdark

🎁 Bonus: Invoke Training (Beta)

As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:

  • Textual Inversion Training
  • LoRA Training
  • Dreambooth Training
  • Pivotal Tuning Training

Learn more on the Invoke Training repo, or check out our YT video on getting started.

💾 Installation and Upgrading

🚨 To install or upgrade to version 4.0, download the zip file from the release notes (“Assets” section), unzip it, and follow the installation instructions. For upgrades, select the same installation location.

Download Installer

Models don’t show up after upgrading

Follow these steps. If you are still missing some models, please create an issue on GitHub or ask for help on Discord.

🤓 Developer Changes

v4.0.0 is versioned as a major release due to breaking changes:

  • The internal nodes API has been refactored to provide a stable public API. 🚨 Node authors should review the migration guide.
  • The internal graph execution engine is drastically simplified, resulting in more efficient and performant processing. This carries on from the changes in v3.6.0 in which graphs are no longer stored in the database.

🤝 Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.1...v4.0.2

v4.0.4

🚨 v4 has some major changes. Please read the patch notes. 🚨

Patch Notes for v4.0.4

This patch release includes the following changes:

  • Add fit bounding box to image when sending image to canvas
  • Small handful of canvas bugs fixed
  • Refiner models displayed in model manager
  • Fix OOM on Windows (see this FAQ entry for more detail)
  • Restore initial image recall for img2img

💾 Installation and Updating

To install or update to v4.0.4, download the installer and follow the installation instructions. To update, select the same installation location.

🎉 What’s New in Invoke v4

💻 Simplified Installation, Updating and Configuration

We’ve simplified and streamlined installation, making it much faster and more reliable.

💖 New Model Manager

The model manager is rewritten in v4.0.0, both frontend and backend. This builds a foundation for future model architectures and brings some exciting new user-facing features:

  • All model installation happens via the UI (no configure script)
  • Queued model downloads
  • Per-model preview images
  • Per-model default settings - choose a model’s default VAE, Scheduler, CFG Scale, etc
  • User-defined trigger phrases for concepts/LoRAs and models - access by typing the < key in any prompt box
  • API key support for model marketplaces
  • 🚨 Autoimport removed - use Scan Folder instead

#️⃣ Model Hashing

When you first run v4, it may take a few minutes to start up as it does a one-time hash of all of your model files.

Do not panic.

Hashes provide a stable identifier for a model that is the same across every platform.

🚨 If you don’t care about this, you can press Ctrl+C to interrupt the process and disable hashing by setting hashing_algorithm: random in invokeai.yaml.

🎨 Canvas Improvements

The canvas uses a new method for compositing called gradient denoising. This eliminates the need for multiple “passes”, greatly reducing generation time on the canvas. This method also provides substantially improved visual coherence between the masked regions and the rest of the image.

The compositing settings on canvas allow for control over the gradient denoising process.

Major research & experimentation for this novel denoising implementation was led by @dunkeroni, and @blessedcoolant was responsible for managing integration into the canvas UI.

🐛 Known Issue

🚨 Inpainting models on Canvas sometimes kinda give up and output mush. We have a fix en-route, but it will need to wait for 4.1.0.

📈 Fixes and Enhancements

4.0.4

  • Add fit bounding box to image when sending image to canvas
  • Small handful of canvas bugs fixed
  • Refiner models displayed in model manager
  • Fix OOM on Windows (see this FAQ entry for more detail)
  • Restore initial image recall for img2img

4.0.2

  • Fix errors related to character encodings during install and startup
  • UI error on first launch of v4, requiring reset of UI
  • Cancel batch button not working
  • Improvements to Scan Folder
  • FAQ to fix some models not migrating to v4
  • Removed unused or wonky GPU options in installer
  • Root dir detection via venv path
  • Handful of cosmetic UI fixes
  • IP Adapter safetensor support

4.0.1

  • Minor updates that resolve performance issues on the canvas.
  • Some installation/updating fixes to improve experience.

4.0.0

Many small bug fixes, resolved papercuts, and warm fuzzies. Shouting out just a few notable goodies from the community:

  • Bulk downloads (download a selection of images or a whole board) @StefanTobler
  • Canvas Brush Size Scroll can now be inverted @joshistoast
  • Images in the Canvas Staging Area can now be discarded individually @joshistoast
  • Numerous fixes and UI enhancements @joshistoast
  • Numerous greybeard node things @dunkeroni
  • Iterate nodes now iterate in order @cgi-joe
  • Sane workflow sorting @clsn
  • Image dimensions overlay in the gallery @rohinish404
  • Localization fixes @rohinish404
  • New translations B N, @Harvester62, @Pfannkuchensack, @Bethanielle, @Vasyanator, @GGSSKK, & @Sufi2425
  • Updated torch and diffusers deps @Malrama
  • Docs updates @skunkworxdark, @gogurtenjoyer
  • LoRA probe fix @skunkworxdark

🎁 Bonus: Invoke Training (Beta)

As of v4.0.0, all references to training in the core invoke script now point to the Invoke Training Repo. Invoke Training offers a simple user interface for:

  • Textual Inversion Training
  • LoRA Training
  • Dreambooth Training
  • Pivotal Tuning Training

Learn more on the Invoke Training repo, or check out our YT video on getting started.

Models don’t show up after upgrading

Follow these steps. If you are still missing some models, please create an issue on GitHub or ask for help on Discord.

🤓 Developer Changes

v4.0.0 is versioned as a major release due to breaking changes:

  • The internal nodes API has been refactored to provide a stable public API. 🚨 Node authors should review the migration guide.
  • The internal graph execution engine is drastically simplified, resulting in more efficient and performant processing. This carries on from the changes in v3.6.0 in which graphs are no longer stored in the database.

🤝 Contributing

As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions is welcome. To get started as a contributor, please refer to How to Contribute or reach out in #dev-chat on Discord!

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.2...v4.0.4

v4.1.0

Invoke v4.1.0 brings many fixes and enhancements. The big-ticket item is Style and Composition IP Adapter.

🧪 Style and Composition IP Adapter (beta)

IP Adapter uses an image as a prompt. Images have two major components - their style and their composition - and you can choose either or both when using IP Adapter.

Use the new IP Adapter Method dropdown to select Full, Style, or Composition. The setting is applied per IP Adapter. You may need to delete and re-add active IP Adapters to see the dropdown.

[Image grid: No IP Adapter / IP Adapter Image / Full IP Adapter / Style Only / Composition Only]

“a fierce wolf in an alpine forest”, all using the same seed - note how the Full method turns the wolf into a mouse-canine hybrid

Shout-out to @blessedcoolant for this feature!

📈 Patch Notes for v4.1.0

Enhancements

  • Backend and nodes implementation for regional prompting and regional IP Adapter (UI in v4.2.0)
  • Secret option in Workflow Editor to convert a graph into a workflow. See #6181 for how to use it.
  • Assortment of UI papercuts
  • Favicon & page title indicate generation status @jungleBadger
  • Delete hotkey and button work with gallery selection @jungleBadger
  • Workflow editor perf improvements
  • Edge labels in workflow editor
  • Updated translations @Harvester62, @symant233, @Vasyanator
  • Updated docs @sarashinai
  • Improved torch device and precision handling

Fixes

  • multipleOf for invocations (for example, the Noise invocation’s width and height have a step of 8; see the sketch after this list)
  • Poor quality “fried” refiner outputs
  • Poor quality inpainting with gradient denoising and refiner
  • Canvas images appearing in the wrong places
  • The little eye defaulting to off in canvas staging toolbar
  • Premature OOM on Windows (see shared GPU memory FAQ)
  • ~1s delay between queue items
  • Wonky model manager forms navigating away from UI @clsn
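
About the multipleOf item above: the constraint comes straight from pydantic’s JSON Schema output, which the UI can use both as a validation rule and as the step size of a number input. A minimal sketch of the mechanism (not Invoke’s actual Noise invocation):

    from pydantic import BaseModel, Field

    # multiple_of is emitted into JSON Schema as "multipleOf".
    class NoiseParams(BaseModel):
        width: int = Field(default=512, ge=64, multiple_of=8)
        height: int = Field(default=512, ge=64, multiple_of=8)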

Invocation API

  • New method to get the filesystem path of an image: context.images.get_path(image_name: str, thumbnail: bool) @fieldOfView
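
A hedged usage sketch - the get_path signature is from these notes, while the surrounding names are assumptions about code running inside an invocation’s invoke(self, context) method:

    # Assumes `context` is the invocation context passed to invoke() and
    # `image_name` names an image already in the gallery.
    image_path = context.images.get_path(image_name, thumbnail=False)
    thumb_path = context.images.get_path(image_name, thumbnail=True)
    # Useful for handing a real on-disk path to an external tool instead of
    # decoded pixel data.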

Internal

  • Improved knip config @webpro
  • Updated python deps @Malrama

💾 Installation and Updating

To install or update to v4.1.0, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data will not be touched.

Missing models after updating from v3 to v4

See this FAQ.

🐛 Known Issues

  • Inpainting models on Canvas sometimes kinda give up and output mush. The fix didn’t make it into v4.1.0; we aim to release a patch by the weekend.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.4...v4.1.0

v4.2.0

Since the very beginning, Invoke has been innovating where it matters for creatives. Today, we’re excited to do it again with Control Layers.

Invoke 4.2 brings a number of enhancements and fixes, with the addition of a major new feature - Control Layers.

🧪 Control Layers

Integrating some of the latest in open-source research, creatives can use Control Adapters, Image Prompts, and regional guidance to articulate and control the generation process from a single panel. With regional guidance, you can define specific regions and apply a positive prompt, negative prompt, or any number of IP Adapters to each masked region. Control Adapters (ControlNet & T2I Adapters) and an Initial Image are visualized on the new Control Layers canvas.

You can read more about how to use Control Layers here - Control Layers

📈 Patch Notes for v4.2.0

Enhancements

  • Control Layers
  • Add TCD scheduler @l0stl0rd
  • Image Viewer updates — You can easily switch to the Image Viewer on the Generations tab by tapping the Z hotkey, or double clicking on any image in the gallery.

Major Changes

Also known as the “who moved my 🧀?” section, this list details where certain features have moved.

  • Image to Image: The Image to Image pipeline can be executed using Control Layers by adding an Initial Image layer.
  • Control Adapters and IP Adapters: These have been moved to the Control Layers tab — with the added benefit of being able to visualize your control adapter’s processed images easily!

Fixes

  • Fixed inpainting models on canvas @dunkeroni
  • Fixed IP Adapter starter models
  • Fixed bug where temp files (tensors, conditioning) aren’t cleaned up properly
  • Fixed trigger phrase form submit @joshistoast
  • Fixed SDXL checkpoint inpainting models not installing
  • Fixed installing models on external SSDs on macOS
  • Fixed Control Adapter processors’ image size constraints being overly restrictive

💾 Installation and Updating

To install or update to v4.2.0, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data will not be touched.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.1.0...v4.2.0

v4.2.1

This patch release brings a handful of fixes, plus docs and translation updates.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.1

  • Fixed seamless not being perfectly seamless sometimes
  • Fixed Control Adapter processor cancellation jank
  • Fixed Depth Anything processor drop-down jank
  • Fixed Control Adapter layers preventing interactions with layers below them (e.g. cannot move a Regional Guidance layer)
  • Fixed two issues with model cover images
    • When editing a model, the cover image disappeared, but reappeared on refresh
    • When converting a model to diffusers, the cover image was lost forever
  • Fixed NSFW checker for new installs
  • Prevent errors when using T2I adapter
    • May not invoke when image dimensions are not a multiple of 64
    • Control Adapter model select differentiates between ControlNet and T2I Adapter models
    • Reworked Invoke button tooltip describing why you may not Invoke when there is a configuration issue
  • Fixed translations for canvas layer select
  • Fixed Invoke button not showing loading state while queuing
  • Docs update @gogurtenjoyer
  • Translation updates @Harvester62 @Vasyanator @Pfannkuchensack @flower-elf @gallegonovato

💾 Installation and Updating

To install or update to v4.2.1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.0…v4.2.1

v4.2.2

This release brings many fixes and enhancements, including two long-awaited features: undo/redo in workflows and load workflow from any image.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.2

✨ Undo/redo in Workflows

Undo/redo is now available in the workflow editor. There’s some amount of tuning still to be done with how actions are grouped.

For example, when you move a node around, do we allow you to undo each pixel of movement, or do we group the position changes as one action? When you are typing a prompt, do we undo each letter, word, or the whole change at once?

Currently, we group like changes together. It’s possible some things are grouped when they shouldn’t be, or should be grouped but are not. Your feedback will be very useful in tuning the behaviour so it undoes the right changes.
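
For the curious, here is a toy sketch of the grouping idea (not Invoke’s actual implementation): consecutive changes of the same kind within a short time window collapse into a single undo entry.

    import time

    class UndoStack:
        def __init__(self, group_window_s: float = 1.0):
            self.entries = []  # each entry: [kind, state, timestamp]
            self.group_window_s = group_window_s

        def push(self, kind: str, state) -> None:
            now = time.monotonic()
            if self.entries:
                last_kind, _, last_time = self.entries[-1]
                # Same kind of change in quick succession? Merge it into
                # the previous entry so one undo reverts the whole gesture.
                if kind == last_kind and now - last_time < self.group_window_s:
                    self.entries[-1] = [kind, state, now]
                    return
            self.entries.append([kind, state, now])

        def undo(self):
            return self.entries.pop() if self.entries else None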

✨ Load Workflow from Any Image

Starting with v4.2.2, graphs are embedded in all images generated by Invoke. Images generated in the workflow editor also have the enriched workflow embedded separately. The Load Workflow button will load the enriched workflow if it exists, else it will load the graph.

You’ll see a new Graph tab in the metadata viewer showing the embedded graph.

Graph vs Workflow

Graphs are used by the backend and contain minimal data. Workflows are an enriched data format that includes a representation of the graph plus extra information, including things like:

  • Title, description, author, etc
  • Node positions
  • Custom node and field labels

This new feature embeds the graph in every image - including images generated on the Generation or Canvas tabs.
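
If you want to inspect the embedded data outside Invoke, it is stored in the image’s PNG text chunks. A minimal sketch using Pillow follows; the exact key names here are assumptions for illustration.

    import json

    from PIL import Image

    img = Image.open("invoke_output.png")
    # The enriched workflow is present only for workflow editor images;
    # the minimal graph is embedded in every image as of v4.2.2.
    workflow_json = img.info.get("invokeai_workflow")
    graph_json = img.info.get("invokeai_graph")
    data = json.loads(workflow_json or graph_json)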

Canvas Caveat

This functionality is available only for individual canvas generations - not the full composition. Why is that?

Consider what goes into a full canvas composition. It’s the product of any number of graphs, with any amount of drawing and erasing between each graph execution. It’s not possible to consolidate this into a single graph.

When you generate on canvas, your images for the given bounding box are added to a staging area, which allows you to cycle through images and commit or discard the image. The staging area also allows you to save a candidate generation. It is these images that can be loaded as a workflow, because they are the product of a single graph execution.

👷 Other Fixes and Enhancements

  • Min/max LoRA weight values extended (-10 to +10) @H0onnn
  • Denoising strength and layer opacity are retained when sending image to initial image @steffy-lo
  • SDXL T2I Adapter only blocks invoking when dimensions aren’t multiple of 32 (was erroneously 64)
  • Improved UX when manipulating edges in workflows
  • Connected inputs on nodes collapse, hiding the nonfunctional UI component
  • Use ctrl/cmd-shift-v to paste copied nodes with input edges
  • Docs updates @hsm207
  • Fix: visible seams when outpainting
  • Fix: edge case that could prevent workflows from loading if user hadn’t opened the workflows tab yet
  • Fix: minor jank/inefficiency with control adapter auto-process (control layers only)
  • Internal: utility to create graph objects without going crazy
  • Internal: rewritten connection validation logic for workflows with full test coverage
  • Internal: rewritten edge connection interactions
  • Internal: revised field type format

💾 Installation and Updating

To install or update to v4.2.2, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.1…v4.2.2

v4.2.2post1

This release brings many fixes and enhancements, including two long-awaited features: undo/redo in workflows and load workflow from any image.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.2post1

v4.2.2 had a critical bug related to notes nodes & missing templates in workflows. That is fixed in v4.2.2post1.

Please see the v4.2.2 release notes above for full details on undo/redo in workflows and loading workflows from any image.

💾 Installation and Updating

To install or update to v4.2.2post1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.1…v4.2.2post1

v4.2.3

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.3

  • Spellcheck is re-enabled on prompt boxes

  • DB maintenance script removed from launcher (it currently does not work)

  • Reworked toasts. When a toast of a given type is triggered, if another toast of that type is already being displayed, it is updated instead of creating another toast. The old behaviour was painful in situations where you queue up many generations that all immediately fail, or install a lot of models at once. In these situations, you’d get a wall of toasts. Now you get only 1.

  • Fixed: Control layer checkbox correctly indicates that it enables or disables the layer

  • Fixed: Disabling Regional Guidance layers didn’t work

  • Fixed: Excessive warnings in terminal when uploading images

  • Fixed: When loading a workflow, if an image, board or model referenced by one of its inputs no longer existed, the workflow would still load and then error when run.

    For example, say you save a workflow that has a certain model set for a node, then delete the model. When you load that workflow, the model is missing but the workflow doesn’t detect this. You can run the workflow, and it will fail when it attempts to use the nonexistent model.

    With this fix, when a workflow is loaded, we check for the existence of all images, boards and models referenced by the workflow. If something is missing, that input is reset (see the sketch after this list).

  • Docs updates @hsm207

  • Translations updates @gallegonovato @Harvester62 @dvanzoerlandt
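
A conceptual sketch of that validation pass, using plain dicts and sets in place of Invoke’s real types:

    def validate_loaded_workflow(workflow_fields, images, boards, models):
        """Reset any input whose referenced resource no longer exists."""
        stores = {"image": images, "board": boards, "model": models}
        for field in workflow_fields:
            store = stores.get(field["kind"])
            if store is not None and field["value"] not in store:
                # Reset the input now instead of failing mid-run later.
                field["value"] = None

    fields = [{"kind": "model", "value": "deleted-model-key"}]
    validate_loaded_workflow(fields, images=set(), boards=set(), models=set())
    assert fields[0]["value"] is None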

💾 Installation and Updating

To install or update to v4.2.3, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.2post1…v4.2.3

v4.2.4

v4.2.4 brings one frequently requested feature and a host of fixes and improvements, mostly focused on performance and internal code quality.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

Image Comparison

The image viewer now supports comparing two images using a Slider, Side-by-Side or Hover UI.

To enter the comparison UI, select a compare image using one of these methods:

  • Right click an image and click Select for Compare.
  • Hold alt (option on mac) while clicking a gallery image to select it as the compare image.
  • Hold alt (option on mac) and use the arrow keys to select the comparison image.

Press C to swap the images and M to cycle through the comparison modes. Press Escape or Z to exit the comparison UI and return to the single image viewer.

When comparing images of different aspect ratios or sizes, the compare image will be stretched to fit the viewer image. Disable the toggle button at the top-left to instead contain the compare image within the viewer image.

https://github.com/invoke-ai/InvokeAI/assets/4822129/4bcfb9c4-c31c-4e62-bfa4-510ab34b15c9

📈 Patch Notes for v4.2.4

Enhancements

  • The queue item detail view now updates when the queue item finishes. The finished (completed, failed or canceled) session is displayed.
  • Updated translations. @Harvester62 @Vasyanator @BrunoCdot @gallegonovato @Atalanttore @hugoalh
  • Docs updates. @hsm207 @cdpath

Fixes

  • Fixed a problem where using latents from the Blend Latents node for denoising with certain schedulers made images drastically different, even with an alpha of 0.
  • Fixed unnecessarily strict constraints for ControlNet and IP Adapter weights in the Control Layers UI. This prevented layers with weights outside the range of 0-1 from being recalled.
  • Fixed an error when editing non-main models (e.g. LoRAs).
  • Fixed the SDXL prompt concat flag not being set when recalling prompts.
  • Fixed model metadata recall not working when a model has a different key. This can happen if the model was uninstalled and reinstalled. When recalling, we fall back on the model’s name, base and type if the key doesn’t match an existing model.

Performance improvements

Big thanks to @lstein for these very impactful improvements!

  • Substantially improved performance when moving models between RAM and VRAM. For example, an SDXL model RAM -> VRAM -> RAM roundtrip tested at ~0.8s, down from ~3s - roughly a 75% reduction in time!
  • Fixed bug with VRAM lazy offloading which caused inefficient VRAM cache usage.
  • Reduced VRAM requirements when using IP Adapter.

Internal changes

  • Modularize the queue processor.
  • Use pydantic models for events instead of plain dicts.
  • Improved handling of pydantic invocation unions.
  • Updated ML dependencies. @Malrama

💾 Installation and Updating

To install or update to v4.2.4, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.3…v4.2.4

v4.2.6

v4.2.6 includes a handful of fixes and improvements, plus three major changes:

  • Gallery updates
  • Tiled upscaling via MultiDiffusion
  • Checkpoint models work without conversion to diffusers

We’ve made some changes to the gallery, adding features, improving the performance of the app and reducing memory usage. The changes also fix a number of bugs relating to stale data - for example, a board not updating as expected after moving an image to it.

Thanks to @chainchompa and @maryhipp for working on this major effort.

Pagination & Selection

Infinite scroll is dead, long live infinite scroll!

The gallery is now paginated. Selection logic has been updated to work with pagination. An indicator shows how many images are selected and allows you to clear the selection entirely. Arrow keys still navigate.

https://github.com/invoke-ai/InvokeAI/assets/4822129/128c998a-efac-41e5-8639-b346da78ca5b

The number of images per page is dynamically calculated as the panel is resized, ensuring the panel is always filled with images.

Boards UI Refresh

The bulky tiled boards grid has been replaced by a scrollable list. The boards list panel is now a resizable, collapsible panel.

https://github.com/invoke-ai/InvokeAI/assets/4822129/2dd7c316-36e3-4f8d-9d0c-d38d7de1d423

Search for boards by name and images by metadata. The search term is matched against the image’s metadata as a string. We landed on full-text search as a flexible yet simple implementation after considering a few methods for search.

https://github.com/invoke-ai/InvokeAI/assets/4822129/ebe2ecfe-edb4-4e09-aef8-212495b32d65
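
Conceptually, the metadata search behaves like this sketch (illustrative only, not the actual query Invoke runs):

    import json

    def search_images(images: list, term: str) -> list:
        # Serialize each image's metadata and do a plain substring match.
        needle = term.lower()
        return [
            img for img in images
            if needle in json.dumps(img.get("metadata", {})).lower()
        ]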

Archived Boards

Archive a board to hide it from the main boards list. This is purely an organizational enhancement. You can still interact with archived boards as you would any other board.

https://github.com/invoke-ai/InvokeAI/assets/4822129/7033b7a1-1cb7-4fa0-ae30-5e1037ba3261

Image Sorting

You can now change the sort for images to show oldest first. A switch allows starred images to be placed in the list according to their age, instead of always showing them first.

https://github.com/invoke-ai/InvokeAI/assets/4822129/f1ec68d0-3ba5-4ed0-b1e8-8e8bc9ceb957

Tiled Upscaling via MultiDiffusion

MultiDiffusion is a fairly straightforward technique for tiled denoising. The gist is similar to other tiled upscaling methods - split the input image into tiles, process each independently, and stitch them back together. The main innovation in MultiDiffusion is to do this in latent space, blending the tensors together continuously. This results in excellent consistency across the output image, with no seams.
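
A simplified sketch of the core aggregation idea, using uniform averaging over overlapping latent tiles for a single denoising step (the real implementation blends at every step inside the scheduler loop):

    import torch

    def multidiffusion_step(latents, denoise_tile, tile=64, overlap=16):
        """latents: (1, C, H, W); denoise_tile: denoises one latent tile."""
        _, _, h, w = latents.shape
        out = torch.zeros_like(latents)
        weight = torch.zeros_like(latents)
        stride = tile - overlap
        for top in range(0, max(h - overlap, 1), stride):
            for left in range(0, max(w - overlap, 1), stride):
                bottom, right = min(top + tile, h), min(left + tile, w)
                # Each tile is denoised independently...
                denoised = denoise_tile(latents[:, :, top:bottom, left:right])
                # ...then accumulated in latent space. Averaging the
                # overlapping regions is what keeps the result seam-free.
                out[:, :, top:bottom, left:right] += denoised
                weight[:, :, top:bottom, left:right] += 1.0
        return out / weight.clamp(min=1.0)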

This feature is exposed as a Tiled MultiDiffusion Denoise Latents node, currently classified as a beta version. It works much the same as the OG Denoise Latents node. You can find an example workflow in the workflow library’s default workflows.

We are still thinking about how to expose this in the linear UI. Most likely, we will expose it with very minimal settings. If you want to tweak it, use the workflow.

Thanks to @RyanJDick for designing and implementing MultiDiffusion.

How to use it

This technique is fundamentally the same as normal img2img. Appropriate use of conditioning and control will greatly improve the output. The one hard requirement is to use the Tile ControlNet model.

Besides that, here are some tips from our initial testing:

  • Use detail-adding or style LoRAs.
  • Use a base model best suited for the desired output style.
  • Prompts make a difference.
  • The initial upscaling method makes a difference.
  • Scheduler makes a difference. Some produce softer outputs.

VRAM Usage

This technique can upscale images to very large sizes without substantially increasing VRAM usage beyond what you’d see for a “normal” sized generation. The VRAM bottlenecks then become the first VAE encode (Image to Latents) and final VAE decode (Latents to Image) steps.

You may run into OOM errors during these steps. The solution is to enable tiling using the toggle on the Image to Latents and Latents to Image nodes. This allows the VAE operations to be done piecewise, similar to the tiled denoising process, without using gobs of VRAM.

There’s one caveat - VAE tiling often introduces inconsistency across tiles. Textures and colors may differ from tile to tile. This is a function of diffusers’ handling of VAE tiling, not the new tiled denoising process. We are investigating ways to improve this.

Takeaway: If your GPU can handle non-tiled VAE encode and decode for a given output size, use that for best results.
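
In Invoke this is just the toggle on those two nodes. Under the hood, diffusers provides the equivalent switch; here is a standalone sketch (the model repo and latent sizes are illustrative):

    import torch
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
    ).to("cuda")

    # Tiled decode trades some tile-to-tile consistency for a large
    # reduction in peak VRAM during the Latents to Image step.
    vae.enable_tiling()

    latents = torch.randn(1, 4, 256, 256, dtype=torch.float16, device="cuda")
    with torch.no_grad():
        image = vae.decode(latents / vae.config.scaling_factor).sample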

Checkpoint models work without conversion to diffusers

The required conversion of checkpoint format models to diffusers format has long been a pain point. The diffusers library now supports loading single-file (checkpoint) models directly, and we have removed the mandatory checkpoint-to-diffusers conversion step.

The main user-facing change is that there is no longer a conversion cache directory.

Major thanks to @lstein for getting this working.

📈 Patch Notes for v4.2.6

Enhancements

  • When downloading image metadata, graphs or workflows, the JSON file includes the image name and type of data. Thanks @jstnlowe!
  • Add clear_queue_on_startup config setting to clear problematic queues. This is useful for a rare edge case where your queue is full of items that somehow crash the app. Set this to true, and the queue will be cleared on startup, before the app attempts to execute the problematic item. Thanks @steffy-lo!
  • Performance and memory efficiency improvements for LoRA patching and model offloading.
  • Addition of simplified model loading methods to the Invocation API: download_and_cache_model, load_local_model and load_remote_model. These methods allow models to be used without needing to be added to the model manager. For example, we now use these methods to load ESRGAN models.
  • Support for probing and loading SDXL VAE checkpoints.
  • Updated gallery UI.
  • Checkpoint models work without conversion to diffusers.
  • When using a VAE in tiled mode, you may now select the tile size.

Fixes

  • Fixed handling of 0-step denoising processes.
  • If a control image’s processed version is missing when the app loads, it is now re-processed.
  • Fixed an issue where a model’s size could be misreported as 0, possibly causing memory issues.
  • Fixed an issue where images - especially large images - may fail to delete.

Performance improvements

  • Improved LoRA patching.
  • Improved RAM <-> VRAM model transfer performance.

Internal changes

  • The DenoiseLatentsInvocation has had its internal methods split up to support tiled upscaling via MultiDiffusion. This included some amount of file shuffling and renaming. The invokeai package’s exported classes should still be the same. Please let us know if this has broken an import for you.
  • Internal cleanup, intending to eliminate circular import issues. There’s a lot left to do for this issue, but we are making progress.

💾 Installation and Updating

To install or update to v4.2.6, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.4…v4.2.6

v4.2.6post1

v4.2.6post1 fixes issues some users may experience with memory management and sporadic black image outputs.

Please see the v4.2.6 release for full release notes.

💾 Installation and Updating

To install or update to v4.2.6post1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6…v4.2.6post1

v4.2.7

v4.2.7 includes gallery improvements and some major features focused on upscaling.

Upscaling

We’ve added a dedicated upscaling tab, support for custom upscaling models, and some new nodes.

Thanks to @RyanJDick (backend implementation), @chainchompa (frontend) and @maryhipp (frontend) for working on this!

Dedicated Upscaling Tab

The new upscaling tab provides a simple and powerful UI to Invoke’s MultiDiffusion implementation. This builds on the workflow released in v4.2.6, allowing for memory-efficient upscaling to huge output image sizes.

We’re pretty happy with the results!

4x scale, 4x_NMKD-Siax_200k upscale model, Deliberate_v5 SD1.5 model, KDPM 2 scheduler @ 30 steps, all other settings default

Requirements

You need 3 models installed to use this feature:

  • An upscale model for the first pass upscale
  • A main SD model (SD1.5 or SDXL) for the image-to-image
  • A tile ControlNet model of the same model architecture as your main SD model

If you are missing any of these, you’ll see a warning directing you to the model manager to install them. You can search the starter models for upscale, main, and tile to get you started.

Tips

  • The main SD model architecture has the biggest impact on VRAM usage. For example, SD1.5 @ 2k needs just under 4GB, while SDXL @ 2k needs just under 9GB. VRAM usage increases a small amount as output size increases - SD1.5 @ 8k needs ~4.5GB while SDXL @ 8k needs ~10.5GB.
  • The upscale and main SD model choices matter. Choose models best suited to your input image or desired output characteristics.
  • Some schedulers work better than others. KDPM 2 is a good choice.
  • LoRAs - like a detail-adding LoRA - can make a big impact.
  • Higher Creativity values give the SD model more leeway in creating new details. This parameter controls denoising start and end percentages.
  • Higher Structure values tell the SD model to stick closer to the input image’s structure. This parameter controls the tile ControlNet.

Custom Upscaling Models

You can now install and use custom upscaling models in Invoke. The excellent spandrel library handles loading and running the models.

spandrel can do a lot more than upscaling - it supports a wide range of “image to image” models. This includes single-image super resolution like ESRGAN (upscalers) but also things like GFPGAN (face restoration) and DeJPEG (cleans up JPEG compression artifacts).

A complete list of supported architectures can be found here.

Note: We have not enabled the restrictively-licensed architectures, which are denoted with a + symbol in the list.

Installing Models

We’ve added a few popular upscaling models to the Starter Models tab in the Model Manager - search for “upscale” to find them.

You can install models found online via the Model Manager, just like any other model. OpenModelDB is a popular place to get these models. For most of them, you can copy the model’s download link and paste it into the Model Manager to install.

Nodes

Two nodes have been added to support processing images with spandrel - be that upscaling or any of the other tasks these models support.

  • Image-to-Image - Runs the selected model without any extra processing.
  • Image-to-Image (Autoscale) - Runs the selected model repeatedly until the desired scale is reached. This node is intended for upscaling models specifically, providing some useful extra functionality:
    • If the model overshoots the target scale, the final image will be downscaled to the target scale with Lanczos resampling.
    • As a convenience, the output image width and height can be fit to a multiple of 8, as is required for SD. This will only resize down, and may change the aspect ratio slightly.
    • If the model doesn’t actually upscale the image, the scale parameter will be ignored.
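
Here is a sketch of the Autoscale behaviour just described, using Pillow and a stand-in for the model (illustrative, not the node’s actual code):

    from PIL import Image

    def autoscale(image, run_model, target_scale, fit_to_multiple_of_8=False):
        target = (round(image.width * target_scale), round(image.height * target_scale))
        # Run the model repeatedly until the target scale is reached. If a
        # pass doesn't actually enlarge the image, stop (scale is ignored).
        while image.width < target[0]:
            upscaled = run_model(image)
            if upscaled.width <= image.width:
                break
            image = upscaled
        if image.width >= target[0] and image.size != target:
            # Overshot: downscale to the target with Lanczos resampling.
            image = image.resize(target, Image.LANCZOS)
        if fit_to_multiple_of_8:
            # Resize down so dimensions are multiples of 8, as SD requires.
            image = image.resize(
                (image.width // 8 * 8, image.height // 8 * 8), Image.LANCZOS
            )
        return image

    # Smoke test with a stand-in "2x model":
    double = lambda im: im.resize((im.width * 2, im.height * 2), Image.LANCZOS)
    assert autoscale(Image.new("RGB", (100, 100)), double, 3.0).size == (300, 300)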

Thanks to @maryhipp and @chainchompa for continued iteration on the gallery!

  • Cleaner boards UI.
  • Improved boards and image search UI.
  • Fixed issues where board counts don’t update when images are moved between boards.
  • Added a “Jump” button to allow you to skip pages of the gallery.

https://github.com/user-attachments/assets/b834cc36-995a-464e-af3f-68cf3b38818f

Other Changes

  • Enhancement: When installing starter models, the description is carried over. Thanks @lstein!
  • Enhancement: Updated translations.
  • Fix: Model unpatching when running on CPU, causing bad/no outputs.
  • Fix: Occasional visible seams on images with smooth textures, like skies. MultiDiffusion tiling now uses gradient blending to mitigate this issue.
  • Fix: Model names overflow the model selection drop-downs.
  • Internal: Backend SD pipeline refactor (WIP). This will allow contributors to add functionality to Invoke more easily. This will be behind a feature flag until the refactor is complete and tested. Thanks to @StAlKeR7779 for leading the effort, with major contributions from @dunkeroni and @RyanJDick.

Installation and Updating

To install or update to v4.2.7, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6post1…v4.2.7

v4.2.7post1

🚨 v4.2.7post1 resolves an issue with Windows installs. 🚨

v4.2.7 includes gallery improvements and some major features focused on upscaling.

Please see the v4.2.7 release notes above for full details on the new upscaling tab, custom upscaling models, and gallery improvements.

Installation and Updating

To install or update to v4.2.7post1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6…v4.2.7post1

v4.2.8

v4.2.8 brings Prompt Templates to Invoke, new schedulers and a number of minor fixes and enhancements.

Prompt Templates

Prompt templates are often used for commonly-used style keywords, letting you focus on subject and composition in your prompts - but you can use them in other creative ways.

Thanks to @maryhipp for implementing Prompt Templates!

Creating a Prompt Template

Create a prompt template from an existing image generated with Invoke. We’ll add the positive and negative prompts from the image’s metadata as the template, and the image will be used as a cover image for the template.

You can also create a prompt template from scratch, uploading a cover image.

How it Works

Add a positive and/or negative prompt to your template. Use the {prompt} placeholder in the template to indicate where your prompt should be inserted into the template:

  • Template: highly detailed photo of {prompt}, award-winning, nikon dslr
  • Prompt: a super cute fennec fox cub
  • Result: highly detailed photo of a super cute fennec fox cub, award-winning, nikon dslr

If you omit the placeholder, the template will be appended to the end of your prompt:

  • Template: turtles
  • Prompt: i like
  • Result: i like turtles
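
As a hypothetical helper (not Invoke’s internal code), the substitution rules look like this:

    def apply_template(template: str, prompt: str) -> str:
        if "{prompt}" in template:
            # Placeholder present: insert the prompt where it appears.
            return template.replace("{prompt}", prompt)
        # No placeholder: append the template to the end of the prompt.
        return f"{prompt} {template}".strip()

    assert apply_template(
        "highly detailed photo of {prompt}, award-winning, nikon dslr",
        "a super cute fennec fox cub",
    ) == "highly detailed photo of a super cute fennec fox cub, award-winning, nikon dslr"
    assert apply_template("turtles", "i like") == "i like turtles"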

Default Prompt Templates

We’re shipping a number of templates with the app, many of which were contributed by community members (thanks y’all!). We’ll update these as we continue developing Invoke with improvements and new templates.

Import and Export

You can import templates from other SD apps. We support CSV and JSON files with these columns/keys:

  • name
  • prompt or positive_prompt
  • negative_prompt

Export your prompt templates to share with others. When you export prompt templates, only your own templates are exported.
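
For example, files like these should be importable. The keys are the documented ones; the overall JSON container shape (a list of objects) is an assumption:

    import csv
    import json

    templates = [
        {
            "name": "Detailed Photo",
            "positive_prompt": "highly detailed photo of {prompt}, award-winning",
            "negative_prompt": "blurry, low quality",
        }
    ]

    with open("templates.json", "w") as f:
        json.dump(templates, f, indent=2)

    with open("templates.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["name", "positive_prompt", "negative_prompt"]
        )
        writer.writeheader()
        writer.writerows(templates)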

Preview and Flatten

Use the Preview button to see the prompt that will be used for generation. Flatten the prompt template to bake it into your prompts.

Compatible with Dynamic Prompts

You can use dynamic prompts in prompt templates, and they will work with dynamic prompts in your positive prompt box.

Other Changes

  • Enhancement: Added DPM++ 3M, DPM++ 3M Karras, DEIS Karras, KDPM 2 Karras, KDPM 2 Ancestral Karras and UniPC Karras schedulers @StAlKeR7779
  • Enhancement: Updated translations - Italian is 100%! Thanks @Harvester62!
  • Enhancement: Grounded SAM node (text prompt image segmentation) @RyanJDick
  • Enhancement: Update DepthAnything to V2 (small variant only) @blessedcoolant
  • Fix: Image downloads with correct filename
  • Fix: Delays with events (progress images will be smoother)
  • Fix: Jank with board selection when hiding or deleting boards
  • Fix: Error deleting images on systems without a “trash bin”
  • Fix: Upscale metadata included in SDXL Multidiffusion upscales @maryhipp
  • Fix: invoke.sh works with symlinks @max-maag
  • Internal: Continued work on the modular backend refactor @StAlKeR7779

Installation and Updating

To install or update to v4.2.8, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.7post1…v4.2.8

v4.2.9

FLUX

Please note these nodes are still in the prototype stage and are subject to change. This Node API is not stable!

We support both FLUX dev and FLUX schnell at this time, in workflows only. These will be incorporated into the rest of the UI in future updates. This is an initial and developing implementation - we’re bringing it in with the intent of long-term stable support for FLUX.

Default workflows can be found in your Workflows tab: FLUX Text to Image and FLUX Image to Image. Please note that we have not added FLUX to the linear UI yet, and LoRAs are not yet supported; both will be added soon.

Required Dependencies

In order to run FLUX on Invoke, you will need to download and install several models. We have provided options in the Starter Models (found in your Model Manager tab) for quantized and unquantized versions of both FLUX dev and FLUX schnell. Selecting these will automatically download the dependencies you need, listed below. These dependencies are also available for ad-hoc download in the Starter Models list. Currently, Invoke supports only unquantized models and bitsandbytes NF4-quantized models.

  • T5 encoder
  • CLIP-L encoder
  • FLUX transformer/unet
  • FLUX VAE

Considerations

FLUX is a large model with significant VRAM requirements. The full models require 24GB of VRAM on Linux; Windows PCs are less efficient and need slightly more, making it difficult to run the full models there.

To compensate for this, the community has begun to develop quantized versions of the dev model. These trade slightly lower quality for significant reductions in VRAM requirements.

Currently, Invoke supports FLUX only on NVIDIA GPUs. You may be able to work out a way to get an AMD GPU to generate, but we’ve not been able to test this and can’t provide committed support for it. FLUX on MPS is not supported at this time.

Please note that the FLUX dev model is under a non-commercial license. You will need a commercial license to use the model for any commercial work.

Below are additional details on which model to use based on your system:

  • FLUX dev quantized starter model: non-commercial, >16GB RAM, ≥12GB VRAM
  • FLUX schnell quantized starter model: commercial, faster inference than dev, >16GB RAM, ≥ 12GB VRAM
  • FLUX dev starter model: non-commercial, >32GB RAM, ≥24GB VRAM, linux OS
  • FLUX schnell starter model: commercial, >32GB RAM, ≥24GB VRAM, linux OS

Running the Workflow

You can find a new default workflow in your Workflows tab called FLUX Text to Image. It can be run with both FLUX dev and FLUX schnell models, but note that the default step count of 30 is the recommendation for FLUX dev; if running FLUX schnell, we recommend lowering the step count to 4. You will not be able to run this workflow without the required models listed above installed.

  • Navigate to the Workflows tab.
  • Press the Workflow Library button at the top left of your screen.
  • Select Default Workflows and choose the FLUX workflow you’d like to use.

The exposed fields will require you to select a FLUX model, T5 encoder, CLIP Embed model, VAE, prompt, and step count. If you are missing any models, use the Starter Models tab in the model manager to download and install FLUX dev or schnell.

We’ve also added a new default workflow named FLUX Image to Image. It runs very similarly to the workflow described above, with the added ability to provide a base image.

Other Changes

  • Enhancement: add fields for CLIPEmbedModel and FluxVAEModel by @maryhipp
  • Enhancement: FLUX memory management improvements by @RyanJDick
  • Feature: Add FLUX image-to-image and inpainting by @RyanJDick
  • Feature: flux preview images by @brandonrising
  • Enhancement: Add install probes for T5_encoder and ClipTextModel by @lstein
  • Fix: support checkpoint bundles containing more than the transformer by @brandonrising

Installation and Updating

To install or update to v4.2.9, download the installer and follow the installation instructions (https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/).

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.8…v4.2.9

v5.0.0

Invoke 5.0 Release: Our Biggest Update Yet — A new canvas with Layer & Flux support

Invoke 5.0 Update Video

Today, we’re thrilled to launch the most significant update to Invoke this year: Invoke 5.0. This release introduces Control Canvas, a new tool to generate, iterate, and refine images with unprecedented control and flexibility. We’re also announcing integration with the hugely popular Flux models.

You may notice that some things have moved around. We’ve created a few resources to help you get to know the new Control Canvas and everything else that’s new in Invoke 5.0.

Control Canvas

Building on our Unified Canvas, the new Control Canvas integrates generation, iteration, and refinement into a single workspace. Highlights include:

  • Raster Layers: Easily draw, paint, and manipulate shapes or guides that can be moved and resized independently, giving you more flexibility in editing.
  • Editable Control Layers: Directly edit Control Layers (e.g., ControlNets) on the canvas, offering precise control over generation tasks without needing to re-upload images.
  • Canvas Layer Recall: Save and recall the canvas state of your creation, ensuring that works can be regenerated and saved with the proper metadata.

Flux.1 Model Integration

We’ve partnered with Black Forest Labs to integrate the highly popular Flux.1 models into Invoke. You can now:

  • Use Flux’s schnell model for commercial or non-commercial projects and dev models for non-commercial projects in your studio.
  • Generate with text-to-image, image-to-image, inpainting, and LoRA support, with more features coming soon.

If you are looking to use Flux [dev] for commercial purposes, you’ll need to obtain a commercial license from Black Forest Labs. This is included as an add-on option in our Professional Edition. If you are interested in learning more, you can get in touch with us.

  • Note: Flux Model support is currently only available for local inference on Windows and Linux systems.

Installation and Updating

To install or update to v5.0.0, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.9…v5.0.0

v5.0.1

This minor release includes support for Kohya FLUX LoRAs and a handful of fixes and enhancements.

Be sure to review the v5 release notes if you haven’t already upgraded to v5.

Enhancements

  • Support for Kohya FLUX LoRAs with lora_te1 layers (i.e. CLIP LoRA layers)
  • Default scheduler is now dpmpp_3m_k
  • First round of v5 translations @Harvester62 @Vasyanator @Atalanttore @Ery4z @rikublock
  • Improved FLUX img2img/inpainting
    • ❗ This is a breaking change. The trajectory_guidance_strength field on FluxDenoiseInvocation was removed in favor of a simpler solution that doesn’t need the extra field. More details in #6938.
  • Revised hotkeys focus tracking
  • Recall hotkeys work when viewer is closed and gallery is open
  • Added enter and esc hotkeys to apply and cancel filters and transforms
  • Model default settings support for FLUX guidance parameter

Fixes

  • Hide brush preview while staging
  • Prevent error toasts from being so large they cannot be closed
  • UI crash with TypeError: i.map is not a function
  • Canvas erroneously interactable after refresh while staging

Internal

  • More efficient image selection handling
  • More efficient newly-generated image handling
  • Revised resizable panels logic

Installation and Updating

To install or update to v5.0.1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.0.0…v5.0.1

v5.0.2

This minor release fixes a regression with FLUX LoRAs introduced in v5.0.1.

Be sure to review the v5 release notes if you haven’t already upgraded to v5.

Fixes

  • Fix regression with FLUX LoRAs introduced in v5.0.1.

Enhancements

  • Show CLIP Vision models in the model manager UI, allowing for them to be deleted in case of corruption.

Installation and Updating

To install or update to v5.0.2, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.0.1…v5.0.2

v5.1.0

This release includes drawing tablet support and middle-mouse panning for Canvas, support for GGUF FLUX models, and an assortment of other fixes and enhancements.

❗ We are investigating an issue with Apple Silicon devices and SDXL. Users with Apple Silicon devices may wish to hold off on updating until we resolve this. See the Known Issues section below for more information.

Be sure to review the v5 release notes if you haven’t already upgraded to v5.

Enhancements

  • Canvas tablet support for both touch and pen input devices (e.g. drawing tablets).
  • Pressure sensitivity for pen input devices. This can be disabled in the Canvas settings.
  • Revised Canvas layout to better fit on smaller screens.
  • Improved image context menu layout. Thanks @joshistoast!
  • Improved Apple Pencil support.
  • Added long-press trigger for context menus. The canvas itself does not have a long-press trigger, but the same actions are accessible via the menu button at the top-right corner of the canvas.
  • Middle-mouse button panning on Canvas. Thanks to @FloeHetling for getting this moving!
  • Support for GGUF FLUX models.
  • Create a new “session”, resetting all settings and Canvas to their defaults (except for model selection). These functions are in the menu next to the cancel button.
  • Crop to bbox on Canvas. You can crop an individual layer, or the whole canvas. Accessed via right-click menus.
  • Allow for a broader range of guidance values for flux models. Thanks @rikublock!
  • Updated translations. Thanks @rikublock @Ery4z @Vasyanator @Harvester62 @Phrixus2023!

Fixes

  • Duplicating a regional guidance layer with a reference image causes an error during graph building, preventing generation from working.
  • Recalling LoRAs can create duplicate LoRAs.
  • Fixed color picker tool edge case where wrong color could be detected while moving the cursor quickly.

Perf

  • Throttled color picker sampling to improve performance.

Internal

  • Bump all UI deps to latest.
  • Bump torch and xformers versions to latest.
  • Gracefully handle promise rejections in the UI’s metadata handlers.
  • Updated docker-compose.yml to use GHCR latest image. Thanks @jkbdco!

Docs

  • Added Ollama node to community nodes. Thanks @Jonseed!
  • Added FLUX support to docs. Thanks @aakropotkin!

Known Issues

Apple Silicon devices will output mushy noise on SDXL unless Regional Guidance or an IP Adapter is used.

The issue appears to be related to us bumping torch to v2.4.1, which was needed for GGUF FLUX support. The SDXL generation code wasn’t changed in this release. We are investigating the issue.

As a workaround, users can set their attention type to torch-sdp in their invoke.yaml configuration file:

attention_type: torch-sdp

This will result in some increased memory utilization, but allow for generations to proceed as normal.

Installation and Updating

To install or update to v5.1.0, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.0.2…v5.1.0

v5.1.1

This release fixes a number of issues and updates the workflows list to function like the style presets list.

❗ We are investigating an issue with Apple Silicon devices and SDXL. Users with Apple Silicon devices may wish to hold off on updating until we resolve this. See the Known Issues section below for more information and a workaround.

Be sure to review the v5 release notes if you haven’t already upgraded to v5.

Enhancements

  • New workflow list UI. Thanks @maryhipp!
  • Added explanatory tooltips to images and assets tabs in gallery. Thanks @maryhipp!
  • Added prompt template message to first-run blurb. Thanks @maryhipp!
  • Add button to rename board (in addition to double-click). Thanks @maryhipp!
  • Updated translations. Thanks @Harvester62 @Ery4z!
  • Updated example config file comment to prevent footguns.

Fixes

  • Updates to get FLUX working on MPS on torch 2.5.0 / nightlies. Thanks @Vargol!
  • Fixed incorrect events being tracked for some buttons. Notably, this prevents accidental deletion of layers when the Canvas is busy.
  • Fixed UI jank with Canvas number input components.
  • Fixed issue where invalid edges could be pasted.
  • Fixed issue preventing touch and Apple Pencil from being able to interact with the Canvas.

Internal

  • Fixed Nix flake. Thanks @aakropotkin!

Docs

  • Add Enhance Detail community node. Thanks @skunkworxdark!
  • Fixed typos. Thanks @nnsW3!
  • Added FLUX support to docs. Thanks @aakropotkin!

Known Issues

Apple Silicon devices will output mushy noise on SDXL unless Regional Guidance or an IP Adapter is used.

The issue appears to be related to us bumping torch to v2.4.1, which was needed for GGUF FLUX support. The SDXL generation code wasn’t changed in this release. We are investigating the issue.

As a workaround, users can set their attention type to torch-sdp in their invoke.yaml configuration file:

attention_type: torch-sdp

This will result in some increased memory utilization, but allow for generations to proceed as normal.

Installation and Updating

To install or update to v5.1.1, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.1.0…v5.1.1

v5.2.0

This release includes a number of enhancements and fixes. Standouts include:

  • Support for FLUX ControlNets.
  • Improved UX for common img2img flows in Canvas.
  • Boards may be sorted by name or date.
  • Starter model bundles in Model Manager.
  • Support for bulk image uploads.

🚨 Important Update Note 🚨

During the update, you’ll now select between different configurations of the Invoke environment that are optimized for your hardware (specifically, torch-sdp vs. xformers). If you have previously used xformers and you update to the version for 30xx and 40xx NVIDIA cards, you’ll often see an error after the update.

You can uninstall xformers before running the update by:

  • Running the Invoke batch script from your existing installation
  • Selecting the “developer console” option
  • Typing pip uninstall xformers

We’ve also added a fix for Apple Silicon users for the mushy noise issue and updated the installer to skip xformers for GPUs where it causes a performance hit.

Be sure to review the v5 release notes if you haven’t already upgraded to v5.

FLUX ControlNets

We now support both XLabs and InstantX ControlNets for FLUX. We’ve found the Union Pro model substantially outperforms the other models and added it to the starter models. Other models work, but outputs are not as good.

You can use FLUX ControlNets in both Workflows and the Linear UI. We will be adding a union mode control to the Linear UI in a future release. You can select the union mode in Workflows today.

Canvas img2img Flow

We’ve made a number of changes to support the common img2img flow in the Canvas:

  • Transform now supports 3 modes (see the sketch after this list):
    • fill (old behaviour): The layer is stretched to fit the generation bbox exactly. Its aspect ratio is not maintained.
    • contain (new default): The layer is stretched to fit the generation bbox, retaining its aspect ratio.
    • cover: The layer is scaled up so that its smallest dimension fits the bbox exactly, retaining its aspect ratio.
  • Add a layer context menu item to fit the layer to the bbox using the contain mode.
  • Update the New Canvas from Image image context menu item to streamline the img2img flow. It now resizes the bbox to match the image’s aspect ratio, respecting the currently-selected model’s optimal size. The image will fit exactly in the box. You can click this and then immediately Invoke to do img2img.
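
A sketch of the scale math behind the three transform modes (illustrative only):

    def fit_layer(layer_w, layer_h, box_w, box_h, mode):
        """Return (scale_x, scale_y) for fitting a layer into the bbox."""
        if mode == "fill":
            # Stretch each axis independently; aspect ratio is not kept.
            return box_w / layer_w, box_h / layer_h
        ratios = (box_w / layer_w, box_h / layer_h)
        # contain: the whole layer fits inside the bbox (smaller ratio).
        # cover: the layer fully covers the bbox (larger ratio).
        s = min(ratios) if mode == "contain" else max(ratios)
        return s, s

    assert fit_layer(200, 100, 100, 100, "contain") == (0.5, 0.5)
    assert fit_layer(200, 100, 100, 100, "cover") == (1.0, 1.0)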

Installer Updates

The performance of torch-sdp attention is substantially faster than xformers on 30xx and 40xx series GPUs. We’ve made two changes to ensure you generate with the best settings:

  • Add an installer option for 30xx & 40xx series GPUs, which does not install xformers.
  • When the attention type is set to auto (the default), and you do have xformers installed, we choose the best option of torch-sdp and xformers, based on your GPU.

Apple Silicon Fix

The mushy noise issue on Apple Silicon is related to sliced attention. We’ve temporarily forced all MPS devices to use torch-sdp. Memory usage is a bit higher, but you won’t get mushy noise (unless, of course, you prompt for it 😅).

This appears to be a torch bug, and we’ll revert this change once it is resolved.

All Changes

Enhancements

  • Support for FLUX ControlNets.
  • Improved UX for common img2img flows in Canvas.
  • Installer option to select best packages based on GPU model, and internal logic to select the best attention type based on GPU model.
  • Updated workflow list menu UI, restoring New Workflow confirmation dialog.
  • Support for bulk image uploads, button in gallery to upload images.
  • Boards may be sorted by name or date.
  • Recall FLUX guidance parameter. Thanks @rikublock!
  • Add starter model bundle for each supported architecture to model manager.
  • The Layers and Gallery tabs now remember which one you last selected.
  • Added an indicator to the Layers and Gallery tabs when dragging an image, showing that you can hover over a tab to switch to it.
  • Updated translations. French is now 100%! Thanks gallegonovato, @Harvester62, @Ery4z!

Fixes

  • Workaround for Apple Silicon SDXL noisy mush issue.
  • Fixed misc UI jank in workflow list menu UI.
  • Fixed longstanding issue where workflows were marked as unsaved immediately after loading.
  • Fixed canvas layer preview not updating if layer is disabled.
  • Fixed an edge case where entity isn’t visible until interacting with canvas.

Perf

  • Reworked gallery rendering and context menu for a ~100% perf boost when rendering the gallery.
  • Rendering optimizations for canvas.

Installation and Updating

To install or update to v5.2.0, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.1.1…v5.2.0

v5.3.0

This release includes a number of enhancements and fixes. Standouts include:

  • Object selection in Canvas (via Segment Anything).
  • Support for XLabs FLUX IP Adapters.
  • Installer improvements, which should resolve all xformers-related issues.
  • Improved FLUX support for MPS (Apple Silicon) devices.
  • Reworked context menus, now with sub-menus.

Be sure to review the v5 release notes if you haven’t already upgraded to v5.

Canvas Select Object

Click the Canvas to add Include and Exclude points to select an object in a layer, then use the selection as an Inpaint Mask.

In the example below, we want to select the fox and the ground, but the model has a hard time with this, as it is trained to select single objects. We are able to work around this by selecting the background and inverting the selection.

https://github.com/user-attachments/assets/29fd08a3-10f0-4cb6-bc7b-699ff0a0094d

As demonstrated, applying the selection masks the layer with the selection. You can also save the selection as a Raster Layer, Control Layer or Regional Guidance. This enables some useful workflows.

Internally, this uses Segment Anything v1. We’ll upgrade to SAM v2, which substantially improves object selection, once support for it lands in transformers.
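
For the curious, a rough sketch of what a point-prompted SAM call looks like via the transformers library (the model name, file path and coordinates here are illustrative, and this is not Invoke’s actual code):

    import torch
    from PIL import Image
    from transformers import SamModel, SamProcessor

    model = SamModel.from_pretrained("facebook/sam-vit-base")
    processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

    image = Image.open("layer.png").convert("RGB")
    # One "Include" point on the background; "Exclude" points would be
    # passed alongside via input_labels (1 = include, 0 = exclude).
    input_points = [[[450, 600]]]

    inputs = processor(image, input_points=input_points, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    masks = processor.image_processor.post_process_masks(
        outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
    )
    # Invert to select everything *except* the clicked region - the
    # "select the background, then invert" trick described above.
    selection = ~masks[0][0][0]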

FLUX IP Adapters

We now support XLabs FLUX IP Adapters. There’s only the one model right now, which we’ve added to the starter models.

Workflow & Input Image / Output (images)

Internally, IP Adapter requires CFG to work. For now, this is only exposed via workflows. Negative conditioning is also now available in workflows. We’re exploring a sane way to expose this in the linear UI.

Negative conditioning requires a CFG value >1. Leave CFG at 1 to disable it and ignore negative conditioning.

Note: CFG doubles denoising time, and negative conditioning requires a good chunk of additional VRAM.
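
For intuition, here is a schematic of standard classifier-free guidance - not Invoke’s exact implementation - which shows why CFG doubles denoising time and why a CFG of 1 disables negative conditioning:

    def cfg_step(model, x, t, positive, negative, cfg_scale):
        # Two model evaluations per step: this is why CFG doubles denoising time.
        pred_pos = model(x, t, positive)
        pred_neg = model(x, t, negative)
        # At cfg_scale == 1 this reduces to pred_pos, so the negative
        # conditioning has no effect - matching the note above.
        return pred_neg + cfg_scale * (pred_pos - pred_neg)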

All Changes

Enhancements

  • Object selection in Canvas (via Segment Anything).
  • Support for XLabs FLUX IP Adapters.
  • Support for CFG and Negative Conditioning for FLUX in workflows only.
  • Added RealVisXL5 to SDXL Starter Model Bundle.
  • Model manager in-place checkbox remembers your choice. Thanks @rikublock!
  • Better tooltips in the model manager, including the contents of each starter bundle.
  • Support for sub-menus, which are now used in the Image context menu and various Canvas context menus.
  • Support more conversions between Canvas layers. Both in-place and copy conversions are supported. For example, convert between Inpaint Masks and Regional Guidance, or copy a Raster Layer into a new Inpaint Mask.
  • Model descriptions are now displayed in model selection drop-downs. You can disable this in settings.
  • We now track whether a model’s enabled default settings are currently applied. When they are not, we list the defaults that differ in a tooltip on the little button next to the main model drop-down.
  • Always show staging images when staging starts, even if the user hid them during the last staging session.
  • Canvas alerts (e.g. the Sending to Gallery alert overlaid on the canvas) are moved to the top-left corner of the center panel.
  • Updated translations. Thanks @rikublock, @Vasyanator, @Harvester62, @Ery4z!

Fixes

  • Improved FLUX support for MPS devices. Thanks @Vargol!
  • FLUX denoise node erroneously required controlnet_vae field.
  • Uninstall xformers before installation, fixing issues w/ xformers version mismatch.
  • Fixed installer text output. Thanks @max-maag!
  • Fixed ROCm PyPI indices. Thanks @max-maag!
  • Normalize solid infill alpha values from 0-1 to 0-255 when building canvas graphs. This issue didn’t cause any problems, because the VAE ignores the alpha channel, but it should be fixed regardless.
  • View/Hide Boards button cut off with certain translations.
  • Fixed a longstanding issue where nodes could mutate the objects (images, tensors, conditioning data) in disk-backed in-memory caches. For example, node 1 might retrieve image X and do some in-place operation on it to create image Y. Node 2 then retrieves image X but gets image Y instead. This issue only affected the in-memory caches; mutations were not pushed to disk. The invocation API now returns clones of all objects, so nodes can safely mutate their inputs in-place.

Internal

  • Reworked logging implementation.
  • Fixed a directory traversal issue when deleting images.

Documentation

  • Updated dev install docs. Thanks @hippalectryon-0!

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.2.0…v5.3.0

v5.3.1

This release expands layer merging capabilities in the Canvas and makes a number of other fixes and enhancements.

Layer Merging

We’ve expanded layer merging capabilities to all layer types and made one change to the existing Merge Visible operation.

Merge Down (new)

The selected layer and the one immediately below it are merged into a single layer. The two input layers are deleted as part of this process.

Merge Visible (changed)

Previously, in Invoke, Merge Visible deleted all the layers that were merged together.

This has been changed to match image editors like Affinity Photo and PS, where the merged layer is added as a new layer, leaving all other layers untouched.

Merging Regional Guidance

When merging Regional Guidance, the resultant merged Regional Guidance has no prompt or reference image. It’s not possible to merge those settings. Keep this in mind when doing a Merge Down, which will delete the two input Regional Guidance layers.

Merging Control Layers

When merging Control Layers, the resultant merged Control Layer has no model or control settings. Like Regional Guidance, it’s not possible to merge the settings, and Merge Down will delete the two input layers.

Control Layers require some special handling. For example, we display them with a “transparency effect” to help you visualize how they stack up. When merging them, we first apply a similar effect to the layer. Specifically, we use the “lighter” blend mode, optimizing for control images with black backgrounds.
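
The “lighter” composite is additive, which is why black backgrounds work well here. A per-pixel sketch, assuming 8-bit channels:

    def blend_lighter(a, b):
        # Additive blend, clamped to the 8-bit range. Black pixels (0) in
        # either control image contribute nothing, so line work on black
        # backgrounds merges cleanly.
        return min(a + b, 255)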

All Changes

Enhancements

  • Layer merging improvements.
  • Updated “What’s New” popover.
  • Inpaint Mask and Regional Guidance may be saved to assets.
  • Support for HF tokens in the model manager UI. This prepares Invoke’s MM for SD3.5, which requires you to authenticate with HF to download the model. You can paste your token into the MM to allow the model to download, instead of needing to configure it with the HF CLI.
  • Add a safeguard for invocation classes, requiring they implement the invoke method and have a correct output annotation.

Fixes

  • Canvas alerts prevent clicks on the metadata viewer tabs.
  • Using Save As on a filtered layer could result in the wrong image data being used during generation.
  • More resilient Filter handling.
  • SDXL T2I OpenPose model gets input images with the correct channel order, which makes this model work correctly. Thanks @dunkeroni!
  • T2I Adapters now work with any output size supported by the main model. Previously, they required output sizes to be in multiples of 32 or 64. Thanks @dunkeroni!
  • Recall of seamless metadata settings. Thanks @rikublock!
  • Fixed broken link in installer. Thanks @hippalectryon-0!
  • Fixed (another) broken link in installer. Thanks @ventureOptimism!

Internal

  • Bump diffusers, accelerate and huggingface-hub dependencies to latest versions.
  • Refactored CanvasCompositorModule to support new merge capabilities.
  • Remove version pins for several packages including torch and numpy. This is in preparation for an updated installer and also makes it easier for advanced users to customize the versions of various packages.
  • Canvas inpaint and outpaint graphs output one less intermediate image, saving one expensive PNG encode (and your disk space).
  • Added caches for generation mode calculations in Canvas, providing a substantial reduction in intermediate images created and faster Invoke-to-Queue times.

Docs

  • Updated patchmatch docs. Thanks @nirmal0001!
  • Updated FAQ. Thanks @JPPhoto!
  • Updated dev docs. Thanks @hippalectryon-0!

Translations

  • Updated German. Thanks @rikublock & @Atalanttore!
  • Updated Chinese (Simplified). Thanks @qyouqme & @youo0o0!
  • Updated Italian. Thanks @Harvester62 & @dakota2472!
  • Updated French. Thanks @Ery4z!

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.3.0…v5.3.1

v5.4.1

This release includes support for SD 3.5, plus a number of fixes and improvements.

We pulled v5.4.0 due to issues with model installation and loading. This release (v5.4.1) resolves those issues and includes all changes from v5.4.0.

Enhancements

  • Support for SD 3.5 Medium and Large. You can download them in the starter models.
  • Moved Denoising Strength slider to top of Layers panel with updated UI and info popover.
  • Update viewer styling to have a menubar-ish header. Also used for image comparison.
  • Layer preview tooltip with larger sized preview.
  • Empty state for Control Layers, guiding user to upload an image, drag an image from gallery or start drawing.
  • “Simple” mode for control layers filtering, triggered when you select a control model. This automatically selects and processes the default filter for that model, if a default exists. If the user clicks Advanced, they get the full filter settings UI. Control Layers now start with no model selected, implicitly directing users into this flow.
  • Updated default control weight to 0.75 and end step % to 0.75. Updated label for the Balanced control mode, indicating it is the recommended setting.
  • The FLUX Denoise node now has a flag to skip adding noise to input latents. This is useful for switching models mid-generation and other fancy denoising techniques. Thanks @JPPhoto!
  • Migrated the UI’s drag-and-drop functionality to a new library, improving the overall performance of dnd. The new library also lets us support external dnd in the future - for example, dragging images from the OS directly into the canvas (not implemented in this release).
  • Canvas layers may be sorted via drag-and-drop.
  • When graph building fails, you will see an error toast. Previously, it only logged a message to the browser’s JS console.
  • Updated model warnings on Upscaling tab to not be misleading.
  • More FLUX LoRA format support.
  • Show alert over canvas/viewer with invocation progress event messages.
  • Tweak gallery image selection styles to better differentiate between hovered and selected.
  • Output Only Masked Regions renamed to Output only Generated Regions and now enabled by default.

Fixes

  • Canvas progress images do not clear when canceling generation after at least one image has been staged.
  • Tooltips on boards list stay open when scrolling, potentially causing the whole app to scroll.
  • Saving canvas to gallery does not create a new image in gallery.
  • Applying a filter could erase or otherwise change a layer’s data unexpectedly, causing a range of user-facing generation issues.
  • Unable to queue graphs with the Segment Anything node when its inputs were provided by connection.
  • Unable to load a workflow from file when using the three-dots menu.
  • pip downloads torch twice. This didn’t cause any application issues - just a waste of time and bandwidth. We pinned torch to <2.5.0 to prevent pip’s dependency resolver from getting confused.
  • mediapipe install issue on Windows, related to its latest release. We pinned mediapipe to a known working version.
  • CLIP Vision error when using FLUX IP Adapter.
  • Fit bbox to layers math could result in a slightly-too-large bbox.
  • Outdated link to FLUX IP Adapter. Thanks @RadTechDad!
  • Force a compatible precision for FLUX VAEs to prevent black outputs.

Translations

We have had some issues communicating with “walk-in” translators on Weblate, resulting in translations being changed when they are already correct. To mitigate this, we are trying a more restricted Weblate translation setup. Access to contribute translations must be granted by @Harvester62. Please @ them in the #translators channel on Discord to get access.

We apologize for any inconvenience this change may cause. Thanks to all our translators for their continued efforts!

  • Updated German. Thanks @rikublock!
  • Updated Chinese (Simplified). Thanks @youo0o0!
  • Updated Italian. Thanks @Harvester62 & @dakota2472!
  • Updated Spanish. Thanks gallegonovato (weblate user)!
  • Updated Vietnamese. Thanks Linos (weblate user)!
  • Updated Japanese. Thanks @GGSSKK!

Internal

  • Simplified parameter schema declarations. Thanks @rikublock!
  • Simplified dnd image and gallery rendering, resulting in improved performance.

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.3.1…v5.4.1

v5.4.2

This release includes support for FLUX IP Adapter v2 and image “batching” for Workflows.

Image Batching for Workflows

The Workflow Editor now supports running a given workflow for each image in a collection of images.

Add an Image Batch node, drag some images into its image collection, and connect its output to any other node(s). Invoke will run the workflow once for each image in the collection.

Here are a few examples to help build intuition for the feature.

Example 1 - Single Batch -> Single Node

The simplest case is using a batch image output with a single node. Here’s a workflow that resizes 5 images to 200x200 thumbnails.

Workflow (image)

Results (image)

This batch queues 5 graphs, each containing a single resize node with one of the 5 images in the batch list. Note the images are 200x200 pixels.

Example 2 - Single Batch -> Multiple Nodes

You can also use a batch image output with multiple nodes. This contrived workflow resizes the image to a 200x200 thumbnail, like the previous example, then pastes the thumbnail onto the full-size image.

Workflow (image)

Results (image)

This batch also queues 5 graphs, each of which contains one resize and one paste node. In each graph, the nodes get one of the 5 images in the batch collection. The batch node can connect to any number of other nodes. For each queued graph, all connected nodes will get the same image.

Example 3 - Multiple Batches (Product Batching)

When multiple batches are used, they are combined such that all combinations are queued (i.e. the Cartesian product of the batches is taken).

Workflow (image)

Results (image)

In this case, the product of the two batches is 15 graphs. Each image of the 3-image batch is used as the base image, and a thumbnail of each tiger is pasted on top of it. We’ll call this “product” batching.

Zipped Batching

The batching API supports “zipped” batches, where the batch collections are merged into a single batch.

For example, imagine two batches of 5 images. As described in the “product” example above, you’d get 5 images * 5 images = 25 graphs. Zipped batching would instead take the first image from each of the two batches and use them together in the first graph, then the second image from each for the second graph, and so on.

Zipped batching is not supported in the UI at this time.
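
Conceptually, the two strategies map onto Python’s itertools.product and zip. A minimal sketch, assuming each batch is a plain list:

    from itertools import product

    batch_a = ["a1", "a2", "a3", "a4", "a5"]
    batch_b = ["b1", "b2", "b3", "b4", "b5"]

    # "Product" batching: one graph per combination -> 5 * 5 = 25 graphs.
    product_runs = list(product(batch_a, batch_b))

    # "Zipped" batching: pair items up positionally -> 5 graphs.
    zipped_runs = list(zip(batch_a, batch_b))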

Versus Iterate Nodes

We support similar functionality to batching with Iterate nodes, so why add batching? In short, Iterate nodes have some technical issues which are addressed by batching.

Why `Iterate` Nodes are Scary

They result in unbounded graph complexity and size. If you don’t know what these words mean, but they sound kinda scary, congrats! You are on the right track. They are indeed scary words.

  • When using Iterate nodes, the graph is expanded with new nodes and edges during execution. Pretty scary.
  • We cannot know ahead of time how much the graph will expand, because iterate nodes’ collections are dynamic. Terrifying.
  • Multiple iterate nodes combine via Cartesian product, resulting in combinatorial explosion. Your graph could still be running at the heat death of the universe. Existential dread.

Batch collections are defined up front and don’t expand the graph. We know exactly the complexity we are getting into before the graph executes. Sanity restored!

Batching is also more intuitive - we run exactly this graph, once for each image.

Unlike Iterate nodes, Image Batch nodes’ collections cannot be provided by other nodes in the graph. The collection must be defined up-front, so you cannot replace Iterate with Image Batch for all use-cases.

Nevertheless, we suggest using batching where possible.

Other Notes

  • We’ve added Image Batch nodes first because images are the highest-impact field type, but the batching API supports arbitrary field types. In fact, the Canvas uses both int and str fields internally. We’ll save nodes for other field types for a future enhancement.
  • If you want to batch over a board, you’ll need to drag all images from the board into the batch collection. We’ll explore a simpler way to use a board’s images for a batch in a future enhancement.
  • It is not possible to combine all outputs from a batch within the same workflow.

Other Changes

Enhancements

  • Support for FLUX IP Adapter v2. We’ve optimized internal handling for v2, and you may find FLUX IP Adapter v1 results are degraded. Update to v2 to fix this.
  • Updated image collection inputs for nodes. You may now drag images into collections directly.
  • Brought some of @dwringer’s often-requested composition nodes into Invoke’s core nodes. They have been renamed to not conflict with your existing install of the node pack. Thanks for your work on these very useful nodes @dwringer!
  • Show tab-specific info in the Invoke button’s tooltip.
  • Update the New from Image context menu actions. The actions that resize the image after creating a new Canvas are clearly named.
  • Change the Reset Canvas button, which was too similar to the Delete Image button, into a menu with more options:
    • New Canvas Session: Resets all generation settings, resets all layers, and enables Send to Canvas.
    • New Gallery Session: Resets all generation settings, resets all layers, and enables Send to Gallery.
    • Reset Generation Settings: Resets all generation settings, leaving layers alone.
    • Reset Canvas Layers: Resets all layers, leaving generation settings alone.
  • New Support Videos button in the bottom-left corner of the app, which lists and links to videos on our YouTube channel.

Fixes

  • Added padding to the metadata recall buttons in the metadata viewer, so they aren’t cut off by other UI elements.
  • The progress bar stopped throbbing in the last release. We apologize for this oversight. Throbbing has been restored.
  • Addressed some edge cases that could cause the UI to crash with an error about an entity not found.
  • Updated grid size for SD3.5 models to 16px. Thanks for the heads up @dunkeroni.

Internal

  • Removed the step_param_easing node, which relied on a GPL-3 dependency (easing-functions). The node has been deprecated; if you were using it, please let us know about your use cases so that we can design better-supported alternatives.

Translations

We have had some issues communicating with “walk-in” translators on Weblate, resulting in translations being changed when they are already correct. To mitigate this, we are trying a more restricted Weblate translation setup. Access to contribute translations must be granted by @Harvester62. Please @ them in the #translators channel on Discord to get access.

Our Weblate also has an account issue and is currently locked. This is unrelated to the access restriction changes.

We apologize for any inconvenience this change may cause. Thanks to all our translators for their continued efforts!

  • Updated Chinese (Simplified). Thanks @youo0o0!
  • Updated Italian. Thanks @Harvester62!
  • Updated Spanish. Thanks gallegonovato (weblate user)!

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.4.1…v5.4.2

v5.4.3

This minor release adds initial support for FLUX Regional Guidance, arrow key nudge on Canvas, plus an assortment of fixes and enhancements.

Changes

Enhancements

  • Add 1-pixel nudge to the move tool on Canvas. Use the arrow keys to make fine adjustments to a layer’s position. Thanks @hippalectryon-0!
  • Change the default infill method from patchmatch to lama. You can still select patchmatch if you prefer it.
  • Add empty state for Global Reference Images and Regional Guidance Reference Images, similar to the empty state for Control Layers. A blurb directs users to upload an image or drag an image from gallery to set the image.
  • FLUX performance improvements (~10% speed-up).
  • Added ImagePanelLayoutInvocation to facilitate FLUX IC-LoRA workflows.
  • FLUX Regional Guidance support (beta). Only positive prompts are supported; negative prompts, reference images and auto-negative are not supported for FLUX Regional Guidance.
  • Canvas layers now have a warning indicator that indicates issues with the layer that could prevent invoking or cause a problem.
  • New Layer from Image functions added to Canvas Staging Area Toolbar. These create a new layer without dismissing the rest of the staged images.
  • Improved empty state for Regional Guidance Reference Images.
  • Added missing New from... image context menu actions: Reference Image (Regional) and Reference Image (Global).
  • Added Vietnamese to language picker in Settings.

Fixes

  • Soft Edge (Lineart, Lineart Anime) Control Layers default to the Soft Edge filter correctly.
  • Remove the nonfunctional width and height outputs from the Image Batch node. If you want to use width and height in a batch, route the image from Image Batch to an Image Primitive node, which outputs width and height.
  • Ensure invocation templates have fully parsed before running studio init actions.
  • Bumped transformers to get a fix for Depth Anything artifacts.
  • False negative edge case with picklescan.
  • Invoke queue actions menu’s Cancel Current action erroneously cleared the entire queue. Thanks @rikublock!
  • New Reference Images could inadvertently have the last-used Reference Image populated on creation.
  • Error when importing GGUF models. Thanks @JPPhoto!
  • Canceling any queue item from the Queue tab also erroneously canceled the currently-executing queue item.

Internal

  • Add redux actions for support video modal.
  • Tidied various things related to the queue. Thanks @rikublock!

Docs

  • General tidy across many docs pages. Thanks @magnusviri!
  • Fixed a few broken links. Thanks @emmanuel-ferdman!

Translations

  • Updated Italian translations. Thanks @Harvester62!
  • Updated Vietnamese translations. Thanks @Linos1391!

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.4.2…v5.4.3

v5.5.0

This release brings support for FLUX Control LoRAs to Invoke, plus a few other fixes and enhancements.

It’s also the first stable release alongside the new Invoke Launcher!

Invoke Launcher ✨

(image)

The Invoke Launcher is a desktop application that can install, update and run Invoke on Windows, macOS and Linux.

It can manage your existing Invoke installation - even if you previously installed with our legacy scripts.

Download the launcher to get started.

Refer to the new Quick Start guide for more details. macOS may not let you run the launcher at first; the guide includes a workaround.

FLUX Control LoRAs

Despite having “LoRA” in the name, these models are used in Invoke via Control Layers - like ControlNets. The only difference is that they do not support begin and end step percentages.

So far, BFL has released Canny and Depth models. You can install them from the Model Manager.

Other Changes

Enhancements

  • Support for FLUX Control LoRAs.

  • Improved error handling and recovery for Canvas, preventing Canvas from getting stuck if there is a network issue during some operations.

  • Reduced logging verbosity when default logging settings are used.

    Previously, all Uvicorn logging occurred at the same level as the app’s logging. This logging was very verbose and frequent, and made the app’s terminal output difficult to parse, with lots of extra noise.

    The Uvicorn log level is now set independently from the other log namespaces. To control it, set the log_level_network property in invokeai.yaml. The default is warning. To restore the previous log levels, set it to info (e.g. log_level_network: info).

Fixes

  • Image context menu actions to create a Regional and Global Reference Image layers were reversed.
  • Missing translation strings.
  • Canvas filters could execute twice. Besides being inefficient, on slow network connections, this could cause an error toast to appear even when the filter was successful. They now only execute once.
  • Model install error when the path contains quotes. Thanks @Quadiumm!

Internal

  • Upgrade docker image to Ubuntu 24.04 and use uv for package management.
  • Fix dynamic invocation values causing non-deterministic OpenAPI schema. This allows us to add a CI check to ensure the OpenAPI schema and TypeScript types are always in sync. Thanks @rikublock!

Translations

  • Updated Italian. Thanks @Harvester62!
  • Updated German. Thanks @rikublock!
  • Updated Vietnamese. Thanks @Linos1391!
  • Updated French. Thanks @Ery4z!

Installing and Updating

As mentioned above, the new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.4.3…v5.5.0

v5.6.0

This release brings major improvements to Invoke’s memory management, new Blur and Noise Canvas filters, and expanded batch capabilities in Workflows.

Memory Management Improvements (aka Low-VRAM mode)

The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.

Despite the focus on low-VRAM GPUs and the colloquial name “Low-VRAM mode”, most users benefit from these improvements to Invoke’s memory management.

Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn’t have enough VRAM to hold full models.

Low-VRAM mode involves 4 features, each of which can be configured or fine-tuned:

  • Partial model loading
  • Dynamic RAM and VRAM cache sizes
  • Working memory
  • Keeping a copy of models in RAM

Most users should only need to enable partial loading by adding this line to their invokeai.yaml:

enable_partial_loading: true

🚨 Windows users should also disable the Nvidia sysmem fallback.

For more details and instructions for fine-tuning, see the Low-VRAM mode docs.
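
For reference, the other three features are tuned from invokeai.yaml as well. A sketch of a fuller config, with setting names as given in the Low-VRAM mode docs (confirm them against the docs for your version):

    enable_partial_loading: true    # stream model weights between RAM and VRAM
    device_working_mem_gb: 3        # working memory reserved for activations and VAE ops
    keep_ram_copy_of_weights: true  # keep a RAM copy for faster model offload/onload
    # max_cache_ram_gb / max_cache_vram_gb: optionally cap the dynamic cache sizes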

Thanks to @RyanJDick for designing and implementing these improvements!

Workflow Batches

We’ve expanded the capabilities for Batches in Workflows:

  • Float, integer and string batch data types
  • Batch collection generators
  • Grouped (aka zipped) batches

Float, integer and string batch data types

There’s a new batch node for each of the new data types. They work the same as the existing image batch node.

(image)

You can add a list of values directly in the node, but you’ll probably find generators to be a nicer way to set up your batch.

Batch collection generators

These are essentially nodes that run in the frontend and generate a list of values to use in a batch node. Included in the release are these generators:

  • Arithmetic Sequence (float, integer): Generate a sequence of count numbers, starting from start, that increase or decrease by step.
  • Linear Distribution (float, integer): Generate a distribution of count numbers, starting with start and ending with end.
  • Uniform Random Distribution (float, integer): Generate a random distribution of count numbers from min to max. You can set a seed for reproducible sequences.
  • Parse String (float, integer, string): Split the input on the specified string, parsing each value as a float, integer or string. You can load the input from a .txt file. Use \n as the split string to split on new lines.

You’ll notice the different handle icon for batch generators. These nodes cannot connect to non-batch nodes, which run in the backend.
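
Functionally, the generators amount to small helpers like these (a sketch of the semantics, not the actual frontend code):

    import random

    def arithmetic_sequence(start, step, count):
        # e.g. start=0, step=2, count=4 -> [0, 2, 4, 6]
        return [start + i * step for i in range(count)]

    def linear_distribution(start, end, count):
        # Evenly spaced values including both endpoints (assumes count >= 2).
        return [start + (end - start) * i / (count - 1) for i in range(count)]

    def uniform_random_distribution(minimum, maximum, count, seed=None):
        rng = random.Random(seed)  # a fixed seed makes the sequence reproducible
        return [rng.uniform(minimum, maximum) for _ in range(count)]

    def parse_string(text, split_on="\n"):
        # Use "\n" as the split string to split on new lines.
        return text.split(split_on)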

Grouped (aka zipped) batches

When you use multiple batches, we run the graph once for every possible combination of values in the batch collections. In mathematical terms, we “take the Cartesian product” of all batch collections.

Consider this simple workflow that joins two strings (image).

We have two batch collections, each with two strings. This results in 2 * 2 = 4 runs, one for each possible combination of the strings. We get these outputs:

  • “a cute cat”
  • “a cute dog”
  • “a ferocious cat”
  • “a ferocious dog”

But what if we wanted to group or “zip” up the two string collections into a single collection, executing the graph once for each pair of strings? This is now possible - we can set both nodes to the same batch group:

(image)

This results in 2 runs, one for each “pair” of strings. We get these outputs:

  • “a cute cat”
  • “a ferocious dog”

You can use grouped and ungrouped batches arbitrarily - go wild! The Invoke button tooltip lets you know how many executions you’ll end up with for the given batch nodes.

Keep in mind that grouped batch collections must have the same size, else we cannot zip them up into one collection. The Invoke button greys out and lets you know when there is a mismatch.

Details and technical explanation

On the backend, we first zip each group’s batch collections into a single collection. Ungrouped batch collections remain as-is.

Then, we take the product of all batch collections. If there is only a single collection (i.e. a single ungrouped batch node, or multiple batch nodes all with the same group), the product operation outputs the single collection as-is.

There are 5 slots for groups, plus a 6th ungrouped option:

  • None: Batch nodes will always be used as separate collections for the Cartesian product operation.
  • Groups 1 - 5: Batch nodes within a given group will first be zipped into a single collection before the Cartesian product operation (see the sketch below).
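
A minimal sketch of the zip-then-product behavior described above (a hypothetical helper, not Invoke’s actual backend code):

    from itertools import product

    def expand_batches(batches):
        """batches: list of (group, collection) pairs; group is None or 1-5."""
        groups, ungrouped = {}, []
        for group, collection in batches:
            if group is None:
                ungrouped.append([(value,) for value in collection])
            else:
                groups.setdefault(group, []).append(collection)
        # First, zip each group's collections into one collection of tuples...
        zipped = [list(zip(*collections)) for collections in groups.values()]
        # ...then take the Cartesian product across all remaining collections.
        return list(product(*(zipped + ungrouped)))

    # Both string nodes in Group 1 -> zipped: 2 runs instead of 4.
    runs = expand_batches([(1, ["a cute", "a ferocious"]), (1, ["cat", "dog"])])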

All Changes

Fixes

  • Fix issue where excessively long board names could cause performance issues.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
  • Fix link to Scale setting’s support docs.
  • Fix image quality degradation when inpainting an image repeatedly.
  • Fix issue where transparent Canvas filter previews blended with the unfiltered parent layer.

Enhancements

  • Support float, integer and string batch data types.
  • Add batch data generators.
  • Support grouped (aka zipped) batches.
  • Reduce peak memory during FLUX model load.
  • Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!
  • Reworked error handling when installing models from a URL.
  • Updated first run screen and OOM error toast with links to Low-VRAM mode docs.
  • Add a small handful of nodes designed to support inpainting in workflows. See #7583 for more details and an example workflow.

Internal

  • Tidied some unused variables. Thanks @rikublock!
  • Added typegen check to CI pipeline. Thanks @rikublock!

Docs

  • Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
  • Updated installation-related docs (quick start, manual install, dev install).
  • Add Low-VRAM mode docs.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you already have the launcher, you can use it to update your existing install.

We’ve just updated the launcher to v1.3.2. Review the launcher releases for a changelog. To update the launcher itself, download the latest version from the quick start guide - the download links there are kept up to date.

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.5.0…v5.6.0

v5.6.1

This release includes a handful of minor improvements and fixes.

  • Improvements to memory management defaults, resulting in fewer OOMs.
  • Expanded FLUX LoRA compatibility.
  • On-demand model cache clearing via button on the Queue tab.
  • Canvas Adjust Image filter (i.e. levels, hue, etc). Thanks @dunkeroni!
  • Button to cancel all queue items except current. Thanks @rikublock!
  • Copy Canvas/Bbox as image via Canvas right-click menu.
  • Paste image into Canvas/Bbox via normal paste hotkey. You will be prompted for where the image should be placed.
  • Allow Collect nodes to be connected directly to Iterate nodes.
  • Allow Any type node inputs to accept collections. For example, the Metadata Item node’s value field now accepts collections.
  • Improved error messages when invalid graphs are queued.
  • LoRA Loader node LoRA collection input is now optional, supporting @skunkworxdark’s metadata nodes. Thanks @skunkworxdark!
  • Fixed issues where staging area got stuck if one image failed to load (e.g. if it was deleted).
  • Updated translations. Thanks @Harvester62, @Linos1391, @rikublock, @Ery4z!

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.6.0…v5.6.1

v5.6.2

This minor release includes the following enhancements and fixes:

  • Make the Upscaling tab’s Scheduler and CFG Scale settings independent from the Canvas tab. We’ve found that the best Scheduler and CFG Scale settings for Canvas rarely work well for Upscaling, and vice-versa. Separating the settings prevents your Canvas settings from causing bad upscale results.
  • Fixed issue with Multiply Image Channel node loading images with different channel counts. Thanks @dunkeroni!
  • Fixed typos in docs. Thanks @maximevtush!
  • Fixed issue where the app scrolls out of view, especially when using the launcher. Again. Hopefully.
  • Update internal build toolchain dependencies.
  • Updated translations. Thanks @Harvester62, @Linos1391, @Ery4z!

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.6.1…v5.6.2

v5.7.0

This release upgrades the Workflow Editor’s Linear View to a more fully-featured Form Builder. It also includes many other fixes and enhancements, including the adoption of @skunkworxdark’s excellent metadata nodes into Invoke’s core nodes.

The launcher has recently been updated to v1.4.1, fixing a minor memory leak.

Form Builder

Nodeologists may now create more sophisticated UIs for their workflows using the Form Builder. This replaces the older Linear View feature.

In addition to Node Fields, you may add Heading, Text, Container and Divider elements to the form. Some form elements are configurable. For example, Containers support row or column layouts, and certain Node Field types can render different UI components.

Here’s a brief demo of the Form Builder, touching on the core functionality (video).

Your existing workflows with the Linear View fields will automatically be migrated to the new format.

We’ll be iterating on the Form Builder and extending its capabilities in future updates.

Other Changes

@skunkworxdark’s Metadata Nodes ship with Invoke

We are pleased to bring this popular node pack into the core Invoke repo! Thanks to @skunkworxdark for allowing us to adopt these nodes, and for their continued support of the project.

After you update to v5.7.0, if you have the node pack installed as custom nodes, you will see an error on startup saying that you already have these nodes installed. Everything should work fine - but you’ll need to delete the node pack to get rid of the error.

Enhancements

  • Increase default VAE tile size to 1024, reducing “grid” artifacts in images generated on the Upscaling tab.
  • Failed or canceled queue items may be retried via the queue tab.
  • Canvas color picker now supports transparency.
  • Canvas color picker shows RGBA values next to it.
  • Minor redesign/improved styles throughout the Workflow Editor.
  • When attempting to load a workflow while you have unsaved changes, a dialog will appear asking you to confirm. Previously it would just load the workflow and you’d lose any unsaved work.
  • When a node has an invalid field, its title will be error-colored.
  • Less ginormous image field component in nodes.
  • Node fields now have editable descriptions.
  • Double-click a node to zoom to it.
  • Click the bullseye icon in a Form Builder node field to zoom to the node.
  • ❗Minor Breaking Change: Board fields now have an Auto option in the drop-down. When set to Auto, the auto-add board will be used for that board field. Auto is the new default. Workflows that previously had None (Uncategorized) selected will now have Auto selected.
  • Add Dynamic Prompts (Random) and Dynamic Prompts (Combinatorial) modes to the String Generator node.
  • Add Image Generator node with Images from Board mode. Select a board and category to run a batch over its images.

Fixes

  • Canvas mouse cursor disappears when certain layer types and tools are selected.
  • Canvas color picker doesn’t work when certain layer types are selected.
  • Sometimes mask layers don’t render until you zoom or pan.
  • When using shift-click to draw a straight line, if the canvas was moved too much between the clicks, the line got cut off.
  • Incorrect node suggestions when dropping an edge into empty space.
  • When loading a workflow with fields that reference unavailable models, the fields were not always reset correctly.
  • If an image collection field referenced images that were deleted, it was impossible to delete them without emptying the whole collection.
  • Lag/stutters in the Add Node popover.
  • When deleting a board and its images, we didn’t check if any of the deleted images were used in an image collection field, potentially leading to errors when attempting to use a nonexistent image.

Internal

  • Upgraded reactflow to v12. This major release provides no new user-facing features, but does bring improved performance.
  • Upgraded @reduxjs/toolkit to latest. A new utility allows for more efficient cache management and yields a minor perf improvement to gallery load times.
  • Numerous performance improvements throughout the workflow editor. Many code paths were revised and components restructured to improve performance. Some CSS transitions were disabled for performance reasons.
  • Substantial performance improvement for batch queuing logic (i.e. the stuff that happens between clicking Invoke and the progress bar starts moving).
  • Improved custom node loading. For each node pack, if an error occurs while loading it, importing of that pack’s nodes will stop and Invoke will skip to the next node pack. This may result in only some nodes from a pack loading, but the app will still run. Previously, any error prevented Invoke from starting up.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

The launcher has recently been updated to v1.4.1, fixing a minor memory leak.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.6.2…v5.7.0

v5.7.1

Enhancements

Improved workflow usability

  • Fixed an issue where descriptions were cut off, and increased spacing between node fields in the form builder.
  • Auto-linking was added to headings, text elements, workflow descriptions, and node field descriptions.
  • The workflow menu was restructured: the “New Workflow” button on the left panel was replaced with a workflow menu, and the old menu location now serves as a button to open workflow settings.
  • “New Workflow” button was added to the workflow library list for easier access.

Updated Translations

Big thanks to @hironow and @Ery4z!

Fixes

  • Fixed an issue where the Invoke button on the Canvas tab did not display a loading spinner due to the request being reset too early, preventing RTKQ from tracking the loading state.
  • The enqueue request is now awaited before resetting tracking, ensuring proper feedback.
  • Additional logging messages were added to provide consistent JS console logs across tabs.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.7.0…v5.7.1

v5.7.2

This release adds a setting to reduce peak VRAM usage and improve performance, plus a few other fixes and enhancements.

Memory Management Improvements

By default, Invoke uses pytorch’s own memory allocator to load and manage models in VRAM. CUDA also provides a memory allocator, and on many systems, the CUDA allocator outperforms the pytorch allocator, reducing peak VRAM usage. On some systems, this may improve generation speeds.

You can use the new pytorch_cuda_alloc_conf setting in invokeai.yaml to opt-in to CUDA’s memory allocator:

pytorch_cuda_alloc_conf: "backend:cudaMallocAsync"

If you do not add this setting, Invoke will continue to use the pytorch allocator (same as it always has).

There are other possible configurations you can use for this setting, dictated by pytorch. Refer to the new section in the Low-VRAM mode docs for more information.
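
This setting maps onto PyTorch’s standard PYTORCH_CUDA_ALLOC_CONF mechanism, so if you run Invoke manually rather than via the launcher, setting the equivalent environment variable before launch should likely behave the same way:

    PYTORCH_CUDA_ALLOC_CONF="backend:cudaMallocAsync"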

Other Changes

  • You may now upload WEBP images to Invoke. They will be converted to PNGs for use within the application. Thanks @keturn!
  • Added “pull bbox” button to the Regional and Global Reference Image layer’s empty state.
  • More conservative estimates for VAE VRAM usage. This aims to reduce the slowdowns and OOMs on the VAE decode step.
  • Fixed “single or collection” field type rendering in the Workflow Editor. This was preventing fields like IP Adapter’s images and ControlNet’s control weights from displaying a widget.
  • Fixed the download button in the Workflow Library list, which was downloading the active workflow instead of the workflow for which the button was clicked.
  • Loosened validation for ControlNet begin and end step percentages. Thanks @JPPhoto!
  • Enqueuing a batch (i.e. what happens when you click the Invoke button) is now a non-blocking operation, allowing the app to be more responsive immediately after clicking Invoke. To enable this improvement, we migrated from using a global mutex for DB access with long-lived SQLite cursors to WAL mode with short-lived SQLite cursors (see the sketch after this list). This is expected to afford a minor (likely not noticeable) performance boost in the backend in addition to the responsiveness improvement.
  • Smaller docker builds. Thanks @keturn!
  • Updated translations. Thanks @Harvester62 @Linos1391 @rikublock!
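
For the curious, WAL mode is a one-line pragma in SQLite. A minimal sketch (database path and table name are illustrative):

    import sqlite3

    conn = sqlite3.connect("invokeai.db")
    # In WAL mode, readers no longer block the writer (and vice versa), so
    # short-lived cursors can run concurrently instead of serializing behind
    # a global mutex.
    conn.execute("PRAGMA journal_mode=WAL;")

    cursor = conn.execute("SELECT count(*) FROM images")  # short-lived cursor
    print(cursor.fetchone())
    cursor.close()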

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.7.1…v5.7.2

v5.8.0

This release introduces an upgraded Workflow Library and FLUX Redux support, among other fixes and enhancements.

Workflow Library

We’ve redesigned the Workflow Library to provide a smoother interface for browsing workflows.

  • Larger modal to display workflows in a grid
  • Browse by tag (default workflows only)
  • Search by name/description/tags
  • Opened At works correctly
  • Workflows may have thumbnails

FLUX Redux

This release includes support for FLUX Redux in Workflows and Canvas.

FLUX Redux is an add-on model for FLUX. It works similarly to IP Adapter or an “instant” LoRA, where an input image guides the generation’s style and composition. It can provide some degree of character consistency.

To use it on Canvas, add a Global Reference Image layer and drag a reference image onto the layer - same as you would for IP Adapter - and select the FLUX Redux from the model drop-down.

You can also use it in Regional Guidance layers. Add a Reference Image to the layer and select FLUX Redux from the model drop-down.

Other Changes

  • You may override the min and max constraints for float and integer fields added to the Form Builder. This is useful when fields are set to render as sliders and/or to add guardrails to your form fields.
  • Support for uploading WEBP images. They are converted to PNG after uploading.
  • Improvements to workflow loading, including checks on every load to ensure unsaved changes are not lost.
  • Fixed an issue where workflows were not marked as having unsaved changes when their forms were edited.
  • Form Builder text and heading elements render line breaks correctly.
  • Fixed issue where some Form Builder elements didn’t fill their containers correctly.
  • Invalid node fields now display errors in the field’s tooltip.
  • Fixed issue where duplicate edges could be created when re-connecting an existing edge.
  • Focused UI regions are highlighted (configurable in Settings, off by default). Thanks @joshistoast!
  • Updated the display names of model-specific nodes and default workflows to include the model. For example, Main Model Loader is now Main Model - SD1.5.
  • Internal changes to custom node loading.
  • Updated translations. Thanks @rikublock @Linos1391 @Harvester62!

You can download the FLUX Redux models from the Starter Models tab in the Model Manager.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.7.2…v5.8.0

v5.8.1

This release fixes a bug with retry functionality that could result in an endless loop of errors.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.8.0…v5.8.1

v5.9.0

This release adds FLUX Fill support in Workflows and Canvas, beta support for the LLaVA OneVision VLLM family of models, and a selection of minor fixes and enhancements.

FLUX Fill

FLUX Fill provides high quality inpainting and outpainting, improving on these tasks over the other FLUX models. It’s a “main” model, like FLUX dev or schnell.

To use it, download it from Starter Models and then select it from the main model drop-down on Canvas. It’s not compatible* with Text to Image or Image to Image - you’ll get an error if you try to Invoke without an inpaint mask or some empty regions in your bbox.

*Technically, it can do Text to Image and Image to Image - but the quality is very poor. We’ve opted to disallow this on Canvas.

LLaVA OneVision VLLM

This multimodal model generates text from text, image and/or video* inputs. You can use it to generate prompts and describe images. You can use it in Workflows with the LLaVA OneVision VLLM node.

The 0.5B variant of the model is available for download from Starter Models.

*Invoke does not support video inputs.

Other Changes

  • Support for custom string field drop-downs in Workflow Builder. Add a node’s string field to the Builder and choose the dropdown component to see it in action.
  • The About modal now shows the app’s runtime settings. It includes a list of explicitly-set settings (i.e. the contents of invokeai.yaml), so it is possible to see which runtime settings are app defaults and which are user-defined.
  • Improved UX for missing or unexpected fields in Workflows.
  • De-wonkified LoRA node names (they got wonkified in v5.8.0).
  • Better error messages when scanning models with picklescan.
  • Fixed issue where shift-clicking to draw on Canvas ignored Clip to Bbox setting.
  • Fixed issue with Image Viewer where the image could overflow the viewer.
  • Fixed overflow with looooong node titles.
  • Fixed a minor visual bug in string generator nodes.
  • Internal: First iteration of improved model probing API.
  • Internal: Improved testing system for model-related tests.
  • Internal: Port LLaVA OV models to use new API.
  • Internal: Cleaned up a lot of model-related code.
  • Internal: Support hot reload for custom nodes. Thanks @keturn!
  • Updated translations. Thanks @rikublock @Harvester62 @Linos1391!

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.8.1…v5.9.0

v5.9.1

This release includes bugfixes and internal changes.

Changes

  • Enhancement: Disable the denoising strength slider for FLUX Fill, which ignores the strength parameter.
  • Fix: Error when mask blur is set to 0.
  • Fix: Issue with inpaint/outpainting where the output images were not masked correctly, causing what should be transparent areas (i.e. alpha 0/255) to be very slightly not-transparent (i.e. alpha 1/255). This threw off layer bounds calculations and caused gradual degradation across repeated inpainting/outpainting operations in unmasked areas.
  • Fix: Error when installing certain FLUX finetunes.
  • Internal: Continued iteration on model manager’s internal API.
  • Internal: CI workflows now use uv, dropped nonfunctional CUDA/ROCm workflows (we only have CPU runners anyways).

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.9.0…v5.9.1

v5.10.0

This release focuses on internal improvements with a number of enhancements and fixes.

The biggest enhancement is support for CogView4, a permissively-licensed model that is fairly close to FLUX in terms of quality.

🚨 Achtung! 🚨

There are important installation notes to be aware of in this release, which includes major updates to Invoke’s core components.

  1. You must use the latest installer/launcher (v1.5.0). If you’re using an older launcher version, the update may fail.

    To fix this, download the latest installer/launcher from https://invoke.com/downloads.

  2. If the installation fails, use repair mode to fix it.

    The installation may fail due to Python environment conflicts; you’ll see related error messages in the install logs.

    To fix this, retry the installation with repair mode enabled, which will reinstall the bundled Python and resolve most installation issues.

    Enable repair mode by ticking the repair mode checkbox on the Review step of the install, then click Install.

  3. Form Builder reset on first launch.

    When you start Invoke for the first time after updating to v5.10.0, your Form Builder will be reset, losing any unsaved changes.

    Before updating, save your current workflow. After updating, re-load it manually.

Python 3.12 & PyTorch 2.6.0 support

Invoke now supports Python 3.12 and PyTorch 2.6.0. Many major dependencies have also been bumped to their latest versions.

Changes

Enhancements

  • Support for CogView4 in Canvas and Workflows. Like FLUX, it works best with detailed, narrative prompts. You can download the model from the Starter Models tab in the Model Manager. It’s pretty chunky at ~30GB overall, with similar hardware requirements to FLUX.
  • Save Canvas/Bbox to Gallery buttons now save basic metadata with the image (prompts, model, seed).
  • Models now have their file sizes recorded and displayed in the Model Manager. Thanks @keturn!
  • New capabilities for FLUX Redux to control how much it influences the generation. On Canvas, this is controlled by the new Image Influence setting for both Global and Regional Reference Images. There are more controls in Workflows. Thanks @skunkworxdark!
  • Added nodes to convert metadata into collection types. Thanks @skunkworxdark!
  • Improved undo/redo on Workflows.
  • Updated docs. Thanks @chantellmocha!
  • Updated translations. Thanks @rikublock @Harvester62 @Linos1391 @RyoK0220!

Fixes

  • Fixed error when loading workflows that have invalid edges. This can occur if an installation is missing a custom node.
  • When left/right arrow keys are pressed while focused on a tab UI element, do not switch between images.
  • Restored missing “Using torch device” message that should display on startup.
  • ONNX models (e.g. DW OpenPose) now have their sizes calculated correctly. This fixes an issue where these models didn’t work fully with the model manager.
  • Fixed issue where the Canvas Color Picker didn’t grab alpha values correctly.
  • Fixed Canvas layer drop indicator line color (was bright red).
  • Send to Canvas image actions now work when Canvas is uninitialized. For example, if the UI loads on the Workflows tab and the user has not yet clicked the Canvas tab, the Canvas will not be initialized.
  • Increased padding when fitting layers to canvas to prevent the floating tool panel and other buttons from covering up the edges of the layers.
  • Fixed issue where, after a Canvas reset, if no prompt is entered, generating will re-use the prompt that was last used before the reset.
  • Fixed issue where some network queries weren’t reset correctly. This could have caused a minor memory leak.

Internal

  • Support for python 3.12. This necessitates the use of repair mode during installation, as described in the 🚨 callout above.
  • Bump many dependencies to latest, including torch.
  • Remove many unused dependencies.
  • Remove legacy scripts from the codebase.
  • Ported LoRA model configs to the new classification API. This is an internal change.
  • Merged workflow Form Builder and Node Editor state and logic. Undo/redo on the Workflows tab now works for both Node Editor and the Form Builder, and the way actions are grouped in the undo/redo history is improved. This causes the loss of Form Builder state on first run, as described in the 🚨 callout above. Unfortunately, there’s no way to prevent this data loss without significant effort.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.9.1...v5.10.0

v5.10.1

🚨 Achtung! 🚨

If you already updated to v5.10.0, you can skip this section. If you are on v5.9.1 or older, please review this section before updating.

There are important installation notes to be aware of in this release, which includes major updates to Invoke’s core components.

  1. You must use the latest installer/launcher (v1.5.0). If you’re using an older launcher version, the update may fail.

    To fix this, download the latest installer/launcher from https://invoke.com/downloads.

  2. If the installation fails, use repair mode to fix it.

    The installation may fail due to Python environment conflicts, which show up as errors in the install log.

    To fix this, retry the installation with repair mode enabled, which will reinstall the bundled Python and resolve most installation issues.

    Enable repair mode by ticking the repair mode checkbox on the Review step of the install, then click Install.

  3. Form Builder reset on first launch.

    When you start Invoke for the first time after updating to v5.10.0, your Form Builder will be reset, losing any unsaved changes.

    Before updating, save your current workflow. After updating, re-load it manually.

Changes

  • Support partial loading for LLaVA and SigLIP (FLUX Redux) models, reducing VRAM requirements for users with Nvidia GPUs.
  • Reduce peak CPU RAM usage during initial load of LLaVA and SigLIP models. This allows users with at least 24GB CPU RAM to run the LLaVA 7B model without crashing during load. With partial loading now working for the model, most users should be able to run the model - though it can take a few minutes if you don’t have a GPU with 24GB VRAM.
  • Revert a recent change to model installation, which could result in some models being misidentified as LoRAs.
  • The data viewer component, used to display JSON (e.g. metadata, workflows, node outputs), now wraps lines.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.10.0...v5.10.1

v5.11.0

This release’s largest change is a new and improved model drop-down component.

🚨 Achtung! 🚨

If you already updated to v5.10.0, you can skip this section. If you are on v5.9.1 or older, please review this section before updating.

There are important installation notes to be aware of in this release, which includes major updates to Invoke’s core components.

  1. You must use the latest installer/launcher (v1.5.0). If you’re using an older launcher version, the update may fail.

    To fix this, download the latest installer/launcher from https://invoke.com/downloads.

  2. If the installation fails, use repair mode to fix it.

    The installation may fail due to Python environment conflicts, which show up as errors in the install log.

    To fix this, retry the installation with repair mode enabled, which will reinstall the bundled Python and resolve most installation issues.

    Enable repair mode by ticking the repair mode checkbox on the Review step of the install, then click Install.

  3. Form Builder reset on first launch.

    When you start Invoke for the first time after updating to v5.10.0, your Form Builder will be reset, losing any unsaved changes.

    Before updating, save your current workflow. After updating, re-load it manually.

Changes

  • New model drop-down component, aiming to improve the experience of selecting models. It’s currently enabled only for the main model drop-down.
  • Added button to reset an existing HF token to the Model Manager tab.
  • Support for FLUX LoRAs trained in invoke-training.
  • Nodes that output images, including nodes that output image collections, should always update the gallery.
  • Fixed issue where drag-and-drop didn’t scroll when used in a scrollable container (for example, when you have a lot of layers or form builder elements).
  • Internal: Updated frontend dependencies.
  • Internal: Optional output_meta field added to BaseInvocationOutput. This field is not currently exposed in the Workflow Editor. In the future, it may be exposed to facilitate attaching additional metadata to invocation outputs.
  • Internal: Support code for generation via Imagen3/ChatGPT 4o. These API models are currently unavailable in the Community Edition, but we may be able to change that in the future.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.10.1...v5.11.0

v5.12.0

This release includes support for Nvidia 50xx GPUs, a way to relate models (e.g. LoRAs with a specific main model), new IP Adapter methods and other smaller changes.

Changes

  • Bumped PyTorch dependency to v2.7.0, which means Invoke now supports Nvidia 50xx GPUs.
  • New model relationship feature. In the model manager tab, you may “link” two models. At this time, the primary intended use case is to link LoRAs to main models. When you have the main model selected, the linked LoRAs will be at the top of the LoRA list. Thanks @xiaden!
  • New IP Adapter methods Style (Strong) and Style (Precise). The previous style method is renamed to Style (Simple). Thanks @cubiq!
  • Fixed GGUF quantization on MPS. Thanks @Vargol!
  • Updated translations. Thanks @Harvester62 @rikublock @Linos1391 @RyoK0220!
  • Internal: Invocation model changes, which aim to reduce occurrences of ValidationError errors.
  • Internal: Addressed pydantic deprecation warning.
  • Internal: Re-enabled new model classification API with safeguards.

🚨 Stricter Rules for Nodes, including Custom Nodes

This section is for node authors, whose nodes may be affected by the stricter rules.

Default values for node fields are now validated as the app starts up.

For example, this node defines my_image as an ImageField, but it provides a default value of None, which is not a valid ImageField:

@invocation("my_invocation")
class MyInvocation(BaseInvocation):
my_image: ImageField = InputField(default=None)
def invoke(self, context: InvocationContext) -> ImageOutput:
...

This node will error on app startup:

# 😱 Error on startup!
InvalidFieldError: Default value for field "my_image" on invocation "my_invocation" is invalid, 1 validation error for MyInvocation
my_image
Input should be a valid dictionary or instance of ImageField [type=model_type, input_value=None, input_type=NoneType]

There are two ways to fix this, depending on the node author’s intention.

1. If the field is truly optional, update the type annotation.

Using the example invocation from above, make the type annotation for my_image a union with None:

@invocation("my_invocation")
class MyInvocation(BaseInvocation):
my_image: ImageField | None = InputField(default=None)
def invoke(self, context: InvocationContext) -> ImageOutput:
...

ImageField | None is equivalent to Optional[ImageField]. Either works.

2. If the field is not optional, remove the default or provide a valid default value.

Using the example invocation from above, simply remove default=None:

@invocation("my_invocation")
class MyInvocation(BaseInvocation):
my_image: ImageField = InputField()
def invoke(self, context: InvocationContext) -> ImageOutput:
...

This next node has an integer field that must be greater than 10, but the provided default value is 5. This will error:

@invocation("my_other_invocation")
class MyOtherInvocation(BaseInvocation):
my_number: int = InputField(default=5, gt=10)
def invoke(self, context: InvocationContext) -> IntegerOutput:
...

Either remove the default, or provide a default value greater than 10.
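
For reference, here’s what a corrected version might look like (a minimal sketch; the default of 15 is just an arbitrary value that satisfies the constraint):

@invocation("my_other_invocation")
class MyOtherInvocation(BaseInvocation):
    # Any default greater than 10 satisfies the gt=10 constraint.
    my_number: int = InputField(default=15, gt=10)

    def invoke(self, context: InvocationContext) -> IntegerOutput:
        ...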

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.11.0...v5.12.0

v5.13.0

This release adds advanced Inpainting mask controls and a selection of other minor enhancements.

Changes

  • Canvas Inpaint Masks have additional per-mask settings. Enable them via right-click menu on the mask layer. Thanks @dunkeroni for working on these very useful features!
    • Noise Level adds image-space noise to the masked region before it is denoised. This can add natural variation and detail to the region. The added noise is generated using the global seed parameter as the RNG seed (see the sketch after this list).
    • Denoise Limit caps the amount of denoising done on the masked region. You can inpaint multiple regions of the image simultaneously, but with different amounts of variation. This greatly simplifies a workflow where you want to make variations on an image, but want different parts of the image to vary more or less.
  • When selecting aspect ratios, give special handling to SDXL’s trained sizes to reduce artifacts. Thanks @dunkeroni!
  • Improved Canvas scroll-to-zoom handling, including smoother scaling on touchpads and snapping to common zoom levels.
  • Added button to pull the bbox content into an empty Control Layer.
  • Added ability to delete all images from the Uncategorized board via button in its right-click menu.
  • Prompt boxes remember their size.
  • Support installing HF repo subfolders via Model Manager’s HuggingFace tab.
  • Faster Heuristic Resize algorithm, used in New Layer from Image (Resize) functionality.
  • Allow LoRA patcher to skip unknown layers instead of erroring. Thanks @keturn!
  • Log a warning when a node has an unregistered output class.
  • Updated Compel to get better handling for long prompts.
  • Updated translations.
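
To illustrate what the Noise Level setting does conceptually, here is a minimal sketch of adding seeded, image-space noise to a masked region (illustrative only - not Invoke’s actual implementation; the array shapes and noise scale are assumptions):

import numpy as np

def add_masked_noise(image: np.ndarray, mask: np.ndarray, noise_level: float, seed: int) -> np.ndarray:
    # image: (H, W, 3) uint8; mask: (H, W), nonzero where noise is applied.
    # Using the global seed makes the added noise reproducible.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_level * 255.0, image.shape)
    noisy = np.clip(image.astype(np.float64) + noise, 0, 255)
    # Keep original pixels outside the mask, noisy pixels inside.
    return np.where(mask[..., None] > 0, noisy, image).astype(np.uint8)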

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.12.0...v5.13.0

v5.14.0

Changes

  • Fix error when using inpainting models. Thanks @dunkeroni!
  • Fix issue where Canvas didn’t fit layers to the viewport correctly.
  • When fitting layers, Canvas animates the transition to the new zoom and scale to make it less jarring.
  • During internal Canvas operations like compositing, a small spinner renders in the bottom-right corner of the Canvas to indicate that it is indeed doing something.
  • Updated translations. Thanks @Harvester62 @RyoK0220 @hironow!

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.13.0...v5.14.0

v5.15.0

Changes

  • Support for AI Toolkit FLUX LoRAs. Thanks @keturn!
  • Fixed AttributeError: module 'cv2.ximgproc' has no attribute 'thinning' error, which could occur when using Control Layers.
  • Added SDXL IP Adapter Plus to starter models.
  • Gallery search supports image creation dates. Thanks @dunkeroni!
  • Improved JSON formatting. Thanks @j-brooke!
  • Fixed (hopefully) a rare ValidationError during generation/dequeuing, as seen in #7950.
  • Support for CUDA devices in slots 2 and above. Thanks @heathen711!
  • Internal: Use warning instead of warn, fixing deprecation message. Thanks @emmanuel-ferdman!
  • Improved memory usage behaviour, reducing peak RAM usage due to memory fragmentation.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.14.0...v5.15.0

v6.0.0

Invoke’s next major release, 6.0, brings many new features plus an improved UI and user experience.

Core UI Enhancements

We’ve retired the 5.0 layout and introduced the foundations for a task-specific, flexible, and persistent interface. It should still feel very familiar!

  • Launchpads for Guided Experience: New to a feature? The Generate, Upscale, Workflow, and Model tabs now feature “Launchpads” to guide you through getting started.

  • Customizable Layout: Drag, drop, and resize panels to create the workspace that works for you on each tab. Your custom layout, including your last panel used, is saved automatically, and we have plans to expand customization options in the future for power users.

  • Reference Image Prompt Zone: Updated and intuitive global reference image management, now in the left-hand settings panel.

  • Improved Performance: Tabs now load on demand, resulting in a faster and more responsive experience within each tab.

  • The Infinite Scroll is back, and now shows all images at once in a single grid. We anticipate that the loss of pagination will please some and anger others. We’re evaluating ways to bring back simple chunk-based navigation. Share feedback as you use it!

Enhanced Canvas Experience

The Canvas has been enhanced further to continue to serve as a powerhouse for artists and creators, with a host of new features and workflow improvements.

  • Rule of Thirds Guide: Enable a composition guide to help frame your creations. Enabled in Canvas settings.

  • Bounding Box to Reference Image: Instantly create a new reference image from the contents of the bounding box, using the button next to the Reference image drop zone.

  • New “Edit” button in image viewer, creating a new canvas with the selected image.

  • New “Control Layer Resized” drop zone for adding an automatically optimized control layer to an existing canvas.

  • Overlay Control - Toggle the visibility of all non-raster layers (like Control Layers and Inpaint Masks) with a single click or the Shift+H hotkey.

  • Improved staging toolbar and navigation, with preview image controls available directly on the toolbar.

  • Save All Canvas Generations: A new option allows you to automatically save every image generated on the canvas to your gallery. Available in the Canvas settings.

  • Export Canvas to PSD (Photoshop) - Accessed in the Raster Layers header - You can now directly export your canvas, with all raster layers intact, to a .psd file for seamless integration with Adobe Photoshop and other compatible image editors.

Other Features and Improvements

  • Full UI integration for FLUX Kontext Dev, allowing you to guide generations with a reference image. This includes support for the dev and quantized (gguf) variants. (Note: No support currently for fp8)

  • LoRA Picker Overhaul: The LoRA list now uses the new model picker, automatically filtering for LoRAs compatible with your selected base model, with the option to display preview images and related models configured within the Model Manager.

  • OMI LoRA Support and additional AI Toolkit LoRA support for FLUX.

  • Countless bug fixes, enhancements and performance improvements.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to pytorch, users on older GPUs (2XXX series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs as torch updates are released within the installer, but in the meantime users have found success manually downgrading torch. Head to Discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v5.15.0...v6.0.0

v6.0.1

This patch release fixes a number of bugs.

Check out the v6.0.0 release notes if you haven’t already! It’s a big one.

Changes

  • Fix an issue that could result in images getting stuck as placeholders.
  • Fix an issue where you could drag a panel tab and end up with stacked panels.
  • Fix an issue w/ certain languages hard-crashing the UI.
  • Render the staging area in a virtualized list to prevent slowdowns when many images are staged.
  • Alter the request frequency and prefetching logic for gallery to reduce network requests during scrolling, but keep the same UX.
  • Clearer error message when model probing fails.
  • Do not attempt to download models when there isn’t enough disk space for them.
  • Potential fix for rare UI state persistence issues.
  • Introduce global, thread-safe locking for all DB operations (see the sketch after this list). We hope that this will fix these errors:
    • sqlite3.InterfaceError: bad parameter or other API misuse
    • pydantic_core._pydantic_core.ValidationError: 1 validation error for GraphExecutionState
      JSON input should be string, bytes or bytearray [type=json_type, input_value=None, input_type=NoneType]
      For further information visit https://errors.pydantic.dev/2.11/v/json_type
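
Conceptually, global thread-safe locking looks something like this minimal sketch (illustrative only - not Invoke’s actual code; the names are assumptions):

import sqlite3
import threading

# A single lock shared by every DB operation ensures only one thread
# touches the connection at a time, avoiding sqlite3 API-misuse errors.
_db_lock = threading.Lock()
_conn = sqlite3.connect("invokeai.db", check_same_thread=False)

def execute(sql: str, params: tuple = ()) -> list:
    with _db_lock:
        return _conn.execute(sql, params).fetchall()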

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.0.0...v6.0.1

v6.0.2

This patch release fixes drag-and-drop from the gallery.

Check out the v6.0.0 release notes if you haven’t already! It’s a big one.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.0.1...v6.0.2

v6.1.0

This minor release includes a handful of fixes and enhancements.

Check out the v6.0.0 release notes if you haven’t already! It’s a big one.

In Invoke v5, we had a toggle near the Invoke button that let you choose either Send to Gallery or Send to Canvas. Here’s what it looked like in v5:

(Screen recording: the v5 toggle next to the Invoke button.)

  • Send to Gallery: Generations go directly to the Gallery. The Staging Area is bypassed completely. You can change settings, Invoke, change more settings, Invoke again, and so on, building up a large queue of generations.
  • Send to Canvas: Generations go to the Canvas Staging Area. You cannot change some settings until you accept or discard all pending images.

This toggle was a major stumbling block for new users, causing a lot of confusion. It was removed in v6, replaced by a Save All Images to Gallery setting. This new setting didn’t work the same as Send to Gallery. Yes, it saved images to the Gallery, but it didn’t bypass the Staging Area - and Canvas would still be locked down.

We received a ton of feedback that the Send to Gallery mode enabled a critical workflow for many users. In v6.1.0, we are restoring that functionality with what we hope to be a less confusing UX.

The Canvas Save All Images to Gallery setting now replicates Send to Gallery mode. Generations queued up while the setting is enabled will bypass the Canvas Staging Area entirely. The Canvas isn’t locked down when you generate like this.

We renamed the setting to Save New Generations to Gallery to better describe what it does.

You’ll see an alert on Canvas when it is enabled.

Enhancements

  • Added hotkey to star/unstar images (.). You must be focused in the Gallery to use the hotkey.
  • Added button and hotkey (shift+b) to fit the Canvas Bbox to visible Inpaint Masks, with padding to account for mask blur.
  • Added button and hotkey (shift+v) to invert the selected Inpaint Mask layer.
  • Added auto-layout functionality to the Workflow Editor to reposition nodes based on a configurable graph layout algorithm. It’s in the bottom-left column of buttons. Thanks @skunkworxdark !
  • When importing LoRAs, Invoke checks for a metadata file and image alongside the LoRA. If present, we parse the metadata and copy the image in.
  • Expose Tile Size, Tile Overlap and Tile Control model in Upscaling tab.
  • Show related embeddings in prompt trigger menu.
  • Update model picker styling.
  • Style nodes when they have errors or warnings in workflow editor.
  • Improved performance in workflow editor.
  • Updated translations. Thanks @Linos1391 @Harvester62 @RyoK0220 @rikublock!

Fixes

  • Ignore disabled ref images when determining whether the user can click Invoke.
  • Aspect ratios out of order.
  • Error when uploading image with uppercase file extensions (e.g. .JPG vs .jpg).
  • Prevent dragging on empty space in workspace tabs, which can bork the layout.
  • Canvas Staging Area auto-switch could fail when too many images were in the list, plus other minor Staging Area jank.
  • SDXL Negative Style Prompt not recorded in metadata.
  • Unstyled error boundary screen.
  • Rare error encountered during rehydration of UI state.

Internal/Dev

  • Fix docker UI build.
  • Updated manual install docs for v6. Thanks @JPPhoto!
  • Export FLUX Conditioning classes from package. Thanks @JPPhoto!
  • Upgraded many frontend packages.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.0.2...v6.1.0

v6.2.0

This minor release includes a handful of fixes and enhancements.

Check out the v6.0.0 release notes if you haven’t already! It’s a big one.

Enhancements

  • Restored Cancel and Clear All functionality, which was removed in v6. The button for this is in the hamburger menu next to the Invoke button.
  • When resetting Canvas Layers, an empty Inpaint Mask layer is added.
  • Restored the Viewer toggle hotkey z.
  • Updated translations. Thanks @Harvester62 !

Fixes

  • Fixed useInvocationNodeContext must be used within an InvocationNodeProvider error that could crash the Workflow Editor.
  • Fixed issue where scrolling on Canvas could result in zooming in the wrong direction, especially when using a mouse scrollwheel.

Internal/Dev

  • Minor perf improvement in Workflow Editor, reducing re-renders of the Auto Layout popover.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.1.0...v6.2.0

v6.3.0

This minor release includes a handful of fixes and enhancements.

Support for multiple reference images for FLUX Kontext

You may now use multiple ref images when using FLUX Kontext on the Generate, Canvas and Workflows tabs.

On the Generate and Canvas tabs, the images are concatenated in image-space before being encoded.

This is done using the new Flux Kontext Image Prep node, which you can use in Workflows. Use it to resize an image to one of Kontext’s preferred sizes. If multiple images are added to its collection, they are concatenated horizontally. Pass the output of this node into a single Kontext Conditioning node, and then pass its output into the Denoise node.

If, for some reason, you want to use latent-space concatenation, you can do it like this:

  • Add a Flux Kontext Image Prep for each image
  • Pass each of those to its own Kontext Conditioning
  • Collect the Kontext Conditioning nodes
  • Pass the output collection to the Denoise node

The images will be concatenated in latent-space by the Denoise node. It will not resize the images to Kontext preferred dimensions. For best results, use the Flux Kontext Image Prep node, as described above, to prep your ref images.
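
To illustrate what image-space concatenation means, here is a minimal Pillow sketch (illustrative only - the helper name and target height are assumptions, and the actual node also resizes to Kontext’s preferred sizes):

from PIL import Image

def concat_horizontally(images: list[Image.Image], height: int = 1024) -> Image.Image:
    # Resize each image to a common height, preserving aspect ratio.
    resized = [im.resize((round(im.width * height / im.height), height)) for im in images]
    # Paste the resized images side by side onto a single canvas.
    canvas = Image.new("RGB", (sum(im.width for im in resized), height))
    x = 0
    for im in resized:
        canvas.paste(im, (x, 0))
        x += im.width
    return canvas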

Studio state is stored in the database

Studio state includes all generation settings, Canvas layers, Workflow editor state - in other words, everything that you lose when you click Reset Web UI. Studio state does not include models, images, boards, saved workflows, etc.

Previously, this data was stored in the web browser or Launcher’s built-in UI. In v6.3.0, it is stored in the database, allowing your Studio state to follow you across browsers and devices.

For example, let’s say you were working in Canvas from the Launcher’s UI. You need to switch computers, so you enable Server Mode in the launcher and open Invoke on the other computer.

Previously, your Studio would load up with default settings on the other computer. In v6.3.0, you will instead pick up right where you left off on the first computer.

On the first launch of Invoke after updating to v6.3.0, we will migrate Studio state stored in the browser to the database, so you shouldn’t lose anything.

Added setting to disable picklescan

Invoke uses picklescan to scan certain unsafe model types for malware during installation and loading.

Sometimes, picklescan is unable to scan models because their internal structure is broken. It is possible that these unscannable models will still work fine, and have no malware, but until now, there was no way to tell Invoke to ignore detections or scan errors.

You may now dangerously, unsafely opt-out of model scanning by adding this line to your invokeai.yaml config file:

# 😱 scary!
unsafe_disable_picklescan: true

We strongly suggest you do not disable picklescan. Disable it at your own risk.

Enhancements

  • Support for multiple reference images for FLUX Kontext on Generate, Canvas and Workflows tabs. Ref images are concatenated in image space.
  • New Flux Kontext Image Prep node. Use it to resize an image to one of Kontext’s preferred sizes. If multiple images are added to its collection, they are concatenated horizontally.
  • When Snap to Grid on Canvas is disabled, hold Ctrl/Cmd to temporarily enable coarse snapping. Hold Ctrl/Cmd+Shift to temporarily enable fine snapping. Thanks @Ar7ific1al !
  • Update styling and layout for image comparison.
  • Added visual indicator on node fields when they are added to the form. The field names are in blue with a small link icon.
  • Added setting to disable picklescan.
  • Added FLUX.1 Krea dev to starter models (full-fat and quantized).
  • Added a not-broken anime upscaler model to starter models.
  • Studio state is stored on the server.
  • Add hotkey shift+n to fit bbox to layers. It does the same thing as the button in the Canvas toolbar.
  • Add a button to the ref image display to use that image’s size for generation. This is useful for FLUX Kontext, where you often want to generate at the same/similar size as a reference image.
  • Updated translations. Thanks @Harvester62 @Linos1391 !

Fixes

  • Fix issue where model filenames with periods were not handled correctly.
    • This fixes the error DuplicateModelException: A model with path 'flux/main/FLUX.safetensors' is already installed.
  • Fix issue where model installation required 2x the disk space the model actually needed. We now move - not copy - from download temp dir to final destination.
  • Metadata not recorded for API model generations.
  • Queue count badge not hidden when left panel is collapsed.
  • Fix an issue where canceling a queue item didn’t clear its progress image.
  • Fix an issue where viewer could briefly show the last-selected image between the last progress image being received, and its output image rendering.
  • Add handling for a rare race condition where we get socket events for a queue item after it has completed.
  • Add handling for a common race condition where queue status network requests complete after queue events optimistically update the counts, often resulting in the little yellow queue count badge being incorrect.
  • Fix an issue where intermediate images could trigger changes to gallery view.
  • Progress image not hiding when a generation fails or is canceled, when gallery auto-switch is disabled.
  • Awkward flash of incorrectly-sized image when starting image comparison.
  • Fix an issue where gallery auto-scroll could fail during an image loading race condition.
  • Prevent creating a new canvas while staging, which could bork your existing canvas session.
  • Fix an issue where the Reset Canvas Layers button also reset the bbox.
  • Hide Reset Canvas Layers button when not on the canvas.
  • Fix visual overflow with very long board names.

Internal/Dev

  • UI logging now includes the source code filename of the logger, making troubleshooting much easier for UI bugs.
  • All redux state is modeled with zod schemas. Rehydrated state is validated against the schemas before it makes it into the browser, preventing some (very rare) errors.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.2.0...v6.3.0

v6.4.0

This release includes a handful of fixes and enhancements.

Enhancements

Shout-out to @csehatt741 for knocking out some great QoL improvements. Thank you!

  • Canvas Bbox visibility can be toggled with shift+o. Thanks @csehatt741!
  • Nodes with execution errors are highlighted red. Thanks @csehatt741!
  • Prevent a field from being added to Workflow Builder forms multiple times. Thanks @csehatt741!
  • Support recall of CLIP Skip metadata. Thanks @csehatt741!
  • Fixed some issues with model install paths.
  • Tweaked state persistence strategy - now debounced to 300ms instead of throttled to 2000ms. This should reduce stutters while doing things like panning around the Canvas.
  • SDXL Style prompts have been removed from the Generate, Canvas and Upscaling tabs. This rarely-used setting was unintuitive at best. You can still use it in Workflows, but we are removing this footgun from the linear UI tabs.
  • Prompt and seed metadata may now be recalled on the Upscaling tab.
  • The buttons to download potentially very large starter model bundles show a confirmation dialog before starting the download. Thanks @csehatt741!
  • Merged layers are inserted in the right spot in the layers panel. Thanks @csehatt741!
  • Added button to image context menu and viewer toolbar to locate an image in the gallery. The image’s board is selected and image scrolled to. Thanks @csehatt741!
  • Support FLUX PEFT LoRAs with base_model.model key prefix.
  • Improved VAE encode VRAM usage.
  • Updated translations. Thanks @Harvester62!

Fixes

  • Minor bug when concatenating Kontext ref images in latent space that could result in some images not being “seen”.
  • Fit to Bbox functionality could result in the layer being sized correctly but positioned incorrectly when the bbox was not aligned to the 64px grid.
  • Allow use of mouse in node title editable inputs.

Internal/Dev

  • Fix AMD docker image build issue related to disk space. Thanks @heathen711!

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.3.0...v6.4.0

v6.5.0

This release includes a handful of fixes and enhancements.

Enhancements

  • Add an optional Shuffle button to float and integer fields in Workflow Builder forms. Thanks @csehatt741!
  • Canvas color picker no longer changes the alpha of the color.
  • When the bbox aspect ratio is locked, resizing the bbox from the Canvas will respect the locked status of the aspect ratio. Hold shift to temporarily invert the locked status:
    • When the aspect ratio is locked, holding shift while resizing the bbox will allow you to freely resize the bbox.
    • When the aspect ratio is not locked, holding shift while resizing the bbox will maintain the last aspect ratio of the bbox.
  • When a node field is added to a Workflow Builder form, the + button to add it will now show a - and let you remove the field. Thanks @csehatt741!
  • When changing the board for a selection of images, the current board is hidden from the board drop-down. The items in the drop-down are now sorted alphabetically. Thanks @csehatt741!
  • When using a model that doesn’t support reference images, they will be hidden. You can now Invoke without needing to disable them.
  • When using a model that doesn’t support explicit width and height settings, they will be hidden.

Fixes

  • Rare issue with HF tokens that could cause an error when downloading models from a protected HF repo immediately after setting the token in Invoke’s Model Manager.
  • Fix an issue with float field precision in the Workflow editor.
  • Fix an error AttributeError: module 'cv2.ximgproc' has no attribute 'thinning'. Affected users should use the Launcher’s Repair Mode to get the fix, otherwise the error will persist.
  • Disable the color picker when using middle mouse to pan the Canvas.
  • Minor issue related to gallery multi-select where the last-selected image didn’t show in the viewer.
  • Prevent dragging and dropping a node field into the Workflow Builder if it has already been added once.
  • Fix an issue where the last progress image for a Canvas generation would get stuck on the Viewer tab.
  • Fix an issue where certain image loading errors in Canvas were not logged correctly.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.4.0...v6.5.0

v6.5.1

This is a patch release, fixing a few high priority bugs.

Fixes

  • Hard crash when generating with FLUX on Windows.
  • Super tiny progress images on Canvas.
  • Assorted Canvas issues, mostly around transparency.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.5.0...v6.5.1

v6.6.0

This is a minor release, adding a few QoL improvements and fixes.

Enhancements

  • Canvas Color Picker has foreground and background colors. Switch between them with x. Press d to reset them to black and white. Thanks @csehatt741!
  • You can set a default weight setting for LoRAs in the Model Manager. When you add the LoRA, it will start at the default weight. Thanks @csehatt741!
  • Canvas Brush/Eraser width renders an in-line slider when there is enough space instead of showing the slider in a popover.
  • Updated translations. Thanks @Harvester62!

Fixes

  • Always delete LoRAs when recalling all metadata. Thanks @csehatt741!
  • Incompatible LoRAs being enabled prevents you from clicking Invoke.
  • Fixed an issue where it was possible to drag a tab panel to another location in the UI on Chrome and Launcher (Firefox was unaffected).
  • Internal file organization fix for docker builds.
  • Fix an issue where progress images were super tiny (again).
  • Fix an issue where no fallback was rendered in the viewer when no image is selected.
  • Fix an issue where a single middle-mouse click on Canvas would activate the View tool (i.e. drag-to-pan), and you had to click again to deactivate it.
  • Fix an issue in the Viewer where the last-generated image would briefly show after the current generation finishes.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.5.1...v6.6.0

v6.7.0

This minor release includes improved object selection on Canvas, layer adjustments, prompt history and a handful of other enhancements and fixes.

Select Object v2

We’ve made some major improvements to object selection.

  • Segment Anything v2 is now supported. You can choose between SAM1 and SAM2. We’ve found that SAM2 is much faster than SAM1, but often does not perform as well, so we left SAM1 as an option.
  • You may now draw a box around the target object. The box doesn’t need to be exact - sometimes, you can get better results by making it a bit smaller than the target object. Points are still supported and can be used independently or as a refinement for a box.
  • Holding shift while clicking creates an exclude point if you have include selected. If you have selected exclude, holding shift will instead create an include point.
  • You can now provide a text prompt instead of a box and points. Use very simple language for best results. Internally, this uses Grounding DINO to identify the target.

Raster Layer Adjustments

Right click a Raster Layer to add adjustments. Adjustments are non-destructive, though you can accept them to bake them into the layer.

You can adjust brightness, contrast, saturation, temperature, tint, and sharpness, or use the curves editor to adjust each channel independently.

Thanks @dunkeroni for implementing this very useful feature.

Prompt History

There’s a new button in the Positive Prompt box for prompt history. Your last 100 unique prompts are stored for easy recall. You can search them, delete individual prompts, or clear the whole list.

Enhancements

  • Improved object selection on Canvas.
  • Raster layer adjustments. Thanks @dunkeroni!
  • Support for mathematical expressions in number input fields. Currently, these are only enabled for fields in the Workflow Editor (including Builder Forms). Thanks @csehatt741!
  • Prompt history for Positive Prompt.
  • Queue list now sorts with newest on top. You can reverse the sort if you want, to restore the previous sorting. Thanks @csehatt741!
  • Updated translations. Thanks @Harvester62 @Linos1391!

Fixes

  • Fixed an issue that prevented you from using LoRA weights outside the range -1 to 2.
  • Fixed an issue where LoRA settings could be lost on refresh.
  • Fixed an issue where LoRAs with weights outside the range -1 to 2 were not able to be recalled from metadata.
  • Fixed an issue where popovers like the Canvas Settings popover were obscured by other UI elements.
  • Fixed a path traversal vulnerability affecting the bulk downloads API.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.6.0...v6.7.0

v6.8.0

This minor release includes a handful of fixes and enhancements.

Fixes

  • When accepting raster layer adjustments, the opacity of the layer was “baked” in.
  • Corrected help text for non-in-place model installation. Previously, the help text said that a non-in-place model install would copy the model files. This is incorrect; it moves them into the Invoke-managed models dir.
  • Failure to queue generations with an error like Failed to Queue Batch / Unknown Error.

Enhancements

  • Added a crop tool. For now, it is only enabled for Global Ref Images.
    • Click the crop icon on the Ref Image preview to open the tool.
    • Adjust the crop box and click apply to save the cropped image for that ref image.
    • To revert, open the crop tool, click Reset, then Apply to revert to the original image.
    • We’ll explore integrating this new tool elsewhere in the app in a future update.
  • Improved Model Manager tab UI. Thanks @joshistoast!
  • Keyboard shortcuts to navigate prompt history. Use alt/option+up/down to move through history.
  • Support for the NOOB-IPA-MARK1 IP Adapter. Thanks @Iq1pl!

Internal

  • Support for dynamic model drop-downs in Workflow Editor. This change greatly reduces the amount of frontend code changes needed to support a new model type. Node authors may need to update their nodes to prevent warnings from being displayed. However, there are no breakages expected. See #8577 for more details.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.7.0…v6.8.0

v6.8.1

This patch release fixes the Exception in ASGI application startup error that prevents Invoke from starting.

The error was introduced by an upstream dependency (fastapi). We’ve pinned the fastapi dependency to the last known working version.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.8.0...v6.8.1

v6.9.0

This release focuses on improvements to Invoke’s Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.

On first run after installing this release, Invoke will do some data migrations:

  • Run-of-the-mill database updates.
  • Update some model records to work with internal Model Manager changes, described below.
  • Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model’s UUID. Models outside the Invoke-managed models directory are not moved.

If you see any errors or run into any problems, please create a GH issue or ask for help in the #new-release-discussion channel of the Invoke discord.

Model Installation Improvements

Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.

Unknown Models

Previously, when this identification failed, we gave up on that model. If you had downloaded the model via Invoke, we would delete the downloaded file.

As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.

If the model still doesn’t work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.

Invoke-managed Models Directory

Previously, as a relic of times long past, Invoke’s internal model storage was organized in nested folders: <models_dir>/<type>/<base>/model.safetensors. Many moons ago, we didn’t have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.

As of this release, Invoke’s internal model storage has a normalized, flat directory structure. Each model gets its own folder, named with its unique key: <models_dir>/<model_key_uuid>/model.safetensors.

On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won’t be touched.
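
For example, a model that previously lived in a nested type/base folder would move to a flat, UUID-named folder (the type/base path and the UUID below are hypothetical):

Before: <models_dir>/lora/sdxl/model.safetensors
After:  <models_dir>/3f8a2c1e-9b7d-4c6a-a1e2-0d5f6b7c8d9e/model.safetensors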

We understand this change may seem user-unfriendly at first, but there are good reasons for it:

  • This structure eliminates the possibility of model name conflicts, which have caused numerous hard-to-fix bugs and errors.
  • It reinforces that the internal models directory is Invoke-managed:
    • Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
    • Deleting models from this directory or moving them in the directory causes the database to lose track of the models.
  • It obviates the need to move models around when changing their type and base.

Refactored Model Identification system

Several months ago, we started working on a new API to improve model identification (aka “probing” or “classification”). This process involves analyzing model files to determine what kind of model they are.

As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.

Model Identification Test Suite

Besides the business logic improvements, model identification is now fully testable!

When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.

Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.

This allows us to strip out the weights from model files, leaving only the model’s “skeleton” as a test case. The 70-model test suite is currently about 115MB but represents hundreds of GB of models.
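
As a rough sketch of the idea (illustrative only - make_skeleton is a hypothetical helper, not Invoke’s actual test tooling):

import torch

def make_skeleton(state_dict: dict[str, torch.Tensor]) -> dict[str, dict]:
    # Identification relies on state dict keys, shapes, and dtypes rather
    # than the tensor values, so the weights themselves can be discarded.
    return {
        key: {"shape": list(tensor.shape), "dtype": str(tensor.dtype)}
        for key, tensor in state_dict.items()
    }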

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.8.1...v6.9.0

v6.10.0

This is the first InvokeAI Community Edition release since the closure of the commercial venture, and we think you will be pleased with the new features and capabilities. This release introduces backend support for the state-of-the-art Z-Image Turbo image generation models, and multiple frontend improvements that make working with InvokeAI an even smoother and more pleasurable experience.

The Z-Image Turbo Model Family

Z-Image Turbo (ZiT) is a bilingual image generation model that manages to combine high performance with a small footprint and excellent image generation quality. It excels in photorealistic image generation, renders both English and Chinese text accurately, and is easy to steer. The full model will run easily on consumer hardware with 16 GB VRAM, while quantized versions will run on significantly smaller cards with some loss of precision.

With this release InvokeAI runs almost all released versions of ZiT, including the diffusers, safetensors, GGUF, FP8 and other quantized versions. However, be aware that the FP8 scaled weights models are not yet fully supported and will produce image artifacts. In addition, InvokeAI supports text2image, image2image, ZiT LoRA models, controlnet models, canvas functions and regional guidance. Image Prompts (IP) are not supported by ZiT, but similar functionality is expected when Z-Image Edit is publicly released.

To get started using ZiT, go to the Models tab and, from the Launchpad, select the Z-Image Turbo bundle to install all the available ZiT-related models and dependencies (roughly 35 GB in total). Alternatively, you can select individual models from the Starter Models tab and search for “Z-Image.” The full and Q8 models will run on a 16 GB card. For cards with 6-8 GB of VRAM, choose the smaller quantized model, Z-Image Turbo GGUF Q4_K. Note that when using one of the quantized models, you will also need to install the standalone Qwen3 encoder and one of the Flux VAE models. This will be handled for you when you install a ZiT starter model.

When generating with these models, it is recommended to use 8-9 steps and a CFG of 1. Be aware that, due to ZiT’s strong prompt following, it does not generate as much image diversity as other models you may be used to. One way to increase image diversity is to create a custom workflow that adds noise to the Z-Image Text Encoder using @Pfannkuchensack’s Image Seed Variance Enhancer Node.

In addition to the default Euler scheduler for ZiT we offer the more accurate but slower Heun scheduler, and a faster but less accurate LCM scheduler. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.

A big shout out to @Pfannkuchensack for his critical contributions to this effort.

New Workflow Features

We have two new improvements to the Workflow Editor:

  • Workflow Tags: It is now possible to add multiple arbitrary text tags to your workflows. To set a tag on the current workflow, go to Details and scroll down to Tags. Enter a comma-delimited list of tags that describe your workflow, such as “image, bounding box”, and save. The next time you browse your workflows, you will see a series of checkboxes for all the unique tags in your workflow collection. Select the tag checkboxes individually or in combination to filter the workflows that are displayed. This feature was contributed by @Pfannkuchensack.
  • Prompt Template Node: Another @Pfannkuchensack workflow contribution is a new Prompt Template node, which allows you to apply any of the built-in or custom prompt style templates to a prompt before passing it onward to generation.

Prompt Weighting Hotkeys

@joshistoast has added a neat feature for adjusting the weighting of words and phrases in the prompt. Simply select a word or phrase in the prompt textbox and press Ctrl-Up Arrow to increase the weight of the selection (by adding ”+” marks) or Ctrl-Down Arrow to decrease the weighting.
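As an illustration (hedged: the exact markup follows Invoke’s prompt weighting syntax, in which “+” marks raise attention on a parenthesized phrase), selecting “golden retriever” and pressing Ctrl-Up twice would turn

a photo of a golden retriever in a park

into

a photo of (golden retriever)++ in a park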

Limitations: The prompt weighting does not work properly with numeric weights, nor with prompts that contain the .add() or .blend() functions. This will be fixed in the next point release.

Hotkey Editor

Speaking of hotkeys, @Pfannkuchensack and @joshistoast contributed a new user interface for editing hotkeys. Any of the major UI functions, such as kicking off a generation, opening or closing panels, selecting tools in the canvas, gallery navigation, and so forth, can now be assigned a key shortcut combination. You can also assign multiple hotkeys to the same function.

To access the hotkey editor, go to the Settings (gear) menu in the bottom left, and select Hotkeys.

Bulk Operations in the Model Manager

You can now select multiple models in the Model Manager tab and apply bulk operations to them. Currently the only supported operation is to Delete unwanted models, but this feature will be expanded in the future to allow for model exporting, archiving, and other functionality.

This feature was contributed by @joshistoast, based on earlier work by @Pfannkuchensack.

Masked Area Extraction in the Canvas

It is now possible to extract an arbitrary portion of all visible raster layers that are covered by the Inpaint Mask. The extracted region is composited and added as a new raster layer. This allows for greater flexibility in the generation and manipulation of raster layers.

Thanks to @DustyShoe for this work.

PBR Maps

@blessedcoolant added support for PBR maps, a set of three texture images that can be used in 3D graphics applications to define a material’s physical properties, such as glossiness. To generate the PBR maps, simply right click on any image in the viewer or gallery, and select “Filters -> PBR Maps”. This will generate PBR Normal, Displacement, and Roughness map images suitable for use with a separate 3D rendering package.

New FLUX Model Schedulers

We’ve also added new schedulers for FLUX models (both dev and schnell). In addition to the default Euler scheduler, you can select the more accurate but slow Heun scheduler, and the faster but less accurate LCM scheduler. Look for the selection under “Advanced Options” in the Text2Image settings panel, or in the FLUX Denoise node in the workflow editor. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.

Thanks to @Pfannkuchensack for this contribution.

SDXL Color Compensation

When performing SDXL image2image operations, the color palette changes subtly and the discrepancy becomes increasingly obvious after several such operations. @dunkeroni has contributed a new advanced option to compensate for this color drift when generating with SDXL models.

Option to Release VRAM When Idle

InvokeAI tends to grab as much GPU VRAM as it needs and then hold on to it until the model cache is manually cleared or the server is restarted. This can be an annoyance for people who need the VRAM for other tasks. @lstein added a new feature that automatically clears the InvokeAI model cache and releases its VRAM after a set period of idleness. To activate this feature, add the configuration option model_cache_keep_alive_min to the invokeai.yaml configuration file. It takes a floating-point number giving the number of minutes of idleness before VRAM is released. For example, to release after 5 minutes of idleness, enter:

model_cache_keep_alive_min: 5.0

Setting this value to 0 disables the feature. This is also the default if the configuration option is absent.

Bugfixes

Multiple bugs were caught and fixed in this release and are listed in the detailed changelog below.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

New Contributors

Translation Credits

Many thanks to Riccardo Giovanetti (Italian) and RyoKoba (Japanese) who contributed their time and effort to providing translations of InvokeAI’s text.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.9.0...v6.10.0

v6.11.0

InvokeAI v6.11.0

This is a feature release of InvokeAI which provides support for the new FLUX.2 Klein image generation and edit models as well as a few small improvements and bug fixes. Before we get to the details, consider taking our 2026 User Engagement Survey. We want to know who you are, how you use InvokeAI, and what new features we can add to make the software even better.

Support for FLUX.2 Klein models

The FLUX.2 Klein family of models (F2K) comprises fast, high-quality image generation and editing models. Invoke provides support for multiple versions, including both the fast-but-less-precise 4 billion (4B) and the slower-but-more-accurate 9 billion (9B) models, as well as quantized versions of these models suited for systems with limited VRAM. These models are small and fast; the fastest can render images in seconds with just four steps.

In addition to the usual features (txt2img, img2img, inpainting, outpainting) F2K offers a unique image editing feature which allows you to make targeted modifications to an image or set of images using prompts like “Change the goblet in the king’s right hand from silver to gold,” or “Transfer the style from image 1 to image 2”.

Suggested hardware requirements are:

FLUX.2 Klein 4B - 1024×1024

  • GPU: Nvidia 30xx series or later, 12GB+ VRAM (e.g. RTX 3090, RTX 4070). The FP8 version works with 8GB+ VRAM.
  • Memory: At least 16GB RAM.
  • Disk: 10GB for base installation plus 20GB for models (Diffusers format with encoder).

FLUX.2 Klein 9B - 1024×1024

  • GPU: Nvidia 40xx series, 24GB+ VRAM (e.g. RTX 4090). FP8 version works with 12GB+ VRAM.
  • Memory: At least 32GB RAM.
  • Disk: 10GB for base installation plus 40GB for models (Diffusers format with encoder).

Getting Started with F2K

After updating InvokeAI, you will find a new FLUX.2 Klein starter pack in the Starter Models section of the Model Manager. This will download three files: the Q4-quantized version of F2K 4B, which is suitable for low-end hardware, plus two supporting files, the FLUX.2 VAE and a quantized version of the FLUX.2 Qwen3 text encoder.

After installing the bundle, select the “FLUX.2 Klein 4B (GGUF Q4)” model in the Generation section of Invoke’s left panel. Also go to the Advanced section at the bottom of the panel and select the F2K VAE and text encoder models that were installed with the starter bundle. (If you don’t select these, you will get a warning message on the first generation telling you to do so.) Recommended generation settings are:

  • Steps: 4-6
  • CFG: 1-2

Modestly increasing the number of steps may increase accuracy somewhat. If you work with the Base versions of F2K (available from HuggingFace), increase the steps to >20 and the CFG to 3.5-5.0.

Text2img, img2img, inpainting and outpainting will all work as usual. InvokeAI does not currently support F2K LoRAs or ControlNets (there have not been many published so far). In addition, only the Euler sampler is currently available. Support for LoRAs and additional schedulers will be added in a future release.

Prompting with FLUX.2

Like ZiT, F2K’s text encoder works best when you provide it with long prose prompts that follow the framework Subject + Setting + Details + Lighting + Atmosphere. For example: “An elderly king is standing on a low dais in front of a crowded and chaotic banquet hall bursting with courtiers and noblemen. He is shown in profile, facing his noblemen, holding high a jeweled chalice of wine to toast the unification of his fiefdoms. This is a cinematic shot that conveys historical grandeur and a medieval vibe.”

F2K does not perform any form of prompt enhancement, so what you write is what the model sees. See FLUX.2 Prompting Guide for more guidance.

Image Editing

F2K provides an image editing mode that works like a souped-up version of Image Prompt (IP) Adapters. Drag-and-drop or upload an image to the Reference Image section of the Prompt panel. Then instruct the model on modifications you wish to make using active verbs. You may issue multiple instructions in the same prompt.

  • Change the king’s chalice from silver to gold. Give him a crown, and grow him a salt-and-pepper beard.
  • Change the image style to a scifi/fantasy vibe.
  • Use an anime style and give the noblemen and courtiers brightly-colored robes.

F2K editing supports multiple reference images, letting you transfer visual elements (subjects, style and background) from one to another. When prompting over multiple images, refer to them in order as “image 1,” “image 2,” and so forth.

  • Give the king in image 1 the crown that appears in image 2.
  • Transfer the style of image 1 to image 2.

Dealing with multiple reference images is tricky. There is no way to adjust the weightings of each image, and so you will have to be explicit in the prompt about which visual elements you are combining. If you cannot get the effect you are looking for by modifying the prompt, you may find success by changing the order of images.

Also be aware that each image significantly increases the model’s VRAM usage. If you run into memory errors, use a smaller (quantized) model, or reduce the number and size of the reference images.

Other Versions of F2K Available in the Model Manager

To find additional supported versions of F2K, type “FLUX.2” into the Starter Models search box. This will show you the following types of files:

  • FLUX.2 Klein 4B/9B (Diffusers) — These are the full-size, all-in-one diffusers versions of F2K, which come bundled with the VAE and text encoder.
  • FLUX.2 Klein 4B/9B — These are standalone versions of the full-size F2K which require installation of separate VAE and text encoders. Note that the 4B and 9B models require different text encoders, “FLUX.2 Klein Qwen3 4B Encoder” and “FLUX.2 Klein Qwen3 8B Encoder” respectively. (Not a misprint: use the 9B F2K model with the 8B text encoder!)
  • FLUX.2 Klein 4B/9B (FP8) — These are the standalone versions quantized to 8 bits. The 4B model will run comfortably on machines with 8GB VRAM, while the 9B model will run on machines with 12GB or more. As with all quantized versions, there is a minor loss of generation accuracy.
  • FLUX.2 Klein 4B/9B (Q4) — These are standalone versions quantized to 4 bits, resulting in very small and fast models that can run on cards with 6-8 GB VRAM.

There is only one F2K VAE, and it happens to be the same as the one used by FLUX.1 and Z-Image Turbo. However, there are several text encoder options:

  • FLUX.2 Klein Qwen3 4B Encoder — Use this encoder with the F2K 4B versions. It also works with Z-Image Turbo.
  • Z-Image Qwen3 Text Encoder (quantized) — This is a Q6-quantized version of the text encoder that works with both F2K and ZiT. You may use it on smaller-memory systems to reduce swapping of models in and out of VRAM.
  • FLUX.2 Klein Qwen3 8B Encoder — Use this encoder with the F2K 9B versions. It is not compatible with ZiT.

You will find additional F2K models on HuggingFace and other model repositories, including the base models intended for fine-tuning and LoRA training. We have not exhaustively tested InvokeAI compatibility with all the available variants. Please report any incompatible models to InvokeAI Issues.

Many thanks to @Pfannkuchensack for contributing F2K support.

Other Features in this Release

The other features in this release include:

Z-Image Turbo Variance Enhancer

ZiT tends to produce very similar images for a given prompt. To increase image diversity, @Pfannkuchensack contributed a Seed Variance Enhancer node which adds calibrated amounts of noise to the prompt conditioning prior to generation. You will find this feature in the Generation panel under Advanced Options. When activated, you will see two sliders, one for Variance Strength and the other for Randomize Percent. The first slider controls how much noise will be added to the conditioned prompt, and the second controls what proportion of the conditioning’s weights will be altered. Using the default randomization of 50% of the values, a variance strength of 0.1 will produce subtle variations, while a strength of 0.5 will produce very marked deviation from the prompt. Increasing the percentage of weights modified will also increase the level of variation.
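Conceptually, the enhancer does something like the following minimal sketch (an assumption about the mechanism, not the node’s actual code): scaled Gaussian noise is added to a randomly chosen fraction of the conditioning values.

import torch

def add_seed_variance(cond: torch.Tensor,
                      strength: float = 0.1,
                      randomize_pct: float = 0.5) -> torch.Tensor:
    # Select the fraction of conditioning values to perturb (Randomize Percent).
    mask = torch.rand_like(cond) < randomize_pct
    # Add scaled noise (Variance Strength) only to the selected values.
    noise = torch.randn_like(cond)
    return cond + strength * noise * mask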

Improved Support for High-Resolution FLUX.1 Images

A new denoising tuning algorithm, introduced by @Pfannkuchensack, increases the accuracy of FLUX.1 generations at high resolutions. When a FLUX.1 model is selected, a new DyPE option will appear in the Generation panel. Its settings are Off (the default) to disable the algorithm, Auto to automatically activate DyPE when rendering images greater than 1536 pixels in either dimension, and 4K Optimized to activate the algorithm with parameters that are tuned for 4K images. Note that if you do not have sufficient VRAM to generate 4K images, this feature will not help you generate them. Instead, generate a smaller image and use Invoke’s Upscaling feature.
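In other words, the Auto setting amounts to a simple threshold test along these lines (the function name is illustrative, not Invoke’s internals):

def dype_auto_enabled(width: int, height: int, threshold: int = 1536) -> bool:
    # Enable DyPE when either image dimension exceeds the threshold.
    return max(width, height) > threshold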

Canvas High-Level Transform Smoothing

Another improvement contributed by @DustyShoe: The Canvas raster layer transform operation now supports multiple types of smoothing, thereby reducing the number of artifacts when an area is upscaled.

Text Search and Highlighting in the Image Metadata Tab

The Image Viewer’s info (🛈) tab now has a search field that allows you to rapidly search and highlight text in image metadata, details, workflow and generation graph. In addition, the left margin of the metadata display has been widened to make the display more readable.

Thanks to @DustyShoe for this improvement.

Bugfixes

Several bugs were caught and fixed in this release and are listed in the detailed changelog below. Thanks to first-time contributors @kyhavlov and @aleyan for the bugs they caught and fixed.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

Translation Credits

Many thanks to the following language translators who contributed to this release: @Harvester62 (Italian) and @DustyShoe (Russian).

Also many thanks to Weblate for granting InvokeAI a free Open Source subscription to use its translation management service.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.10.0...v6.11.0

v6.11.1

InvokeAI 6.11.1

This is a bugfix release that corrects several image generation and user interface glitches:

  • Fix FLUX.2 Klein image generation quality (@Pfannkuchensack)
    • At higher step values and larger images, the FLUX.2 Klein models were generating image artifacts characterized by diagonals, cross-hatching and dust. This bug is now corrected.
  • Restore denoising strength for outpaint mode (@Pfannkuchensack)
    • Previously, when outpainting, the denoising strength was pinned at 1.0 rather than observing the value set by the user.
  • Only show FLUX.1 VAEs when a FLUX.1 main model is selected (@Pfannkuchensack)
    • This fix prevents the user from inadvertently selecting a FLUX.2 VAE when generating with FLUX.1.
  • Reset ZiT seed variance toggle when recalling images without that metadata (@Pfannkuchensack)
    • When remixing an image generated by Z-Image Turbo, the setting of the seed variance toggle (which increases image diversity) is now correctly restored.
  • Improve DyPE area calculation (@JPPhoto)
    • DyPE increases the quality of FLUX.1 models at higher resolutions. This fix improves how the algorithm’s parameters are automatically adjusted for image size.
  • Remove duplicate DyPE preset dropdown in generation settings (@Pfannkuchensack)
    • The DyPE dropdown in generation settings is no longer duplicated in the generation UI.

In addition to these bug fixes, new Russian translations were added by @DustyShoe.

Check out the roadmap

To see what the development team has planned for forthcoming releases, check out the InvokeAI roadmap. Feature releases will be issued roughly monthly.

Take the user survey

And don’t forget to tell us who you are, what features you use, and what features you most want to see included in future releases. Take the InvokeAI 2026 User Engagement Survey and share your thoughts!

Credits

In addition to the authors of these bug fixes, many thanks to @blessedcoolant, @skunkworxdark, and @mickr777 for their time and patience testing and reviewing the code.

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.11.0...v6.11.1

v6.12.0

InvokeAI v6.12.0

This is a feature release of InvokeAI which provides support for multiple accounts on the same InvokeAI backend, enhanced support for the Z-Image and FLUX.2 models, multiple user interface enhancements, and new utilities for managing models.

[Jump to Installing and Updating]

Multi-User Mode (Experimental)

Have you ever wished you could share your InvokeAI instance with your friends, family or coworkers, but didn’t want to share your galleries or give everyone the ability to add and delete models? Now you can. InvokeAI 6.12 introduces an experimental multi-user mode that allows you to create separate user accounts with login names and passwords. Each account’s image boards, images, canvas state and UI preferences are separate from the others. Users with administrative privileges are allowed to perform system-wide tasks such as adding and configuring models and managing the session queue, while ordinary users are prevented from making this type of change.

InvokeAI Log-In Screen

See the Multi-User Mode User’s Guide for information on setting up and using this mode.

Multi-User mode was contributed by @lstein.

Enhanced Support for Z-Image and FLUX.2 Models

Z-Image Base — This version of InvokeAI adds support for the Z-Image Base model family. This is an undistilled version of Z-Image suitable for fine-tuning and LoRA training. It also provides a high level of image diversity while preserving excellent image quality.

FLUX.2 LoRAs — InvokeAI now supports a variety of FLUX.2 Klein LoRA formats.

Thanks to @Pfannkuchensack for his work on these enhancements.

Gallery Improvements

Paged Gallery Browsing — Paged gallery browsing is back. Go to the image board settings and select “Use Paged Gallery View” to replace infinite gallery scrolling with page-by-page navigation.


Arrow Key Navigation — The arrow keys now work correctly when browsing a gallery. When the Viewer is in focus, the right and left arrow keys will navigate through the currently selected gallery. When the gallery thumbnails are in focus, the right/left/up/down arrows navigate among them.

@DustyShoe contributed these enhancements.

New Canvas Features

The Canvas gains several new tools and controls, all contributed by @DustyShoe.

Text Tool — The Canvas now features a Text tool that allows you to insert text in a variety of fonts, sizes and styles, move it around the canvas, and commit it to the raster layer.

Linear and radial gradient tools — These new tools add linear and radial gradients to the Canvas, drawn in the direction of the mouse movement using the foreground/background colors and color transparency.


Invert Button for Regional Guidance Layers — You can now select any Regional Guidance region and click the “invert” button to swap painted regions for unpainted ones. As an added bonus, the invert button also works with Inpaint Masks.

Layer Controls Moved — The controls for creating, duplicating and deleting canvas layers have been moved from the top of the layers list to the bottom, which is more consistent with how other graphics packages position their layer controls and, we think, more intuitive. Long-term Canvas users may need to adjust to the new positioning.

Model and Image Database Maintenance

A few improvements contributed by @lstein make it easier to maintain the model and image databases.

Remove Orphaned Models — Over time, InvokeAI may accumulate unused “orphan” models: files in its models directory that, for one reason or another, have no entries in the models database and therefore take up disk space without being usable. A new “Sync Models” button in the Model Manager detects such orphaned models and offers to delete them. Developers and other users who have access to the source code repository will also find a script, scripts/remove_orphaned_models.py, that does the same thing from the command line; a rough sketch of the approach follows below.
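The general approach is easy to sketch (the snippet below is a hedged illustration with an assumed database schema, not the bundled script):

import sqlite3
from pathlib import Path

def find_orphaned_models(models_dir: Path, db_path: Path) -> list[Path]:
    # List entries in the models directory with no record in the database.
    con = sqlite3.connect(db_path)
    # Assumed schema: a `models` table with a `path` column naming each entry.
    known = {row[0] for row in con.execute("SELECT path FROM models")}
    con.close()
    return [p for p in models_dir.iterdir() if p.name not in known]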

Remove Dangling Models — The converse problem occurs when a model directory, or one of its files, was removed or renamed externally, causing it to be referenced in the models database but not be usable. There is now a “Missing Files” filter option in the Model Manager that will identify models that are damaged or deleted. You can then select the models you wish to delete and remove them from the database. In addition, the model selection menus will no longer display models that are missing or broken.

Gallery Maintenance Script — For users with access to the source code repository, the scripts/gallery_maintenance.py script will clean up dangling and orphaned gallery images. Dangling images are those that appear in the Invoke gallery database but whose files have been deleted from disk. Orphaned images are those that have files on disk but are missing from the database. A related database maintenance tool with more bells and whistles can also be found in @Pfannkuchensack’s GitHub repository at https://github.com/Pfannkuchensack/sqlite_invokeai_db_tool.

Workflow Iterator Improvements

@JPPhoto fixed the way that workflow collections work. Previously when you created a Collection and passed it to an iterator, the items in the collection would be passed to downstream nodes in an unpredictable order. Now, the order of items in the collection is preserved, making complex workflows more predictable and reproducible.

Remote Controlling Invoke’s Generation Parameters

It is now possible to programmatically set Invoke’s generation parameters using a new REST endpoint. This allows a script or other external program to select the model, image size, seed, steps, LoRAs, reference images, and all the other parameters that go into a generation. For documentation of the feature see:

@lstein added this feature.
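As a hedged illustration of the pattern (the endpoint path and payload fields below are hypothetical placeholders, not the documented API), an external script might set parameters like this:

import requests

resp = requests.post(
    "http://localhost:9090/api/v1/generation_params",  # hypothetical route
    json={
        "model": "z-image-turbo",  # illustrative parameter names
        "width": 1024,
        "height": 1024,
        "steps": 9,
        "cfg_scale": 1.0,
        "seed": 42,
    },
    timeout=10,
)
resp.raise_for_status()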

Translations

Thanks to @Harvester62 for providing the Italian translations for this release.


Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.

Behind-the-Scenes Improvements

This release contains a number of bug fixes and performance enhancements.

  • Optimize cache locking in Klein text encoder — (@girlyoulookthebest) This addresses a race condition in the Model Cache which prematurely removed the FLUX.2 Klein encoder from memory.
  • Run Text Encoder on CPU — (@lstein) This is an option available in the details panel of the Model Manager that allows you to force large text encoder models to run on CPU rather than GPU. This preserves VRAM for use by the denoiser steps and in some cases improves performance. Thanks to @girlyoulookthebest who found and fixed a bug in this feature.
  • Fix IP Adapters losing their model path — (@Pfannkuchensack) Fixes the Model Manager’s “reidentify” function when run on IP Adapter models.
  • Kill the server with a single ^C — (@lstein) Previously, when Invoke was launched from a command-line terminal, two keyboard interrupts (Ctrl-C) were required to shut it down completely. A single interrupt now suffices.
  • Persist the selected board and image across browser sessions — (@lstein) The last image board selected is now restored when you end a browser session and later start a new one.

Detailed Change Log

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.11.0...v6.12.0
