Releases
InvokeAI v3.0.2post1
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.
- What’s New
- Installation and Upgrading
- Getting Started with SDXL
- Known Issues
- Getting Help
- Contributing
- Detailed Change Log
To learn more about InvokeAI, please see our Documentation Pages.
What’s New in v3.0.2post1
- Support for LoRA models in diffusers format
- Warn instead of crashing when a corrupted model is detected
- Bug fix for auto-adding to a board
What’s New in v3.0.2
- LoRA support for SDXL is now available
- Multi-select actions are now supported in the Gallery
- Images are automatically sent to the board that is selected at invocation
- Images from previous versions of InvokeAI can be imported with the `invokeai-import-images` command
- Inpainting models imported from A1111 will now work with InvokeAI (see upgrading note)
- Model merging functionality has been fixed
- Improved Model Manager UI/UX
- InvokeAI 3.0 can be served via HTTPS
- Execution statistics are visible in the terminal after each invocation
- ONNX models are now supported for use with Text2Image
- Pydantic errors when upgrading in place have been resolved
- Code formatting is now part of the CI/CD pipeline
- …and lots more! You can view the full change log here
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.2post1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
InvokeAI-installer-v3.0.2post1.zip
Upgrading in place
All users can upgrade from 3.0.1 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the `upgrade` menu option [9]
- Select “Manually enter the tag name for the version you wish to update to” option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the “Developer’s console” option [8]
- Type the following commands:
```
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .
```
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the `*.orig` files and directories to their original names (without the `.orig` suffix), run the update script again, and select [1] “Update to the latest official release”.
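The rename step can be scripted. This is a minimal sketch, assuming the backups are the ones listed above and that you run it from inside the InvokeAI root directory; adjust the paths for your install:

```shell
# Strip the .orig suffix from the backed-up files and directories,
# restoring the 2.3 layout. Run from inside the InvokeAI root directory.
for f in invokeai.init.orig models.orig configs/models.yaml.orig; do
  [ -e "$f" ] || continue    # skip anything already removed
  mv "$f" "${f%.orig}"       # e.g. models.orig -> models
done
```

The `${f%.orig}` parameter expansion removes the trailing `.orig` from each name.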
Note:
- If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your `models/.cache` folder before proceeding.
What to do if problems occur during the install
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:
```
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
```
Provide the old and new directory names with the `--from` and `--to` arguments respectively. This will migrate your models as well as the settings inside `invokeai.init`. You may provide the same `--from` and `--to` directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading using pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
```
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
```
You may specify a particular version by adding the version number to the command, as in:
```
pip install --use-pep517 --upgrade InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights
```
Important: After doing the pip install, it is necessary to run `invokeai-configure` in order to download the new core models needed to load and convert Stable Diffusion XL `.safetensors` files. The web server will refuse to start if you do not do so.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Select option [5] from the `invoke.bat` launcher script, and select the base model, and optionally the refiner, from the checkbox list of “starter” models.
- Use the Web UI’s Model Manager to select “Import Models” and, when prompted, provide the HuggingFace repo_ids for the two models: `stabilityai/stable-diffusion-xl-base-1.0` and `stabilityai/stable-diffusion-xl-refiner-1.0`
- Download the models manually and cut and paste their paths into the Location field in “Import Models”
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in `invokeai.yaml`:
```
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5
```
Known Issues in 3.0
This is a list of known bugs in 3.0.2rc1 as well as features that are planned for inclusion in later releases:
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
Getting Help
For support, please use this repository’s GitHub Issues tracking service, or join our Discord.
Contributing
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.
New Contributors
- @zopieux made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3904
- @joshistoast made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3972
- @camenduru made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3944
- @ZachNagengast made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4040
- @sohelzerdoumi made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4116
- @KevinBrack made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4086
- @SauravMaheshkar made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4060
Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!
Detailed Change Log since 3.0.2
- surface error detail by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/4231
- Fix maximum python version installation instructions by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4243
- Add support for SDXL LoRA models in diffusers format. by @RyanJDick in https://github.com/invoke-ai/InvokeAI/pull/4242
- Warn instead of crashing when a corrupted model is detected during startup scan by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4244
- fix missed spot for autoAddBoardId none by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/4241
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.2...v3.0.2post1
InvokeAI Version 3.0.2
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.
- What’s New
- Installation and Upgrading
- Getting Started with SDXL
- Known Issues
- Getting Help
- Contributing
- Detailed Change Log
To learn more about InvokeAI, please see our Documentation Pages.
What’s New in v3.0.2
- LoRA support for SDXL is now available
- Multi-select actions are now supported in the Gallery
- Images are automatically sent to the board that is selected at invocation
- Images from previous versions of InvokeAI can be imported with the `invokeai-import-images` command
- Inpainting models imported from A1111 will now work with InvokeAI (see upgrading note)
- Model merging functionality has been fixed
- Improved Model Manager UI/UX
- InvokeAI 3.0 can be served via HTTPS
- Execution statistics are visible in the terminal after each invocation
- ONNX models are now supported for use with Text2Image
- Pydantic errors when upgrading in place have been resolved
- Code formatting is now part of the CI/CD pipeline
- …and lots more! You can view the full change log here
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.2 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
Upgrading in place
All users can upgrade from 3.0.1 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the `upgrade` menu option [9]
- Select “Manually enter the tag name for the version you wish to update to” option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the “Developer’s console” option [8]
- Type the following commands:
```
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .
```
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the `*.orig` files and directories to their original names (without the `.orig` suffix), run the update script again, and select [1] “Update to the latest official release”.
Note:
- If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your `models/.cache` folder before proceeding.
What to do if problems occur during the install
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:
```
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
```
Provide the old and new directory names with the `--from` and `--to` arguments respectively. This will migrate your models as well as the settings inside `invokeai.init`. You may provide the same `--from` and `--to` directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading using pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
```
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
```
You may specify a particular version by adding the version number to the command, as in:
```
pip install --use-pep517 --upgrade InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights
```
Important: After doing the pip install, it is necessary to run `invokeai-configure` in order to download the new core models needed to load and convert Stable Diffusion XL `.safetensors` files. The web server will refuse to start if you do not do so.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Select option [5] from the `invoke.bat` launcher script, and select the base model, and optionally the refiner, from the checkbox list of “starter” models.
- Use the Web UI’s Model Manager to select “Import Models” and, when prompted, provide the HuggingFace repo_ids for the two models: `stabilityai/stable-diffusion-xl-base-1.0` and `stabilityai/stable-diffusion-xl-refiner-1.0`
- Download the models manually and cut and paste their paths into the Location field in “Import Models”
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in `invokeai.yaml`:
```
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5
```
Known Issues in 3.0
This is a list of known bugs in 3.0.2rc1 as well as features that are planned for inclusion in later releases:
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
Getting Help
For support, please use this repository’s GitHub Issues tracking service, or join our Discord.
Contributing
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.
New Contributors
- @zopieux made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3904
- @joshistoast made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3972
- @camenduru made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3944
- @ZachNagengast made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4040
- @sohelzerdoumi made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4116
- @KevinBrack made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4086
- @SauravMaheshkar made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4060
Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!
Detailed Change Log
- Add LoRAs to the model manager by @zopieux in https://github.com/invoke-ai/InvokeAI/pull/3902
- feat: Unify Promp Area Styling by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4033
- Update troubleshooting guide with pydantic and SDXL unet issue advice by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4054
- fix: Concat Link Styling by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4048
- bugfix: Float64 error for mps devices on set_timesteps by @ZachNagengast in https://github.com/invoke-ai/InvokeAI/pull/4040
- Release 3.0.1 release candidate 3 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4025
- Feat/Nodes: Change Input to Textbox by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3853
- fix: Prompt Node using incorrect output type by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4058
- fix: SDXL Metadata not being retrieved by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4057
- Fix recovery recipe by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4066
- Unpin pydantic and numpy in pyproject.toml by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4062
- Fix various bugs in ckpt to diffusers conversion script by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4065
- Installer tweaks by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4070
- fix relative model paths to be against config.models_path, not root by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4061
- Update communityNodes.md - FaceTools by @ymgenesis in https://github.com/invoke-ai/InvokeAI/pull/4044
- 3.0.1post3 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4082
- Restore model merge script by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4085
- Add Nix Flake for development by @zopieux in https://github.com/invoke-ai/InvokeAI/pull/4077
- Add python black check to pre-commit by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/4094
- Added a getting started guide & updated the user landing page flow by @Millu in https://github.com/invoke-ai/InvokeAI/pull/4028
- Add missing Optional on a few nullable fields by @zopieux in https://github.com/invoke-ai/InvokeAI/pull/4076
- ONNX Support by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3562
- Chakra optimizations by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/4096
- Add onnxruntime to the main dependencies by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/4103
- fix(ui): post-onnx fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4105
- Update lint-frontend.yml by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4113
- fix: Model Manager Tab Issues by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4087
- fix: flake: add opencv with CUDA, new patchmatch dependency by @zopieux in https://github.com/invoke-ai/InvokeAI/pull/4115
- Fix manual installation documentation by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4107
- fix https/wss behind reverse proxy by @sohelzerdoumi in https://github.com/invoke-ai/InvokeAI/pull/4116
- Refactor/cleanup root detection by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4102
- chore: delete nonfunctional nix flake by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4117
- Feat/auto assign board on click by @KevinBrack in https://github.com/invoke-ai/InvokeAI/pull/4086
- (ci) only install black when running static checks by @ebr in https://github.com/invoke-ai/InvokeAI/pull/4036
- fix .swap() by reverting improperly merged @classmethod change by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/4080
- chore: move PR template to `.github/` dir by @SauravMaheshkar in https://github.com/invoke-ai/InvokeAI/pull/4060
- Path checks in a workflow step for python tests by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/4122
- fix(db): retrieve metadata even when no session_id by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4110
- project header by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/4134
- Restore ability to convert merged inpaint .safetensors files by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4084
- ui: multi-select and batched gallery image operations by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4032
- Add execution stat reporting after each invocation by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4125
- Stop checking for unet/model.onnx when a model_index.json is detected by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/4132
- autoAddBoardId should always be defined as “none” or board_id by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/4149
- Add support for diff/full lora layers by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/4118
- [WIP] Add sdxl lora support by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/4097
- Provide ti name from model manager, not from ti itself by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/4120
- Installer should download fp16 models if user has specified ‘auto’ in config by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4129
- add `--ignore_missing_core_models` CLI flag to bypass checking for missing core models by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/4081
- fix broken civitai example link by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4153
- devices.py - Update MPS FP16 check to account for upcoming MacOS Sonoma by @gogurtenjoyer in https://github.com/invoke-ai/InvokeAI/pull/3886
- Bump version number on main to distinguish from release by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4158
- Fix random number generator by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/4159
- Added HSL Nodes by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/3459
- backend: fix up types by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4109
- Fix hue adjustment by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/4182
- fix(ModelManager): fix overridden VAE with relative path by @keturn in https://github.com/invoke-ai/InvokeAI/pull/4059
- Maryhipp/multiselect updates by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/4188
- feat(ui): add LoRA support to SDXL linear UI by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4194
- api(images): allow HEAD request on image/full by @keturn in https://github.com/invoke-ai/InvokeAI/pull/4193
- Fix crash when attempting to update a model by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4192
- Refrain from writing deprecated legacy options to invokeai.yaml by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4190
- Pick correct config file for sdxl models by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4191
- Add slider for VRAM cache in configure script by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4133
- 3.0.2 Release Branch by @Millu in https://github.com/invoke-ai/InvokeAI/pull/4203
- Add techjedi’s image import script by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4171
- refactor(diffusers_pipeline): remove unused pipeline methods 🚮 by @keturn in https://github.com/invoke-ai/InvokeAI/pull/4175
- ImageLerpInvocation math bug: Add self.min, not self.max by @lillekemiker in https://github.com/invoke-ai/InvokeAI/pull/4176
- feat: add `app_version` to image metadata by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4198
- fix(ui): fix canvas model switching by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4221
- fix(ui): fix lora sort by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/4222
- Update dependencies and docs to cu118 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4212
- Prevent `vae: ''` from crashing model by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4209
- Probe LoRAs that do not have the text encoder by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4181
- Bugfix: Limit RAM and VRAM cache settings to permissible values by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4214
- Temporary force set vae to same precision as unet by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/4233
- Add support for LyCORIS IA3 format by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/4234
- Two changes to command-line scripts by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4235
- 3.0.2 Release by @Millu in https://github.com/invoke-ai/InvokeAI/pull/4236
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.1rc3...v3.0.2
InvokeAI 3.0.1 (hotfix 3)
InvokeAI Version 3.0.1
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.
- What’s New
- Installation and Upgrading
- Getting Started with SDXL
- Known Bugs
- Getting Help
- Contributing
- Detailed Change Log
To learn more about InvokeAI, please see our Documentation Pages.
What’s New in v3.0.1
- Stable Diffusion XL support in the Text2Image and Image2Image panels (but not the Unified Canvas).
- Can install and run both diffusers-style and .safetensors-style SDXL models.
- Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
- Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
- The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
- During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several “starter” main models.
- User interface cleanup to reduce visual clutter and increase usability.
v3.0.1post3 Hotfixes
This release contains a proposed hotfix for the Windows install OSError crashes that began appearing in 3.0.1. In addition, the following bugs have been addressed:
- Corrected an issue where some SD-1 safetensors models could not be loaded or converted
- The `models_dir` configuration variable used to customize the location of the models directory is now working properly
- Fixed crashes of the text-based installer when the number of installed LoRAs and other models exceeded 72
- SDXL metadata is now set and retrieved properly
- Corrected post1’s crash when performing configure with the `--yes` flag
- Corrected crashes in the CLI model installer
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
InvokeAI-installer-v3.0.1post3.zip
Upgrading in place
All users can upgrade from 3.0.0 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the `upgrade` menu option [9]
- Select “Manually enter the tag name for the version you wish to update to” option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the “Developer’s console” option [8]
- Type the following commands:
```
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1post3.zip" --use-pep517 --upgrade
invokeai-configure --root .
```
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the `*.orig` files and directories to their original names (without the `.orig`), run the update script again, and select [1] "Update to the latest official release".
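The revert step above is just a batch rename. Here is a minimal sketch in Python; `revert_backups` is a hypothetical helper, and the scratch directory with stand-in files exists only to make the demo self-contained (on a real install you would point `root` at your InvokeAI root directory instead):

```python
# Sketch of the revert step: rename each *.orig backup listed in these
# notes back to its original name. Illustrative only, not an official script.
from pathlib import Path
import tempfile

BACKUPS = ["invokeai.init.orig", "models.orig", "configs/models.yaml.orig"]

def revert_backups(root: Path) -> list[str]:
    """Rename each existing *.orig backup under root back to its original name."""
    restored = []
    for name in BACKUPS:
        src = root / name
        if src.exists():
            src.rename(src.with_suffix(""))  # models.orig -> models
            restored.append(name)
    return restored

# Demo with stand-in files in a scratch directory:
root = Path(tempfile.mkdtemp())
(root / "configs").mkdir()
for name in BACKUPS:
    (root / name).touch()
restored = revert_backups(root)
```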
What to do if problems occur during the install
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:
```shell
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
```
Provide the old and new directory names with the `--from` and `--to` arguments respectively. This will migrate your models as well as the settings inside `invokeai.init`. You may provide the same `--from` and `--to` directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading using pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
```shell
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
```
You may specify a particular version by adding the version number to the command, as in:
```shell
pip install --use-pep517 --upgrade InvokeAI==3.0.1post3
invokeai-configure --yes --skip-sd-weights
```
Important: After doing the `pip install`, it is necessary to run `invokeai-configure` in order to download new core models needed to load and convert Stable Diffusion XL `.safetensors` files. The web server will refuse to start if you do not do so.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Select option [5] from the `invoke.bat` launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
- Use the Web UI's Model Manager to select "Import Models" and, when prompted, provide the HuggingFace repo_ids for the two models:
  - `stabilityai/stable-diffusion-xl-base-1.0`
  - `stabilityai/stable-diffusion-xl-refiner-1.0`
- Download the models manually and cut and paste their paths into the Location field in “Import Models”
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in `invokeai.yaml`:
```yaml
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5
```
Known Bugs in 3.0
This is a list of known bugs in 3.0.1post3 as well as features that are planned for inclusion in later releases:
- The merge script isn’t working, and crashes during startup (will be fixed soon)
- Inpainting models generated using the A1111 merge module are not loading properly (will be fixed soon)
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- Face restoration is no longer needed due to the improvement in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
- There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.
Getting Help
For support, please use this repository’s GitHub Issues tracking service, or join our Discord.
Contributing
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.
What’s Changed
- Feat/Nodes: Change Input to Textbox by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3853
- fix: Prompt Node using incorrect output type by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4058
- fix: SDXL Metadata not being retrieved by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4057
- Fix recovery recipe by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4066
- Unpin pydantic and numpy in pyproject.toml by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4062
- Fix various bugs in ckpt to diffusers conversion script by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4065
- Installer tweaks by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4070
- fix relative model paths to be against config.models_path, not root by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4061
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.1...v3.0.1post1
InvokeAI Version 3.0.1
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.
- What’s New
- Installation and Upgrading
- Getting Started with SDXL
- Known Bugs
- Getting Help
- Contributing
- Detailed Change Log
To learn more about InvokeAI, please see our Documentation Pages.
What’s New in v3.0.1
- Stable Diffusion XL support in the Text2Image and Image2Image panels (but not the Unified Canvas).
- Can install and run both diffusers-style and .safetensors-style SDXL models.
- Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
- Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
- The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
- During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several “starter” main models.
- User interface cleanup to reduce visual clutter and increase usability.
Recent Changes
Since RC3, the following has changed:
- Fixed crash on Macintosh M1 machines when rendering SDXL images
- Fixed black images when generating on Macintoshes using the Unipc scheduler (falls back to CPU; slow)
Since RC2, the following has changed:
- Added compatibility with Python 3.11
- Updated diffusers to 0.19.0
- Cleaned up console logging - can now change logging level as described in the docs
- Added download of an updated SDXL VAE “sdxl-vae-fix” that may correct certain image artifacts in SDXL-1.0 models
- Prevent web crashes during certain resize operations
Developer changes:
- Reformatted the whole code base with the “black” tool for a consistent coding style
- Add pre-commit hooks to reformat committed code on the fly
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
Upgrading in place
All users can upgrade from 3.0.0 using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the `upgrade` menu option [9]
- Select "Manually enter the tag name for the version you wish to update to" option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the "Developer's console" option [8]
- Type the following commands:
```shell
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1.zip" --use-pep517 --upgrade
invokeai-configure --root .
```
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the `*.orig` files and directories to their original names (without the `.orig`), run the update script again, and select [1] "Update to the latest official release".
What to do if problems occur during the install
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:
```shell
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
```
Provide the old and new directory names with the `--from` and `--to` arguments respectively. This will migrate your models as well as the settings inside `invokeai.init`. You may provide the same `--from` and `--to` directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading using pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
```shell
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --skip-sd-weights
```
You may specify a particular version by adding the version number to the command, as in:
```shell
pip install --use-pep517 --upgrade InvokeAI==3.0.1
invokeai-configure --skip-sd-weights
```
Important: After doing the `pip install`, it is necessary to run `invokeai-configure` in order to download new core models needed to load and convert Stable Diffusion XL `.safetensors` files. The web server will refuse to start if you do not do so.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI’s image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Select option [5] from the `invoke.bat` launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
- Use the Web UI's Model Manager to select "Import Models" and, when prompted, provide the HuggingFace repo_ids for the two models:
  - `stabilityai/stable-diffusion-xl-base-1.0`
  - `stabilityai/stable-diffusion-xl-refiner-1.0` (note that these are preliminary IDs; these notes are being written before the SDXL release)
- Download the models manually and cut and paste their paths into the Location field in “Import Models”
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in `invokeai.yaml`:
```yaml
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
```
Users with 12 GB or more VRAM can reduce the time waiting for the image to start generating by setting `max_vram_cache_size` to 6 GB or higher.
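For example, a card with 12 GB or more of VRAM might use an `invokeai.yaml` fragment like this (the first two values are the recommendations above; the raised `max_vram_cache_size` follows the tip for high-VRAM cards and should be treated as a starting point, not a tested setting):

```yaml
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 6.0  # 12 GB+ cards: keep more of the model resident in VRAM
```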
Known Bugs in 3.0
This is a list of known bugs in 3.0.1 as well as features that are planned for inclusion in later releases:
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- Face restoration is no longer needed due to the improvement in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
- There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.
Getting Help
For support, please use this repository’s GitHub Issues tracking service, or join our Discord.
Contributing
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.
What’s Changed
- fix: mps attention fix for sd2 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3874
- Warn, do not crash, when duplicate models encountered by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3877
- Update communityNodes.md by @ymgenesis in https://github.com/invoke-ai/InvokeAI/pull/3876
- Update communityNodes.md by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/3873
- ui: pay back tech debt by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3896
- Fix ‘Del’ hotkey to delete current image by @zopieux in https://github.com/invoke-ai/InvokeAI/pull/3904
- fix: Fix app crashing when you upload an incorrect JSON to node editor by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3911
- feat: increase seed from int32 to uint32 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3933
- fix: Generate random seed using the generator instead of RandomState by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3940
- Add missing import by @zopieux in https://github.com/invoke-ai/InvokeAI/pull/3917
- Fix incorrect use of a singleton list by @zopieux in https://github.com/invoke-ai/InvokeAI/pull/3914
- feat(nodes,ui): fix soft locks on session/invocation retrieval by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3910
- feat(ui): display canvas generation mode in status text by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3915
- docs generation: fix typo and remove trailing white space by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/3972
- add option to disable model syncing in UI by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3992
- docs/features/NODES.md - Fix image links by @ymgenesis in https://github.com/invoke-ai/InvokeAI/pull/3969
- Update stale issues action by @Millu in https://github.com/invoke-ai/InvokeAI/pull/3960
- feat: Add SDXL To Linear UI by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3973
- [Nodes/Temp fix] for is intermediate switch for l2i by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3997
- 3.0.1 - Pre-Release UI Fixes by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4001
- Add support for controlnet & sdxl checkpoint conversion by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3905
- feat: :sparkles: consolidated app nav to settings & dropdown by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/4000
- NSFW checker and watermark nodes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3923
- Download all model types. by @camenduru in https://github.com/invoke-ai/InvokeAI/pull/3944
- enable hide localization toggle by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/4004
- feat: SDXL - Concat Prompt and Style for Style Prompt by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4005
- Bugfix/checkpoint conversion by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4010
- fix: Metadata Not Being Saved by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4009
- Documentation updates for SDXL license terms, invisible watermark by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4012
- Rework configure/install TUI to require less space by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3989
- Configure script should not overwrite models.yaml if it is well formed by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4019
- install SDXL “fixed” VAE by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4020
- Restore ability to convert SDXL checkpoints to diffusers by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4021
- PR for 3.0.1 release by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3998
- Dev/black by @lillekemiker in https://github.com/invoke-ai/InvokeAI/pull/3840
- prevent resize error by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/4031
- Support Python 3.11 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3966
- feat: Upgrade Diffusers to 0.19.0 by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4011
- Unify uvicorn and backend logging by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4034
- Add LoRAs to the model manager by @zopieux in https://github.com/invoke-ai/InvokeAI/pull/3902
- feat: Unify Promp Area Styling by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4033
- Update troubleshooting guide with pydantic and SDXL unet issue advice by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4054
- fix: Concat Link Styling by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/4048
- bugfix: Float64 error for mps devices on set_timesteps by @ZachNagengast in https://github.com/invoke-ai/InvokeAI/pull/4040
- Release 3.0.1 release candidate 3 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/4025
New Contributors
- @zopieux made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3904
- @joshistoast made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3972
- @camenduru made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3944
- @ZachNagengast made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/4040
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v3.0.0...v3.0.1
Source code and previous installer files
The files below include the InvokeAI installer zip file, the full source code, and previous release candidates for 3.0.1
InvokeAI Version 3.0.0
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.0 represents a major advance in functionality and ease compared with the last official release, 2.3.5.
- What’s New
- Installation and Upgrading
- Getting Started with SDXL
- Known Bugs
- Getting Help
- Contributing
- Detailed Change Log
Please use the 3.0.0 release discussion thread for comments on this version, including feature requests, enhancement suggestions and other non-critical issues. Report bugs to InvokeAI Issues. For interactive support with the development team, contributors and user community, you are invited to join the InvokeAI Discord Server.
To learn more about InvokeAI, please see our Documentation Pages.
What’s New in v3.0.0
Quite a lot has changed, both internally and externally.
Web User Interface:
- A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
- A Dynamic Prompts interface that lets you generate combinations of prompt elements.
- Preliminary support for Stable Diffusion XL the latest iteration of Stability AI’s image generation models.
- A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
- The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
- An experimental Nodes Editor that lets you design and execute complex image generation operations using a point-and-click interface. To activate this, please use the settings icon at the upper right of the Web UI.
- Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM required.
- Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
- Long prompt support (>77 tokens).
- Memory and speed improvements.
The WebUI can now be launched from the command line using either `invokeai-web` (preferred new way) or `invokeai --web` (deprecated old way).
Command Line Tool
The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli that allows you to experiment with InvokeAI nodes.
Installer
The console-based model installer, invokeai-model-install has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.
Internal
Internally the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes", which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.
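The node concept can be sketched in a few lines of Python. This is purely illustrative, a hypothetical mini-framework rather than InvokeAI's actual node API, but it shows the shape of the idea: small transforms wired into a graph and evaluated in dependency order.

```python
# Hypothetical sketch of a node graph: each node transforms its inputs into
# an output; run_graph evaluates nodes recursively in dependency order.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    fn: Callable[..., object]                   # the transformation this node applies
    inputs: list = field(default_factory=list)  # names of upstream nodes

def run_graph(nodes: dict) -> dict:
    """Evaluate every node, feeding each one the outputs of its upstream nodes."""
    results: dict = {}
    def evaluate(name: str):
        if name not in results:
            node = nodes[name]
            args = [evaluate(dep) for dep in node.inputs]
            results[name] = node.fn(*args)
        return results[name]
    for name in nodes:
        evaluate(name)
    return results

# Toy text-only pipeline standing in for prompt -> conditioning -> image:
graph = {
    "prompt": Node(lambda: "a bluebird"),
    "cond": Node(lambda p: f"cond({p})", inputs=["prompt"]),
    "image": Node(lambda c: f"image[{c}]", inputs=["cond"]),
}
result = run_graph(graph)["image"]  # image[cond(a bluebird)]
```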
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.0 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3 instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
Upgrading in place
All users can upgrade from the 3.0 beta releases using the launcher’s “upgrade” facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the 2.3 root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the `upgrade` menu option [9]
- Select "Manually enter the tag name for the version you wish to update to" option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose “rerun the configure script to fix a broken install” option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch `invoke.sh` or `invoke.bat`
- Select the "Developer's console" option [8]
- Type the following command:
```shell
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0.zip" --use-pep517 --upgrade
```
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the `*.orig` files and directories to their original names (without the `.orig`), run the update script again, and select [1] "Update to the latest official release".
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] “Developer’s console”. This will take you to a new command line interface. On the command line, type:
```shell
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
```
Provide the old and new directory names with the `--from` and `--to` arguments respectively. This will migrate your models as well as the settings inside `invokeai.init`. You may provide the same `--from` and `--to` directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading using pip
Once 3.0.0 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
```shell
pip install --use-pep517 --upgrade InvokeAI
```
You may specify a particular version by adding the version number to the command, as in:
```shell
pip install --use-pep517 --upgrade InvokeAI==3.0.0
```
To upgrade to an xformers version if you are not currently using xformers, use:
```shell
pip install --use-pep517 --upgrade InvokeAI[xformers]
```
You can see which versions are available by going to the PyPI InvokeAI Project Page.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. As of this writing (July 2023), SDXL has not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature as soon as SDXL 1.0 is officially released.
SDXL comes with two models, a “base” model that generates the initial image, and a “refiner” model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.
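The base-to-refiner handoff can be pictured with a toy sketch (not real diffusers code; the step count, the 0.75 handoff fraction, and the string stand-in for latents are all made up for illustration):

```python
# Toy sketch of the base -> refiner handoff: the base model runs the first
# fraction of the denoising steps, then the refiner finishes from the
# partially denoised latents, img2img style.
def denoise(model: str, latents: str, steps: int) -> str:
    """Stand-in for a diffusion loop: wrap the latents once per step."""
    for _ in range(steps):
        latents = f"{model}({latents})"
    return latents

total_steps = 4
handoff = 0.75                         # fraction of steps done by the base model
base_steps = int(total_steps * handoff)

latents = denoise("base", "noise", base_steps)                 # partial denoise
image = denoise("refiner", latents, total_steps - base_steps)  # refiner finishes
```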
To experiment with SDXL, you’ll need the “base” and “refiner” models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen). Alternatively, select launcher option [6] “Change InvokeAI startup options” and paste the HF token into the indicated field.
Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command invokeai-model-install. In the STARTER MODELS section, select the checkboxes for stable-diffusion-xl-base-0-9 and stable-diffusion-xl-refiner-0-9. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.
Alternatively you can install these models from the Web UI Model Manager (cube at the bottom of the left-hand panel) and navigate to Import Models. In the field labeled Location type in the repo id of the base model, which is stabilityai/stable-diffusion-xl-base-0.9. Press Add Model and wait for the model to download and install. After receiving confirmation that the model installed, repeat with stabilityai/stable-diffusion-xl-refiner-0.9.
Note that these are large models (12 GB each) so be prepared to wait a while.
To use the installed models you will need to activate the Node Editor, an advanced feature of InvokeAI. Go to the Settings (gear) icon on the upper right of the Web interface, and activate “Enable Nodes Editor”. After reloading the page, an inverted “Y” will appear on the left-hand panel. This is the Node Editor.
Enter the Node Editor and click the Upload button to upload either the SDXL base-only or SDXL base+refiner pipelines (right click to save these .json files to disk). This will load and display a flow diagram showing the many steps involved in generating an SDXL image.
Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style (“bluebird in a sakura tree” and “chinese classical painting”) and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will eventually be generated and added to the image gallery. Unlike standard rendering, intermediate images are not (yet) displayed during rendering.
Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, note that certain settings have a disproportionate effect on image quality. In particular, the latents-decode VAE step must be run at fp32 precision (using a slider at the bottom of the VAE node), and images will change dramatically as the denoising threshold used by the refiner is adjusted.
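The fp32 requirement for the VAE decode comes down to float16's limited precision. A quick standard-library demonstration (unrelated to InvokeAI's own code) shows how values drift when round-tripped through half precision:

```python
import struct

def roundtrip_fp16(x: float) -> float:
    # Pack to IEEE 754 half precision ('e') and back, simulating
    # storage in an fp16 tensor.
    return struct.unpack("e", struct.pack("e", x))[0]

# float16 has an 11-bit significand: integers above 2048 are no longer
# all representable, and small fractional differences are rounded away.
print(roundtrip_fp16(2048.0))  # 2048.0 (exact)
print(roundtrip_fp16(2049.0))  # rounds to a neighbouring representable value
print(roundtrip_fp16(0.1))     # close to, but not exactly, 0.1
```

Rounding of this magnitude is harmless in most of the denoising loop but becomes visible as artifacts when it happens during the final decode to pixels, which is why that one step is forced to fp32.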
Also be aware that SDXL requires at least 6-8 GB of VRAM to render 1024x1024 images, and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:
```yaml
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
```

Known Bugs in 3.0
This is a list of known bugs in 3.0 as well as features that are planned for inclusion in later releases:
- On Macintoshes with MPS, Stable Diffusion 2 models will not render properly. This will be corrected in the next point release.
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- Face restoration is no longer needed due to the improvements in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
- There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.
- The NSFW checker (blurs explicit images) is currently disabled but will be reenabled in time for the next release.
Getting Help
For support, please use this repository’s GitHub Issues tracking service, or join our Discord.
Contributing
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.
What’s Changed Since 2.3.5
- doc(invoke_ai_web_server): put docstrings inside their functions by @keturn in https://github.com/invoke-ai/InvokeAI/pull/2816
- perf(invoke_ai_web_server): encode intermediate result previews as jpeg by @keturn in https://github.com/invoke-ai/InvokeAI/pull/2817
- Split requirements / pyproject installation in Dockerfile by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2815
- add a workflow to close stale issues by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2808
- [nodes] Add better error handling to processor and CLI by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/2828
- fix newlines causing negative prompt to be parsed incorrectly by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/2837
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/2850
- Final phase of source tree restructure by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2833
- Fix for txt2img2img.py by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/2856
- Protect invocations against black autoformatting by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2854
- fix broken scripts by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2857
- deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16 by @keturn in https://github.com/invoke-ai/InvokeAI/pull/2865
- remove legacy ldm code by @keturn in https://github.com/invoke-ai/InvokeAI/pull/2866
- fix Dockerfile after restructure by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2863
- [ui]: migrate all styling to chakra-ui theme by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2814
- Migrate to new HF diffusers cache location by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2867
- Bugfix/reenable ckpt conversion to ram by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2868
- add .git-blame-ignore-revs file to maintain provenance by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2855
- migrate to new diffusers model layout by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2871
- support both epsilon and v-prediction v2 inference by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2870
- feat(ui): migrate theming to chakra ui by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2873
- add missing package by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/2878
- Fixed startup issues with the web UI. by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/2876
- [cli] Update CLI to define commands as Pydantic objects by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/2861
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/2882
- ui: update readme & scripts by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2884
- build: update actions by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2883
- Make img2img strength 1 behave the same as txt2img by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/2895
- decouple default component from react root by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/2897
- [cli] Execute commands in-order with nodes by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/2901
- add back pytorch-lightning dependency by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2899
- [ui] chore(Accessibility): various additions by @ElrikUnderlake in https://github.com/invoke-ai/InvokeAI/pull/2888
- Bypass the 77 token limit by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/2896
- Remove label from stale issues on comment event by @ebr in https://github.com/invoke-ai/InvokeAI/pull/2903
- Revert “Remove label from stale issues on comment event” by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2912
- backend: more post-ldm-removal cleanup by @keturn in https://github.com/invoke-ai/InvokeAI/pull/2911
- Make sure command also works with Oh-my-zsh by @patrickvonplaten in https://github.com/invoke-ai/InvokeAI/pull/2905
- FIX bug that caused black images when converting ckpts to diffusers by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2914
- Chore/accessibility add all aria labels to translation by @ElrikUnderlake in https://github.com/invoke-ai/InvokeAI/pull/2919
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/2922
- raise operations-per-run for issue workflow to 500 by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/2925
- build: exclude ui from `test-invoke-pip` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2892
- Remove image generation node dependencies on generate.py by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2902
- [ui]: add resizable pinnable drawer component by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2874
- [fix] Get the model again if current model is empty by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/2938
- [nodes-api] Fix API generation to correctly reference outputs by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/2939
- [nodes] Fixes calls into image to image and inpaint from nodes by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/2940
- Fix bug #2931 by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/2942
- chore(UI, accessibility): Icons. Header links & radio button by @ElrikUnderlake in https://github.com/invoke-ai/InvokeAI/pull/2935
- add additional build mode by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/2904
- fix `--png_compression` command line argument by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2950
- Removed file-extension-based arbitrary code execution attack vector by @CodeZombie in https://github.com/invoke-ai/InvokeAI/pull/2946
- fix(inpaint): Seam painting being broken by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/2952
- Allow for dynamic header by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/2955
- [nodes] Add Edge data type by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/2958
- [deps] update compel to fix black output with weight=0; also use new downweighting algorithm by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/2961
- nodes: api fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2959
- Fix some text and a link by @thinkyhead in https://github.com/invoke-ai/InvokeAI/pull/2910
- [WebUI] Quick Fixes by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/2974
- [nodes] Update fastapi packages to latest (except FastAPI, which has an annotation bug in the newest version) by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/3004
- Export more for header by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/2996
- load embeddings after a ckpt legacy model is converted to diffusers by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3013
- make step_callback work again in generate() call by @lstein in https://github.com/invoke-ai/InvokeAI/pull/2957
- [deps] upgrade compel for better .swap defaults and a bugfix by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/3014
- Tidy up Tests and Provide Documentation by @mastercaster9000 in https://github.com/invoke-ai/InvokeAI/pull/2869
- Allow loading all types of dreambooth models - Fix issue #2932 by @mrwho in https://github.com/invoke-ai/InvokeAI/pull/2933
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/3021
- build: do not run python tests on ui build by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/2987
- re-implement model scanning when loading legacy checkpoint files by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3012
- doc(readme): fix incorrect install command by @darkcl in https://github.com/invoke-ai/InvokeAI/pull/3024
- add github API token to mkdocs workflow by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3023
- Convert custom VAEs during legacy checkpoint loading by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3010
- (fix)[docs] Fixed snippet/code formatting by @felixsanz in https://github.com/invoke-ai/InvokeAI/pull/2918
- fix issue with embeddings being loaded twice by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3029
- Avoid invoke.sh infinite loop (main branch) by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3017
- feat[web]: use the predicted denoised image for previews by @keturn in https://github.com/invoke-ai/InvokeAI/pull/2915
- nodes: add cancelation, updated progress callback, typing fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3036
- fix(ui): fix viewer tooltip localisation strings by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3037
- Doc: updating ROCm version in documentation by @aeyno in https://github.com/invoke-ai/InvokeAI/pull/3041
- Downgrade FastAPI by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3039
- Fix bugs in online ckpt conversion of 2.0 models by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3057
- improve importation and conversion of legacy checkpoint files by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3053
- I18n build mode by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3051
- add basic autocomplete functionality to node cli by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3035
- Add support for yet another TI embedding format (main version) by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3050
- fix(nodes): commit changes to db by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3103
- [deps] bump compel version to fix crash on invalid (auto111) syntax by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/3107
- fix(nodes): fix typo in `list_sessions` handler by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3109
- Right link on pytorch installer for linux rocm by @creachec in https://github.com/invoke-ai/InvokeAI/pull/3084
- feat(nodes): save thumbnails by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3105
- fix build-container.yml by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/3117
- Model API Merge Into Main by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/3079
- [nodes] Add latent nodes, storage, and fix iteration bugs by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/3091
- Fix typo by @panicsteve in https://github.com/invoke-ai/InvokeAI/pull/3133
- Change where !replay looks for its infile by @teerl in https://github.com/invoke-ai/InvokeAI/pull/3129
- add a new method to model_manager that retrieves individual pipeline components by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3120
- chore: configure stale bot by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3134
- remove vestiges of non-functional autoimport code for legacy checkpoints by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3076
- Add python-multipart, which is needed by nodes by @cmsj in https://github.com/invoke-ai/InvokeAI/pull/3141
- fix(nodes): use correct torch device in NoiseInvocation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3128
- fix typo by @c67e708d in https://github.com/invoke-ai/InvokeAI/pull/3147
- Add/Update and Delete Models by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/3131
- feat(nodes): add list_images endpoint by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3126
- feat(nodes): mark ImageField properties required, add invocation README by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3108
- Increase chunk size when computing diffusers SHAs by @AbdBarho in https://github.com/invoke-ai/InvokeAI/pull/3159
- fix(nodes): add missing type to `ImageField` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3170
- fix(nodes): `sampler_name` -> `scheduler` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3169
- feat(nodes): fix typo in PasteImageInvocation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3155
- feat(nodes): add invocation schema customisation, add model selection by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3171
- Fixed a Typo. by @nicholaskoerfer in https://github.com/invoke-ai/InvokeAI/pull/3190
- [nodes] Add subgraph library, subgraph usage in CLI, and fix subgraph execution by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/3180
- feat(ui): Add “Hide Preview” Button to WebUI by @SammCheese in https://github.com/invoke-ai/InvokeAI/pull/3204
- Make InvocationQueueItem serializable by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3187
- [bug] #3218 HuggingFace API off when `--no-internet` set by @TimCabbage in https://github.com/invoke-ai/InvokeAI/pull/3219
- Added CPU instruction for README by @EternalLeo in https://github.com/invoke-ai/InvokeAI/pull/3225
- Update NSFW.md by @AldeRoberge in https://github.com/invoke-ai/InvokeAI/pull/3231
- update CODEOWNERS for changed team composition by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3234
- chore: add “.version” and “.last_model” to gitignore by @SammCheese in https://github.com/invoke-ai/InvokeAI/pull/3208
- Partial migration of UI to nodes API by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3195
- [Nodes UI] Invocation Component Updates by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3247
- Responsive Mobile Layout by @SammCheese in https://github.com/invoke-ai/InvokeAI/pull/3207
- [Nodes UI] More Work by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3248
- feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3250
- feat(ui): add reload schema button by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3252
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/3259
- Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly by @cmsj in https://github.com/invoke-ai/InvokeAI/pull/3256
- update to diffusers 0.15 and fix code for name changes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3201
- [Bugfix] fixes and code cleanup to update and installation routines by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3101
- [Bugfix] prevent cli crash by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3132
- fix(ui): fix no progress images when gallery is empty by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3268
- [nodes] add delete image & delete images endpoint by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3245
- feat(ui): support disabledFeatures, add nicer loading by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3271
- feat(ui): wip img2img ui by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3258
- fix(ui): update UI to handle uploads with alternate URLs by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3274
- docs: add note on README about migration by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3277
- feat(ui): add config slice, configuration default values by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3273
- fix(ui): add formatted neg prompt for linear nodes by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3282
- feat(ui): remove connected/disconnected toasts bc we have status for that by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3287
- fix(ui): update thumbnailReceived to match change in route params by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3286
- fix(ui): update exported types by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3283
- feat(nodes): add resize and scale latents nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3261
- ui: set up more straightforward packaging of UI components by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3292
- ui: session persistence by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3280
- fix(ui): fix packaging import issue by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3294
- fix(ui): restore missing chakra-cli package by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3295
- feat(ui): logging by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3299
- ui: error handling, init image, cleanup, recall params by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3301
- ui, nodes: fix img2img fit param by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3308
- nodes-api: enforce single thread for the processor by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3312
- feat(ui): use windowing for gallery by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3307
- feat(nodes): allow multiples of 8 for dimensions by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3260
- fix(nodes): fix `t2i` graph by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3318
- chore(ui): bump react-virtuoso by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3324
- feat(ui): do not persist gallery images by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3331
- [Enhancement] Regularize logging messages by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3176
- Fix inpaint node by @aluhrs13 in https://github.com/invoke-ai/InvokeAI/pull/3284
- Add compel node and conditioning field type by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3265
- Fix logger namespace clash in web server by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3344
- add -y to the automated install instructions by @stevemar in https://github.com/invoke-ai/InvokeAI/pull/3349
- ui: support collect nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3357
- Deploy documentation from v2.3 branch rather than main by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3356
- Use websocket transport first for Socket.io by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3369
- if backend throws a socket connection error, let user know via toast by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3364
- surface error detail field to redux layer for 403 errors by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3362
- Update dependencies to get deterministic image generation behavior (main branch) by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3354
- fix(nodes): fix #3306 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3377
- feat(nodes): add ui build static route by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3378
- fix some missing translations by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3381
- ui: migrate canvas to nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3323
- filter out websocket errors by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3382
- fix(nodes): fix missing `context` arg in LatentsToLatents by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3387
- ui: miscellaneous fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3386
- fix translations again by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3395
- Add UniPC / Euler Karras / DPMPP_2 Karras / DEIS / DDPM Schedulers by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3388
- fix(model manager): fix string formatting error on model checksum timer by @keturn in https://github.com/invoke-ai/InvokeAI/pull/3397
- docs(ui): update ui readme by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3396
- feat(ui): make core parameters layout consistent by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3394
- fix(nodes): remove Optionals on ImageOutputs by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3392
- feat(nodes): add w/h to latents outputs by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3389
- feat(nodes): add RandomIntInvocation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3390
- feat(ui): expand config options by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3393
- ui: misc fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3398
- Feat/ui/improve-language by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3399
- fix(nodes): temporarily disable librarygraphs by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3372
- Make InvocationProcessor more robust against exceptions by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3376
- rehydrate selectedImage URL when results and uploads are fetched by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3403
- Add Heun Karras Scheduler by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3402
- ui: commercial fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3409
- Logging Improvements by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3401
- fix(ui): fix syntax error in the logo component flexbox by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3410
- make conditioning.py work with compel 1.1.5 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3383
- ui: restore canvas and upload functionality by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3414
- ui: cleanup by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3418
- feat(nodes): add low and high to RandomIntInvocation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3419
- tell user to refresh page on image load error by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3425
- Add configuration system, remove legacy globals, args, generate and CLI by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3340
- add some IDs by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3432
- added optional middleware prop and new actions needed by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3437
- add crossOrigin = anonymous attribute to konva image by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3450
- fix(ui): send to canvas in currentimagebuttons not working by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3428
- build: fix test-invoke-pip.yml by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3451
- fix: attempt to fix actions by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3454
- fix(nodes): fix seam painting by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3449
- images refactor by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3443
- Update 020_INSTALL_MANUAL.md by @KernelGhost in https://github.com/invoke-ai/InvokeAI/pull/3439
- fix tests by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3458
- Use parameter fixes by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3464
- Default Scheduler Update by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/3457
- feat(nodes, ui): change how “intermediate” artefacts are handled by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3463
- (wip) Fix gallery jumpiness when image is fully generated by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3465
- ui: error and event handling by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3467
- Update CODEOWNERS by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/3468
- Feat/controlnet nodes by @GreggHelt2 in https://github.com/invoke-ai/InvokeAI/pull/3405
- image service fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3473
- misc fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3479
- Fixed problem with inpainting after controlnet support added to main. by @GreggHelt2 in https://github.com/invoke-ai/InvokeAI/pull/3483
- feat(ui): more misc fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3482
- fix(ui): ensure download image opens in new tab by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3484
- fix(ui): fix width and height not working on txt2img tab by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3485
- ui: improve metadata handling by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3487
- ui: fix looping gallery fetch by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3488
- add optional config for settings modal by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3490
- Make Invoke Button also the progress bar by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3492
- Add logging configuration by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3460
- feat(nodes): add separate scripts to launch cli and web by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3495
- ui: controlnet in linear ui by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3491
- Update web README.md by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3496
- Prompting: enable long prompts and compel’s new `.and()` concatenating feature by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/3497
- ui: handle drag and drop to canvas by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3499
- feat(ui): fix image fit by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3501
- Fix potential race condition in config system by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3466
- feat(ui): improve UI on smaller screens by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3505
- feat(ui): fix canvas saving by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3503
- ui: improve image url refresh, error and deletion handling by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3502
- feat(ui): update image urls on connect by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3507
- feat(ui): fix bugs with image deletion by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3506
- Graph overlay was expanding off the screen to the size of the prompt line by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3510
- ui: controlnet tweaks by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3508
- docs(nodes): update INVOCATIONS.md by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3511
- fix(ui): default controlnet autoprocess to true by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3513
- Update installer support for main by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3448
- create databases directory on startup by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3518
- feat(ui): skip resize on img2img if not needed by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3520
- feat(ui): restore reset button for init image by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3521
- feat(nodes): depth-first execution by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3517
- feat(ui): decrease delay on dnd to 150ms by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3522
- feat(ui): enhance IAICustomSelect by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3523
- ui: misc fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3525
- fix(ui): fix crash when using dropdown on certain device resolutions by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3526
- fix logger behavior so that it is initialized after command line parsed by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3509
- Feat/easy param by @GreggHelt2 in https://github.com/invoke-ai/InvokeAI/pull/3504
- fix: git stash by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3528
- Fix/UI/controlnet fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3531
- Upgrade to Diffusers 0.17.0 by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3514
- Add Mantine UI Support by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3532
- remove `image_origin` from most places by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3537
- refactor(minor): Image & Latent File Storage by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3538
- nodes: add dynamic prompt node by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3533
- Model Manager rewrite by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3335
- Model manager fixes by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3541
- Add lms and dpmpp2_s karras scheduler by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3551
- feat: Port Schedulers to Mantine by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3552
- Update index.md by @DrGunnarMallon in https://github.com/invoke-ai/InvokeAI/pull/3553
- Add dpmpp_sde and dpmpp_2m_sde schedulers(with karras) by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3554
- Fix inpaint node to new manager by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3550
- feat: Upgrade to Diffusers 0.17.1 by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3545
- fix failing pytest for config module by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3559
- image boards by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3534
- Update UI To Use New Model Manager by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3548
- fix(linux): installer script prints maximum python version usable by @skf-funzt in https://github.com/invoke-ai/InvokeAI/pull/3546
- Fix vae conversion by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3555
- ui: api cleanup by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3572
- ui: fix initial image buttons, uploader functionality by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3576
- chore(ui): bump all packages by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3579
- Add control_mode parameter to ControlNet by @GreggHelt2 in https://github.com/invoke-ai/InvokeAI/pull/3535
- fix(ui): fix controlnet image size by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3585
- feat(ui): improved node parsing by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3584
- feat(ui): add dynamic prompts to t2i tab by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3588
- feat(ui): only show canvas image fallback on loading error by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3589
- Bypass failing tests by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3593
- Configure and model install TUI tweaks by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3519
- feat(ui): add type extraction helpers by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3597
- fix(nodes): use context for logger in param_easing by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3529
- feat(ui): use max prompts for combinatorial, iterations for non-combi… by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3600
- Feat/controlnet extras by @GreggHelt2 in https://github.com/invoke-ai/InvokeAI/pull/3596
- nodes: default to CPU noise by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3598
- Configuration and model installer for new model layout by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3547
- Update 060_INSTALL_PATCHMATCH.md by @sammyf in https://github.com/invoke-ai/InvokeAI/pull/3591
- Apply lora by model patching by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3583
- Set use-credentials on commercial deployment if authToken is set on c… by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/3606
- ui: fix ts perf by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3609
- Fix Typo in migrate_to_3.py by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3610
- Fix duplicate model key addition when root directory is a relative path by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3607
- Maryhipp/delete images on board by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3604
- Add image board support to invokeai-node-cli by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3594
- ui: support dark mode by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3592
- feat(ui): tweak light mode colors, buttons pop by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3612
- fix(ui): fix canvas crash by rolling back swagger-parser by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3611
- export new ColorModeButton component by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3614
- fix incorrect VAE config file path during conversion of ckpts by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3616
- feat(ui): minimum gallery size by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3617
- feat(ui): gallery minSize tweak by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3618
- loading state for gallery and model list by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3615
- Fix Invoke Progress Bar by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3626
- ui: batches by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3603
- feat(ui): hide batch ui pending logic implementation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3629
- Add missing k-* legacy sampler names to init file migrate list by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3625
- Make unit tests work again by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3575
- Add runtime root path to relative vaes and other submodels by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3631
- Quash memory leak when compel invocation called by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3633
- (wip) Model Manager 3.0 UI by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3586
- ui: loras by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3638
- ui: render perf fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3642
- feat: Add Lora to Canvas by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3643
- fix: Change Lora weight bounds to -1 to 2 by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3645
- Expose max_cache_size in config by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3640
- Fix model detection by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3646
- feat(ui): improve accordion ux by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3647
- fix(ui): deleting image selects first image by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3648
- fix(ui): fix dnd on nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3649
- Detect invalid model names when migrating 2.3->3.0 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3623
- Fix clip path in migrate script by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3651
- fix(ui): fix prompt resize & style resizer by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3652
- Fix loading diffusers ti by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3661
- Fix ckpt scanning on conversion by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3653
- close modal when user clicks cancel by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3656
- build: remove web ui dist from gitignore by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3650
- Remove hardcoded cuda device in model manager init by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3624
- Put tokenizer and text encoder in same clip-vit-large-patch14 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3662
- feat: Add Embedding Picker to Linear UI by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3654
- only show delete icon if big enough by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3657
- LoRA model loading fixes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3663
- expose max_cache_size to invokeai-configure interface by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3664
- Improved loading for UI by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3667
- [Docs] ELI5 Tutorial For Invocations by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3524
- Add REACT API routes for model manager by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3639
- feat: Add Clip Skip by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3666
- feat(ui): update openapi-fetch; fix upload issue by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3674
- fix: Adjust clip skip layer count based on model by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3675
- UI model compatibility by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3672
- feat(ui): improve embed button styles by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3676
- add ability to disable lora, ti, dynamic prompts, vae selection by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3677
- prop to hide toggle for advanced settings by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3681
- Branch for 3.0alpha release installation tweaks and bug fixes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3608
- Make the update script work on Windows by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3622
- fixes ImportError described in #3658. by @Zadagu in https://github.com/invoke-ai/InvokeAI/pull/3668
- Mac MPS FP16 fixes by @gogurtenjoyer in https://github.com/invoke-ai/InvokeAI/pull/3641
- get uploads working again by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3679
- Revert “get uploads working again” by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3686
- ui: add cpu noise by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3688
- fix(ui): fix readonly inputs by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3689
- fix(ui): do not disable show progress toggle while generating by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3690
- fix(ui): fix inconsistent shift modifier capture by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3691
- gallery tweaks by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3682
- feat: Add App Version to UI by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3692
- fix(ui): fix tab translations by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3698
- fix(ui): fix selection on dropdowns by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3697
- Version of _find_root() that works in conda environment by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3696
- feat: Upgrade Diffusers to 0.18.1 by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3699
- fix: Rearrange Model Select to take full width by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3701
- add load more images to the right arrow by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3694
- fix(ui): escape on embedding popup closes it by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3703
- feat(ui): add progress image node by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3704
- fix(api): fix for borked windows mimetypes registry by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3705
- Add Cancel Button button to nodes tab by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3706
- Updating Docs by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/3456
- ui, db: rand fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3715
- feat: Add Aspect Ratio by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3709
- Maryhipp/disable multiselect by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3722
- disable hotkey for lightbox if lightbox is disabled by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3725
- always enable these things on txt2img tab by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3726
- Branch for invokeai 3.0-beta bugfixes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3683
- Fix the test of the config system by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3730
- Node Editor: QoL Fixes by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3734
- installer installs torchmetrics 0.11.4 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3733
- Less naive model detection by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3685
- disable features that are not supported yet or no longer supported by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3739
- Container build fixes + docker-compose by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3587
- feat: Save and Loads Nodes From Disk by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3724
- feat: Add Aspect Ratio To Canvas Bounding Box by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3717
- output stringified error for session and invocation errors by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3741
- feat: core metadata & graph in viewer by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3728
- Add Clear nodes Button by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3747
- fix(ui): fix inpaint invalid model error by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3750
- fix(ui): check for metadata accumulator before connecting to it by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3751
- fix(ui): fix lora name disappearing by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3753
- fix(ui): fix node parsing failing by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3754
- fix(ui): fix nodes crash when adding model loader by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3755
- Fix clear nodes with psychedelicious Requests by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3749
- Fix Inpainting Issues by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3744
- Fix wrong conditioning used by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3595
- detect patchmatch by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3740
- fix: model_name reference in Model Manager by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3756
- ui: gallery enhancements by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3752
- ui: model-related fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3761
- fix(nodes): make ResizeLatents w/h optional by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3762
- feat(api): set max-age for images by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3711
- Model Manager 3.0 UI by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3735
- Allow ImageResizeInvocation w/h to take inputs from other nodes by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/3765
- Create NODES.md by @ymgenesis in https://github.com/invoke-ai/InvokeAI/pull/3729
- Pad conditionings using zeros and encoder_attention_mask by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3772
- fix(ui): fix mouse interactions by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3764
- fix(ui): fix crash on LoRA remove / weight change by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3774
- restore scrollbar by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3767
- Model manager route enhancements by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3768
- Rewrite controlnet to new model manager by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3665
- fix: Minor UI tweak to Control Net enable button by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3783
- Nodes + Backend Owners by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/3782
- add fp16 support to controlnet models by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3785
- migrate script now initializes destination root if needed by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3784
- add documentation on the configuration system by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3787
- Setup textual inversion training with new model manager by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/3775
- Update NODES.md by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3788
- add renaming capabilities to model update API route by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3793
- Updating installation instructions to include Node.JS by @lillekemiker in https://github.com/invoke-ai/InvokeAI/pull/3795
- Add defaults for controlnet/textual inversion/realesrgan by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3773
- Fix error with long prompts when controlnet used by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3792
- Added class PromptsFromFileInvocation to prompt.py. by @skunkworxdark in https://github.com/invoke-ai/InvokeAI/pull/3781
- Allow bin extension to detect diffusers-ti provided as file by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3796
- Create pull request template for the project by @Millu in https://github.com/invoke-ai/InvokeAI/pull/3798
- ui: improve context menu by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3799
- model installer — Prevent crashes on malformed models by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3619
- Fix/long prompts by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3806
- docs: add vscode setup instructions by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3807
- Hide legend button Option 2 by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3797
- feat: model events by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3786
- Model Manager UI 3.0 by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3778
- Update LOCAL_DEVELOPMENT.md by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3808
- fix: Model Manager scan Auto Add not detecting checkpoint correctly by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3810
- Support SDXL models by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3714
- Add Toggle for Minimap and Tooltips by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3809
- feat(ui): hide sdxl from linear UI by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3815
- 3.0 Pre-Release Polish — Bug Fixes / Style Fixes / Misc by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3812
- Release - invokeai 3 0 beta by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3814
- fix(nodes): fix inpaint cond logic for new compel version by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3816
- VRAM Optimizations, sdxl on 8gb by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3818
- feat: String Param Node + titles and tags for all Nodes by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3819
- Rename clip1 to clip by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3822
- Avoid crash if unable to modify the model config file by @ebr in https://github.com/invoke-ai/InvokeAI/pull/3824
- Cleanup vram after models offloading by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3826
- add option to hide version on logo by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3825
- install, nodes, ui: restore ad-hoc upscaling by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3800
- Release/invokeai 3 0 beta - mkdocs fix by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3821
- Missing def choose_torch_device by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3834
- Tweaks to Image Progress Node by @mickr777 in https://github.com/invoke-ai/InvokeAI/pull/3833
- [WIP] Load text_model.embeddings.position_ids outside state_dict by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3829
- Maryhipp/clear intermediates by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3820
- clear canvas alongside intermediates by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3837
- feat(ui): another go at gallery by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3791
- add toggle for isNodesEnabled in settings by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3839
- Changing ImageToLatentsInvocation node to default to detected precision by @lillekemiker in https://github.com/invoke-ai/InvokeAI/pull/3838
- Beta branch containing documentation enhancements, minor bug fix by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3831
- feat(api): use next available port by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3710
- fix inpaint model detection by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3843
- fix v1-finetune.yaml is not in the subpath of "" by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3848
- fix: Model List not scrolling through checkpoints by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3849
- feat: Add Setting Switch Component by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/3847
- Updated documentation by @Millu in https://github.com/invoke-ai/InvokeAI/pull/3832
- ui: enhance intermediates clear, enhance board auto-add by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3851
- feat: Add Sync Models by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3850
- feat(ui): boards styling by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3856
- feat: ControlNet Resize Mode by @GreggHelt2 in https://github.com/invoke-ai/InvokeAI/pull/3854
- if updating intermediate, dont add to gallery list cache by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/3859
- fix(ui): fix no_board cache not updating by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/3852
- Add get_log_level and set_log_level operations to the app route by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3858
- Add sdxl generation preview by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3862
New Contributors
- @ElrikUnderlake made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/2888
- @patrickvonplaten made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/2905
- @mastercaster9000 made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/2869
- @mrwho made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/2933
- @darkcl made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3024
- @felixsanz made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/2918
- @aeyno made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3041
- @creachec made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3084
- @teerl made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3129
- @cmsj made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3141
- @c67e708d made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3147
- @AbdBarho made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3159
- @nicholaskoerfer made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3190
- @TimCabbage made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3219
- @EternalLeo made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3225
- @AldeRoberge made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3231
- @aluhrs13 made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3284
- @stevemar made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3349
- @KernelGhost made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3439
- @GreggHelt2 made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3405
- @DrGunnarMallon made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3553
- @skf-funzt made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3546
- @sammyf made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3591
- @brandonrising made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3606
- @Zadagu made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3668
- @ymgenesis made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3729
- @lillekemiker made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3795
- @skunkworxdark made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3781
- @Millu made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3798
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v3.0.0rc2
InvokeAI 2.3.5.post2
We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post2.
- What’s New
- Installation and Upgrading
- Known Bugs
- Getting Help
- Development Roadmap
- Detailed Change Log
- Acknowledgements
What’s New in 2.3.5.post2
This is a bugfix release. In previous versions, the built-in update script did not upgrade the Xformers library when the torch library was upgraded, leaving users with a version of Xformers that ran on CPU only. Installing this version fixes the update script so that the problem does not recur when upgrading to future versions, including InvokeAI 3.0.0.
As a bonus, this version allows you to apply a checkpoint VAE, such as vae-ft-mse-840000-ema-pruned.ckpt, to a diffusers model without having to find the diffusers version of the VAE. From within the web Model Manager, choose the diffusers model you wish to change, press the edit button, and enter the location of the VAE file of your choice. The field now accepts either a .ckpt file or a diffusers directory.
Installation / Upgrading
To install 2.3.5.post2 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.post2.zip
If you are using the Xformers library, and running v2.3.5.post1 or earlier, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:
- Start the launcher script and select option # 8 - Developer’s console.
- Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade
If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the “[xformers]” part. From v2.3.5.post2 onward, the updater script will work properly with Xformers installed.
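For scripted upgrades, the choice between these commands can be captured in a small helper. This is our own sketch based only on the flags shown in these notes; the function name is illustrative and not part of InvokeAI:

```python
from typing import Optional

def upgrade_command(use_xformers: bool, version: Optional[str] = None) -> str:
    """Build the pip upgrade command described in these release notes.

    use_xformers -- include the [xformers] extra so that xformers is
                    upgraded in lockstep with torch
    version      -- pin a specific release, e.g. "2.3.5.post2"
    """
    target = "invokeai[xformers]" if use_xformers else "invokeai"
    if version is not None:
        target += f"=={version}"
    return f"pip install {target} --use-pep517 --upgrade"
```

For example, upgrade_command(True) reproduces the Xformers command above, and upgrade_command(False, "2.3.5.post2") produces a version-pinned upgrade without the extra.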
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==2.3.5.post2
To upgrade to an xformers version if you are not currently using xformers, use:
pip install --use-pep517 --upgrade InvokeAI[xformers]
You can see which versions are available by going to The PyPI InvokeAI Project Page.
Known Bugs in 2.3.5.post2
These are known bugs in the release.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
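Because the false positives are triggered by pickle-based model formats, one way to audit a models folder for files that could be swapped for safetensors equivalents is sketched below. The suffix list and function name are our own illustration, not part of InvokeAI:

```python
from pathlib import Path

# Pickle-based model formats that antivirus scanners sometimes flag;
# .safetensors files carry no executable code and avoid the issue.
PICKLE_SUFFIXES = {".ckpt", ".pth", ".pt", ".bin"}

def find_pickle_models(root):
    """Return model files under root that use pickle-based formats."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.suffix in PICKLE_SUFFIXES)
```

Running this over your models directory lists the files worth replacing with safetensors versions when those are available.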
Getting Help
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Development Roadmap
This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (mid-May, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of “nodes”.
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v2.3.5.post2
What’s Changed
- autoconvert legacy VAEs by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3235
- 2.3.5 fixes to automatic updating and vae conversions by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3444
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5.post1...v2.3.5.post2
InvokeAI Version 2.3.5.post1
We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post1.
- What’s New
- Installation and Upgrading
- Known Bugs
- Getting Help
- Development Roadmap
- Detailed Change Log
- Acknowledgements
What’s New in 2.3.5.post1
The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
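The reproducibility guarantee (same seed and settings, identical output) can be illustrated with a plain Python example; this sketch uses the stdlib random module as a stand-in for the actual torch samplers:

```python
import random

def sample(seed, n=4):
    """Draw n pseudo-random values from a generator with a fixed seed."""
    rng = random.Random(seed)  # independent generator; no shared global state
    return [rng.random() for _ in range(n)]

# Two runs with the same seed produce identical sequences, just as two
# generations with the same seed now produce identical images.
assert sample(1234) == sample(1234)
assert sample(1234) != sample(5678)
```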
Here are the new library versions:
| Library | Version |
|---|---|
| Torch | 2.0.0 |
| Diffusers | 0.16.1 |
| Xformers | 0.0.19 |
| Compel | 1.1.5 |
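To confirm that a local install matches the table above, you can compare the installed distributions against the expected versions. This helper is our own sketch using only the standard library:

```python
from importlib import metadata

# Expected versions from the table above.
EXPECTED = {"torch": "2.0.0", "diffusers": "0.16.1",
            "xformers": "0.0.19", "compel": "1.1.5"}

def version_mismatches(expected, installed=None):
    """Return {package: (wanted, found)} for every package whose
    installed version differs from the expected one; packages that
    are missing entirely are reported with found=None."""
    mismatches = {}
    for pkg, wanted in expected.items():
        if installed is not None:          # injected mapping, for testing
            found = installed.get(pkg)
        else:                              # query the live environment
            try:
                found = metadata.version(pkg)
            except metadata.PackageNotFoundError:
                found = None
        if found != wanted:
            mismatches[pkg] = (wanted, found)
    return mismatches
```

An empty result from version_mismatches(EXPECTED) means the environment matches the release's pinned library versions.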
Other Improvements
When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running (thanks to @pedantic79 for this).
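The singleton approach used to cut down on redundant HuggingFace requests can be sketched as follows; the class and method names here are illustrative, not the actual InvokeAI symbols:

```python
class ConceptsLibrary:
    """Share one instance (and one cached fetch) across all callers."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._concepts = None
        return cls._instance

    def list_concepts(self, fetch):
        """Call fetch() for the remote concepts list once; reuse it after."""
        if self._concepts is None:
            self._concepts = fetch()
        return self._concepts
```

Every ConceptsLibrary() call returns the same object, so repeated UI requests for the embeddable-model list hit the network at most once.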
Installation / Upgrading
To install or upgrade to InvokeAI 2.3.5.post1 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.post1.zip
If you are using the Xformers library, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:
- Start the launcher script and select option # 8 - Developer’s console.
- Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade
If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the “[xformers]” part.
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.5.post1. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page.
Known Bugs in 2.3.5.post1
These are known bugs in the release.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
Getting Help
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Development Roadmap
This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (mid-May, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of “nodes”.
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.4.post1...v2.3.5-rc1
What’s Changed
- Update dependencies to get deterministic image generation behavior (2.3 branch) by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3353
- [Bugfix] Update check failing because process disappears by @pedantic79 in https://github.com/invoke-ai/InvokeAI/pull/3334
- Turn the HuggingFaceConceptsLib into a singleton to prevent redundant connections by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3337
New Contributors
- @pedantic79 made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/3334
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.5...v2.3.5.post1
InvokeAI 2.3.5
We are pleased to announce a features update to InvokeAI with the release of version 2.3.5. This is currently a pre-release for community testing and bug reporting.
- What’s New
- Installation and Upgrading
- Known Bugs
- Getting Help
- Development Roadmap
- Detailed Change Log
- Acknowledgements
What’s New in 2.3.5
This release expands support for additional LoRA and LyCORIS models, upgrades diffusers to 0.15.1, and fixes a few bugs.
LoRA and LyCORIS Support Improvement
- A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
- Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
- Support for the newer LoKR LyCORIS files has been added.
Diffusers 0.15.1
- This version updates the diffusers module to version 0.15.1 and is no longer compatible with 0.14. This provides a number of performance improvements and bug fixes.
Performance Improvements
- When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
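The speed-up comes from hashing the file in larger chunks, which reduces the number of read round trips that dominate on network-mounted and WSL filesystems. A minimal sketch of chunked SHA-256 hashing follows; the 16 MB chunk size is an assumed illustrative value, as the release notes don't state the exact figure InvokeAI uses:

```python
import hashlib
import io

def model_hash(stream, chunk_size=16 * 1024 * 1024):
    """Compute a SHA-256 digest by reading a file-like object in large chunks.

    Larger chunks mean fewer read calls, which matters when per-read
    latency is high (network mounts, WSL). The 16 MB default here is an
    illustrative choice, not InvokeAI's actual setting.
    """
    h = hashlib.sha256()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        h.update(chunk)
    return h.hexdigest()

# Example on an in-memory "file" standing in for a model weights file:
digest = model_hash(io.BytesIO(b"model weights"))
```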
Bug Fixes
- The “import models from directory” and “import from URL” functionality in the console-based model installer has now been fixed.
Installation / Upgrading
To install or upgrade to InvokeAI 2.3.5 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.zip
To update from versions 2.3.1 or higher, select the “update” option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.5. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using `pip install --use-pep517 --upgrade InvokeAI`. You may specify a particular version by adding the version number to the command, as in `InvokeAI==2.3.5`. To upgrade to an xformers version if you are not currently using xformers, use `pip install --use-pep517 --upgrade InvokeAI[xformers]`. You can see which versions are available by going to the PyPI InvokeAI Project Page.
Known Bugs in 2.3.5
These are known bugs in the release.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
- If the `xformers` memory-efficient attention module is used, each image generated with the same prompt and settings will be slightly different. `xformers 0.0.19` reduces or eliminates this problem, but hasn’t been extensively tested with InvokeAI. If you wish to upgrade, you may do so by entering the InvokeAI “developer’s console” and giving the command `pip install xformers==0.0.19`. You may see a message about InvokeAI being incompatible with this version, which you can safely ignore. Be sure to report any unexpected behavior to the Issues pages.
Getting Help
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Development Roadmap
This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (late April, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of “nodes”.
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
Change Log
- fix the “import from directory” function in console model installer by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3211
- [Feature] Add support for LoKR LyCORIS format by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3216
- CODEOWNERS update - 2.3 branch by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3230
- Enable LoRAs to patch the text_encoder as well as the unet by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/3214
- improvements to the installation and upgrade processes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3186
- Revert “improvements to the installation and upgrade processes” by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3266
- [Enhancement] distinguish v1 from v2 LoRA models by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3175
- increase sha256 chunksize when calculating model hash by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3162
- bump version number to 2.3.5-rc1 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3267
- [Bugfix] Renames in 0.15.0 diffusers by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3184
New Contributors and Acknowledgements
- @AbdBarho contributed the checksum performance improvements
- @StAlKeR7779 (Sergey Borisov) contributed the LoKR support, did the diffusers 0.15 port, and cleaned up the code in multiple places.
Many thanks to these individuals, as well as @damian0815 for his contribution to this release.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.4.post1...v2.3.5-rc1
InvokeAI Version 2.3.4.post1 - A Stable Diffusion Toolkit
We are pleased to announce a features update to InvokeAI with the release of version 2.3.4.
Update: 13 April 2023 - 2.3.4.post1 is a hotfix that corrects an installer crash resulting from an update to the upstream diffusers library. If you have recently tried to install 2.3.4 and experienced a crash relating to “crossattention,” this release will fix the issue.
What’s New in 2.3.4
This features release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.
LoRA and LyCORIS Support
LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix `.safetensors`. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)
To use LoRA/LyCORIS models in InvokeAI:
- Download the `.safetensors` files of your choice and place them in `/path/to/invokeai/loras`. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
- Add `withLora(lora-file,weight)` to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named `loras/sushi.safetensors` is present:
  - `family sitting at dinner table eating sushi withLora(sushi,0.9)`
  - `family sitting at dinner table eating sushi withLora(sushi, 0.75)`
  - `family sitting at dinner table eating sushi withLora(sushi)`

  Multiple `withLora()` prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA’s influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
- Generate as you usually would! If you find that the image is too “crisp,” try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you’ll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA’s training. Don’t try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa. This will trigger a non-fatal error message and generation will not proceed.
- You can change the location of the `loras` directory by passing the `--lora_directory` option to `invokeai`.
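For illustration, prompt fragments in the `withLora(name, weight)` form described above could be extracted roughly like this. This is a hypothetical sketch of the syntax, not InvokeAI's actual parser:

```python
import re

# Matches withLora(name) or withLora(name, weight); the weight is optional
# and defaults to 1.0, mirroring the syntax described in the release notes.
LORA_RE = re.compile(r"withLora\(\s*([^,)]+?)\s*(?:,\s*([-\d.]+)\s*)?\)")

def extract_loras(prompt):
    """Return (cleaned_prompt, [(lora_name, weight), ...])."""
    loras = [
        (m.group(1), float(m.group(2)) if m.group(2) else 1.0)
        for m in LORA_RE.finditer(prompt)
    ]
    # Strip the withLora() fragments so only the text prompt remains.
    cleaned = LORA_RE.sub("", prompt).strip()
    return cleaned, loras

cleaned, loras = extract_loras(
    "family sitting at dinner table eating sushi withLora(sushi, 0.75)")
```

Multiple `withLora()` fragments in one prompt would each produce an entry in the returned list.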
New WebUI LoRA and Textual Inversion Buttons
This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.

Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.
Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on “Show Textual Inversions from HF Concepts Library.” When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.
Minor features and fixes
This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.
Installation / Upgrading
To install or upgrade to InvokeAI 2.3.4 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.4.post1.zip
To update from versions 2.3.1 or higher, select the “update” option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.4. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using `pip install --use-pep517 --upgrade InvokeAI`. You may specify a particular version by adding the version number to the command, as in `InvokeAI==2.3.4`. To upgrade to an xformers version if you are not currently using xformers, use `pip install --use-pep517 --upgrade InvokeAI[xformers]`. You can see which versions are available by going to the PyPI InvokeAI Project Page. (Pre-release note: this will only work after the official release.)
Known Bugs in 2.3.4
These are known bugs in the release.
- The Ancestral DPMSolverMultistepScheduler (`k_dpmpp_2a`) sampler is not yet implemented for `diffusers` models and will disappear from the WebUI Sampler menu when a `diffusers` model is selected.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
Getting Help
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Change Log
- [FEATURE] Lora support in 2.3 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3072
- [FEATURE] LyCORIS support in 2.3 by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3118
- [Bugfix] Pip - Access is denied durring installation by @StAlKeR7779 in https://github.com/invoke-ai/InvokeAI/pull/3123
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/2804
- [Enhancement] save name of last model to disk whenever model changes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3102
New Contributors and Acknowledgements
- @felorhik contributed the vast bulk of the LoRA implementation in https://github.com/invoke-ai/InvokeAI/pull/2712
- @felorhik, @neecapp, and @StAlKeR7779 (Sergey Borisov) all contributed to the v2.3 backport in https://github.com/invoke-ai/InvokeAI/pull/3072
- @StAlKeR7779 (Sergey Borisov) contributed LyCORIS support in https://github.com/invoke-ai/InvokeAI/pull/3118, plus multiple bugfixes to the LoRA manager.
Many thanks to these individuals, as well as @blessedcoolant and @damian0815 for their contributions to this release.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.3...v2.3.4rc1
InvokeAI Version 2.3.3 - A Stable Diffusion Toolkit
We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.3.
What’s New in 2.3.3
This is a bugfix and minor feature release.
Bugfixes
Since version 2.3.2 the following bugs have been fixed:
Bugs
- When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
- Textual inversion will select an appropriate batch size based on whether `xformers` is active, and will default to `xformers` enabled if the library is detected.
- The batch script log file names have been fixed to be compatible with Windows.
- Occasional corruption of the `.next_prefix` file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
- Support loading of legacy config files that have no personalization (textual inversion) section.
- An infinite loop when opening the developer’s console from within the `invoke.sh` script has been corrected.
- Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.
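The xformers-dependent batch-size selection noted above can be sketched as follows. The batch-size values are hypothetical placeholders; the release notes don't give the actual numbers InvokeAI uses:

```python
import importlib.util

def xformers_available():
    """Return True if the xformers package is importable, without importing it."""
    return importlib.util.find_spec("xformers") is not None

def ti_batch_size(with_xformers_bs=8, without_xformers_bs=4):
    """Pick a textual-inversion batch size based on xformers availability.

    The defaults (8 vs 4) are made-up illustrative values, not the ones
    InvokeAI actually uses.
    """
    return with_xformers_bs if xformers_available() else without_xformers_bs
```

Using `find_spec` rather than a bare `import xformers` avoids paying the import cost just to probe for the library.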
Enhancements
- It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested “Illuminati” model.
- The “NegativePrompts” embedding file, and others like it, can now be loaded by placing it in the InvokeAI `embeddings` directory.
- If no `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
- On Linux systems, the `invoke.sh` launcher now uses a prettier console-based interface. To take advantage of it, install the `dialog` package using your package manager (e.g. `sudo apt install dialog`).
- When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model, following this example:
  - `my-favorite-model.ckpt`
  - `my-favorite-model.yaml`
  - `my-favorite-model.vae.pt` (or `my-favorite-model.vae.safetensors`)

Installation / Upgrading
To install or upgrade to InvokeAI 2.3.3 please download the zip file at the bottom of the release notes (under “Assets”), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
To update from 2.3.1 or 2.3.2 you may use the “update” option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.3.
Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate “yes”.
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using `pip install --use-pep517 --upgrade InvokeAI`. You may specify a particular version by adding the version number to the command, as in `InvokeAI==2.3.3`. To upgrade to an xformers version if you are not currently using xformers, use `pip install --use-pep517 --upgrade InvokeAI[xformers]`. You can see which versions are available by going to the PyPI InvokeAI Project Page.
Known Bugs in 2.3.3
These are known bugs in the release.
- The Ancestral DPMSolverMultistepScheduler (`k_dpmpp_2a`) sampler is not yet implemented for `diffusers` models and will disappear from the WebUI Sampler menu when a `diffusers` model is selected.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
What’s Changed
- Enhance model autodetection during import by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3043
- Correctly load legacy checkpoint files built on top of SD 2.0/2.1 bases, such as Illuminati 1.1 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3058
- Add support for the TI embedding file format used by `negativeprompts.safetensors` by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3045
- Keep torch version at 1.13.1 by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/2985
- Fix textual inversion documentation and code by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3015
- fix corrupted outputs/.next_prefix file by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3020
- fix batch generation logfile name to be compatible with Windows OS by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3018
- Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3011
- prevent infinite loop when launching developer’s console by @lstein in https://github.com/invoke-ai/InvokeAI/pull/3016
- Prettier console-based frontend for `invoke.sh` on Linux systems with “dialog” installed, by Joshua Kimsey.
- ROCm debugging recipe from @EgoringKosmos
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.2.post1...v2.3.3-rc1
Acknowledgements
Many thanks to @psychedelicious, @blessedcoolant (Vic), @JPPhoto (Jonathan Pollack), @ebr (Eugene Brodsky), @JoshuaKimsey, @EgoringKosmos, and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v2.3.2.post1...v2.3.3