All versions since v6.2.0
v6.2.0
This minor release includes a handful of fixes and enhancements.
Check out the v6.0.0 release notes if you haven’t already! It’s a big one.
Enhancements
- Restored `Cancel and Clear All` functionality, which was removed in v6. The button for this is in the hamburger menu next to the Invoke button.
- When resetting Canvas Layers, an empty Inpaint Mask layer is added.
- Restored the Viewer toggle hotkey `z`.
- Updated translations. Thanks @Harvester62!
Fixes
- Fixed a `useInvocationNodeContext must be used within an InvocationNodeProvider` error that could crash the Workflow Editor.
- Fixed an issue where scrolling on Canvas could result in zooming in the wrong direction, especially when using a mouse scrollwheel.
Internal/Dev
- Minor perf improvement in Workflow Editor, reducing re-renders of the Auto Layout popover.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- chore: prep for v6.1.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8313
- feat(ui): add default inpaint mask layer on canvas reset by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8319
- update whats new by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8321
- fix iterations for all API models by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8322
- fix(ui): Reposition export button by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/8323
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8324
- feat(ui): restore viewer toggle hotkey by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8325
- fix(ui): incorrect zoom direction on fine scroll by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8326
- feat(ui): restore clear queue button by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8327
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8330
- perf(ui): imperatively get nodes and edges in autolayout hook by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8333
- fix(ui): invocation node context error when in publish flow, notes and current image nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8332
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.1.0...v6.2.0
v6.3.0
This minor release includes a handful of fixes and enhancements.
Support for multiple reference images for FLUX Kontext
You may now use multiple ref images when using FLUX Kontext on the Generate, Canvas and Workflows tabs.
On the Generate and Canvas tabs, the images are concatenated in image-space before being encoded.
This is done using the new Flux Kontext Image Prep node, which you can use in Workflows. Use it to resize an image to one of Kontext’s preferred sizes. If multiple images are added to its collection, they are concatenated horizontally. Pass the output of this node into a single Kontext Conditioning node, and then pass its output into the Denoise node.
If, for some reason, you want to use latent-space concatenation, you can do it like this:
- Add a `Flux Kontext Image Prep` node for each image
- Pass each of those to its own `Kontext Conditioning` node
- Collect the `Kontext Conditioning` nodes
- Pass the output collection to the Denoise node

The images will be concatenated in latent space by the Denoise node. It will not resize the images to Kontext's preferred dimensions. For best results, use the `Flux Kontext Image Prep` node, as described above, to prep your ref images.
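For intuition, image-space concatenation amounts to scaling the images to a common height and pasting them side by side before encoding. Here is a minimal sketch using Pillow; it is an illustration only, not Invoke's actual implementation, which additionally resizes the result to one of Kontext's preferred sizes:

```python
from PIL import Image

def concat_horizontal(images):
    """Scale each image to a common height, then paste them side by side.

    Conceptual sketch of image-space concatenation -- NOT the real
    Flux Kontext Image Prep node.
    """
    target_h = min(img.height for img in images)
    # Preserve each image's aspect ratio while matching the smallest height.
    scaled = [
        img.resize((round(img.width * target_h / img.height), target_h))
        for img in images
    ]
    canvas = Image.new("RGB", (sum(s.width for s in scaled), target_h))
    x = 0
    for s in scaled:
        canvas.paste(s, (x, 0))
        x += s.width
    return canvas

# Example: a 64x32 and a 128x64 image become one 128x32 strip
# (the second image is scaled down to height 32, i.e. width 64).
a = Image.new("RGB", (64, 32), "red")
b = Image.new("RGB", (128, 64), "blue")
strip = concat_horizontal([a, b])
```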
Studio state is stored in the database
Studio state includes all generation settings, Canvas layers, and Workflow Editor state - in other words, everything that you lose when you click Reset Web UI. Studio state does not include models, images, boards, saved workflows, etc.
Previously, this data was stored in the web browser or Launcher’s built-in UI. In v6.3.0, it is stored in the database, allowing your Studio state to follow you across browsers and devices.
For example, let’s say you were working in Canvas from the Launcher’s UI. You need to switch computers, so you enable Server Mode in the launcher and open Invoke on the other computer.
Previously, your Studio would load up with default settings on the other computer. In v6.3.0, you will instead pick up right where you left off on the first computer.
On the first launch of Invoke after updating to v6.3.0, we will migrate Studio state stored in the browser to the database, so you shouldn’t lose anything.
Added setting to disable picklescan
Invoke uses picklescan to scan certain unsafe model types for malware during installation and loading.
Sometimes, picklescan is unable to scan models because their internal structure is broken. It is possible that these unscannable models will still work fine, and have no malware, but until now, there was no way to tell Invoke to ignore detections or scan errors.
You may now dangerously, unsafely opt out of model scanning by adding this line to your `invokeai.yaml` config file:

```yaml
# 😱 scary!
unsafe_disable_picklescan: true
```

We strongly suggest you do not disable picklescan. Disable it at your own risk.
Enhancements
- Support for multiple reference images for FLUX Kontext on Generate, Canvas and Workflows tabs. Ref images are concatenated in image space.
- New `Flux Kontext Image Prep` node. Use it to resize an image to one of Kontext's preferred sizes. If multiple images are added to its collection, they are concatenated horizontally.
- When Snap to Grid on Canvas is disabled, hold Ctrl/Cmd to temporarily enable coarse snapping. Hold Ctrl/Cmd+Shift to temporarily enable fine snapping. Thanks @Ar7ific1al!
- Update styling and layout for image comparison.
- Added visual indicator on node fields when they are added to the form. The field names are in blue with a small link icon.
- Added setting to disable `picklescan`.
- Added FLUX.1 Krea dev to starter models (full-fat and quantized).
- Added a not-broken anime upscaler model to starter models.
- Studio state is stored on the server.
- Added hotkey `shift+n` to fit bbox to layers. It does the same thing as the button in the Canvas toolbar.
- Added a button to the ref image display to use that image's size for generation. This is useful for FLUX Kontext, where you often want to generate at the same or a similar size as a reference image.
- Updated translations. Thanks @Harvester62 and @Linos1391!
Fixes
- Fix issue where model filenames with periods were not handled correctly.
  - This fixes the error `DuplicateModelException: A model with path 'flux/main/FLUX.safetensors' is already installed.`
- Fix issue where model installation required 2x the disk space the model actually needed. We now move - not copy - from the download temp dir to the final destination.
- Metadata not recorded for API model generations.
- Queue count badge not hidden when left panel is collapsed.
- Fix an issue where canceling a queue item didn’t clear its progress image.
- Fix an issue where viewer could briefly show the last-selected image between the last progress image being received, and its output image rendering.
- Add handling for a rare race condition where we get socket events for a queue item after it has completed.
- Add handling for a common race condition where queue status network requests complete after queue events optimistically update the counts, often resulting in the little yellow queue count badge being incorrect.
- Fix an issue where intermediate images could trigger changes to gallery view.
- Progress image not hiding when a generation fails or is canceled, when gallery auto-switch is disabled.
- Awkward flash of incorrectly-sized image when starting image comparison.
- Fix an issue where gallery auto-scroll could not work during an image loading race condition.
- Prevent creating a new canvas while staging, which could bork your existing canvas session.
- Fix an issue where the `Reset Canvas Layers` button also reset the bbox.
- Hide the `Reset Canvas Layers` button when not on the canvas.
- Fix visual overflow with very long board names.
Internal/Dev
- UI logging now includes the source code filename of the logger, making troubleshooting much easier for UI bugs.
- All redux state is modeled with zod schemas. Rehydrated state is validated against the schemas before it makes it into the browser, preventing some (very rare) errors.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- chore: prep for v6.2.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8334
- feat: server-side client state persistence by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8314
- build(ui): add vite plugin to add relative file path to logger context by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8344
- fix(ui): progress tracking fixes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8345
- fix(ui): connect metadata to output node for ext api nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8348
- fix(ui): queue count badge renders when left panel collapsed by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8350
- fix(ui): progress image does not hide on viewer with autoswitch disabled by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8349
- fix(app): handle model files with periods in their name by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8352
- fix(app): move (not copy) models from install tmpdir to destination by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8351
- build(ui): export loading component by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8356
- fix(ui): gallery slice rehydration error by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8354
- fix(docker) rocm 6.3 based image by @heathen711 in https://github.com/invoke-ai/InvokeAI/pull/8152
- feat: client state persistence updates by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8355
- Add temporary snapping while Snap to Grid is off by holding CTRL for 64px snap or CTRL+SHIFT for 8px snap by @Ar7ific1al in https://github.com/invoke-ai/InvokeAI/pull/8357
- feat: support multiple ref images for flux kontext by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8359
- feat(ui): zhoosh image comparison ui by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8364
- feat(ui): add visual indicator when input field is added to form by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8365
- refactor: client state again by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8363
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8362
- feat(ui): add migration path for client state from IndexedDB to server-backed storage by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8366
- chore: prep for v6.3.0rc1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8367
- fix(ui): add image name data attr to gallery placeholder image elements by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8369
- feat(ui): add missing translations by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8371
- feat(ui): support disabling roarr output styling via localstorage by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8370
- flux kontext multi-ref image improvements by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8386
- chore: bump version to v6.3.0rc2 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8387
- multi-ref image support for flux kontext API by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8389
- fix graph building for flux kontext API multi-image prep by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8390
- feat(app): add setting to disable picklescan by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8396
- feat(ui): prevent creating new canvas when staging by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8394
- fix(ui): reset session button actions by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8393
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8368
- fix(ui): overflow w/ long board names by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8392
- Add reset bbox to canvas hotkey by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/8388
- feat(mm): change anime upscaling model to one that doesn’t trigger picklescan by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8395
- feat(ui): add button to ref image to recall size & optimize for model by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8391
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.2.0...v6.3.0
v6.4.0
This release includes a handful of fixes and enhancements.
Enhancements
Shout-out to @csehatt741 for knocking out some great QoL improvements. Thank you!
- Canvas Bbox visibility can be toggled with `shift+o`. Thanks @csehatt741!
- Nodes with execution errors are highlighted red. Thanks @csehatt741!
- Prevent a field from being added to Workflow Builder forms multiple times. Thanks @csehatt741!
- Support recall of CLIP Skip metadata. Thanks @csehatt741!
- Fixed some issues with model install paths.
- Tweaked state persistence strategy - now debounced to 300ms instead of throttled to 2000ms. This should reduce stutters while doing things like panning around the Canvas.
- SDXL Style prompts have been removed from the Generate, Canvas and Upscaling tabs. This rarely-used setting was unintuitive at best. You can still use it in Workflows, but we are removing this footgun from the linear UI tabs.
- Prompt and seed metadata may now be recalled on the Upscaling tab.
- The buttons to download potentially very large starter model bundles show a confirmation dialog before starting the download. Thanks @csehatt741!
- Merged layers are inserted in the right spot in the layers panel. Thanks @csehatt741!
- Added a button to the image context menu and viewer toolbar to locate an image in the gallery. The image's board is selected and the image is scrolled into view. Thanks @csehatt741!
- Support FLUX PEFT LoRAs with a `base_model.model` key prefix.
- Improved VAE encode VRAM usage.
- Updated translations. Thanks @Harvester62!
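The debounce-vs-throttle distinction above is the key to the reduced stutter: a throttle fires on a fixed cadence even while you are actively panning, while a trailing-edge debounce waits for a pause in activity. Here is a minimal sketch in Python; Invoke's actual persistence logic lives in the TypeScript frontend, so this is purely illustrative:

```python
import threading
import time

def debounce(wait_s: float):
    """Decorator: run the wrapped function only after `wait_s` seconds
    have passed with no further calls (trailing-edge debounce)."""
    def wrap(fn):
        timer = None
        lock = threading.Lock()

        def wrapped(*args, **kwargs):
            nonlocal timer
            with lock:
                if timer is not None:
                    timer.cancel()  # a new call resets the countdown
                timer = threading.Timer(wait_s, fn, args, kwargs)
                timer.start()

        return wrapped
    return wrap

calls = []

@debounce(0.05)
def persist(state):
    calls.append(state)

for i in range(5):
    persist(i)  # five rapid "state changes" during e.g. a canvas pan

time.sleep(0.2)  # wait out the debounce window; only the last call fires
```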
Fixes
- Minor bug when concatenating Kontext ref images in latent space that could result in some images not being “seen”.
- Fit to Bbox functionality could result in the layer being sized correctly but positioned incorrectly when the bbox was not aligned to the 64px grid.
- Allow use of mouse in node title editable inputs.
Internal/Dev
- Fix AMD docker image build issue related to disk space. Thanks @heathen711!
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- chore: prep for v6.3.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8397
- bugfix(container-builder) Use the mnt space instead of root space for docker images by @heathen711 in https://github.com/invoke-ai/InvokeAI/pull/8361
- feat(ui): add toggle for bbox with hotkey by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8385
- feat(ui): outline error nodes in red by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8398
- fix(ui): same field cannot be added to form multiple times in workflow editor by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8403
- feat(mm): improved VAE encode VRAM usage by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8414
- fix(mm): only add suffix to model paths when path is file by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8413
- feat(ui): debounce persistence instead of throttle by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8410
- fix(ui): upscaling prompt metadata by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8411
- fix(ui): input field error styling specificity by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8418
- feat(ui): add missing translation strings by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8412
- fix(mm): fail when model exists at path instead of finding unused new path by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8419
- Update NODES.md by @solkyoshiro in https://github.com/invoke-ai/InvokeAI/pull/8378
- chore: fix some comments by @jiangmencity in https://github.com/invoke-ai/InvokeAI/pull/7575
- feat(ui): remove SDXL style prompt from linear UI by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8417
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8404
- feat(ui)/clip skip by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8422
- feat(ui): confirmation before downloading starter bundle by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8427
- feat(ui): layer behaviour after merging by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8428
- feat(ui): locate in gallery image context menu by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8434
- tests: skip flaky MPS tests on CI by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8442
- fix(ui): update board totals when generation completes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8424
- fix(ui): export NumericalParameterConfig type by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8436
- refactor: estimate working vae memory during encode/decode by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8438
- Support PEFT Loras with Base_Model.model prefix by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/8433
- fix(ui): fit to bbox when bbox is not aligned to 64px grid by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8439
- fix(ui): prevent node drag when editing title by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8440
- chore: prep for v6.4.0rc2 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8441
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8432
- git: move test LoRA to LFS by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8446
New Contributors
- @csehatt741 made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8385
- @solkyoshiro made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8378
- @jiangmencity made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/7575
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.3.0...v6.4.0
v6.5.0
This release includes a handful of fixes and enhancements.
Enhancements
- Add an optional `Shuffle` button to float and integer fields in Workflow Builder forms. Thanks @csehatt741!
- Canvas color picker no longer changes the alpha of the color.
- When the bbox aspect ratio is locked, resizing the bbox from the Canvas will respect the locked status of the aspect ratio. Hold `shift` to temporarily invert the locked status:
  - When the aspect ratio is locked, holding `shift` while resizing the bbox will allow you to freely resize the bbox.
  - When the aspect ratio is not locked, holding `shift` while resizing the bbox will maintain the last aspect ratio of the bbox.
- When a node field is added to a Workflow Builder form, the `+` button to add it will now show a `-` and let you remove the field. Thanks @csehatt741!
- When changing a selection of images' board, the current board is hidden from the board drop-down. The items in the drop-down are now sorted alphabetically. Thanks @csehatt741!
- When using a model that doesn’t support reference images, they will be hidden. You can now Invoke without needing to disable them.
- When using a model that doesn’t support explicit width and height settings, they will be hidden.
Fixes
- Rare issue with HF tokens that could cause an error when downloading models from a protected HF repo immediately after setting the token in Invoke's Model Manager.
- Fix an issue with float field precision in the Workflow editor.
- Fix an error `AttributeError: module 'cv2.ximgproc' has no attribute 'thinning'`. Affected users should use the Launcher's Repair Mode to get the fix, otherwise the error will persist.
- Disable the color picker when using middle mouse to pan the Canvas.
- Minor issue related to gallery multi-select where the last-selected image didn’t show in the viewer.
- Prevent dragging and dropping a node field into the Workflow Builder if it has already been added once.
- Fix an issue where the last progress image for a Canvas generation would get stuck on the Viewer tab.
- Fix an issue where certain image loading errors in Canvas were not logged correctly.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- chore: prep for v6.4.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8447
- ci: add workflow to catch incorrect usage of git-lfs by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8452
- feat(ui): shuffle button on workflows by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8450
- docs: update quick start instructions & links by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8454
- feat(ui): do not sample alpha in Canvas color picker by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8448
- fix(ui): race condition when setting hf token and downloading model by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8456
- fix(ui): float input precision by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8455
- feat(app): vendor in `invisible-watermark` by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8468
- fix(ui): disable color picker while middle-mouse panning canvas by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8475
- fix(ui): toggle bbox visiblity translation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8476
- feat(ui): respect aspect ratio when resizing bbox on canvas by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8474
- feat(ui): remove input field from form button on node field by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8457
- fix(ui): respect direction of selection in Gallery by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8458
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8464
- chore: prep for v6.5.0rc1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8479
- feat(ui): bbox aspect ratio lock is always inverted by shift by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8477
- feat(ui): change board - sorting order of boards alphabetical by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8460
- Feat/same field multiple times added to form by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8480
- UI support for gemini 2.5 flash image by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8489
- update copy for API models without w/h controls by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8490
- fix(ui): progress image gets stuck on viewer when generating on canvas by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8483
- fix(ui): konva logging by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8493
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8485
- chore: prep for v6.5.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8492
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.4.0...v6.5.0
v6.5.1
This is a patch release, fixing a few high priority bugs.
Fixes
- Hard crash when generating with FLUX on Windows.
- Super tiny progress images on Canvas.
- Assorted Canvas issues, mostly around transparency.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- fix(ui): control layer transparency effect not working by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8496
- fix(ui): progress image renders at physical size by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8495
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8498
- fix(app): FLUX on Windows hard crash by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8494
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.5.0...v6.5.1
v6.6.0
This is a minor release, adding a few QoL improvements and fixes.
Enhancements
- Canvas Color Picker has foreground and background colors. Switch between them with `x`. Press `d` to reset them to black and white. Thanks @csehatt741!
- You can set a default weight for LoRAs in the Model Manager. When you add the LoRA, it will start at the default weight. Thanks @csehatt741!
- Canvas Brush/Eraser width renders an in-line slider when there is enough space instead of showing the slider in a popover.
- Updated translations. Thanks @Harvester62!
Fixes
- Always delete LoRAs when recalling all metadata. Thanks @csehatt741!
- Incompatible LoRAs being enabled prevents you from clicking Invoke.
- Fixed an issue where it was possible to drag a tab panel to another location in the UI on Chrome and Launcher (Firefox was unaffected).
- Internal file organization fix for docker builds.
- Fix an issue where progress images were super tiny (again).
- Fix an issue where no fallback was rendered in the viewer when no image is selected.
- Fix an issue where a single middle-mouse click on Canvas would activate the View tool (i.e. drag-to-pan), and you had to click again to deactivate it.
- Fix an issue in the Viewer where the last-generated image would briefly show after the current generation finishes.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- video by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8499
- do not show negative prompt or ref images on video tab by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8501
- handle large videos by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8503
- match screen capture button to the others by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8504
- fix(app): board count queries not getting categories as params by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8505
- Add ‘sd-2’ to supported negative prompt base models by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/8513
- fix(ui): remove LoRAs for recall use all by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8512
- feat(ui): add readiness checks for LoRAs by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8506
- fix(ui): move `getItemsPerRow` to frontend src dir by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8515
- chore(ui): bump dockview by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8516
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8507
- feat(ui): switchable foreground/background colors by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8510
- feat(ui): LoRA default weight by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8484
- ui(fix): remove video base models from image aspect/ratio logic by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8521
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8517
- chore: prep for v6.6.0rc1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8522
- fix(ui): fix situation where progress images are super tiny by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8524
- fix(ui): browser image caching cors race condition by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8523
- fix(ui): gallery selection issues by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8528
- fix(ui): stop dragging when user clicks mmb once by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8527
- fix(ui): prev image briefly showing in viewer as progress image “resolves” into output image by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8530
- chore: prep for v6.6.0rc2 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8531
- tidy(ui): translation cleanup and CI checks by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8536
- tidy,fix(ui): remove unused coords from params slice by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8534
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8533
- Revert “tidy(ui): translation cleanup and CI checks” by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8539
- feat(ui): slider for brush and eraser tool by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8525
- chore: prep for v6.6.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8538
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.5.1...v6.6.0
v6.7.0
This minor release includes improved object selection on Canvas, layer adjustments, prompt history and a handful of other enhancements and fixes.
Select Object v2
We’ve made some major improvements to object selection.
- Segment Anything v2 is now supported. You can choose between SAM1 and SAM2. We’ve found that SAM2 is much faster than SAM1, but often does not perform as well, so we left SAM1 as an option.
- You may now draw a box around the target object. The box doesn’t need to be exact - sometimes, you can get better results by making it a bit smaller than the target object. Points are still supported and can be used independently or as a refinement for a box.
- Holding `shift` while clicking creates an exclude point if you have include selected. If you have exclude selected, holding `shift` will instead create an include point.
- You can now provide a text prompt instead of a box and points. Use very simple language for best results. Internally, this uses Grounding DINO to identify the target.
Raster Layer Adjustments
Right-click a Raster Layer to add adjustments. Adjustments are non-destructive, though you can accept them to bake them into the layer.
You can adjust brightness, contrast, saturation, temperature, tint, and sharpness, or use the curves editor to adjust each channel independently.
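As a rough sketch of what a brightness/contrast adjustment computes per channel (illustrative only; Invoke's actual adjustment math and parameter ranges may differ):

```python
import numpy as np

def adjust(channel: np.ndarray, brightness: float = 0.0, contrast: float = 1.0) -> np.ndarray:
    """Apply a simple brightness/contrast adjustment to one 0-255 channel.
    Contrast scales values around the midpoint; brightness shifts them.
    (A sketch of the general technique, not Invoke's implementation.)"""
    out = (channel.astype(np.float32) - 127.5) * contrast + 127.5 + brightness
    return np.clip(out, 0, 255)
```

Because the output is a pure function of the untouched source pixels, an adjustment like this can be re-applied with new parameters or discarded at any time, which is what makes it non-destructive until you accept it.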
Thanks @dunkeroni for implementing this very useful feature.
Prompt History
There’s a new button in the Positive Prompt box for prompt history. Your last 100 unique prompts are stored for easy recall. You can search them, delete individual prompts, or clear the whole list.
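The behavior described above can be modeled as a small bounded, deduplicating list. This is an illustrative sketch, not Invoke's implementation; all names are hypothetical:

```python
class PromptHistory:
    """Bounded, deduplicated prompt history, newest first."""

    def __init__(self, max_size: int = 100):
        self.max_size = max_size
        self._prompts: list[str] = []

    def push(self, prompt: str) -> None:
        # Re-adding an existing prompt moves it to the top instead of duplicating it.
        if prompt in self._prompts:
            self._prompts.remove(prompt)
        self._prompts.insert(0, prompt)
        del self._prompts[self.max_size:]

    def search(self, term: str) -> list[str]:
        # Case-insensitive substring search over stored prompts.
        return [p for p in self._prompts if term.lower() in p.lower()]

    def delete(self, prompt: str) -> None:
        self._prompts.remove(prompt)

    def clear(self) -> None:
        self._prompts.clear()
```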
Enhancements
- Improved object selection on Canvas.
- Raster layer adjustments. Thanks @dunkeroni!
- Support for mathematical expressions in number input fields. Currently, these are only enabled for fields in the Workflow Editor (including Builder Forms). Thanks @csehatt741!
- Prompt history for Positive Prompt.
- Queue list now sorts with newest on top. You can reverse the sort to restore the previous ordering. Thanks @csehatt741!
- Updated translations. Thanks @Harvester62 @Linos1391!
Fixes
- Fixed an issue that prevented you from using LoRA weights outside the range -1 to 2.
- Fixed an issue where LoRA settings could be lost on refresh.
- Fixed an issue where LoRAs with weights outside the range -1 to 2 could not be recalled from metadata.
- Fixed an issue where popovers like the Canvas Settings popover were obscured by other UI elements.
- Fixed a path traversal vulnerability affecting the bulk downloads API.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- feat(ui): reverse queue list by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8488
- fix(ui): route metadata to gemini node by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8547
- fix(ui): LoRA number input min/max restored by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8542
- fix(app): path traversal via bulk downloads paths by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8548
- feat(ui): maths enabled on numeric input fields in workflow editor by @csehatt741 in https://github.com/invoke-ai/InvokeAI/pull/8549
- feat(ui): SAM2 Node & Integration by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/8526
- queue list: remove completed_at, restore field values by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8555
- ai(ui): add CLAUDE.md to frontend by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8556
- feat(ui): Raster Layer Color Adjusters by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8420
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8545
- chore: prep for v6.7.0rc1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8557
- fix(ui): extend lora weight schema to accept full range of weights by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8559
- feat(ui): simple prompt history by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8561
- fix(ui): render popovers in portals to ensure they are on top of other ui elements by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8568
- fix(ui): dedupe prompt history by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8567
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8563
- chore: prep for v6.7.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8569
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.6.0...v6.7.0
v6.8.0
This minor release includes a handful of fixes and enhancements.
Fixes
- Fixed an issue where accepting raster layer adjustments "baked" the layer's opacity into the layer.
- Corrected help text for non-in-place model installation. Previously, the help text said that a non-in-place model install would copy the model files. This is incorrect; it moves them into the Invoke-managed models dir.
- Fixed a failure to queue generations with an error like `Failed to Queue Batch / Unknown Error`.
Enhancements
- Added a crop tool. For now, it is only enabled for Global Ref Images.
  - Click the crop icon on the Ref Image preview to open the tool.
  - Adjust the crop box and click Apply to save the cropped image for that ref image.
  - To revert to the original image, open the crop tool, click Reset, then Apply.
- We’ll explore integrating this new tool elsewhere in the app in a future update.
- Improved Model Manager tab UI. Thanks @joshistoast!
- Keyboard shortcuts to navigate prompt history. Use `alt/option+up/down` to move through history.
- Support for the `NOOB-IPA-MARK1` IP Adapter. Thanks @Iq1pl!
Internal
- Support for dynamic model drop-downs in Workflow Editor. This change greatly reduces the amount of frontend code changes needed to support a new model type. Node authors may need to update their nodes to prevent warnings from being displayed. However, there are no breakages expected. See #8577 for more details.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- fix(ui): do not reset params state on studio init nav to generate tab by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8572
- feat(model manager): :lipstick: refactor model manager ui by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8564
- Prompt history shortcuts by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/8571
- feat(ui): crop tool by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8562
- chore: prep for v6.8.0rc1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8574
- fix(ui): allow scrolling in ModelPane by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8580
- added support for NOOB-IPA-MARK1 by @Iq1pl in https://github.com/invoke-ai/InvokeAI/pull/8576
- feat: dynamic model fields in workflow editor by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8577
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8575
- fix(ui): ref images for flux kontext not parsed correctly by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8587
- feat(nodes): better ui_type deprecations by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8586
- restore list_queue_items method by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/8583
- fix(ui): do not bake opacity when rasterizing layer adjustments by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8592
- fix(ui): correct the in-place install verbiage, add tooltip by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8593
- docs: add BiRefNet and Image Export to communityNodes.md by @veeliks in https://github.com/invoke-ai/InvokeAI/pull/8602
- chore: prep for v6.8.0rc2 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8581
- chore: prep for v6.8.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8604
New Contributors
- @Iq1pl made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8576
- @veeliks made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8602
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.7.0...v6.8.0
v6.8.1
This patch release fixes the `Exception in ASGI application` startup error that prevented Invoke from starting.
The error was introduced by an upstream dependency (fastapi). We’ve pinned the fastapi dependency to the last known working version.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- Fix(nodes): color correct invocation by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8605
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.8.0...v6.8.1
v6.9.0
This release focuses on improvements to Invoke’s Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.
On first run after installing this release, Invoke will do some data migrations:
- Run-of-the-mill database updates.
- Update some model records to work with internal Model Manager changes, described below.
- Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model’s UUID. Models outside the Invoke-managed models directory are not moved.
If you see any errors or run into any problems, please create a GH issue or ask for help in the `#new-release-discussion` channel of the Invoke discord.
Model Installation Improvements
Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.
Unknown Models
Previously, when this identification failed, we gave up on that model. If you had downloaded the model via Invoke, we deleted the downloaded file.
As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.
If the model still doesn’t work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.
Invoke-managed Models Directory
Previously, as a relic of times long past, Invoke’s internal model storage was organized in nested folders: `<models_dir>/<type>/<base>/model.safetensors`. Many moons ago, we didn’t have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.
As of this release, Invoke’s internal model storage has a normalized, flat directory structure. Each model gets its own folder, named by its unique key: `<models_dir>/<model_key_uuid>/model.safetensors`.
On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won’t be touched.
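For illustration, the old and new layouts look roughly like this (the helper function is hypothetical; only the `<models_dir>/<model_key_uuid>/` shape comes from the description above):

```python
from pathlib import Path
from uuid import uuid4

def model_storage_dir(models_dir: Path, model_key: str) -> Path:
    # One folder per model, named by its unique key (a UUID).
    return models_dir / model_key

# Old nested layout: models/<type>/<base>/model.safetensors
# New flat layout:   models/<uuid>/model.safetensors
key = str(uuid4())
path = model_storage_dir(Path("models"), key) / "model.safetensors"
```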
We understand this change may seem user-unfriendly at first, but there are good reasons for it:
- This structure eliminates the possibility of model name conflicts, which have caused numerous hard-to-fix bugs and errors.
- It reinforces that the internal models directory is Invoke-managed:
- Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
- Deleting models from this directory, or moving them within it, causes the database to lose track of them.
- It obviates the need to move models around when changing their type and base.
Refactored Model Identification system
Several months ago, we started working on a new API to improve model identification (aka “probing” or “classification”). This process involves analyzing model files to determine what kind of model each one is.
As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.
Model Identification Test Suite
Besides the business logic improvements, model identification is now fully testable!
When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.
Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.
This allows us to strip out the weights from model files, leaving only the model’s “skeleton” as a test case. The 70-model test suite is currently about 115MB but represents hundreds of GB of models.
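The stripping idea can be sketched with numpy arrays standing in for tensors (a simplified illustration, not Invoke's actual test tooling):

```python
import numpy as np

def strip_weights(state_dict: dict) -> dict:
    """Replace each tensor with its shape and dtype only. Identification
    logic inspects keys, shapes and dtypes, never the weight values, so
    this 'skeleton' is enough for a test case."""
    return {
        key: {"shape": tuple(value.shape), "dtype": str(value.dtype)}
        for key, value in state_dict.items()
    }

# A tiny fake state dict: a few bytes of metadata instead of GBs of weights.
sd = {"unet.down_blocks.0.weight": np.zeros((320, 4, 3, 3), dtype=np.float16)}
skeleton = strip_weights(sd)
```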
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
What’s Changed
- refactor: model manager v3 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8607
- tidy: docs and some tidying by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8614
- chore: prep for v6.9.0rc1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8615
- feat: reidentify model by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8618
- fix(ui): generator nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8619
- chore(ui): point ui lib dep at gh repo by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8620
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.8.1...v6.9.0
v6.10.0
This is the first InvokeAI Community Edition release since the closure of the commercial venture, and we think you will be pleased with the new features and capabilities. This release introduces backend support for the state-of-the-art Z-Image Turbo image generation models, and multiple frontend improvements that make working with InvokeAI an even smoother and more pleasurable experience.
The Z-Image Turbo Model Family
Z-Image Turbo (ZiT) is a bilingual image generation model that combines high performance with a small footprint and excellent image generation quality. It excels at photorealistic image generation, renders both English and Chinese text accurately, and is easy to steer. The full model runs easily on consumer hardware with 16 GB of VRAM, while quantized versions will run on significantly smaller cards with some loss of precision.
With this release, InvokeAI runs almost all released versions of ZiT, including diffusers, safetensors, GGUF, FP8 and quantized versions. However, be aware that the FP8 scaled weights models are not yet fully supported and will produce image artifacts. In addition, InvokeAI supports text2image, image2image, ZiT LoRA models, ControlNet models, canvas functions and regional guidance. Image Prompts (IP) are not supported by ZiT, but similar functionality is expected when Z-Image Edit is publicly released.
To get started using ZiT, go to the Models tab and, from the Launchpad, select the Z-Image Turbo bundle to install all the available ZiT-related models and dependencies (roughly 35 GB in total). Alternatively, you can select individual models from the Starter Models tab, and search for “Z-Image.” The full and Q8 models will run on a 16 GB card. For cards with 6-8 GB of VRAM, choose the smaller quantized model, Z-Image Turbo GGUF Q4_K. Note that when using one of the quantized models, you will also need to install the standalone Qwen3 encoder and one of the Flux VAE models. This will be handled for you when you install a ZiT starter model.
When generating with these models it is recommended to use 8-9 steps and a CFG of 1. Be aware that due to ZiT’s strong prompt following, it does not generate as much image diversity as other models you may be used to. One way to increase image diversity is to create a custom workflow that adds noise to the Z-Image Text Encoder using @Pfannkuchensack’s Image Seed Variance Enhancer Node.
In addition to the default Euler scheduler for ZiT, we offer the more accurate but slower Heun scheduler, and a faster but less accurate LCM scheduler. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.
A big shout out to @Pfannkuchensack for his critical contributions to this effort.
New Workflow Features
We have two new improvements to the Workflow Editor:
- Workflow Tags: It is now possible to add multiple arbitrary text tags to your workflows. To set a tag on the current workflow, go to Details and scroll down to Tags. Enter a comma-delimited list of tags that describe your workflow, such as “image, bounding box”, and save. The next time you browse your workflows, you will see a series of checkboxes for all the unique tags in your workflow collection. Select the tag checkboxes individually or in combination to filter the workflows that are displayed. This feature was contributed by @Pfannkuchensack.
- Prompt Template Node: Another @Pfannkuchensack workflow contribution is a new Prompt Template node, which allows you to apply any of the built-in or custom prompt style templates to a prompt before passing it onward to generation.
Prompt Weighting Hotkeys
@joshistoast has added a neat feature for adjusting the weighting of words and phrases in the prompt. Simply select a word or phrase in the prompt textbox and press Ctrl-Up Arrow to increase the weight of the selection (by adding “+” marks) or Ctrl-Down Arrow to decrease the weighting.
Limitations: The prompt weighting does not work properly with numeric weights, nor with prompts that contain the `.and()` or `.blend()` functions. This will be fixed in the next point release.
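Conceptually, the hotkey rewrites the selected span by appending or removing weight marks. Below is a minimal sketch with a hypothetical function name, ignoring phrase grouping and numeric weights (the “-” behavior for down-weighting an unweighted selection is an assumption):

```python
def adjust_weight(text: str, start: int, end: int, increase: bool) -> str:
    """Append or remove '+' marks on the selected span of a prompt.
    Illustrative only; the real feature handles phrases and more cases."""
    selection = text[start:end]
    rest = text[end:]
    if increase:
        selection += "+"
    elif selection.endswith("+"):
        selection = selection[:-1]
    else:
        # Assumed behavior: down-weighting an unweighted span adds a '-'.
        selection += "-"
    return text[:start] + selection + rest
```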
Hotkey Editor
Speaking of hotkeys, @Pfannkuchensack and @joshistoast contributed a new user interface for editing hotkeys. Any of the major UI functions, such as kicking off a generation, opening or closing panels, selecting tools in the canvas, gallery navigation, and so forth, can now be assigned a key shortcut combination. You can also assign multiple hotkeys to the same function.
To access the hotkey editor, go to the Settings (gear) menu in the bottom left, and select Hotkeys.
Bulk Operations in the Model Manager
You can now select multiple models in the Model Manager tab and apply bulk operations to them. Currently the only supported operation is deleting unwanted models, but this feature will be expanded in the future to allow for model exporting, archiving, and other functionality.
This feature was contributed by @joshistoast, based on earlier work by @Pfannkuchensack.
Masked Area Extraction in the Canvas
It is now possible to extract an arbitrary portion of all visible raster layers that are covered by the Inpaint Mask. The extracted region is composited and added as a new raster layer. This allows for greater flexibility in the generation and manipulation of raster layers.
Thanks to @DustyShoe for this work.
PBR Maps
@blessedcoolant added support for PBR maps, a set of three texture images that can be used in 3D graphics applications to define a material’s physical properties, such as glossiness. To generate the PBR maps, simply right-click on any image in the viewer or gallery, and select “Filters -> PBR Maps”. This will generate PBR Normal, Displacement, and Roughness map images suitable for use with a separate 3D rendering package.
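As a rough illustration of the kind of computation behind a normal map (not Invoke's implementation), here is a minimal height-map-to-normal-map conversion:

```python
import numpy as np

def height_to_normal(height: np.ndarray) -> np.ndarray:
    """Derive a tangent-space normal map from a grayscale height map.
    The surface normal at each pixel is normalize(-dh/dx, -dh/dy, 1),
    remapped from [-1, 1] to [0, 1] for storage as an RGB image."""
    gy, gx = np.gradient(height.astype(np.float32))
    n = np.dstack((-gx, -gy, np.ones_like(height, dtype=np.float32)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n * 0.5 + 0.5
```

A flat height map yields the uniform color (0.5, 0.5, 1.0), the familiar light-blue of an unperturbed normal map.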
New FLUX Model Schedulers
We’ve also added new schedulers for FLUX models (both dev and schnell). In addition to the default Euler scheduler, you can select the more accurate but slow Heun scheduler, and the faster but less accurate LCM scheduler. Look for the selection under “Advanced Options” in the Text2Image settings panel, or in the FLUX Denoise node in the workflow editor. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.
Thanks to @Pfannkuchensack for this contribution.
SDXL Color Compensation
When performing SDXL image2image operations, the color palette changes subtly and the discrepancy becomes increasingly obvious after several such operations. @dunkeroni has contributed a new advanced option to compensate for this color drift when generating with SDXL models.
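A common way to compensate this kind of drift is to match each channel's mean and standard deviation back to the source image. The sketch below shows that general technique only; the actual SDXL Color Compensation option works inside the generation pipeline and is not this code:

```python
import numpy as np

def match_color_stats(output: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift each channel of `output` so its mean/std match `reference`.
    Both inputs are HxWx3 arrays in the 0-255 range."""
    result = output.astype(np.float32).copy()
    ref = reference.astype(np.float32)
    for c in range(result.shape[2]):
        out_c = result[..., c]
        std = out_c.std()
        if std > 0:
            # Normalize, then rescale to the reference channel's statistics.
            out_c = (out_c - out_c.mean()) / std
            out_c = out_c * ref[..., c].std() + ref[..., c].mean()
        else:
            # Constant channel: just shift the mean.
            out_c = out_c - out_c.mean() + ref[..., c].mean()
        result[..., c] = out_c
    return np.clip(result, 0, 255)
```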
Option to Release VRAM When Idle
InvokeAI tends to grab as much GPU VRAM as it needs and then hold on to it until the model cache is manually cleared or the server is restarted. This can be an annoyance for people who need the VRAM for other tasks. @lstein added a new feature that automatically clears the InvokeAI model cache and releases its VRAM after a set period of idleness. To activate this feature, add the configuration option `model_cache_keep_alive_min` to the `invokeai.yaml` configuration file. It takes a floating-point number giving the number of minutes of idleness before VRAM is released. For example, to release after 5 minutes of idleness, enter:
```yaml
model_cache_keep_alive_min: 5.0
```
Setting this value to 0 disables the feature. This is also the default if the configuration option is absent.
Bugfixes
Multiple bugs were caught and fixed in this release and are listed in the detailed changelog below.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
New Contributors
- @kyhavlov made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8613
- @aleyan made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8722
- @Copilot made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8693
Translation Credits
Many thanks to Riccardo Giovanetti (Italian) and RyoKoba (Japanese) who contributed their time and effort to providing translations of InvokeAI’s text.
What’s Changed
- Fix(nodes): color correct invocation by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8605
- chore: v6.8.1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8610
- refactor: model manager v3 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8607
- tidy: docs and some tidying by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8614
- chore: prep for v6.9.0rc1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8615
- feat: reidentify model by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8618
- fix(ui): generator nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8619
- chore(ui): point ui lib dep at gh repo by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8620
- chore: prep for v6.9.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8623
- fix(mm): directory path leakage on scan folder error by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8641
- feat: remove the ModelFooter in the ModelView and add the Delete Mode… by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8635
- chore(codeowners): remove commercial dev codeowners by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8650
- Fix to enable loading fp16 repo variant ControlNets by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8643
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8599
- (chore) Update requirements to python 3.11-12 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8657
- Rework graph.py by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8642
- Fix memory issues when installing models on Windows by @gogurtenjoyer in https://github.com/invoke-ai/InvokeAI/pull/8652
- Feat: SDXL Color Compensation by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8637
- feat(ui): improve hotkey customization UX with interactive controls and validation by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8649
- feat(ui): Color Picker V2 by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/8585
- Feature(UI): bulk remove models loras by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8659
- feat(prompts): hotkey controlled prompt weighting by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8647
- Feature: Add Z-Image-Turbo model support by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8671
- fix(ui): :bug: `HotkeysModal` and `SettingsModal` initial focus by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8687
- Feature: Add Tag System for user made Workflows by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8673
- Feature(UI): add extract masked area from raster layers by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8667
- Feature: z-image Turbo Control Net by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8679
- fix(z-image): Fix padding token shape mismatch for GGUF models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8690
- feat(starter-models): add Z-Image Turbo starter models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8689
- fix: CFG Scale min value reset to zero by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/8691
- feat(model manager): 💄 refactor model manager bulk actions UI by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8684
- feat(hotkeys): :sparkles: Overhaul hotkeys modal UI by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8682
- Feature (UI): add model path update for external models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8675
- fix support multi-subfolder downloads for Z-Image Qwen3 encoder by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8692
- Feature: add prompt template node by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8680
- feat(hotkeys modal): ⚡ loading state + performance improvements by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8694
- Feature/user workflow tags by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8698
- feat(backend): add support for xlabs Flux LoRA format by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8686
- fix(prompts): :bug: prompt attention behaviors, add tests by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8683
- Workaround for Windows being unable to remove tmp directories when installing GGUF files by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8699
- chore: bump version to v6.10.0rc1 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8695
- Feature: Add Z-Image-Turbo regional guidance by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8672
- (chore) Prep for v6.10.0rc2 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8701
- fix(model-manager): add Z-Image LoRA/DoRA detection support by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8709
- feat(prompts): :lipstick: increase prompt font size by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8712
- fix(ui): misaligned Color Compensation Option by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/8714
- fix(z_image): use unrestricted image self-attention for regional prompting by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8718
- fix(ui): make Z-Image model selects mutually exclusive by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8717
- Fix an issue with regional guidance and multiple quick-queued generations after moving bbox by @kyhavlov in https://github.com/invoke-ai/InvokeAI/pull/8613
- Implement PBR Maps Node by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/8700
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8724
- fix(docs): Bump python and github actions versions in mkdocs github action by @aleyan in https://github.com/invoke-ai/InvokeAI/pull/8722
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8725
- fix(model-manager): support offline Qwen3 tokenizer loading for Z-Image by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8719
- fix(gguf): ensure dequantized tensors are on correct device for MPS by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8713
- (chore) update WhatsNew translation text by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8727
- Feature/zimage scheduler support by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8705
- Update CODEOWNERS by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8728
- feat(flux): add scheduler selection for Flux models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8704
- Update CODEOWNERS by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8731
- Update CODEOWNERS by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8732
- fix(model-loaders): add local_files_only=True to prevent network requests by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8735
- docs(z-image) add Z-Image requirements and starter bundle by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8734
- Add configurable model cache timeout for automatic memory management by @Copilot in https://github.com/invoke-ai/InvokeAI/pull/8693
- chore: Remove extraneous log debug statements from model loader by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8738
- Feature: z-image + metadata node by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8733
- feat(z-image): add `add_noise` option to Z-Image Denoise by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8739
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.9.0...v6.10.0
v6.11.0
InvokeAI v6.11.0
This is a feature release of InvokeAI that adds support for the new FLUX.2 Klein image generation and editing models, along with a few small improvements and bug fixes. Before we get to the details, consider taking our 2026 User Engagement Survey. We want to know who you are, how you use InvokeAI, and what new features we can add to make the software even better.
Support for FLUX.2 Klein models
The FLUX.2 Klein family of models (F2K) is a set of fast, high-quality image generation and editing models. Invoke supports multiple versions, including both the fast-but-less-precise 4-billion-parameter (4B) model and the slower-but-more-accurate 9-billion-parameter (9B) model, as well as quantized versions suited to systems with limited VRAM. These models are small and fast; the fastest can render images in seconds with just four steps.
In addition to the usual features (txt2img, img2img, inpainting, outpainting), F2K offers a unique image editing feature that lets you make targeted modifications to an image or set of images using prompts like “Change the goblet in the king’s right hand from silver to gold,” or “Transfer the style from image 1 to image 2”.
Suggested hardware requirements are:
FLUX.2 Klein 4B - 1024×1024
- GPU: Nvidia 30xx series or later, 12GB+ VRAM (e.g. RTX 3090, RTX 4070). The FP8 version works with 8GB+ VRAM.
- Memory: At least 16GB RAM.
- Disk: 10GB for base installation plus 20GB for models (Diffusers format with encoder).
FLUX.2 Klein 9B - 1024×1024
- GPU: Nvidia 40xx series, 24GB+ VRAM (e.g. RTX 4090). FP8 version works with 12GB+ VRAM.
- Memory: At least 32GB RAM.
- Disk: 10GB for base installation plus 40GB for models (Diffusers format with encoder).
Getting Started with F2K
After updating InvokeAI, you will find a new FLUX.2 Klein starter pack in the Starter Models section of the Model Manager. It will download three files: the Q4-quantized version of F2K 4B, which is suitable for low-end hardware, and two supporting files: the FLUX.2 VAE and a quantized version of the FLUX.2 Qwen3 text encoder.
After installing the bundle, select the “FLUX.2 Klein 4B (GGUF Q4)” model in the Generation section of Invoke’s left panel. Also go to the Advanced section at the bottom of the panel and select the F2K VAE and text encoder models that were installed with the starter bundle. (If you don’t select these, you will get a warning message on your first generation telling you to do so.) Recommended generation settings are:
- Steps: 4-6
- CFG: 1-2
Modestly increasing the number of steps may increase accuracy somewhat. If you work with the Base versions of F2K (available from HuggingFace), increase the steps to >20 and the CFG to 3.5-5.0.
Text2img, img2img, inpainting and outpainting will all work as usual. InvokeAI does not currently support F2K LoRAs or ControlNets (there have not been many published so far). In addition, only the Euler sampler is currently available. Support for LoRAs and additional schedulers will be added in a future release.
Prompting with FLUX.2
Like ZiT, F2K’s text encoder works best when you provide it with long prose prompts that follow the framework Subject + Setting + Details + Lighting + Atmosphere. For example: “An elderly king is standing on a low dais in front of a crowded and chaotic banquet hall bursting with courtiers and noblemen. He is shown in profile, facing his noblemen, holding high a jeweled chalice of wine to toast the unification of his fiefdoms. This is a cinematic shot that conveys historical grandeur and a medieval vibe.”
F2K does not perform any form of prompt enhancement, so what you write is what the model sees. See FLUX.2 Prompting Guide for more guidance.
Image Editing
F2K provides an image editing mode that works like a souped-up version of Image Prompt (IP) Adapters. Drag-and-drop or upload an image to the Reference Image section of the Prompt panel. Then instruct the model on modifications you wish to make using active verbs. You may issue multiple instructions in the same prompt.
- Change the king’s chalice from silver to gold. Give him a crown, and grow him a salt-and-pepper beard.
- Change the image style to a scifi/fantasy vibe.
- Use an anime style and give the noblemen and courtiers brightly-colored robes.
F2K editing supports multiple reference images, letting you transfer visual elements (subjects, style and background) from one to another. When prompting over multiple images, refer to them in order as “image 1,” “image 2,” and so forth.
- Give the king in image 1 the crown that appears in image 2.
- Transfer the style of image 1 to image 2.
Dealing with multiple reference images is tricky. There is no way to adjust the weightings of each image, and so you will have to be explicit in the prompt about which visual elements you are combining. If you cannot get the effect you are looking for by modifying the prompt, you may find success by changing the order of images.
Also be aware that each image significantly increases the model’s VRAM usage. If you run into memory errors, use a smaller (quantized) model, or reduce the number and size of the reference images.
Other Versions of F2K Available in the Model Manager
To find additional supported versions of F2K, type “FLUX.2” into the Starter Models search box. This will show you the following types of files:
- FLUX.2 Klein 4B/9B (Diffusers) These are the full-size all-in-one diffusers versions of F2K which come bundled with the VAE and text encoder.
- FLUX.2 Klein 4B/9B These are standalone versions of the full-size F2K which require installation of separate VAE and text encoders. Note that the 4B and 9B models require different text encoders, “FLUX.2 Klein Qwen3 4B Encoder” and “FLUX.2 Klein Qwen3 8B Encoder” respectively. (Not a misprint: use the 9B F2K model with the 8B text encoder!)
- FLUX.2 Klein 4B/9B (FP8) These are the standalone versions quantized to 8 bits. The 4B model will run comfortably on machines with 8GB VRAM, while the 9B model will run on machines with 12GB or more. As with all quantized versions, there is a minor loss of generation accuracy.
- FLUX.2 Klein 4B/9B (Q4) These are standalone versions that have been quantized to 4 bits, resulting in very small and fast models that can run on cards with 6-8 GB VRAM.
There is only one F2K VAE, and it is the same one used by FLUX.1 and Z-Image Turbo. However, there are several text encoder options:
- FLUX.2 Klein Qwen3 4B Encoder Use this encoder with the F2K 4B versions. It also works with Z-Image Turbo.
- Z-Image Qwen3 Text Encoder (quantized) This is a Q6-quantized version of the text encoder that works with both F2K and ZiT. You may use it on smaller-memory systems to reduce swapping of models in and out of VRAM.
- FLUX.2 Klein Qwen3 8B Encoder Use this encoder with the F2K 9B versions. It is not compatible with ZiT.
You will find additional F2K models on HuggingFace and other model repositories, including the base models intended for fine-tuning and LoRA training. We have not exhaustively tested InvokeAI compatibility with all the available variants. Please report any incompatible models to InvokeAI Issues.
Many credits to @Pfannkuchensack for contributing F2K support.
Other Features in this Release
The other features in this release include:
Z-Image Turbo Variance Enhancer
ZiT tends to produce very similar images for a given prompt. To increase image diversity, @Pfannkuchensack contributed a Seed Variance Enhancer node which adds calibrated amounts of noise to the prompt conditioning prior to generation. You will find this feature in the Generation panel under Advanced Options. When activated, you will see two sliders, one for Variance Strength and the other for Randomize Percent. The first slider controls how much noise will be added to the conditioned prompt, and the second controls what proportion of the conditioning’s weights will be altered. Using the default randomization of 50% of the values, a variance strength of 0.1 will produce subtle variations, while a strength of 0.5 will produce very marked deviation from the prompt. Increasing the percentage of weights modified will also increase the level of variation.
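The mechanics can be sketched in a few lines. The function below is an illustrative approximation of the idea described above, not the actual node implementation: it perturbs a randomly chosen fraction of the conditioning values (the Randomize Percent) with Gaussian noise scaled by the Variance Strength.

```python
import numpy as np

def seed_variance(conditioning, strength=0.1, randomize_pct=0.5, seed=None):
    """Illustrative sketch of the Seed Variance Enhancer idea (not the real node):
    add Gaussian noise, scaled by `strength`, to a randomly chosen
    `randomize_pct` fraction of the prompt-conditioning values."""
    rng = np.random.default_rng(seed)
    out = conditioning.copy()
    mask = rng.random(out.shape) < randomize_pct          # which values get altered
    out[mask] += rng.normal(0.0, strength, size=mask.sum())
    return out
```

Larger `strength` values push the conditioning further from the original prompt embedding, which is why 0.1 gives subtle variation and 0.5 gives marked deviation.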
Improved Support for High-Resolution FLUX.1 Images
A new denoising tuning algorithm, introduced by @Pfannkuchensack, increases the accuracy of FLUX.1 generations at high resolutions. When a FLUX.1 model is selected, a new DyPE option will appear in the Generation panel. Its settings are Off (the default) to disable the algorithm, Auto to automatically activate DyPE when rendering images greater than 1536 pixels in either dimension, and 4K Optimized to activate the algorithm with parameters that are tuned for 4K images. Note that if you do not have sufficient VRAM to generate 4K images, this feature will not help you generate them. Instead, generate a smaller image and use Invoke’s Upscaling feature.
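The activation logic described above can be summarized in a small sketch. This paraphrases the release notes rather than the actual implementation; the 1536 px threshold is taken from the Auto behavior described.

```python
def dype_active(width, height, setting="Off", threshold=1536):
    """Sketch of when DyPE engages, per the description above:
    'Off' disables it, '4K Optimized' forces it with 4K-tuned parameters,
    and 'Auto' enables it only for images over `threshold` px in either
    dimension."""
    if setting == "Off":
        return False
    if setting == "4K Optimized":
        return True
    if setting == "Auto":
        return max(width, height) > threshold
    raise ValueError(f"unknown DyPE setting: {setting}")
```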
Canvas high level transform smoothing
Another improvement contributed by @DustyShoe: The Canvas raster layer transform operation now supports multiple types of smoothing, thereby reducing the number of artifacts when an area is upscaled.
Text Search and Highlighting in the Image Metadata Tab
The Image Viewer’s info (🛈) tab now has a search field that allows you to rapidly search and highlight text in image metadata, details, workflow and generation graph. In addition, the left margin of the metadata display has been widened to make the display more readable.
Thanks to @DustyShoe for this improvement.
Bugfixes
Several bugs were caught and fixed in this release and are listed in the detailed changelog below. Thanks to first-time contributors @kyhavlov and @aleyan for the bugs they caught and fixed.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
Translation Credits
Many thanks to the following language translators who contributed to this release: @Harvester62 (Italian) and @DustyShoe (Russian).
Also many thanks to Weblate for granting InvokeAI a free Open Source subscription to use its translation management service.
What’s Changed
- Fix(UI): Canvas numeric brush size by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8761
- Feat(UI): Canvas high level transform smoothing by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8756
- chore(CI/CD): Remove codeowners from /docs directory by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8737
- feat(z-image): add Seed Variance Enhancer node and Linear UI integration by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8753
- fix(model_manager): prevent Z-Image LoRAs from being misclassified as main models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8754
- Add user survey section to README by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8766
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8767
- Limit automated issue closure to bug issues only by @Copilot in https://github.com/invoke-ai/InvokeAI/pull/8776
- Feat(UI): Search bar in image info code tabs and add vertical margins for improved UX in Recall Parameters tab. by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8786
- feat(flux2): add FLUX.2 klein model support by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8768
- Feature: Add DyPE (Dynamic Position Extrapolation) support to FLUX models for improved high-resolution image generation by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8763
- fix(model_manager): detect Flux1/2 VAE by latent space dimensions instead of filename by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8790
- Prep for 6.11.0.rc1 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8771
- Fix ref_images metadata format for FLUX Kontext recall by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8791
- Add input connectors to the FLUX model loader by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8785
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8797
- fix(ui): use proper FLUX2 latent RGB factors for preview images by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8802
- fix(ui): allow guidance slider to reach 1 for FLUX.2 Klein by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8800
- Add new model type integration guide by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8779
- [i18n]: Fix weblate merge errors by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8805
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8804
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8806
- Release Workflow: Fix workflow edge case by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8792
- Documentation: InvokeAI PR review and merge policy by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8795
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8807
- fix(ui): improve DyPE field ordering and add ‘On’ preset option by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8793
- fix: Klein 2 Inpainting breaking when there is a reference image by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/8803
- fix(ui): remove scheduler selection for FLUX.2 Klein by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8808
- Fix(UI): Removed canvas’ blur filter clipping by expanding image bounds by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8773
- fix(ui): Flux 2 Model Manager default settings not showing Guidance by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/8810
- fix(ui): convert reference image configs when switching main model base by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8811
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8816
New Contributors
- @kyhavlov made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8613
- @aleyan made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8722
- @Copilot made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8693
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.10.0...v6.11.0
v6.11.1
InvokeAI 6.11.1
This is a bugfix release that corrects several image generation and user interface glitches:
- Fix FLUX.2 Klein image generation quality (@Pfannkuchensack)
- At higher step values and larger images, the FLUX.2 Klein models were generating image artifacts characterized by diagonals, cross-hatching and dust. This bug is now corrected.
- Restore denoising strength for outpaint mode (@Pfannkuchensack)
- Previously, when outpainting, the denoising strength was pinned at 1.0 rather than observing the value set by the user.
- Only show FLUX.1 VAEs when a FLUX.1 main model is selected (@Pfannkuchensack)
- This fix prevents the user from inadvertently selecting a FLUX.2 VAE when generating with FLUX.1.
- Reset ZiT seed variance toggle when recalling images without that metadata (@Pfannkuchensack)
- When remixing an image generated by Z-Image Turbo, the setting of the seed variance toggle (which increases image diversity) is now correctly restored.
- Improve DyPE area calculation (@JPPhoto)
- DyPE increases the quality of FLUX.1 models at higher resolutions. This fix improves how the algorithm’s parameters are automatically adjusted for image size.
- Remove duplicate DyPE preset dropdown in generation settings (@Pfannkuchensack)
- The DyPE dropdown in generation settings is no longer duplicated in the generation UI.
In addition to these bug fixes, new Russian translations were added by (@DustyShoe).
Check out the roadmap
To see what the development team has planned for forthcoming releases, check out the InvokeAI roadmap. Feature releases will be issued roughly monthly.
Take the user survey
And don’t forget to tell us who you are, what features you use, and what features you most want to see included in future releases. Take the InvokeAI 2026 User Engagement Survey and share your thoughts!
Credits
In addition to the authors of these bug fixes, many thanks to @blessedcoolant, @skunkworxdark, and @mickr777 for their time and patience testing and reviewing the code.
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.11.0...v6.11.1
v6.12.0 Latest
InvokeAI v6.12.0
This is a feature release of InvokeAI which provides support for multiple accounts on the same InvokeAI backend, enhanced support for the Z-Image and FLUX.2 models, multiple user interface enhancements, and new utilities for managing models.
Multi-User Mode (Experimental)
Have you ever wished you could share your InvokeAI instance with your friends, family or coworkers, but didn’t want to share your galleries or give everyone the ability to add and delete models? Now you can. InvokeAI 6.12 introduces an experimental multi-user mode that allows you to create separate user accounts with login names and passwords. Each account’s image boards, images, canvas state and UI preferences are separate from the others. Users with administrative privileges are allowed to perform system-wide tasks such as adding and configuring models and managing the session queue, while ordinary users are prevented from making this type of change.
See the Multi-User Mode User’s Guide for information on setting up and using this mode.
Multi-User mode was contributed by @lstein .
Enhanced Support for Z-Image and FLUX.2 Models
Z-Image Base — This version of InvokeAI adds support for the Z-Image Base model family. This is an undistilled version of Z-Image suitable for fine-tuning and LoRA training. It also provides a high level of image diversity while preserving excellent image quality.
FLUX.2 LoRAs — InvokeAI now supports a variety of FLUX.2 Klein LoRA formats.
Thanks to @Pfannkuchensack for his work on these enhancements.
Gallery User Interface Improvements
Paged Gallery Browsing — Paged gallery browsing is back. Go to image board settings and select “Use Paged Gallery View” to replace infinite gallery scrolling with page-by-page navigation.
Arrow Key Navigation — The arrow keys now work correctly when browsing a gallery. When the Viewer is in focus, the right and left arrow keys will navigate through the currently selected gallery. When the gallery thumbnails are in focus, the right/left/up/down arrows navigate among them.
@DustyShoe contributed these enhancements.
New Canvas Features
The Canvas gains several new tools, all added by @DustyShoe.
Text Tool — A new Text tool allows you to insert text in a variety of fonts, sizes and styles, move it around the canvas, and commit it to the raster layer.
Linear and radial gradient tools — These new tools add radial and linear gradients to the Canvas. The gradients use color transparency and the foreground/background colors to draw gradients in the direction of the mouse movement.
Invert Button for Regional Guidance Layers — You can now select any Regional Guidance region and select the “invert” button to exchange painted regions with unpainted ones and vice versa. As an added bonus, the invert button also works with Inpaint Masks.
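Conceptually, inverting a guidance or inpaint mask just swaps painted and unpainted regions; for an 8-bit mask this amounts to a complement. A minimal illustration of the idea (not Invoke’s code):

```python
import numpy as np

def invert_mask(mask):
    """Swap painted (255) and unpainted (0) regions of an 8-bit mask.
    Partially painted values are mirrored around the midpoint."""
    return 255 - mask
```

Applying the function twice returns the original mask, which matches the “and vice versa” behavior of the button.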
Layer Controls Moved — The controls for creating, duplicating and deleting canvas layers have moved from the top of the layers list to the bottom. This is more consistent with how other graphics packages position their layer controls and, we think, more intuitive. Long-term Canvas users may need to adjust to the new positioning.
Model and Gallery Management Improvements
A few improvements contributed by @lstein aim to make it easier to maintain the model and image databases.
Remove Orphaned Models — Over time, InvokeAI may accumulate unused “orphan” models in its models directory: files that, for one reason or another, have no entries in the models database and so take up disk space without being usable. A new “Sync Models” button in the Model Manager detects such orphaned models and offers to delete them. Developers and other users with access to the source code repository will also find a script, scripts/remove_orphaned_models.py, that does the same thing from the command line.
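A hedged sketch of what such an orphan scan might look like. This is not the actual scripts/remove_orphaned_models.py; the `models(path)` table and column names below are assumptions for illustration only.

```python
import sqlite3
from pathlib import Path

def find_orphaned_models(models_dir, db_path):
    """List entries in `models_dir` that have no row in the models database.
    The `models(path)` schema here is assumed, not InvokeAI's real schema."""
    con = sqlite3.connect(db_path)
    try:
        known = {row[0] for row in con.execute("SELECT path FROM models")}
    finally:
        con.close()
    # An entry is orphaned if neither its name nor its full path is known.
    return [p for p in sorted(Path(models_dir).iterdir())
            if p.name not in known and str(p) not in known]
```

The real script and the “Sync Models” button additionally offer to delete what they find; a read-only scan like this is the safe first step.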
Remove Dangling Models — The converse problem occurs when a model directory, or one of its files, was removed or renamed externally, causing it to be referenced in the models database but not be usable. There is now a “Missing Files” filter option in the Model Manager that will identify models that are damaged or deleted. You can then select the models you wish to delete and remove them from the database. In addition, the model selection menus will no longer display models that are missing or broken.
Gallery Maintenance Script — For users with access to the source code repository, the scripts/gallery_maintenance.py Python script will clean up dangling and orphaned gallery images. Dangling images appear in the Invoke gallery database but their files have been deleted from disk; orphaned images have files on disk but are missing from the database. A related database maintenance tool with more bells and whistles can be found in @Pfannkuchensack’s GitHub repository at https://github.com/Pfannkuchensack/sqlite_invokeai_db_tool.
Workflow Iterator Improvements
@JPPhoto fixed the way that workflow collections work. Previously when you created a Collection and passed it to an iterator, the items in the collection would be passed to downstream nodes in an unpredictable order. Now, the order of items in the collection is preserved, making complex workflows more predictable and reproducible.
Remote Controlling Invoke’s Generation Parameters
It is now possible to programmatically set Invoke’s generation parameters using a new REST endpoint. This allows a script or other external program to select the model, image size, seed, steps, LoRAs, reference images, and all the other parameters that go into a generation. See the project documentation for details.
@lstein added this feature.
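As a hypothetical illustration of driving this from a script: the endpoint path and payload schema below are placeholders, not the feature’s documented interface; consult the feature’s documentation for the real ones.

```python
import json
import urllib.request

def build_params_request(base_url, params):
    """Build a POST request carrying generation parameters as JSON.
    The /api/v1/params path is a placeholder for the real endpoint;
    send the request with urllib.request.urlopen()."""
    return urllib.request.Request(
        f"{base_url}/api/v1/params",   # placeholder path, see the docs
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

For example, `build_params_request("http://localhost:9090", {"seed": 42, "steps": 6})` prepares a request that an external tool could send before queuing a generation.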
Translations
Thanks to @Harvester62 for providing the Italian translations for this release.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don’t want to use the launcher, or need a headless install, you can follow the manual install guide.
Behind-the-Scenes Improvements
This release also contains a number of bug fixes and performance enhancements.
- Optimize cache locking in Klein text encoder — (@girlyoulookthebest) This addresses a race condition in the Model Cache which prematurely removed the FLUX.2 Klein encoder from memory.
- Run Text Encoder on CPU — (@lstein) This is an option available in the details panel of the Model Manager that allows you to force large text encoder models to run on CPU rather than GPU. This preserves VRAM for use by the denoiser steps and in some cases improves performance. Thanks to @girlyoulookthebest who found and fixed a bug in this feature.
- Fix IP Adapters losing their model path — (@Pfannkuchensack) Fixes the Model Manager’s “reidentify” function when run on IP Adapter models.
- Kill the server with a single ^C — (@lstein) When previous versions of Invoke were launched from a command-line terminal, two keyboard interrupts (Ctrl-C) were required to shut the server down completely. A single Ctrl-C now suffices.
- Persist the selected board and image across browser sessions — (@lstein) The last image board selected is now restored when you end a browser session and later restart it.
Detailed Change Log
What’s Changed
- Fix(nodes): color correct invocation by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8605
- chore: v6.8.1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8610
- refactor: model manager v3 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8607
- tidy: docs and some tidying by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8614
- chore: prep for v6.9.0rc1 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8615
- feat: reidentify model by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8618
- fix(ui): generator nodes by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8619
- chore(ui): point ui lib dep at gh repo by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8620
- chore: prep for v6.9.0 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/8623
- fix(mm): directory path leakage on scan folder error by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8641
- feat: remove the ModelFooter in the ModelView and add the Delete Mode… by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8635
- chore(codeowners): remove commercial dev codeowners by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8650
- Fix to enable loading fp16 repo variant ControlNets by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8643
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8599
- (chore) Update requirements to python 3.11-12 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8657
- Rework graph.py by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8642
- Fix memory issues when installing models on Windows by @gogurtenjoyer in https://github.com/invoke-ai/InvokeAI/pull/8652
- Feat: SDXL Color Compensation by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8637
- feat(ui): improve hotkey customization UX with interactive controls and validation by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8649
- feat(ui): Color Picker V2 by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/8585
- Feature(UI): bulk remove models loras by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8659
- feat(prompts): hotkey controlled prompt weighting by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8647
- Feature: Add Z-Image-Turbo model support by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8671
- fix(ui): :bug: `HotkeysModal` and `SettingsModal` initial focus by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8687
- Feature: Add Tag System for user made Workflows by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8673
- Feature(UI): add extract masked area from raster layers by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8667
- Feature: z-image Turbo Control Net by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8679
- fix(z-image): Fix padding token shape mismatch for GGUF models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8690
- feat(starter-models): add Z-Image Turbo starter models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8689
- fix: CFG Scale min value reset to zero by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/8691
- feat(model manager): 💄 refactor model manager bulk actions UI by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8684
- feat(hotkeys): :sparkles: Overhaul hotkeys modal UI by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8682
- docs(z-image) add Z-Image requirements and starter bundle by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8734
- Add configurable model cache timeout for automatic memory management by @Copilot in https://github.com/invoke-ai/InvokeAI/pull/8693
- chore: Remove extraneous log debug statements from model loader by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8738
- Feature: z-image + metadata node by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8733
- feat(z-image): add `add_noise` option to Z-Image Denoise by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8739
- (chore) Bump to version 6.10.0 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8742
- chore(release): bump development version to 6.10.0.post1 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8745
- Fix(model manager): Improve calculation of Z-Image VAE working memory needs by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8740
- Chore: Fix weblate merge conflicts by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8744
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8747
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8748
- Chore: Fix weblate rebase errors by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8750
- fix(invocation stats): Report delta VRAM for each invocation; fix RAM cache reporting by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8746
- Fix(UI): Error message for extract region by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8759
- Fix(UI): Canvas numeric brush size by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8761
- Feat(UI): Canvas high level transform smoothing by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8756
- chore(CI/CD): Remove codeowners from /docs directory by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8737
- feat(z-image): add Seed Variance Enhancer node and Linear UI integration by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8753
- fix(model_manager): prevent Z-Image LoRAs from being misclassified as main models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8754
- Add user survey section to README by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8766
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8767
- Limit automated issue closure to bug issues only by @Copilot in https://github.com/invoke-ai/InvokeAI/pull/8776
- Feat(UI): Search bar in image info code tabs and add vertical margins for improved UX in Recall Parameters tab. by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8786
- feat(flux2): add FLUX.2 klein model support by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8768
- Feature: Add DyPE (Dynamic Position Extrapolation) support to FLUX models for improved high-resolution image generation by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8763
- fix(model_manager): detect Flux1/2 VAE by latent space dimensions instead of filename by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8790
- Prep for 6.11.0.rc1 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8771
- Fix ref_images metadata format for FLUX Kontext recall by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8791
- Add input connectors to the FLUX model loader by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8785
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8797
- fix(ui): use proper FLUX2 latent RGB factors for preview images by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8802
- fix(ui): allow guidance slider to reach 1 for FLUX.2 Klein by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8800
- Add new model type integration guide by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8779
- [i18n]: Fix weblate merge errors by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8805
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8804
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8806
- Release Workflow: Fix workflow edge case by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8792
- Documentation: InvokeAI PR review and merge policy by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8795
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8807
- fix(ui): improve DyPE field ordering and add ‘On’ preset option by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8793
- fix: Klein 2 Inpainting breaking when there is a reference image by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/8803
- fix(ui): remove scheduler selection for FLUX.2 Klein by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8808
- Fix(UI): Removed canvas’ blur filter clipping by expanding image bounds by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8773
- fix(ui): Flux 2 Model Manager default settings not showing Guidance by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/8810
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8812
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8813
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8814
- fix(ui): convert reference image configs when switching main model base by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8811
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8816
- Chore(CI/CD): bump version to 6.11.0.post1 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8818
- feat(model_manager): add missing models filter to Model Manager by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8801
- Implemented ordering for expanded iterators by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8741
- fix(flux2): support Heun scheduler for FLUX.2 Klein models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8794
- fix(ui): only show FLUX.1 VAEs when a FLUX.1 main model is selected by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8821
- Feat(UI): Reintroduce paged gallery view as option by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8772
- fix(ui): reset seed variance toggle when recalling images without that metadata by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8829
- fix(ui): restore denoising strength for outpaint mode by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8828
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8830
- fix(ui): remove duplicate DyPE preset dropdown in generation settings by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8831
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8834
- Feat(UI): Add linear and radial gradient tools to canvas by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8774
- Feature(backend): Add user toggle to run encoder models on CPU by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8777
- Add dype area option by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8844
- fix(flux2): Fix FLUX.2 Klein image generation quality by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8838
- Fix DyPE Area Calculation by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8846
- chore(CI/CD): bump version to 6.11.1.post1 by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8852
- feat(model_manager): Add scan and delete of orphaned models by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8826
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8856
- fix(flux2): resolve device mismatch in Klein text encoder by @girlyoulookthebest in https://github.com/invoke-ai/InvokeAI/pull/8851
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8878
- Feature(app): Add an endpoint to recall generation parameters by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8758
- Feature: Canvas Blend and Boolean modes by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8661
- Feature(backend): Add a command-line utility for running gallery maintenance by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8827
- fix(flux2): apply BN normalization to latents for inpainting by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8868
- feat(z-image): add Z-Image Base (undistilled) model variant support by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8799
- Feature(UI): Add text tool to canvas by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8723
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8881
- Feature (UI): Add Invert button for Regional Guidance layers by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8876
- Fix: canvas text tool broke global hotkeys by @Copilot in https://github.com/invoke-ai/InvokeAI/pull/8887
- Fix Create Board API call by @hjohn in https://github.com/invoke-ai/InvokeAI/pull/8866
- Fix: Improve non square bbox coverage for linear gradient tool. by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8889
- Fix bare except clauses and mutable default arguments by @Mr-Neutr0n in https://github.com/invoke-ai/InvokeAI/pull/8871
- Feat(Model Manager): Add improved download manager with pause/resume partial download. by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8864
- Feature: flux2 klein lora support by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8862
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8905
- chore(CI/CD): Add pfannkuchensack to codeowners for backend by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8915
- Feature: Add FLUX.2 LOKR model support (detection and loading) by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8909
- Fix(Text-tool): Remove redundant Font tooltip on fonts selection by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8906
- feat(multiuser mode): Support multiple isolated users on same backend by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8822
- Fix(UI): Fixes broken “Cancel Current Item” button in left panel introduced in last commit. by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8925
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8924
- Fix: Replace deprecated huggingface_hub.get_token_permission() with whoami() by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8913
- fix: Filter non-transformer keys from Z-Image checkpoint state dicts by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8918
- Fix(MM): Fixed incorrect advertised model size for Z-Image Turbo by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8934
- fix(model-install): persist remote access_token for resume after restart by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8932
- feat(MM):model settings export import by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8872
- Fix: Shut down the server with one keyboard interrupt (#94) by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8936
- QoL: Persist selected board and most recent image across browser sessions by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8920
- Fix(backend): Fix race condition in download queue when concurrent jobs share same destination by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8931
- Prompt Attention Fixes by @joshistoast in https://github.com/invoke-ai/InvokeAI/pull/8860
- Fix: model reidentify losing path and failing on IP Adapters by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8941
- perf(flux2): optimize cache locking in Klein encoder to fix #7513 by @girlyoulookthebest in https://github.com/invoke-ai/InvokeAI/pull/8863
- fix(model_manager): detect Flux 2 Klein LoRAs in Kohya format with transformer-only keys by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8938
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8946
- Added SQL injection tests by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8873
- Add user management UI for admin and regular users (#106) by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8937
- Fix(gallery): Re-add image browsing with arrow keys by @DustyShoe in https://github.com/invoke-ai/InvokeAI/pull/8874
- Doc: update multiuser mode documentation by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8953
- docs: Fix typo in README.md - ‘easy’ should be ‘ease’ by @haosenwang1018 in https://github.com/invoke-ai/InvokeAI/pull/8948
- docs: Fix typo in contributing guide - remove extra ‘the’ by @haosenwang1018 in https://github.com/invoke-ai/InvokeAI/pull/8949
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8947
- Feature: Make strict password checking optional by @lstein in https://github.com/invoke-ai/InvokeAI/pull/8957
- fix: only delete individual LoRA file instead of entire parent directory by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8954
- fix(ui): IP adapter / control adapter model recall for reinstalled models by @Pfannkuchensack in https://github.com/invoke-ai/InvokeAI/pull/8960
- Fix/model cache Qwen/CogView4 cancel repair by @JPPhoto in https://github.com/invoke-ai/InvokeAI/pull/8959
- ui: translations update from weblate by @weblate in https://github.com/invoke-ai/InvokeAI/pull/8956
- Fix(UI): Replace boolean submenu icon with PiIntersectSquareBold by @dunkeroni in https://github.com/invoke-ai/InvokeAI/pull/8962
New Contributors
- @kyhavlov made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8613
- @aleyan made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8722
- @Copilot made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8693
- @girlyoulookthebest made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8851
- @hjohn made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8866
- @Mr-Neutr0n made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8871
- @haosenwang1018 made their first contribution in https://github.com/invoke-ai/InvokeAI/pull/8948
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v6.11.0...v6.12.0