
Releases

v4.2.7

v4.2.7 includes gallery improvements and some major features focused on upscaling.

Upscaling

We’ve added a dedicated upscaling tab, support for custom upscaling models, and some new nodes.

Thanks to @RyanJDick (backend implementation), @chainchompa (frontend) and @maryhipp (frontend) for working on this!

Dedicated Upscaling Tab

The new upscaling tab provides a simple yet powerful UI for Invoke’s MultiDiffusion implementation. It builds on the workflow released in v4.2.6, allowing memory-efficient upscaling to huge output image sizes.

We’re pretty happy with the results!


4x scale, 4x_NMKD-Siax_200k upscale model, Deliberate_v5 SD1.5 model, KDPM 2 scheduler @ 30 steps, all other settings default

Requirements

You need 3 models installed to use this feature:

  • An upscale model for the first pass upscale
  • A main SD model (SD1.5 or SDXL) for the image-to-image
  • A tile ControlNet model of the same model architecture as your main SD model

If you are missing any of these, you’ll see a warning directing you to the model manager to install them. You can search the starter models for upscale, main, and tile to get you started.


Tips

  • The main SD model architecture has the biggest impact on VRAM usage. For example, SD1.5 @ 2k needs just under 4GB, while SDXL @ 2k needs just under 9GB. VRAM usage increases a small amount as output size increases - SD1.5 @ 8k needs ~4.5GB while SDXL @ 8k needs ~10.5GB.
  • The upscale and main SD model choices matter. Choose models best suited to your input image or desired output characteristics.
  • Some schedulers work better than others. KDPM 2 is a good choice.
  • LoRAs - like a detail-adding LoRA - can make a big impact.
  • Higher Creativity values give the SD model more leeway in creating new details. This parameter controls denoising start and end percentages.
  • Higher Structure values tell the SD model to stick closer to the input image’s structure. This parameter controls the tile ControlNet.
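To make the last two tips concrete, here is a hypothetical sketch of how the sliders might map to the underlying parameters. The function name, the 0–10 slider range, and the linear curves are all assumptions for illustration; the notes above only state which parameters each slider controls.

```python
def tuning_params(creativity: float, structure: float) -> dict:
    """Hypothetical mapping from the Creativity/Structure sliders (assumed
    0-10 range) to the underlying parameters. Invoke's actual curves are
    not documented in these notes."""
    # Higher Creativity -> earlier denoising start, so the SD model
    # re-imagines more of the image.
    denoising_start = 1.0 - (creativity / 10.0) * 0.7
    # Higher Structure -> stronger tile ControlNet weight, so the output
    # sticks closer to the input image's structure.
    controlnet_weight = 0.3 + (structure / 10.0) * 0.7
    return {"denoising_start": round(denoising_start, 2),
            "controlnet_weight": round(controlnet_weight, 2)}

print(tuning_params(5, 5))
```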

Custom Upscaling Models

You can now install and use custom upscaling models in Invoke. The excellent spandrel library handles loading and running the models.

spandrel can do a lot more than upscaling - it supports a wide range of “image to image” models. This includes single-image super resolution like ESRGAN (upscalers) but also things like GFPGAN (face restoration) and DeJPEG (cleans up JPEG compression artifacts).

A complete list of supported architectures can be found here.

Note: We have not enabled the restrictively-licensed architectures, which are denoted with a + symbol in the list.

Installing Models

We’ve added a few popular upscaling models to the Starter Models tab in the Model Manager - search for “upscale” to find them.


You can install models found online via the Model Manager, just like any other model. OpenModelDB is a popular place to get these models. For most of them, you can copy the model’s download link and paste it into the Model Manager to install.

Nodes

Two nodes have been added to support processing images with spandrel - be that upscaling or any of the other tasks these models support.

  • Image-to-Image - Runs the selected model without any extra processing.
  • Image-to-Image (Autoscale) - Runs the selected model repeatedly until the desired scale is reached. This node is intended for upscaling models specifically, providing some useful extra functionality:
    • If the model overshoots the target scale, the final image will be downscaled to the target scale with Lanczos resampling.
    • As a convenience, the output image width and height can be fit to a multiple of 8, as is required for SD. This will only resize down, and may change the aspect ratio slightly.
    • If the model doesn’t actually upscale the image, the scale parameter will be ignored.
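The Autoscale node’s behaviour described above can be sketched in a few lines. This is an illustration of the logic as stated, not Invoke’s actual implementation:

```python
def autoscale_plan(width, height, model_scale, target_scale, fit_to_8=True):
    """Sketch of the Autoscale node's behaviour: run the model repeatedly
    until the target scale is reached, correct overshoot, fit to /8."""
    if model_scale <= 1:
        # A model that doesn't upscale can never reach the target,
        # so the scale parameter is ignored.
        return width, height, 0
    passes = 0
    w = width
    while w < width * target_scale:
        w *= model_scale
        passes += 1
    # Any overshoot is corrected by downscaling to the exact target
    # (the real node uses Lanczos resampling for this).
    out_w, out_h = width * target_scale, height * target_scale
    if fit_to_8:
        # Fit to a multiple of 8 (required for SD); only ever resizes down.
        out_w, out_h = (out_w // 8) * 8, (out_h // 8) * 8
    return int(out_w), int(out_h), passes
```

For example, a 4x model reaching an 8x target needs two passes, with the 16x intermediate result downscaled back to 8x.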

Gallery Improvements

Thanks to @maryhipp and @chainchompa for continued iteration on the gallery!

  • Cleaner boards UI.
  • Improved boards and image search UI.
  • Fixed issues where board counts didn’t update when images were moved between boards.
  • Added a “Jump” button to skip to a specific page of the gallery.

https://github.com/user-attachments/assets/b834cc36-995a-464e-af3f-68cf3b38818f

Other Changes

  • Enhancement: When installing starter models, the description is carried over. Thanks @lstein!
  • Enhancement: Updated translations.
  • Fix: Model unpatching when running on CPU, causing bad/no outputs.
  • Fix: Occasional visible seams on images with smooth textures, like skies. MultiDiffusion tiling now uses gradient blending to mitigate this issue.
  • Fix: Model names overflow the model selection drop-downs.
  • Internal: Backend SD pipeline refactor (WIP). This will allow contributors to add functionality to Invoke more easily. This will be behind a feature flag until the refactor is complete and tested. Thanks to @StAlKeR7779 for leading the effort, with major contributions from @dunkeroni and @RyanJDick.

Installation and Updating

To install or update to v4.2.7, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6post1...v4.2.7

v4.2.6post1

v4.2.6post1 fixes issues some users may experience with memory management and sporadic black image outputs.

Please see the v4.2.6 release for full release notes.

💾 Installation and Updating

To install or update to v4.2.6post1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.6...v4.2.6post1

v4.2.6

v4.2.6 includes a handful of fixes and improvements, plus three major changes:

  • Gallery updates
  • Tiled upscaling via MultiDiffusion
  • Checkpoint models work without conversion to diffusers

We’ve made some changes to the gallery, adding features, improving the performance of the app and reducing memory usage. The changes also fix a number of bugs relating to stale data - for example, a board not updating as expected after moving an image to it.

Thanks to @chainchompa and @maryhipp for working on this major effort.

Pagination & Selection

Infinite scroll is dead, long live pagination!

The gallery is now paginated. Selection logic has been updated to work with pagination. An indicator shows how many images are selected and allows you to clear the selection entirely. Arrow keys still navigate.

https://github.com/invoke-ai/InvokeAI/assets/4822129/128c998a-efac-41e5-8639-b346da78ca5b

The number of images per page is dynamically calculated as the panel is resized, ensuring the panel is always filled with images.
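The dynamic per-page calculation amounts to fitting a grid of thumbnails into the panel. A sketch, with an assumed thumbnail size and gap (the real values live in the frontend):

```python
def images_per_page(panel_w, panel_h, thumb=90, gap=8):
    """Sketch: how many thumbnail cells fit in the gallery panel.
    thumb and gap are assumed values, not Invoke's actual constants."""
    cell = thumb + gap
    cols = max(1, panel_w // cell)
    rows = max(1, panel_h // cell)
    return cols * rows
```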

Boards UI Refresh

The bulky tiled boards grid has been replaced by a scrollable list. The boards list panel is now a resizable, collapsible panel.

https://github.com/invoke-ai/InvokeAI/assets/4822129/2dd7c316-36e3-4f8d-9d0c-d38d7de1d423

Search for boards by name and images by metadata. The search term is matched against the image’s metadata as a string. We landed on full-text search as a flexible yet simple implementation after considering a few methods for search.
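The full-text match described above can be sketched as serializing the metadata and checking for the search term (case-insensitivity is an assumption on our part; the real implementation is in the backend):

```python
import json

def matches(search_term: str, metadata: dict) -> bool:
    """Sketch: match the search term against the image's metadata,
    serialized as a string."""
    return search_term.lower() in json.dumps(metadata).lower()
```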

https://github.com/invoke-ai/InvokeAI/assets/4822129/ebe2ecfe-edb4-4e09-aef8-212495b32d65

Archived Boards

Archive a board to hide it from the main boards list. This is purely an organizational enhancement. You can still interact with archived boards as you would any other board.

https://github.com/invoke-ai/InvokeAI/assets/4822129/7033b7a1-1cb7-4fa0-ae30-5e1037ba3261

Image Sorting

You can now change the sort for images to show oldest first. A switch allows starred images to be placed in the list according to their age, instead of always showing them first.

https://github.com/invoke-ai/InvokeAI/assets/4822129/f1ec68d0-3ba5-4ed0-b1e8-8e8bc9ceb957

Tiled Upscaling via MultiDiffusion

MultiDiffusion is a fairly straightforward technique for tiled denoising. The gist is similar to other tiled upscaling methods - split the input image up into tiles, process each independently, and stitch them back together. The main innovation of MultiDiffusion is to do this in latent space, blending the tensors together continuously. This results in excellent consistency across the output image, with no seams.
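Splitting one dimension into overlapping tiles can be sketched as below. This is illustrative only (Invoke’s actual tiling lives in the backend); the overlap region is where MultiDiffusion blends the latent tensors:

```python
def tile_coords(size, tile, overlap):
    """Sketch: (start, end) coordinates of overlapping tiles along one
    dimension. Neighbouring tiles share `overlap` units."""
    stride = tile - overlap
    starts = list(range(0, max(size - tile, 0) + 1, stride))
    if starts[-1] + tile < size:
        # Add a final tile flush with the edge so the whole image is covered.
        starts.append(size - tile)
    return [(s, s + tile) for s in starts]
```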

This feature is exposed as a Tiled MultiDiffusion Denoise Latents node, currently in beta. It works much the same as the OG Denoise Latents node. You can find an example workflow in the workflow library’s default workflows.

We are still deciding how to expose this in the linear UI. Most likely, we will expose it with very minimal settings. If you want to tweak it, use the workflow.

Thanks to @RyanJDick for designing and implementing MultiDiffusion.

How to use it

This technique is fundamentally the same as normal img2img. Appropriate use of conditioning and control will greatly improve the output. The one hard requirement is to use the Tile ControlNet model.

Besides that, here are some tips from our initial testing:

  • Use detail-adding or style LoRAs.
  • Use a base model best suited for the desired output style.
  • Prompts make a difference.
  • The initial upscaling method makes a difference.
  • Scheduler makes a difference. Some produce softer outputs.

VRAM Usage

This technique can upscale images to very large sizes without substantially increasing VRAM usage beyond what you’d see for a “normal” sized generation. The VRAM bottlenecks then become the first VAE encode (Image to Latents) and final VAE decode (Latents to Image) steps.

You may run into OOM errors during these steps. The solution is to enable tiling using the toggle on the Image to Latents and Latents to Image nodes. This allows the VAE operations to be done piecewise, similar to the tiled denoising process, without using gobs of VRAM.

There’s one caveat - VAE tiling often introduces inconsistency across tiles. Textures and colors may differ from tile to tile. This is a function of diffusers’ handling of VAE tiling, not the new tiled denoising process. We are investigating ways to improve this.

Takeaway: If your GPU can handle non-tiled VAE encode and decode for a given output size, use that for best results.

Checkpoint models work without conversion to diffusers

The required conversion of checkpoint format models to diffusers format has long been a pain point. The diffusers library now supports loading single-file (checkpoint) models directly, and we have removed the mandatory checkpoint-to-diffusers conversion step.

The main user-facing change is that there is no longer a conversion cache directory.

Major thanks to @lstein for getting this working.

📈 Patch Notes for v4.2.6

Enhancements

  • When downloading image metadata, graphs or workflows, the JSON file includes the image name and type of data. Thanks @jstnlowe!
  • Add clear_queue_on_startup config setting to clear problematic queues. This is useful for a rare edge case where your queue is full of items that somehow crash the app. Set this to true, and the queue will clear before it has time to attempt to execute the problematic item. Thanks @steffy-lo!
  • Performance and memory efficiency improvements for LoRA patching and model offloading.
  • Added simplified model installation methods to the Invocation API: download_and_cache_model, load_local_model and load_remote_model. These allow models to be used without adding them to the model manager. For example, we now use these methods to load ESRGAN models.
  • Support for probing and loading SDXL VAE checkpoints.
  • Updated gallery UI.
  • Checkpoint models work without conversion to diffusers.
  • When using a VAE in tiled mode, you may now select the tile size.
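The clear_queue_on_startup setting mentioned above lives in Invoke’s invokeai.yaml config file. A minimal fragment, showing only the relevant line:

```yaml
# invokeai.yaml - clear any stuck queue items on the next launch
clear_queue_on_startup: true
```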

Fixes

  • Fixed handling of the 0-step denoising process.
  • If a control image’s processed version is missing when the app loads, it is now re-processed.
  • Fixed an issue where a model’s size could be misreported as 0, possibly causing memory issues.
  • Fixed an issue where images - especially large images - may fail to delete.

Performance improvements

  • Improved LoRA patching.
  • Improved RAM <-> VRAM model transfer performance.

Internal changes

  • The DenoiseLatentsInvocation has had its internal methods split up to support tiled upscaling via MultiDiffusion. This included some amount of file shuffling and renaming. The invokeai package’s exported classes should still be the same. Please let us know if this has broken an import for you.
  • Internal cleanup, intending to eliminate circular import issues. There’s a lot left to do for this issue, but we are making progress.

💾 Installation and Updating

To install or update to v4.2.6, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.4...v4.2.6

v4.2.4

v4.2.4 brings one frequently requested feature and a host of fixes and improvements, mostly focused on performance and internal code quality.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

Image Comparison

The image viewer now supports comparing two images using a Slider, Side-by-Side or Hover UI.

To enter the comparison UI, select a compare image using one of these methods:

  • Right click an image and click Select for Compare.
  • Hold alt (option on mac) while clicking a gallery image to select it as the compare image.
  • Hold alt (option on mac) and use the arrow keys to select the comparison image.

Press C to swap the images and M to cycle through the comparison modes. Press Escape or Z to exit the comparison UI and return to the single image viewer.

When comparing images of different aspect ratios or sizes, the compare image will be stretched to fit the viewer image. Disable the toggle button at the top-left to instead contain the compare image within the viewer image.
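The “contain” sizing used when the stretch toggle is disabled is standard aspect-preserving fit math, sketched here for clarity (illustrative; not the frontend’s actual code):

```python
def contain_fit(img_w, img_h, box_w, box_h):
    """Sketch: scale the compare image uniformly so it fits inside the
    viewer image's box while preserving its aspect ratio."""
    scale = min(box_w / img_w, box_h / img_h)
    return round(img_w * scale), round(img_h * scale)
```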

https://github.com/invoke-ai/InvokeAI/assets/4822129/4bcfb9c4-c31c-4e62-bfa4-510ab34b15c9

📈 Patch Notes for v4.2.4

Enhancements

  • The queue item detail view now updates when the item finishes, displaying the finished (completed, failed or canceled) session.
  • Updated translations. @Harvester62 @Vasyanator @BrunoCdot @gallegonovato @Atalanttore @hugoalh
  • Docs updates. @hsm207 @cdpath

Fixes

  • Fixed a problem where using latents from the Blend Latents node for denoising with certain schedulers made images drastically different, even with an alpha of 0.
  • Fixed unnecessarily strict constraints for ControlNet and IP Adapter weights in the Control Layers UI. This prevented layers with weights outside the range of 0-1 from recalling.
  • Fixed error when editing non-main models (e.g. LoRAs).
  • Fixed the SDXL prompt concat flag not being set when recalling prompts.
  • Fixed model metadata recall not working when a model has a different key. This can happen if the model was uninstalled and reinstalled. When recalling, we fall back on the model’s name, base and type, if the key doesn’t match an existing model.
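The recall fallback in the last fix can be sketched as follows. The field names are assumptions for illustration, not Invoke’s actual model record schema:

```python
def resolve_model(wanted, installed):
    """Sketch: prefer an exact key match, else fall back to matching the
    model's name, base and type."""
    for m in installed:
        if m["key"] == wanted["key"]:
            return m
    for m in installed:
        if (m["name"], m["base"], m["type"]) == (wanted["name"], wanted["base"], wanted["type"]):
            return m
    return None
```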

Performance improvements

Big thanks to @lstein for these very impactful improvements!

  • Substantially improved performance when moving models between RAM and VRAM. For example, an SDXL model RAM -> VRAM -> RAM roundtrip tested at ~0.8s, down from ~3s. That’s about 75% faster!
  • Fixed bug with VRAM lazy offloading which caused inefficient VRAM cache usage.
  • Reduced VRAM requirements when using IP Adapter.

Internal changes

  • Modularize the queue processor.
  • Use pydantic models for events instead of plain dicts.
  • Improved handling of pydantic invocation unions.
  • Updated ML dependencies. @Malrama

💾 Installation and Updating

To install or update to v4.2.4, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.3...v4.2.4

v4.2.3

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.3

  • Spellcheck is re-enabled on prompt boxes

  • DB maintenance script removed from launcher (it currently does not work)

  • Reworked toasts. When a toast of a given type is triggered while another toast of that type is already displayed, the existing toast is updated instead of stacking a new one. The old behaviour was painful when you queued up many generations that all immediately failed, or installed a lot of models at once - you’d get a wall of toasts. Now you get only one.

  • Fixed: Control layer checkbox correctly indicates that it enables or disables the layer

  • Fixed: Disabling Regional Guidance layers didn’t work

  • Fixed: Excessive warnings in terminal when uploading images

  • Fixed: When loading a workflow, if an image, board or model referenced by one of its inputs no longer existed, the workflow would still execute and then fail.

    For example, say you save a workflow that has a certain model set for a node, then delete the model. When you load that workflow, the model is missing but the workflow doesn’t detect this. You can run the workflow, and it will fail when it attempts to use the nonexistent model.

    With this fix, when a workflow is loaded, we check for the existence of all images, boards and models referenced by the workflow. If something is missing, that input is reset.

  • Docs updates @hsm207

  • Translations updates @gallegonovato @Harvester62 @dvanzoerlandt
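The reworked toast behaviour above can be sketched in a few lines. This is illustrative Python; the real implementation is in the TypeScript frontend:

```python
toasts = {}  # one active toast per type

def show_toast(toast_type: str, message: str):
    """Sketch: if a toast of this type is already showing, update it and
    bump a counter instead of stacking a new toast."""
    if toast_type in toasts:
        toasts[toast_type]["message"] = message
        toasts[toast_type]["count"] += 1
    else:
        toasts[toast_type] = {"message": message, "count": 1}
```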

💾 Installation and Updating

To install or update to v4.2.3, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.2post1...v4.2.3

v4.2.2post1

This release brings many fixes and enhancements, including two long-awaited features: undo/redo in workflows and load workflow from any image.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.2post1

v4.2.2 had a critical bug related to notes nodes & missing templates in workflows. That is fixed in v4.2.2post1.

✨ Undo/redo in Workflows

Undo/redo is now available in the workflow editor. There’s some amount of tuning to be done with how actions are grouped.

For example, when you move a node around, do we allow you to undo each pixel of movement, or do we group the position changes as one action? When you are typing a prompt, do we undo each letter, word, or the whole change at once?

Currently, we group like changes together. It’s possible some things are grouped when they shouldn’t be, or should be grouped but are not. Your feedback will be very useful in tuning the behaviour so it un-does the right changes.
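Grouping like changes into one undo step can be sketched as a time-windowed merge. This is illustrative only (the real editor is TypeScript); the one-second window is an assumed value:

```python
import time

class GroupedUndo:
    """Sketch: consecutive changes of the same kind within `window`
    seconds collapse into a single undo entry."""
    def __init__(self, window=1.0):
        self.window = window
        self.stack = []  # (kind, value, timestamp)

    def push(self, kind, value, now=None):
        now = time.monotonic() if now is None else now
        if self.stack and self.stack[-1][0] == kind and now - self.stack[-1][2] < self.window:
            self.stack[-1] = (kind, value, now)  # merge into the open group
        else:
            self.stack.append((kind, value, now))

    def undo(self):
        return self.stack.pop()[1] if self.stack else None
```

With this scheme, dragging a node emits many "move" changes that merge into one undoable step, while switching to typing a prompt starts a new step.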

✨ Load Workflow from Any Image

Starting with v4.2.2, graphs are embedded in all images generated by Invoke. Images generated in the workflow editor also have the enriched workflow embedded separately. The Load Workflow button will load the enriched workflow if it exists, else it will load the graph.

You’ll see a new Graph tab in the metadata viewer showing the embedded graph.

Graph vs Workflow

Graphs are used by the backend and contain minimal data. Workflows are an enriched data format that includes a representation of the graph plus extra information, including things like:

  • Title, description, author, etc
  • Node positions
  • Custom node and field labels

This new feature embeds the graph in every image - including images generated on the Generation or Canvas tabs.
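The Load Workflow button’s preference order can be sketched as below. The metadata key names here are assumptions for illustration, not Invoke’s actual embedded-metadata keys:

```python
def load_workflow_from_image(embedded: dict):
    """Sketch: prefer the enriched workflow when present, otherwise
    fall back to the bare graph."""
    if embedded.get("invokeai_workflow"):
        return embedded["invokeai_workflow"]
    return embedded.get("invokeai_graph")
```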

Canvas Caveat

This functionality is available only for individual canvas generations - not the full composition. Why is that?

Consider what goes into a full canvas composition. It’s the product of any number of graphs, with any amount of drawing and erasing between each graph execution. It’s not possible to consolidate this into a single graph.

When you generate on canvas, your images for the given bounding box are added to a staging area, which allows you to cycle through images and commit or discard the image. The staging area also allows you to save a candidate generation. It is these images that can be loaded as a workflow, because they are the product of a single graph execution.

👷 Other Fixes and Enhancements

  • Min/max LoRA weight values extended (-10 to +10) @H0onnn
  • Denoising strength and layer opacity are retained when sending image to initial image @steffy-lo
  • SDXL T2I Adapter only blocks invoking when dimensions aren’t multiple of 32 (was erroneously 64)
  • Improved UX when manipulating edges in workflows
  • Connected inputs on nodes collapse, hiding the nonfunctional UI component
  • Use ctrl/cmd-shift-v to paste copied nodes with input edges
  • Docs updates @hsm207
  • Fix: visible seams when outpainting
  • Fix: edge case that could prevent workflows from loading if user hadn’t opened the workflows tab yet
  • Fix: minor jank/inefficiency with control adapter auto-process (control layers only)
  • Internal: utility to create graph objects without going crazy
  • Internal: rewritten connection validation logic for workflows with full test coverage
  • Internal: rewritten edge connection interactions
  • Internal: revised field type format

💾 Installation and Updating

To install or update to v4.2.2post1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.1...v4.2.2post1

v4.2.2

This release brings many fixes and enhancements, including two long-awaited features: undo/redo in workflows and load workflow from any image.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.2

✨ Undo/redo in Workflows

Undo/redo is now available in the workflow editor. There’s some amount of tuning to be done with how actions are grouped.

For example, when you move a node around, do we allow you to undo each pixel of movement, or do we group the position changes as one action? When you are typing a prompt, do we undo each letter, word, or the whole change at once?

Currently, we group like changes together. It’s possible some things are grouped when they shouldn’t be, or should be grouped but are not. Your feedback will be very useful in tuning the behaviour so it un-does the right changes.

✨ Load Workflow from Any Image

Starting with v4.2.2, graphs are embedded in all images generated by Invoke. Images generated in the workflow editor also have the enriched workflow embedded separately. The Load Workflow button will load the enriched workflow if it exists, else it will load the graph.

You’ll see a new Graph tab in the metadata viewer showing the embedded graph.

Graph vs Workflow

Graphs are used by the backend and contain minimal data. Workflows are an enriched data format that includes a representation of the graph plus extra information, including things like:

  • Title, description, author, etc
  • Node positions
  • Custom node and field labels

This new feature embeds the graph in every image - including images generated on the Generation or Canvas tabs.

Canvas Caveat

This functionality is available only for individual canvas generations - not the full composition. Why is that?

Consider what goes into a full canvas composition. It’s the product of any number of graphs, with any amount of drawing and erasing between each graph execution. It’s not possible to consolidate this into a single graph.

When you generate on canvas, your images for the given bounding box are added to a staging area, which allows you to cycle through images and commit or discard the image. The staging area also allows you to save a candidate generation. It is these images that can be loaded as a workflow, because they are the product of a single graph execution.

👷 Other Fixes and Enhancements

  • Min/max LoRA weight values extended (-10 to +10) @H0onnn
  • Denoising strength and layer opacity are retained when sending image to initial image @steffy-lo
  • SDXL T2I Adapter only blocks invoking when dimensions aren’t multiple of 32 (was erroneously 64)
  • Improved UX when manipulating edges in workflows
  • Connected inputs on nodes collapse, hiding the nonfunctional UI component
  • Use ctrl/cmd-shift-v to paste copied nodes with input edges
  • Docs updates @hsm207
  • Fix: visible seams when outpainting
  • Fix: edge case that could prevent workflows from loading if user hadn’t opened the workflows tab yet
  • Fix: minor jank/inefficiency with control adapter auto-process (control layers only)
  • Internal: utility to create graph objects without going crazy
  • Internal: rewritten connection validation logic for workflows with full test coverage
  • Internal: rewritten edge connection interactions
  • Internal: revised field type format

💾 Installation and Updating

To install or update to v4.2.2, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.1...v4.2.2

v4.2.1

This patch release brings a handful of fixes, plus docs and translation updates.

If you missed v4.2.0, please review its release notes to get up to speed on Control Layers.

📈 Patch Notes for v4.2.1

  • Fixed seamless not being perfectly seamless sometimes
  • Fixed Control Adapter processor cancellation jank
  • Fixed Depth Anything processor drop-down jank
  • Fixed Control Adapter layers preventing interactions with layers below them (e.g. cannot move a Regional Guidance layer)
  • Fixed two issues with model cover images
    • When editing a model, the cover image disappeared, but reappeared on refresh
    • When converting a model to diffusers, the cover image was lost forever
  • Fixed NSFW checker for new installs
  • Prevent errors when using T2I adapter
    • Invoking is blocked when image dimensions are not a multiple of 64
    • Control Adapter model select differentiates between ControlNet and T2I Adapter models
    • Reworked Invoke button tooltip describing why you may not Invoke when there is a configuration issue
  • Fixed translations for canvas layer select
  • Fixed Invoke button not showing loading state while queuing
  • Docs update @gogurtenjoyer
  • Translation updates @Harvester62 @Vasyanator @Pfannkuchensack @flower-elf @gallegonovato

💾 Installation and Updating

To install or update to v4.2.1, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.0...v4.2.1

v4.2.0

Since the very beginning, Invoke has been innovating where it matters for creatives. Today, we’re excited to do it again with Control Layers.

Invoke 4.2 brings a number of enhancements and fixes, with the addition of a major new feature - Control Layers.

🧪 Control Layers

Integrating some of the latest in open-source research, creatives can use Control Adapters, Image Prompts, and regional guidance to articulate and control the generation process from a single panel. With regional guidance, you can compose specific regions to apply a positive prompt, negative prompt, or any number of IP Adapters to be applied to the masked region. Control Adapters (ControlNet & T2I Adapters) and an Initial Image are visualized on the new Control Layers canvas.

You can read more about how to use Control Layers here - Control Layers

📈 Patch Notes for v4.2.0

Enhancements

  • Control Layers
  • Add TCD scheduler @l0stl0rd
  • Image Viewer updates — You can easily switch to the Image Viewer on the Generations tab by tapping the Z hotkey, or double clicking on any image in the gallery.

Major Changes

Also known as the “who moved my 🧀?” section, this list details where certain features have moved.

  • Image to Image: The Image to Image pipeline can be executed using Control Layers by adding an Initial Image layer.
  • Control Adapters and IP Adapters: These have been moved to the Control Layers tab — with the added benefit of being able to visualize your control adapter’s processed images easily!

Fixes

  • Fixed inpainting models on canvas @dunkeroni
  • Fixed IP Adapter starter models
  • Fixed bug where temp files (tensors, conditioning) aren’t cleaned up properly
  • Fixed trigger phrase form submit @joshistoast
  • Fixed SDXL checkpoint inpainting models not installing
  • Fixed installing models on external SSDs on macOS
  • Fixed Control Adapter processors’ image size constraints being overly restrictive

💾 Installation and Updating

To install or update to v4.2.0, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data will not be touched.

Missing models after updating from v3 to v4

See this FAQ.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.1.0...v4.2.0

v4.1.0

Invoke v4.1.0 brings many fixes and enhancements. The big-ticket item is Style and Composition IP Adapter.

🧪 Style and Composition IP Adapter (beta)

IP Adapter uses an image as a prompt. Images have two major components - their style and their composition - and you can choose either or both when using IP Adapter.

Use the new IP Adapter Method dropdown to select Full, Style, or Composition. The setting is applied per IP Adapter. You may need to delete and re-add active IP Adapters to see the dropdown.

[Comparison grid: No IP Adapter / IP Adapter Image / Full / Style Only / Composition Only]

“a fierce wolf in an alpine forest”, all using the same seed - note how the Full method turns the wolf into a mouse-canine hybrid

Shout-out to @blessedcoolant for this feature!

📈 Patch Notes for v4.1.0

Enhancements

  • Backend and nodes implementation for regional prompting and regional IP Adapter (UI in v4.2.0)
  • Secret option in Workflow Editor to convert a graph into a workflow. See #6181 for how to use it.
  • Assortment of UI papercuts
  • Favicon & page title indicate generation status @jungleBadger
  • Delete hotkey and button work with gallery selection @jungleBadger
  • Workflow editor perf improvements
  • Edge labels in workflow editor
  • Updated translations @Harvester62, @symant233, @Vasyanator
  • Updated docs @sarashinai
  • Improved torch device and precision handling

Fixes

  • multipleOf for invocations (for example, the Noise invocation’s width and height have a step of 8)
  • Poor quality “fried” refiner outputs
  • Poor quality inpainting with gradient denoising and refiner
  • Canvas images appearing in the wrong places
  • The little eye defaulting to off in canvas staging toolbar
  • Premature OOM on windows (see shared GPU memory FAQ)
  • ~1s delay between queue items
  • Wonky model manager forms navigating away from UI @clsn

Invocation API

  • New method to get the filesystem path of an image: context.images.get_path(image_name: str, thumbnail: bool) @fieldOfView

Internal

  • Improved knip config @webpro
  • Updated python deps @Malrama

💾 Installation and Updating

To install or update to v4.1.0, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data will not be touched.

Missing models after updating from v3 to v4

See this FAQ.

🐛 Known Issues

  • Inpainting models on Canvas sometimes kinda give up and output mush. The fix didn’t make it into v4.1.0; we aim to release a patch by the weekend.

What’s Changed

New Contributors

Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.0.4...v4.1.0
