
Hardware Requirements

Invoke runs on Windows 10+, macOS 14+ and Linux (Ubuntu 20.04+ is well-tested).

Hardware requirements vary significantly depending on model and image output size.

The requirements below are rough guidelines for best performance. GPUs with less VRAM typically still work, if a bit slower. Follow the Low VRAM Guide to optimize performance.

  • All Apple Silicon (M1, M2, etc.) Macs work, but 16GB+ memory is recommended.
  • AMD GPUs are supported on Linux only. VRAM requirements are the same as for Nvidia GPUs.
| Model Family    | Best resolution | GPU (series) | VRAM (min) | RAM (min) | Notes                                    |
| --------------- | --------------- | ------------ | ---------- | --------- | ---------------------------------------- |
| SD1.5           | 512x512         | Nvidia 10xx+ | 4GB        | 8GB       |                                          |
| SDXL            | 1024x1024       | Nvidia 20xx+ | 8GB        | 16GB      |                                          |
| FLUX.1          | 1024x1024       | Nvidia 20xx+ | 10GB       | 32GB      |                                          |
| FLUX.2 Klein 4B | 1024x1024       | Nvidia 30xx+ | 12GB       | 16GB      | FP8 works with 8GB+; Diffusers + encoder |
| FLUX.2 Klein 9B | 1024x1024       | Nvidia 40xx  | 24GB       | 32GB      | FP8 works with 12GB+; Diffusers + encoder |
| Z-Image Turbo   | 1024x1024       | Nvidia 20xx+ | 8GB        | 16GB      | Q4_K 8GB; Q8/BF16 16GB+                  |

Invoke requires Python 3.11 through 3.12. If you don't already have one of these versions installed, we suggest installing 3.12, as it will be supported for longer.

Check that your system has an up-to-date Python installed by running python3 --version in the terminal (Linux, macOS) or cmd/powershell (Windows).
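If you prefer to check programmatically, a minimal sketch of the version test (the 3.11–3.12 range comes from the requirement above; run it with the same interpreter you plan to use for Invoke):

```python
import sys

def is_supported(version_info=sys.version_info):
    """Return True if the interpreter is Python 3.11 or 3.12."""
    return (3, 11) <= (version_info[0], version_info[1]) <= (3, 12)

if __name__ == "__main__":
    status = "supported" if is_supported() else "NOT supported"
    print(f"Python {sys.version.split()[0]}: {status}")
```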

If you have an Nvidia or AMD GPU, you may need to manually install drivers or other support packages for things to work well or at all.

Run nvidia-smi on your system’s command line to verify that drivers and CUDA are installed. If this command fails, or doesn’t report versions, you will need to install drivers.

Go to the CUDA Toolkit Downloads and carefully follow the instructions for your system to get everything installed.

Confirm that nvidia-smi displays driver and CUDA versions after installation.
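The same confirmation can be scripted. This sketch runs nvidia-smi and pulls the driver and CUDA versions out of its banner line; the banner format ("Driver Version: ... CUDA Version: ...") is an assumption based on typical nvidia-smi output, so adjust the pattern if yours differs:

```python
import re
import subprocess

def parse_nvidia_smi(banner: str):
    """Extract (driver_version, cuda_version) from nvidia-smi output."""
    driver = re.search(r"Driver Version:\s*([\d.]+)", banner)
    cuda = re.search(r"CUDA Version:\s*([\d.]+)", banner)
    return (driver.group(1) if driver else None,
            cuda.group(1) if cuda else None)

if __name__ == "__main__":
    try:
        out = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
        print(parse_nvidia_smi(out.stdout))
    except FileNotFoundError:
        print("nvidia-smi not found - install Nvidia drivers first")
```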

An alternative to installing CUDA locally is to use the Nvidia Container Runtime to run the application in a container.

An out-of-date cuDNN library can greatly hamper performance on 30-series and 40-series cards. Check with the community on Discord to compare your it/s if you think you may need this fix.

First, locate the destination for the DLL files and make a quick back up:

  1. Find your InvokeAI installation folder, e.g. C:\Users\Username\InvokeAI\.
  2. Open the .venv folder, e.g. C:\Users\Username\InvokeAI\.venv (you may need to show hidden files to see it).
  3. Navigate deeper to the torch package, e.g. C:\Users\Username\InvokeAI\.venv\Lib\site-packages\torch.
  4. Copy the lib folder inside torch and back it up somewhere.
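The backup steps above can be sketched as a small helper. The `Lib\site-packages` layout is the usual one for a Windows venv, but the paths here are illustrative; point `venv` at your own InvokeAI .venv folder:

```python
import shutil
from pathlib import Path

def backup_torch_lib(venv: Path, backup_dir: Path) -> Path:
    """Copy <venv>/Lib/site-packages/torch/lib to <backup_dir>/torch-lib-backup."""
    lib = venv / "Lib" / "site-packages" / "torch" / "lib"
    dest = backup_dir / "torch-lib-backup"
    shutil.copytree(lib, dest)  # fails if dest already exists, so old backups are never clobbered
    return dest
```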

Next, download and copy the updated cuDNN DLLs:

  1. Go to the cuDNN downloads page.
  2. Create an account if needed and log in.
  3. Choose the newest version of cuDNN that works with your GPU architecture. Consult the cuDNN support matrix to determine the correct version for your GPU.
  4. Download the latest version and extract it.
  5. Find the bin folder, e.g. cudnn-windows-x86_64-SOME_VERSION\bin.
  6. Copy and paste the .dll files into the lib folder you located earlier. Replace files when prompted.
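Steps 5 and 6 amount to copying every .dll from the extracted cuDNN bin folder into torch's lib folder, overwriting what's there. A minimal sketch (the folder names are illustrative):

```python
import shutil
from pathlib import Path

def copy_cudnn_dlls(cudnn_bin: Path, torch_lib: Path) -> int:
    """Overwrite torch's cuDNN DLLs; returns the number of files copied."""
    count = 0
    for dll in cudnn_bin.glob("*.dll"):
        shutil.copy2(dll, torch_lib / dll.name)  # replaces the file if it already exists
        count += 1
    return count
```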

If, after restarting the app, this doesn't improve your performance, either restore your backup or re-run the installer to reset torch back to its original state.

Run rocm-smi on your system's command line to verify that drivers and ROCm are installed. If this command fails, or doesn't report versions, you will need to install them.
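A quick sketch of the same check in script form, testing whether rocm-smi is on PATH before trying to run it:

```python
import shutil
import subprocess

def rocm_available() -> bool:
    """True if the rocm-smi command is found on PATH."""
    return shutil.which("rocm-smi") is not None

if __name__ == "__main__":
    if rocm_available():
        subprocess.run(["rocm-smi"])
    else:
        print("rocm-smi not found - install ROCm drivers first")
```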

Go to the ROCm Documentation and carefully follow the instructions for your system to get everything installed.

Confirm that rocm-smi displays driver and ROCm versions after installation.

An alternative to installing ROCm locally is to use a ROCm docker container to run the application in a container.
