# Development Environment

## Initial Setup
1. Check the system requirements page to make sure your hardware is capable of running the desired models.
2. Fork the InvokeAI git repository, then clone your fork to your local machine. You can use either HTTPS or SSH, depending on your git configuration.
   This repository uses Git LFS to manage large files. To ensure all assets are downloaded:

   - Install git-lfs.
   - Enable automatic LFS fetching for this repository:

     ```sh
     git config lfs.fetchinclude "*"
     ```

   - Fetch files from LFS (this only needs to be done once; subsequent `git pull` will fetch changes automatically):

     ```sh
     git lfs pull
     ```
3. Create a directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
4. Follow the manual install guide, with some modifications to the install command:

   - Use `.` instead of `invokeai` to install from the current directory. You don’t need to specify the version.
   - Add `-e` after the `install` operation to make this an editable install, so your changes to the python code are reflected when you restart the Invoke server.
   - When installing the `invokeai` package, add the `dev`, `test` and `docs` package options to the package specifier. You may or may not need the `xformers` option - follow the manual install guide to figure that out. Your package specifier will be either `".[dev,test,docs]"` or `".[dev,test,docs,xformers]"`. Note the quotes!

   With the modifications made, the install command should look something like this:

   ```sh
   uv pip install -e ".[dev,test,docs,xformers]" --python 3.12 --python-preference only-managed --index=https://download.pytorch.org/whl/cu128 --reinstall
   ```
5. At this point, you should have Invoke installed, a venv set up and activated, and the server running. But you will see a warning in the terminal that no UI was found, and the server URL will not serve a UI. This is because the UI build is not distributed with the source code; you need to build it manually. Stop the running server instance.

   (If you only want to edit the docs, you can stop here and skip to the Documentation section below.)
6. Install the frontend dev toolchain, paying attention to versions.
7. Do a production build of the frontend:

   ```sh
   cd <PATH_TO_INVOKEAI_REPO>/invokeai/frontend/web
   pnpm i
   pnpm build
   ```
8. Restart the server and navigate to the URL. You should get a UI. After making changes to the python code, restart the server to see those changes.
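After the initial setup, a quick way to sanity-check the editable install is to ask Python where the package resolves from. This is a generic sketch, not an official verification step; it assumes the venv is active and the package imports as `invokeai`:

```python
import importlib.util

# For an editable install, the package should resolve into your repo checkout,
# not into the venv's site-packages directory.
spec = importlib.util.find_spec("invokeai")
if spec is None:
    print("invokeai is not importable - is the virtual environment active?")
else:
    print(spec.origin)
```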
## Backend Development

Experimenting with changes to the Python source code is a drag if you have to restart the server and reload multi-gigabyte models after every change.
For a faster development workflow, add the `--dev_reload` flag when starting the server. The server will watch for changes to all the Python files in the `invokeai` directory and apply those changes to the running server on the fly.
This will allow you to avoid restarting the server (and reloading models) in most cases, but there are some caveats; see the jurigged documentation for details.
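The core idea behind this kind of hot reload is patching live function objects in place, so existing references pick up the new body without a process restart. A toy sketch of the mechanism (an illustration only, not jurigged's actual implementation):

```python
# Toy illustration of hot code reload: swap a live function's code object so
# callers holding the original reference run the new body.
def greet() -> str:
    return "hello"

# Simulate an on-disk edit being re-executed and applied to the running process:
patched_ns: dict = {}
exec("def greet() -> str:\n    return 'hello, dev'", patched_ns)
greet.__code__ = patched_ns["greet"].__code__

print(greet())  # the original reference now runs the patched body
```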
## Testing

The backend tests require the `test` dependency group, which you installed during the initial setup.
See the Tests documentation for information about running and writing tests.
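Tests follow standard pytest conventions. A minimal sketch of what a test file looks like (the function and file names here are hypothetical, not part of the InvokeAI test suite):

```python
# tests/test_clamp_example.py - hypothetical example of a pytest-style test file
def clamp(value: float, lo: float, hi: float) -> float:
    """Clamp value into the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))

def test_clamp_inside_range():
    assert clamp(0.5, 0.0, 1.0) == 0.5

def test_clamp_below_range():
    assert clamp(-2.0, 0.0, 1.0) == 0.0

def test_clamp_above_range():
    assert clamp(3.0, 0.0, 1.0) == 1.0
```

pytest discovers any `test_*` functions in `test_*.py` files automatically, so a file like this would run with a plain `pytest` invocation.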
## Frontend Development

You’ll need to run `pnpm build` every time you pull in new changes to the frontend.
Another option is to skip the build and instead run the UI in dev mode:
```sh
cd invokeai/frontend/web
pnpm dev
```

This starts a vite dev server for the UI at `127.0.0.1:5173`, which you will use instead of `127.0.0.1:9090`.
The dev mode is substantially slower than the production build but may be more convenient if you just need to test things out. It will hot-reload the UI as you make changes to the frontend code. Sometimes the hot-reload doesn’t work, and you need to manually refresh the browser tab.
## Documentation

This documentation is built with Astro Starlight, which provides a pleasant developer environment for writing engaging documentation. Starlight itself is built on the Astro static site generator, a powerful and flexible framework for building fast, modern websites.
To contribute to the documentation, edit the markdown files in the `./docs` directory. You can run a local dev server with hot-reloading for changes made to the docs.
The relevant parts of the repository layout:

```
.
├── docs/
│   ├── public/
│   │   └── …
│   ├── src/
│   │   ├── content/
│   │   │   └── docs/          # docs content lives here
│   │   ├── lib/
│   │   │   ├── components/
│   │   │   │   └── …
│   │   │   └── utils/
│   │   │       └── …
│   │   └── content.config.ts
│   ├── scripts/
│   │   └── …
│   └── tests/
│       └── …
├── invokeai/
│   └── …
├── docker/
│   └── …
└── coverage/
    └── …
```
1. Navigate to the `docs` directory and install the dependencies:

   ```sh
   cd docs
   pnpm install
   ```

2. Start the dev server:

   ```sh
   pnpm run dev
   ```
## VSCode Setup

VSCode offers excellent tools for InvokeAI development, including a python debugger, automatic virtual environment activation, and remote development capabilities.
### Prerequisites

First, ensure you have the following extensions installed:
It’s also highly recommended to install the Jupyter extensions if you plan on working with notebooks:
### Configuration

Creating a VSCode workspace for working on InvokeAI is highly recommended, as it can hold InvokeAI-specific settings and configs.
1. Open the InvokeAI repository directory in VSCode
2. Go to `File` > `Save Workspace As` and save the workspace outside the repository
### Default Python Interpreter
To enable automatic virtual environment activation:
1. Open the command palette (`Ctrl+Shift+P` / `Cmd+Shift+P`) and run `Preferences: Open Workspace Settings (JSON)`
2. Add `python.defaultInterpreterPath` to your settings, pointing to your virtual environment’s python executable:

```json
{
  "folders": [
    { "path": "InvokeAI" },
    { "path": "/path/to/invokeai_root" }
  ],
  "settings": {
    "python.defaultInterpreterPath": "/path/to/invokeai_root/.venv/bin/python"
  }
}
```

Now, opening the integrated terminal or running python will automatically use your InvokeAI virtual environment.
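One way to confirm that a new integrated terminal picked up the right interpreter is to ask Python to report itself (the expected path in the comment assumes the example venv location above):

```python
import sys

# The reported interpreter should live inside your venv, e.g.
# /path/to/invokeai_root/.venv/bin/python
print(sys.executable)
```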
We use Python’s typing system in InvokeAI. PR reviews will include checking that types are present and correct.
Pylance provides type checking in the editor. To enable it:
1. Open a Python file
2. Look along the status bar in VSCode for `{ } Python`
3. Click the `{ }`
4. Turn type checking on (Basic is fine)
You’ll now see red squiggly lines where type issues are detected. Hover your cursor over the indicated symbols to see what’s wrong.
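For example, with type checking enabled, Pylance will flag a mismatched argument type in code like this (a generic illustration, not InvokeAI code):

```python
def scale(values: list[float], factor: float) -> list[float]:
    """Multiply every value in the list by factor."""
    return [v * factor for v in values]

ok = scale([1.0, 2.0], 2.0)      # fine: matches the annotations
# bad = scale("oops", 2.0)       # Pylance underlines this: str is not list[float]
print(ok)
```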
Debugging configs are managed in a `launch.json` file. Follow the official guide to set up your `launch.json` and try it out.

Add these InvokeAI debugging configurations to your `launch.json`:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "InvokeAI Web",
      "type": "python",
      "request": "launch",
      "program": "scripts/invokeai-web.py",
      "args": ["--root", "/path/to/invokeai_root", "--host", "0.0.0.0"],
      "justMyCode": true
    },
    {
      "name": "InvokeAI CLI",
      "type": "python",
      "request": "launch",
      "program": "scripts/invokeai-cli.py",
      "justMyCode": true
    },
    {
      "name": "InvokeAI Test",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": ["--capture=no"],
      "justMyCode": true
    },
    {
      "name": "InvokeAI Single Test",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": ["tests/nodes/test_invoker.py"],
      "justMyCode": true
    }
  ]
}
```

### Remote Development

Remote development provides a smooth experience for running the backend on a powerful Linux machine while developing on another device.
Consult the official guide to get it set up. We suggest using VSCode’s included settings sync so that your remote dev host has all the same app settings and extensions automatically.