YAML Config
Runtime settings, including the location of files and directories, memory usage, and performance, are managed via the invokeai.yaml config file or environment variables. A subset of settings may be set via command-line arguments.
Settings sources are used in this order:
- CLI args
- Environment variables
- invokeai.yaml settings
- Fallback: defaults
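As an illustrative sketch of that precedence (not InvokeAI's actual implementation; the `cli_args` and `yaml_settings` dicts here are hypothetical stand-ins for the parsed sources):

```python
import os

def resolve_setting(name: str, cli_args: dict, yaml_settings: dict, default):
    """Resolve one setting, checking sources in priority order."""
    if name in cli_args:                      # 1. CLI args
        return cli_args[name]
    env_var = f"INVOKEAI_{name.upper()}"      # 2. environment variables
    if env_var in os.environ:
        return os.environ[env_var]
    if name in yaml_settings:                 # 3. invokeai.yaml settings
        return yaml_settings[name]
    return default                            # 4. fallback: built-in default

os.environ.pop("INVOKEAI_HOST", None)
print(resolve_setting("host", {}, {"host": "0.0.0.0"}, "127.0.0.1"))  # → 0.0.0.0
```

A value set in invokeai.yaml wins over the default, but loses to an environment variable or CLI arg for the same setting.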
InvokeAI Root Directory
On startup, InvokeAI searches for its “root” directory. This is the directory that contains models, images, the database, and so on. It also contains a configuration file called invokeai.yaml.
```
models/
outputs/
databases/
workflow_thumbnails/
style_presets/
nodes/
configs/
invokeai.example.yaml
invokeai.yaml
```
InvokeAI searches for the root directory in this order:
- The `--root <path>` CLI arg.
- The environment variable INVOKEAI_ROOT.
- The directory containing the currently active virtual environment.
- Fallback: a directory in the current user’s home directory named invokeai.
InvokeAI Configuration File
Inside the root directory, we read settings from the invokeai.yaml file.
It has two sections - one for internal use and one for user settings:
```yaml
# Internal metadata - do not edit:
schema_version: 4.0.2

# Put user settings here - see https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/:
host: 0.0.0.0 # serve the app on your local network
models_dir: D:\invokeai\models # store models on an external drive
precision: float16 # always use fp16 precision
```

The settings in this file will override the defaults. You only need to change this file if the default for a particular setting doesn’t work for you.
You’ll find an example file next to invokeai.yaml that shows the default values.
Some settings, like Model Marketplace API Keys, require the YAML to be formatted correctly. Here is a basic guide to YAML files.
Custom Config File Location
You can use any config file with the --config CLI arg. Pass in the path to the invokeai.yaml file you want to use.
Note that environment variables will trump any settings in the config file.
Model Marketplace API Keys
Some model marketplaces require an API key to download models. You can supply that API key by adding a URL pattern and the matching token to your invokeai.yaml file.
The pattern can be any valid regex (you may need to surround the pattern with quotes):
```yaml
remote_api_tokens:
  # Any URL containing `models.com` will automatically use `your_models_com_token`
  - url_regex: models.com
    token: your_models_com_token
  # Any URL matching this contrived regex will use `some_other_token`
  - url_regex: '^[a-z]{3}whatever.*\.com$'
    token: some_other_token
```

The provided token will be added as a Bearer token to the network requests to download the model files. As far as we know, this works for all model marketplaces that require authorization.
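To illustrate how such a pair is applied (a sketch, not InvokeAI's actual download code), the download URL is tested against each regex and the first match supplies the Bearer header:

```python
import re

# The url_regex/token pairs from the example above.
remote_api_tokens = [
    {"url_regex": "models.com", "token": "your_models_com_token"},
    {"url_regex": r"^[a-z]{3}whatever.*\.com$", "token": "some_other_token"},
]

def auth_headers(url: str) -> dict[str, str]:
    """Return the Bearer header for the first matching pattern, if any."""
    for pair in remote_api_tokens:
        if re.search(pair["url_regex"], url):
            return {"Authorization": f"Bearer {pair['token']}"}
    return {}

print(auth_headers("https://models.com/checkpoints/foo.safetensors"))
# → {'Authorization': 'Bearer your_models_com_token'}
```

Note that an unanchored pattern like `models.com` matches anywhere in the URL, while the second pattern is anchored with `^` and `$` and must match the whole string.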
Model Hashing
Models are hashed during installation, providing a stable identifier for models across all platforms. Hashing is a one-time operation.
```yaml
hashing_algorithm: blake3_single # default value
```

You might want to change this setting, depending on your system:

- `blake3_single` (default): Single-threaded - best for spinning HDDs, still OK for SSDs
- `blake3_multi`: Parallelized, memory-mapped implementation - best for SSDs, terrible for spinning disks
- `random`: Skip hashing entirely - fastest but of course no hash
During the first startup after upgrading to v4, all of your models will be hashed. This can take a few minutes.
Most common algorithms are supported, like md5, sha256, and sha512. These are typically much, much slower than either of the BLAKE3 variants.
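For illustration, single-threaded file hashing (the `blake3_single` style) boils down to something like the following. This is not InvokeAI's installer code, and it uses hashlib's built-in algorithms since BLAKE3 requires the third-party `blake3` package:

```python
import hashlib

def hash_model_file(path: str, algorithm: str = "blake2b") -> str:
    """Hash a file with any hashlib algorithm, reading it in chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so multi-gigabyte checkpoints don't fill RAM.
        for chunk in iter(lambda: f.read(2**20), b""):
            h.update(chunk)
    return h.hexdigest()
```

The same loop works for `md5`, `sha256`, and the other hashlib algorithms listed below; only the digest construction changes.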
Path Settings
These options set the paths of various directories and files used by InvokeAI. Any user-defined paths should be absolute paths.
Logging
Several different log handler destinations are available, and multiple destinations are supported by providing a list:
```yaml
log_handlers:
  - console
  - syslog=localhost
  - file=/var/log/invokeai.log
```

- `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
- `syslog` is only available on Linux and Macintosh systems. It uses the operating system’s “syslog” facility to write log file entries locally or to a remote logging machine. `syslog` offers a variety of configuration options:
  - `syslog=/dev/log` - log to the /dev/log device
  - `syslog=localhost` - log to the network logger running on the local machine
  - `syslog=localhost:512` - same as above, but using a non-standard port
  - `syslog=fredserver,facility=LOG_USER,socktype=SOCK_DGRAM` - log to the LAN-connected server "fredserver" using the facility LOG_USER and datagram packets
- `http` can be used to log to a remote web server. The server must be properly configured to receive and act on log messages. The option accepts the URL to the web server, and a `method` argument indicating whether the message should be submitted using the GET or POST method: `http=http://my.server/path/to/logger,method=POST`

The `log_format` option provides several alternative formats:

- `color` - default format providing time, date and a message, using text colors to distinguish different log severities
- `plain` - same as above, but monochrome text only
- `syslog` - the log level and error message only, allowing the syslog system to attach the time and date
- `legacy` - a format similar to the one used by the legacy 2.3 InvokeAI releases
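As a rough analogue of the handler list, assuming Python's standard logging module (this is a sketch, not InvokeAI's actual logger setup):

```python
import logging
import logging.handlers

logger = logging.getLogger("invokeai-example")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())              # "console"
logger.addHandler(logging.FileHandler("invokeai.log"))  # "file=invokeai.log"
# A "syslog=localhost" handler would look like:
# logger.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))
logger.info("logging to console and file")
```

Each handler receives every record at or above the logger's level, which mirrors how a `log_handlers` list sends the same messages to multiple destinations.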
Environment Variables
All settings may be set via environment variables by prefixing INVOKEAI_ to the variable name. For example, INVOKEAI_HOST would set the host setting.
For non-primitive values, pass a JSON-encoded string:
```bash
export INVOKEAI_REMOTE_API_TOKENS='[{"url_regex":"modelmarketplace", "token": "12345"}]'
```

We suggest using invokeai.yaml, as it is more user-friendly.
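For illustration, the JSON-encoded value above decodes back into a list of url_regex/token pairs (a sketch of the idea; InvokeAI's config layer performs this parsing internally):

```python
import json
import os

# The same JSON-encoded value as the export example above.
os.environ["INVOKEAI_REMOTE_API_TOKENS"] = (
    '[{"url_regex":"modelmarketplace", "token": "12345"}]'
)

tokens = json.loads(os.environ["INVOKEAI_REMOTE_API_TOKENS"])
print(tokens[0]["token"])  # → 12345
```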
CLI Args
A subset of settings may be specified using CLI args:

- `--root`: specify the root directory
- `--config`: override the default `invokeai.yaml` file location
Low-VRAM Mode
See the Low-VRAM mode docs for details on enabling this feature.
All Settings
The full settings reference is below. Additional explanations for selected settings appear earlier on this page.
Web

- `host` (Type: `str`, Default: `127.0.0.1`, Env: `INVOKEAI_HOST`): IP address to bind to. Use `0.0.0.0` to serve to your local network.
- `port` (Type: `int`, Default: `9090`, Env: `INVOKEAI_PORT`): Port to bind to.
- `allow_origins` (Type: `list[str]`, Default: `[]`, Env: `INVOKEAI_ALLOW_ORIGINS`): Allowed CORS origins.
- `allow_credentials` (Type: `bool`, Default: `true`, Env: `INVOKEAI_ALLOW_CREDENTIALS`): Allow CORS credentials.
- `allow_methods` (Type: `list[str]`, Default: `["*"]`, Env: `INVOKEAI_ALLOW_METHODS`): Methods allowed for CORS.
- `allow_headers` (Type: `list[str]`, Default: `["*"]`, Env: `INVOKEAI_ALLOW_HEADERS`): Headers allowed for CORS.
- `ssl_certfile` (Type: `Optional[Path]`, Default: `null`, Env: `INVOKEAI_SSL_CERTFILE`): SSL certificate file for HTTPS. See https://www.uvicorn.org/settings/#https.
- `ssl_keyfile` (Type: `Optional[Path]`, Default: `null`, Env: `INVOKEAI_SSL_KEYFILE`): SSL key file for HTTPS. See https://www.uvicorn.org/settings/#https.
Misc Features

- `log_tokenization` (Type: `bool`, Default: `false`, Env: `INVOKEAI_LOG_TOKENIZATION`): Enable logging of parsed prompt tokens.
- `patchmatch` (Type: `bool`, Default: `true`, Env: `INVOKEAI_PATCHMATCH`): Enable patchmatch inpaint code.
Paths

- `models_dir` (Type: `Path`, Default: `models`, Env: `INVOKEAI_MODELS_DIR`): Path to the models directory.
- `convert_cache_dir` (Type: `Path`, Default: `models/.convert_cache`, Env: `INVOKEAI_CONVERT_CACHE_DIR`): Path to the converted models cache directory (DEPRECATED, but do not delete because it is needed for migration from previous versions).
- `download_cache_dir` (Type: `Path`, Default: `models/.download_cache`, Env: `INVOKEAI_DOWNLOAD_CACHE_DIR`): Path to the directory that contains dynamically downloaded models.
- `legacy_conf_dir` (Type: `Path`, Default: `configs`, Env: `INVOKEAI_LEGACY_CONF_DIR`): Path to directory of legacy checkpoint config files.
- `db_dir` (Type: `Path`, Default: `databases`, Env: `INVOKEAI_DB_DIR`): Path to InvokeAI databases directory.
- `outputs_dir` (Type: `Path`, Default: `outputs`, Env: `INVOKEAI_OUTPUTS_DIR`): Path to directory for outputs.
- `custom_nodes_dir` (Type: `Path`, Default: `nodes`, Env: `INVOKEAI_CUSTOM_NODES_DIR`): Path to directory for custom nodes.
- `style_presets_dir` (Type: `Path`, Default: `style_presets`, Env: `INVOKEAI_STYLE_PRESETS_DIR`): Path to directory for style presets.
- `workflow_thumbnails_dir` (Type: `Path`, Default: `workflow_thumbnails`, Env: `INVOKEAI_WORKFLOW_THUMBNAILS_DIR`): Path to directory for workflow thumbnails.
Logging

- `log_handlers` (Type: `list[str]`, Default: `["console"]`, Env: `INVOKEAI_LOG_HANDLERS`): Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".
- `log_format` (Type: `Literal['plain', 'color', 'syslog', 'legacy']`, Default: `color`, Env: `INVOKEAI_LOG_FORMAT`): Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.
- `log_level` (Type: `Literal['debug', 'info', 'warning', 'error', 'critical']`, Default: `info`, Env: `INVOKEAI_LOG_LEVEL`): Emit logging messages at this level or higher.
- `log_sql` (Type: `bool`, Default: `false`, Env: `INVOKEAI_LOG_SQL`): Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.
- `log_level_network` (Type: `Literal['debug', 'info', 'warning', 'error', 'critical']`, Default: `warning`, Env: `INVOKEAI_LOG_LEVEL_NETWORK`): Log level for network-related messages. 'info' and 'debug' are very verbose.
- `use_memory_db` (Type: `bool`, Default: `false`, Env: `INVOKEAI_USE_MEMORY_DB`): Use in-memory database. Useful for development.
- `dev_reload` (Type: `bool`, Default: `false`, Env: `INVOKEAI_DEV_RELOAD`): Automatically reload when Python sources are changed. Does not reload node definitions.
- `profile_graphs` (Type: `bool`, Default: `false`, Env: `INVOKEAI_PROFILE_GRAPHS`): Enable graph profiling using `cProfile`.
- `profile_prefix` (Type: `Optional[str]`, Default: `null`, Env: `INVOKEAI_PROFILE_PREFIX`): An optional prefix for profile output files.
- `profiles_dir` (Type: `Path`, Default: `profiles`, Env: `INVOKEAI_PROFILES_DIR`): Path to profiles output directory.
Cache

- `max_cache_ram_gb` (Type: `Optional[float]`, Default: `null`, Env: `INVOKEAI_MAX_CACHE_RAM_GB`): The maximum amount of CPU RAM to use for model caching in GB. If unset, the limit will be configured based on the available RAM. In most cases, it is recommended to leave this unset.
- `max_cache_vram_gb` (Type: `Optional[float]`, Default: `null`, Env: `INVOKEAI_MAX_CACHE_VRAM_GB`): The amount of VRAM to use for model caching in GB. If unset, the limit will be configured based on the available VRAM and the device_working_mem_gb. In most cases, it is recommended to leave this unset.
- `log_memory_usage` (Type: `bool`, Default: `false`, Env: `INVOKEAI_LOG_MEMORY_USAGE`): If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
- `model_cache_keep_alive_min` (Type: `float`, Default: `0`, Env: `INVOKEAI_MODEL_CACHE_KEEP_ALIVE_MIN`): How long to keep models in cache after last use, in minutes. A value of 0 (the default) means models are kept in cache indefinitely. If no model generations occur within the timeout period, the model cache is cleared using the same logic as the 'Clear Model Cache' button.
- `device_working_mem_gb` (Type: `float`, Default: `3`, Env: `INVOKEAI_DEVICE_WORKING_MEM_GB`): The amount of working memory to keep available on the compute device (in GB). Has no effect if running on CPU. If you are experiencing OOM errors, try increasing this value.
- `enable_partial_loading` (Type: `bool`, Default: `false`, Env: `INVOKEAI_ENABLE_PARTIAL_LOADING`): Enable partial loading of models. This enables models to run with reduced VRAM requirements (at the cost of slower speed) by streaming the model from RAM to VRAM as it is used. In some edge cases, partial loading can cause models to run more slowly if they were previously being fully loaded into VRAM.
- `keep_ram_copy_of_weights` (Type: `bool`, Default: `true`, Env: `INVOKEAI_KEEP_RAM_COPY_OF_WEIGHTS`): Whether to keep a full RAM copy of a model's weights when the model is loaded in VRAM. Keeping a RAM copy increases average RAM usage, but speeds up model switching and LoRA patching (assuming there is sufficient RAM). Set this to False if RAM pressure is consistently high.
- `ram` (Type: `Optional[float]`, Default: `null`, Env: `INVOKEAI_RAM`): DEPRECATED: This setting is no longer used. It has been replaced by `max_cache_ram_gb`, but most users will not need to use this config since automatic cache size limits should work well in most cases. This config setting will be removed once the new model cache behavior is stable.
- `vram` (Type: `Optional[float]`, Default: `null`, Env: `INVOKEAI_VRAM`): DEPRECATED: This setting is no longer used. It has been replaced by `max_cache_vram_gb`, but most users will not need to use this config since automatic cache size limits should work well in most cases. This config setting will be removed once the new model cache behavior is stable.
- `lazy_offload` (Type: `bool`, Default: `true`, Env: `INVOKEAI_LAZY_OFFLOAD`): DEPRECATED: This setting is no longer used. Lazy-offloading is enabled by default. This config setting will be removed once the new model cache behavior is stable.
- `pytorch_cuda_alloc_conf` (Type: `Optional[str]`, Default: `null`, Env: `INVOKEAI_PYTORCH_CUDA_ALLOC_CONF`): Configure the Torch CUDA memory allocator. This will impact peak reserved VRAM usage and performance. Setting to "backend:cudaMallocAsync" works well on many systems. The optimal configuration is highly dependent on the system configuration (device type, VRAM, CUDA driver version, etc.), so must be tuned experimentally.
Device

- `device` (Type: `str`, Default: `auto`, Env: `INVOKEAI_DEVICE`): Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities. Valid values: `auto`, `cpu`, `cuda`, `mps`, `cuda:N` (where N is a device number).
- `precision` (Type: `Literal['auto', 'float16', 'bfloat16', 'float32']`, Default: `auto`, Env: `INVOKEAI_PRECISION`): Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.
Generation

- `sequential_guidance` (Type: `bool`, Default: `false`, Env: `INVOKEAI_SEQUENTIAL_GUIDANCE`): Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.
- `attention_type` (Type: `Literal['auto', 'normal', 'xformers', 'sliced', 'torch-sdp']`, Default: `auto`, Env: `INVOKEAI_ATTENTION_TYPE`): Attention type.
- `attention_slice_size` (Type: `Literal['auto', 'balanced', 'max', 1, 2, 3, 4, 5, 6, 7, 8]`, Default: `auto`, Env: `INVOKEAI_ATTENTION_SLICE_SIZE`): Slice size, valid when `attention_type=="sliced"`.
- `force_tiled_decode` (Type: `bool`, Default: `false`, Env: `INVOKEAI_FORCE_TILED_DECODE`): Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).
- `pil_compress_level` (Type: `int`, Default: `1`, Env: `INVOKEAI_PIL_COMPRESS_LEVEL`): The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.
- `max_queue_size` (Type: `int`, Default: `10000`, Env: `INVOKEAI_MAX_QUEUE_SIZE`): Maximum number of items in the session queue.
- `clear_queue_on_startup` (Type: `bool`, Default: `false`, Env: `INVOKEAI_CLEAR_QUEUE_ON_STARTUP`): Empties session queue on startup. If true, disables `max_queue_history`.
- `max_queue_history` (Type: `Optional[int]`, Default: `null`, Env: `INVOKEAI_MAX_QUEUE_HISTORY`): Keep the last N completed, failed, and canceled queue items. Older items are deleted on startup. Set to 0 to prune all terminal items. Ignored if `clear_queue_on_startup` is true.
Nodes

- `allow_nodes` (Type: `Optional[list[str]]`, Default: `null`, Env: `INVOKEAI_ALLOW_NODES`): List of nodes to allow. Omit to allow all.
- `deny_nodes` (Type: `Optional[list[str]]`, Default: `null`, Env: `INVOKEAI_DENY_NODES`): List of nodes to deny. Omit to deny none.
- `node_cache_size` (Type: `int`, Default: `512`, Env: `INVOKEAI_NODE_CACHE_SIZE`): How many cached nodes to keep in memory.
Model Install

- `hashing_algorithm` (Type: `Literal['blake3_multi', 'blake3_single', 'random', 'md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512', 'blake2b', 'blake2s', 'sha3_224', 'sha3_256', 'sha3_384', 'sha3_512', 'shake_128', 'shake_256']`, Default: `blake3_single`, Env: `INVOKEAI_HASHING_ALGORITHM`): Model hashing algorithm for model installs. 'blake3_multi' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.
- `remote_api_tokens` (Type: `Optional[list[URLRegexTokenPair]]`, Default: `null`, Env: `INVOKEAI_REMOTE_API_TOKENS`): List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.
- `scan_models_on_startup` (Type: `bool`, Default: `false`, Env: `INVOKEAI_SCAN_MODELS_ON_STARTUP`): Scan the models directory on startup, registering orphaned models. This is typically only used in conjunction with `use_memory_db` for testing purposes.
- `unsafe_disable_picklescan` (Type: `bool`, Default: `false`, Env: `INVOKEAI_UNSAFE_DISABLE_PICKLESCAN`): UNSAFE. Disable the picklescan security check during model installation. Recommended only for development and testing purposes. This will allow arbitrary code execution during model installation, so should never be used in production.
- `allow_unknown_models` (Type: `bool`, Default: `true`, Env: `INVOKEAI_ALLOW_UNKNOWN_MODELS`): Allow installation of models that we are unable to identify. If enabled, models will be marked as `unknown` in the database, and will not have any metadata associated with them. If disabled, unknown models will be rejected during installation.
Multiuser

- `multiuser` (Type: `bool`, Default: `false`, Env: `INVOKEAI_MULTIUSER`): Enable multiuser support. When disabled, the application runs in single-user mode using a default system account with administrator privileges. When enabled, requires user authentication and authorization.
- `strict_password_checking` (Type: `bool`, Default: `false`, Env: `INVOKEAI_STRICT_PASSWORD_CHECKING`): Enforce strict password requirements. When True, passwords must contain uppercase, lowercase, and numbers. When False (default), any password is accepted but its strength (weak/moderate/strong) is reported to the user.