
YAML Config

Runtime settings, including the location of files and directories, memory usage, and performance, are managed via the invokeai.yaml config file or environment variables. A subset of settings may be set via commandline arguments.

Settings are read from the following sources, in order of precedence (an earlier source overrides a later one):

  • CLI args
  • Environment variables
  • invokeai.yaml settings
  • Fallback: defaults
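
The precedence above amounts to a layered lookup: check each source in order and take the first value found. Here is a minimal sketch of that idea (the function and dict names are illustrative, not InvokeAI's actual internals):

```python
# Illustrative sketch of layered settings resolution: earlier sources win.
# The dicts stand in for parsed CLI args, environment variables, YAML
# settings, and built-in defaults.

def resolve_setting(name, cli_args, env_vars, yaml_settings, defaults):
    """Return the first value found, checking sources in precedence order."""
    for source in (cli_args, env_vars, yaml_settings, defaults):
        if name in source:
            return source[name]
    raise KeyError(name)

# Example: `host` is set only in YAML; `port` is overridden by an env var.
cli = {}
env = {"port": 8080}
yaml_cfg = {"host": "0.0.0.0", "port": 9090}
defaults = {"host": "127.0.0.1", "port": 9090}

print(resolve_setting("host", cli, env, yaml_cfg, defaults))  # 0.0.0.0
print(resolve_setting("port", cli, env, yaml_cfg, defaults))  # 8080
```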

On startup, InvokeAI searches for its “root” directory. This is the directory that contains models, images, the database, and so on. It also contains a configuration file called invokeai.yaml.

  • models/
  • outputs/
  • databases/
  • workflow_thumbnails/
  • style_presets/
  • nodes/
  • configs/
  • invokeai.example.yaml
  • invokeai.yaml

InvokeAI searches for the root directory in this order:

  1. The --root <path> CLI arg.
  2. The environment variable INVOKEAI_ROOT.
  3. The directory containing the currently active virtual environment.
  4. Fallback: a directory in the current user’s home directory named invokeai.
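
The search order above can be sketched in a few lines of Python. This is a simplified illustration of the logic, not InvokeAI's actual code; in particular, step 3 is approximated here via the conventional VIRTUAL_ENV environment variable:

```python
import os
from pathlib import Path

def find_root(cli_root=None):
    """Simplified sketch of the root-directory search order described above."""
    # 1. The --root CLI arg, if given.
    if cli_root is not None:
        return Path(cli_root)
    # 2. The INVOKEAI_ROOT environment variable.
    if "INVOKEAI_ROOT" in os.environ:
        return Path(os.environ["INVOKEAI_ROOT"])
    # 3. The directory containing the currently active virtual environment.
    if "VIRTUAL_ENV" in os.environ:
        return Path(os.environ["VIRTUAL_ENV"]).parent
    # 4. Fallback: a directory named `invokeai` in the user's home directory.
    return Path.home() / "invokeai"
```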

Inside the root directory, we read settings from the invokeai.yaml file.

It has two sections - one for internal use and one for user settings:

# Internal metadata - do not edit:
schema_version: 4.0.2
# Put user settings here - see https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/:
host: 0.0.0.0 # serve the app on your local network
models_dir: D:\invokeai\models # store models on an external drive
precision: float16 # always use fp16 precision

The settings in this file will override the defaults. You only need to change this file if the default for a particular setting doesn’t work for you.

You’ll find an example file next to invokeai.yaml that shows the default values.

Some settings, like Model Marketplace API Keys, require the YAML to be formatted correctly. Here is a basic guide to YAML files.

You can use any config file with the --config CLI arg. Pass in the path to the invokeai.yaml file you want to use.

Note that environment variables override any settings in the config file.

Some model marketplaces require an API key to download models. You can supply a URL pattern and the corresponding token in your invokeai.yaml file so that the key is sent automatically.

The pattern can be any valid regex (you may need to surround the pattern with quotes):

remote_api_tokens:
  # Any URL containing `models.com` will automatically use `your_models_com_token`
  - url_regex: models.com
    token: your_models_com_token
  # Any URL matching this contrived regex will use `some_other_token`
  - url_regex: '^[a-z]{3}whatever.*\.com$'
    token: some_other_token

The provided token will be added as a Bearer token to the network requests to download the model files. As far as we know, this works for all model marketplaces that require authorization.

Models are hashed during installation, providing a stable identifier for models across all platforms. Hashing is a one-time operation.

hashing_algorithm: blake3_single # default value

You might want to change this setting, depending on your system:

  • blake3_single (default): Single-threaded - best for spinning HDDs, still OK for SSDs
  • blake3_multi: Parallelized, memory-mapped implementation - best for SSDs, terrible for spinning disks
  • random: Skip hashing entirely - fastest but of course no hash

During the first startup after upgrading to v4, all of your models will be hashed. This can take a few minutes.

Most common algorithms are supported, like md5, sha256, and sha512. These are typically much, much slower than either of the BLAKE3 variants.
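
For a sense of what a one-time hash computes, here is a minimal sketch using a stdlib hashlib algorithm. Note that InvokeAI's faster blake3_single and blake3_multi options rely on the third-party `blake3` package; `hash_file` and its parameters below are illustrative names, not InvokeAI's API:

```python
import hashlib

def hash_file(path, algorithm="sha256", chunk_size=2**20):
    """Stream a file through the chosen hash in 1 MiB chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Streaming in chunks keeps memory usage flat even for multi-gigabyte model files, which is why single- vs multi-threaded I/O (HDD vs SSD) dominates the runtime.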

These options set the paths of various directories and files used by InvokeAI. Any user-defined paths should be absolute paths.

Several different log handler destinations are available, and multiple destinations are supported by providing a list:

log_handlers:
- console
- syslog=localhost
- file=/var/log/invokeai.log
  • console is the default. It prints log messages to the command-line window from which InvokeAI was launched.

  • syslog is only available on Linux and macOS systems. It uses the operating system’s “syslog” facility to write log file entries locally or to a remote logging machine. syslog offers a variety of configuration options:

syslog=/dev/log - log to the /dev/log device
syslog=localhost - log to the network logger running on the local machine
syslog=localhost:512 - same as above, but using a non-standard port
syslog=fredserver,facility=LOG_USER,socktype=SOCK_DGRAM - log to the LAN-connected server "fredserver" using the facility LOG_USER and datagram packets
  • http can be used to log to a remote web server. The server must be properly configured to receive and act on log messages. The option accepts the URL to the web server, and a method argument indicating whether the message should be submitted using the GET or POST method.
http=http://my.server/path/to/logger,method=POST
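
As a rough analogy, the handler list maps onto standard Python logging destinations attached to a single logger. This is not InvokeAI's actual code, just an illustration of the console/file/syslog/http destinations described above:

```python
import logging
import logging.handlers

logger = logging.getLogger("invokeai-example")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())              # "console"
logger.addHandler(logging.FileHandler("invokeai.log"))  # "file=invokeai.log"
# logging.handlers.SysLogHandler(address=("localhost", 514))          # "syslog=localhost"
# logging.handlers.HTTPHandler("my.server", "/logger", method="POST") # "http=...,method=POST"

logger.info("started")  # goes to every attached handler
```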

The log_format option provides several alternative formats:

  • color - default format providing time, date and a message, using text colors to distinguish different log severities
  • plain - same as above, but monochrome text only
  • syslog - the log level and error message only, allowing the syslog system to attach the time and date
  • legacy - a format similar to the one used by the legacy 2.3 InvokeAI releases.

All settings may be set via environment variables by prefixing INVOKEAI_ to the variable name. For example, INVOKEAI_HOST would set the host setting.

For non-primitive values, pass a JSON-encoded string:

export INVOKEAI_REMOTE_API_TOKENS='[{"url_regex":"modelmarketplace", "token": "12345"}]'
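
Decoding such a value is plain JSON parsing. A quick sketch of what the application sees, using the export above:

```python
import json
import os

# The value mirrors the export shown above.
os.environ["INVOKEAI_REMOTE_API_TOKENS"] = (
    '[{"url_regex":"modelmarketplace", "token": "12345"}]'
)

tokens = json.loads(os.environ["INVOKEAI_REMOTE_API_TOKENS"])
print(tokens[0]["url_regex"])  # modelmarketplace
print(tokens[0]["token"])      # 12345
```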

We suggest using invokeai.yaml, as it is more user-friendly.

A subset of settings may be specified using CLI args:

  • --root: specify the root directory
  • --config: override the default invokeai.yaml file location

See the Low-VRAM mode docs for details on enabling this feature.

The full settings reference is below. Additional explanations for selected settings appear earlier on this page.

Web

host
Type: str
Default: 127.0.0.1
Env: INVOKEAI_HOST
IP address to bind to. Use `0.0.0.0` to serve to your local network.

port
Type: int
Default: 9090
Env: INVOKEAI_PORT
Port to bind to.

allow_origins
Type: list[str]
Default: []
Env: INVOKEAI_ALLOW_ORIGINS
Allowed CORS origins.

allow_credentials
Type: bool
Default: true
Env: INVOKEAI_ALLOW_CREDENTIALS
Allow CORS credentials.

allow_methods
Type: list[str]
Default: ["*"]
Env: INVOKEAI_ALLOW_METHODS
Methods allowed for CORS.

allow_headers
Type: list[str]
Default: ["*"]
Env: INVOKEAI_ALLOW_HEADERS
Headers allowed for CORS.

ssl_certfile
Type: Optional[Path]
Default: null
Env: INVOKEAI_SSL_CERTFILE
SSL certificate file for HTTPS. See https://www.uvicorn.org/settings/#https.

ssl_keyfile
Type: Optional[Path]
Default: null
Env: INVOKEAI_SSL_KEYFILE
SSL key file for HTTPS. See https://www.uvicorn.org/settings/#https.
Misc Features

log_tokenization
Type: bool
Default: false
Env: INVOKEAI_LOG_TOKENIZATION
Enable logging of parsed prompt tokens.

patchmatch
Type: bool
Default: true
Env: INVOKEAI_PATCHMATCH
Enable patchmatch inpaint code.
Paths

models_dir
Type: Path
Default: models
Env: INVOKEAI_MODELS_DIR
Path to the models directory.

convert_cache_dir
Type: Path
Default: models/.convert_cache
Env: INVOKEAI_CONVERT_CACHE_DIR
Path to the converted models cache directory (DEPRECATED, but do not delete because it is needed for migration from previous versions).

download_cache_dir
Type: Path
Default: models/.download_cache
Env: INVOKEAI_DOWNLOAD_CACHE_DIR
Path to the directory that contains dynamically downloaded models.

legacy_conf_dir
Type: Path
Default: configs
Env: INVOKEAI_LEGACY_CONF_DIR
Path to directory of legacy checkpoint config files.

db_dir
Type: Path
Default: databases
Env: INVOKEAI_DB_DIR
Path to InvokeAI databases directory.

outputs_dir
Type: Path
Default: outputs
Env: INVOKEAI_OUTPUTS_DIR
Path to directory for outputs.

custom_nodes_dir
Type: Path
Default: nodes
Env: INVOKEAI_CUSTOM_NODES_DIR
Path to directory for custom nodes.

style_presets_dir
Type: Path
Default: style_presets
Env: INVOKEAI_STYLE_PRESETS_DIR
Path to directory for style presets.

workflow_thumbnails_dir
Type: Path
Default: workflow_thumbnails
Env: INVOKEAI_WORKFLOW_THUMBNAILS_DIR
Path to directory for workflow thumbnails.
Logging

log_handlers
Type: list[str]
Default: ["console"]
Env: INVOKEAI_LOG_HANDLERS
Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".

log_format
Type: Literal['plain', 'color', 'syslog', 'legacy']
Default: color
Env: INVOKEAI_LOG_FORMAT
Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.

log_level
Type: Literal['debug', 'info', 'warning', 'error', 'critical']
Default: info
Env: INVOKEAI_LOG_LEVEL
Emit logging messages at this level or higher.

log_sql
Type: bool
Default: false
Env: INVOKEAI_LOG_SQL
Log SQL queries. `log_level` must be `debug` for this to do anything. Extremely verbose.

log_level_network
Type: Literal['debug', 'info', 'warning', 'error', 'critical']
Default: warning
Env: INVOKEAI_LOG_LEVEL_NETWORK
Log level for network-related messages. 'info' and 'debug' are very verbose.

use_memory_db
Type: bool
Default: false
Env: INVOKEAI_USE_MEMORY_DB
Use in-memory database. Useful for development.

dev_reload
Type: bool
Default: false
Env: INVOKEAI_DEV_RELOAD
Automatically reload when Python sources are changed. Does not reload node definitions.

profile_graphs
Type: bool
Default: false
Env: INVOKEAI_PROFILE_GRAPHS
Enable graph profiling using `cProfile`.

profile_prefix
Type: Optional[str]
Default: null
Env: INVOKEAI_PROFILE_PREFIX
An optional prefix for profile output files.

profiles_dir
Type: Path
Default: profiles
Env: INVOKEAI_PROFILES_DIR
Path to profiles output directory.
Cache

max_cache_ram_gb
Type: Optional[float]
Default: null
Env: INVOKEAI_MAX_CACHE_RAM_GB
The maximum amount of CPU RAM to use for model caching in GB. If unset, the limit will be configured based on the available RAM. In most cases, it is recommended to leave this unset.

max_cache_vram_gb
Type: Optional[float]
Default: null
Env: INVOKEAI_MAX_CACHE_VRAM_GB
The amount of VRAM to use for model caching in GB. If unset, the limit will be configured based on the available VRAM and the device_working_mem_gb. In most cases, it is recommended to leave this unset.

log_memory_usage
Type: bool
Default: false
Env: INVOKEAI_LOG_MEMORY_USAGE
If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.

model_cache_keep_alive_min
Type: float
Default: 0
Env: INVOKEAI_MODEL_CACHE_KEEP_ALIVE_MIN
How long to keep models in cache after last use, in minutes. A value of 0 (the default) means models are kept in cache indefinitely. If no model generations occur within the timeout period, the model cache is cleared using the same logic as the 'Clear Model Cache' button.

device_working_mem_gb
Type: float
Default: 3
Env: INVOKEAI_DEVICE_WORKING_MEM_GB
The amount of working memory to keep available on the compute device (in GB). Has no effect if running on CPU. If you are experiencing OOM errors, try increasing this value.

enable_partial_loading
Type: bool
Default: false
Env: INVOKEAI_ENABLE_PARTIAL_LOADING
Enable partial loading of models. This enables models to run with reduced VRAM requirements (at the cost of slower speed) by streaming the model from RAM to VRAM as it's used. In some edge cases, partial loading can cause models to run more slowly if they were previously being fully loaded into VRAM.

keep_ram_copy_of_weights
Type: bool
Default: true
Env: INVOKEAI_KEEP_RAM_COPY_OF_WEIGHTS
Whether to keep a full RAM copy of a model's weights when the model is loaded in VRAM. Keeping a RAM copy increases average RAM usage, but speeds up model switching and LoRA patching (assuming there is sufficient RAM). Set this to False if RAM pressure is consistently high.

ram
Type: Optional[float]
Default: null
Env: INVOKEAI_RAM
DEPRECATED: This setting is no longer used. It has been replaced by `max_cache_ram_gb`, but most users will not need to use this config since automatic cache size limits should work well in most cases. This config setting will be removed once the new model cache behavior is stable.

vram
Type: Optional[float]
Default: null
Env: INVOKEAI_VRAM
DEPRECATED: This setting is no longer used. It has been replaced by `max_cache_vram_gb`, but most users will not need to use this config since automatic cache size limits should work well in most cases. This config setting will be removed once the new model cache behavior is stable.

lazy_offload
Type: bool
Default: true
Env: INVOKEAI_LAZY_OFFLOAD
DEPRECATED: This setting is no longer used. Lazy-offloading is enabled by default. This config setting will be removed once the new model cache behavior is stable.

pytorch_cuda_alloc_conf
Type: Optional[str]
Default: null
Env: INVOKEAI_PYTORCH_CUDA_ALLOC_CONF
Configure the Torch CUDA memory allocator. This will impact peak reserved VRAM usage and performance. Setting to "backend:cudaMallocAsync" works well on many systems. The optimal configuration is highly dependent on the system configuration (device type, VRAM, CUDA driver version, etc.), so must be tuned experimentally.
Device

device
Type: str
Default: auto
Env: INVOKEAI_DEVICE
Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities. Valid values: `auto`, `cpu`, `cuda`, `mps`, `cuda:N` (where N is a device number).

precision
Type: Literal['auto', 'float16', 'bfloat16', 'float32']
Default: auto
Env: INVOKEAI_PRECISION
Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.
Generation

sequential_guidance
Type: bool
Default: false
Env: INVOKEAI_SEQUENTIAL_GUIDANCE
Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.

attention_type
Type: Literal['auto', 'normal', 'xformers', 'sliced', 'torch-sdp']
Default: auto
Env: INVOKEAI_ATTENTION_TYPE
Attention type.

attention_slice_size
Type: Literal['auto', 'balanced', 'max', 1, 2, 3, 4, 5, 6, 7, 8]
Default: auto
Env: INVOKEAI_ATTENTION_SLICE_SIZE
Slice size, valid when attention_type=="sliced".

force_tiled_decode
Type: bool
Default: false
Env: INVOKEAI_FORCE_TILED_DECODE
Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).

pil_compress_level
Type: int
Default: 1
Env: INVOKEAI_PIL_COMPRESS_LEVEL
The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.

max_queue_size
Type: int
Default: 10000
Env: INVOKEAI_MAX_QUEUE_SIZE
Maximum number of items in the session queue.

clear_queue_on_startup
Type: bool
Default: false
Env: INVOKEAI_CLEAR_QUEUE_ON_STARTUP
Empties session queue on startup. If true, disables `max_queue_history`.

max_queue_history
Type: Optional[int]
Default: null
Env: INVOKEAI_MAX_QUEUE_HISTORY
Keep the last N completed, failed, and canceled queue items. Older items are deleted on startup. Set to 0 to prune all terminal items. Ignored if `clear_queue_on_startup` is true.
Nodes

allow_nodes
Type: Optional[list[str]]
Default: null
Env: INVOKEAI_ALLOW_NODES
List of nodes to allow. Omit to allow all.

deny_nodes
Type: Optional[list[str]]
Default: null
Env: INVOKEAI_DENY_NODES
List of nodes to deny. Omit to deny none.

node_cache_size
Type: int
Default: 512
Env: INVOKEAI_NODE_CACHE_SIZE
How many cached nodes to keep in memory.
Model Install

hashing_algorithm
Type: Literal['blake3_multi', 'blake3_single', 'random', 'md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512', 'blake2b', 'blake2s', 'sha3_224', 'sha3_256', 'sha3_384', 'sha3_512', 'shake_128', 'shake_256']
Default: blake3_single
Env: INVOKEAI_HASHING_ALGORITHM
Model hashing algorithm for model installs. 'blake3_multi' is best for SSDs. 'blake3_single' is best for spinning disk HDDs. 'random' disables hashing, instead assigning a UUID to models. Useful when using a memory db to reduce model installation time, or if you don't care about storing stable hashes for models. Alternatively, any other hashlib algorithm is accepted, though these are not nearly as performant as blake3.

remote_api_tokens
Type: Optional[list[URLRegexTokenPair]]
Default: null
Env: INVOKEAI_REMOTE_API_TOKENS
List of regular expression and token pairs used when downloading models from URLs. The download URL is tested against the regex, and if it matches, the token is provided as a Bearer token.

scan_models_on_startup
Type: bool
Default: false
Env: INVOKEAI_SCAN_MODELS_ON_STARTUP
Scan the models directory on startup, registering orphaned models. This is typically only used in conjunction with `use_memory_db` for testing purposes.

unsafe_disable_picklescan
Type: bool
Default: false
Env: INVOKEAI_UNSAFE_DISABLE_PICKLESCAN
UNSAFE. Disable the picklescan security check during model installation. Recommended only for development and testing purposes. This will allow arbitrary code execution during model installation, so should never be used in production.

allow_unknown_models
Type: bool
Default: true
Env: INVOKEAI_ALLOW_UNKNOWN_MODELS
Allow installation of models that we are unable to identify. If enabled, models will be marked as `unknown` in the database, and will not have any metadata associated with them. If disabled, unknown models will be rejected during installation.
Multiuser

multiuser
Type: bool
Default: false
Env: INVOKEAI_MULTIUSER
Enable multiuser support. When disabled, the application runs in single-user mode using a default system account with administrator privileges. When enabled, requires user authentication and authorization.

strict_password_checking
Type: bool
Default: false
Env: INVOKEAI_STRICT_PASSWORD_CHECKING
Enforce strict password requirements. When True, passwords must contain uppercase, lowercase, and numbers. When False (default), any password is accepted but its strength (weak/moderate/strong) is reported to the user.