
Recall Parameters API

The Recall Parameters API is a REST endpoint on the InvokeAI backend that lets external processes set recallable generation parameters on the frontend. Supported parameters include:

  • Core text and numeric parameters (prompts, model, steps, CFG, dimensions, seed, …)
  • LoRAs
  • Control Layers (ControlNet, T2I Adapter, Control LoRA) with optional control images
  • IP Adapters and FLUX Redux, with optional reference images
  • Model-free reference images (FLUX.2 Klein, FLUX Kontext, Qwen Image Edit)

When parameters are updated via the API, the backend stores them in client state persistence for the target queue and broadcasts a recall_parameters_updated WebSocket event. Any frontend client subscribed to that queue applies the new values immediately — no manual reload required.

Typical use cases:

  • An external image browser that wants to “recall” or “remix” the generation parameters saved into a PNG’s metadata.
  • A script that pre-populates parameters before the user runs generation.
  • Automated testing or batch workflows that want to reuse existing model and adapter configurations.

How it works:

  1. API request — your client POSTs a JSON body of parameters to /api/v1/recall/{queue_id}.
  2. Storage — non-null parameters are stored under recall_* keys in the client state persistence service, scoped to the given queue_id.
  3. Resolution — models are resolved from human-readable names to the internal model keys used by the frontend, and image filenames are validated against {INVOKEAI_ROOT}/outputs/images.
  4. Broadcast — a recall_parameters_updated event is emitted on the websocket room for queue_id.
  5. Frontend update — any connected client subscribed to that queue applies the update to its Redux store, so UI fields, LoRAs, control layers, IP adapters, and reference images all populate immediately.

Base URL: http://localhost:9090/api/v1/recall/{queue_id}

The queue id is usually default.

POST requests update the recallable parameters for the given queue_id:

POST /api/v1/recall/{queue_id}
Content-Type: application/json

{
  "positive_prompt": "a beautiful landscape",
  "negative_prompt": "blurry, low quality",
  "model": "sd-1.5",
  "steps": 20,
  "cfg_scale": 7.5,
  "width": 512,
  "height": 512,
  "seed": 12345
}

All parameters are optional — only send the fields you want to update.
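Because every field is optional, a client can assemble a sparse payload and send only what changed. A minimal sketch in Python (the build_recall_payload helper is hypothetical, not part of the API):

```python
from typing import Any, Optional


def build_recall_payload(**fields: Optional[Any]) -> dict:
    """Hypothetical helper: keep only the fields the caller actually set.

    Every parameter is optional, so omitting a key leaves the
    corresponding frontend value untouched.
    """
    return {k: v for k, v in fields.items() if v is not None}


# Update only the seed and step count; prompts, model, etc. are untouched.
payload = build_recall_payload(seed=99999, steps=30, positive_prompt=None)
# payload == {"seed": 99999, "steps": 30}
```

The resulting dict would then be sent as the JSON body, e.g. `requests.post(url, json=payload)`.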

A GET on the same path returns a status summary rather than the stored parameters:

GET /api/v1/recall/{queue_id}

{
  "status": "success",
  "queue_id": "queue_123",
  "note": "Use the frontend to access stored recall parameters, or set specific parameters using POST"
}
Parameter                          Type     Description
positive_prompt                    string   Positive prompt text
negative_prompt                    string   Negative prompt text
model                              string   Main model name/identifier
refiner_model                      string   Refiner model name/identifier
vae_model                          string   VAE model name/identifier
scheduler                          string   Scheduler name
steps                              integer  Number of generation steps (≥1)
refiner_steps                      integer  Number of refiner steps (≥0)
cfg_scale                          number   CFG scale for guidance
cfg_rescale_multiplier             number   CFG rescale multiplier
refiner_cfg_scale                  number   Refiner CFG scale
guidance                           number   Guidance scale
width                              integer  Image width in pixels (≥64)
height                             integer  Image height in pixels (≥64)
seed                               integer  Random seed (≥0)
denoise_strength                   number   Denoising strength (0–1)
refiner_denoise_start              number   Refiner denoising start (0–1)
clip_skip                          integer  CLIP skip layers (≥0)
seamless_x                         boolean  Enable seamless X tiling
seamless_y                         boolean  Enable seamless Y tiling
refiner_positive_aesthetic_score   number   Refiner positive aesthetic score
refiner_negative_aesthetic_score   number   Refiner negative aesthetic score
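The numeric constraints in the table are enforced server-side at the FastAPI layer; a client can mirror them to fail fast before the round-trip. A sketch with the rules transcribed from the table above (the invalid_fields helper is hypothetical):

```python
# Hypothetical client-side pre-check mirroring the documented constraints.
# The backend enforces these anyway; checking locally just gives earlier,
# clearer errors.
RULES = {
    "steps": lambda v: v >= 1,
    "refiner_steps": lambda v: v >= 0,
    "width": lambda v: v >= 64,
    "height": lambda v: v >= 64,
    "seed": lambda v: v >= 0,
    "denoise_strength": lambda v: 0 <= v <= 1,
    "refiner_denoise_start": lambda v: 0 <= v <= 1,
    "clip_skip": lambda v: v >= 0,
}


def invalid_fields(payload: dict) -> list:
    """Return the names of payload fields that violate a documented range."""
    return [k for k, check in RULES.items() if k in payload and not check(payload[k])]


print(invalid_fields({"steps": 0, "width": 512, "seed": -1}))
# ['steps', 'seed']
```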
{
  // LoRAs
  loras?: Array<{
    model_name: string;           // LoRA model name
    weight?: number;              // Default: 0.75, Range: -10 to 10
    is_enabled?: boolean;         // Default: true
  }>;

  // Control Layers (ControlNet, T2I Adapter, Control LoRA)
  control_layers?: Array<{
    model_name: string;           // Control adapter model name
    image_name?: string;          // Optional image filename from outputs/images
    weight?: number;              // Default: 1.0, Range: -1 to 2
    begin_step_percent?: number;  // Default: 0.0, Range: 0 to 1
    end_step_percent?: number;    // Default: 1.0, Range: 0 to 1
    control_mode?: "balanced" | "more_prompt" | "more_control"; // ControlNet only
  }>;

  // IP Adapters (includes FLUX Redux)
  ip_adapters?: Array<{
    model_name: string;           // IP Adapter / FLUX Redux model name
    image_name?: string;          // Optional reference image filename from outputs/images
    weight?: number;              // Default: 1.0, Range: -1 to 2
    begin_step_percent?: number;  // Default: 0.0, Range: 0 to 1
    end_step_percent?: number;    // Default: 1.0, Range: 0 to 1
    method?: "full" | "style" | "composition"; // Default: "full"
    image_influence?: "lowest" | "low" | "medium" | "high" | "highest"; // FLUX Redux only
  }>;

  // Model-free reference images (FLUX.2 Klein, FLUX Kontext, Qwen Image Edit)
  reference_images?: Array<{
    image_name: string;           // Reference image filename from outputs/images
  }>;
}
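The defaults noted in the schema above can be filled in client-side before sending. A hypothetical sketch of payload builders (these helpers are not part of InvokeAI; the defaults and ranges are taken from the schema comments):

```python
from typing import Optional


def lora_entry(model_name: str, weight: float = 0.75, is_enabled: bool = True) -> dict:
    """Build a LoRA entry with the documented default weight of 0.75."""
    if not -10 <= weight <= 10:
        raise ValueError("LoRA weight must be in [-10, 10]")
    return {"model_name": model_name, "weight": weight, "is_enabled": is_enabled}


def control_layer_entry(model_name: str, image_name: Optional[str] = None,
                        weight: float = 1.0, begin_step_percent: float = 0.0,
                        end_step_percent: float = 1.0) -> dict:
    """Build a control layer entry; image_name is omitted when not given."""
    entry = {"model_name": model_name, "weight": weight,
             "begin_step_percent": begin_step_percent,
             "end_step_percent": end_step_percent}
    if image_name is not None:
        entry["image_name"] = image_name
    return entry


payload = {
    "loras": [lora_entry("add-detail-xl", weight=0.8)],
    "control_layers": [control_layer_entry("controlnet-canny-sdxl-1.0",
                                           image_name="my_control_image.png")],
}
```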

The backend resolves model names to their internal keys:

  1. Main models — resolved from the name to the model key.
  2. LoRAs — searched in the LoRA model database.
  3. Control adapters — tried in order: ControlNet → T2I Adapter → Control LoRA.
  4. IP Adapters — searched in the IP Adapter database; falls back to FLUX Redux.

Models that cannot be resolved are skipped with a warning in the logs — the rest of the parameters are still applied.
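The control-adapter lookup order can be pictured as a chain: a miss in one database falls through to the next, and a miss everywhere is logged and skipped. A sketch with stand-in dictionaries (the real implementation queries the model databases, not dicts):

```python
# Illustrative stand-ins for the real model databases.
CONTROLNET_DB = {"controlnet-canny-sdxl-1.0": "key-cn-001"}
T2I_DB = {"t2i-depth": "key-t2i-007"}
CONTROL_LORA_DB = {}


def resolve_control_adapter(name: str):
    """Try each database in the documented order:
    ControlNet -> T2I Adapter -> Control LoRA.
    Unresolved names are skipped with a warning, not a failure."""
    for db in (CONTROLNET_DB, T2I_DB, CONTROL_LORA_DB):
        if name in db:
            return db[name]
    print(f"WARNING: could not resolve control adapter '{name}', skipping")
    return None


assert resolve_control_adapter("controlnet-canny-sdxl-1.0") == "key-cn-001"
assert resolve_control_adapter("unknown-model") is None
```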

When an image_name is supplied, the backend:

  1. Resolves {INVOKEAI_ROOT}/outputs/images/{image_name} via the image files service (which also validates the path).
  2. Opens the image to extract width/height.
  3. Includes the image metadata in the event sent to the frontend.
  4. Logs whether the image was found.

Images must be referenced by the bare filename exactly as it appears in the outputs/images directory:

  • Valid: "image_name": "example.png"
  • Valid: "image_name": "my_control_image_20240110.jpg"
  • Invalid: "image_name": "outputs/images/example.png" (do not include the directory prefix)
  • Invalid: "image_name": "/full/path/to/example.png" (absolute paths are not accepted)

Missing images are logged as warnings but do not fail the request — remaining parameters are still applied.
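The filename rules above can be checked client-side before submitting. A sketch (the valid_image_name helper is hypothetical):

```python
import os


def valid_image_name(image_name: str) -> bool:
    """Hypothetical pre-check for the documented rules: the value must be
    a bare filename, with no directory prefix and no absolute path."""
    if os.path.isabs(image_name):
        return False  # absolute paths are not accepted
    if os.path.basename(image_name) != image_name:
        return False  # contains a directory prefix such as outputs/images/
    return True


assert valid_image_name("example.png")
assert not valid_image_name("outputs/images/example.png")
assert not valid_image_name("/full/path/to/example.png")
```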

LoRAs:

  • Existing LoRAs are cleared before new ones are added.
  • Each LoRA’s model config is fetched and applied with the specified weight.
  • LoRAs appear in the LoRA selector panel.

Control Layers:

  • Fully supported with optional images from outputs/images.
  • Configuration includes model, weights, step percentages, control mode, and an image reference.
  • Image availability is logged in the frontend console.

IP Adapters and FLUX Redux:

  • Reference images loaded from outputs/images are validated and passed through.
  • Configuration includes model, weights, step percentages, method, and an image reference.
  • FLUX Redux uses image_influence instead of a numeric weight.

Model-free reference images are used by architectures that consume a reference image directly, with no separate adapter model:

  • FLUX.2 Klein — built-in reference image support.
  • FLUX Kontext — reference image associated with the main model.
  • Qwen Image Edit — reference image associated with the main model.

Because there is no adapter model to resolve, these entries carry only image_name. When the frontend receives them, it picks the appropriate config flavor (flux2_reference_image, flux_kontext_reference_image, or qwen_image_reference_image) based on the currently-selected main model, matching the behavior of a manual drag-and-drop.
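The flavor-selection step can be sketched as a lookup keyed on the main model's family. The base-family strings and the reference_image_config helper below are illustrative assumptions; only the three flavor names come from the document:

```python
# Assumed base-family identifiers (illustrative, not the real model records).
FLAVOR_BY_BASE = {
    "flux2": "flux2_reference_image",
    "flux-kontext": "flux_kontext_reference_image",
    "qwen-image": "qwen_image_reference_image",
}


def reference_image_config(base_family: str, image_name: str) -> dict:
    """Pick the reference-image config flavor for the selected main model,
    mirroring what the frontend does for a manual drag-and-drop."""
    flavor = FLAVOR_BY_BASE.get(base_family)
    if flavor is None:
        raise ValueError(f"{base_family!r} has no model-free reference image support")
    return {"type": flavor, "image_name": image_name}


print(reference_image_config("flux-kontext", "style_reference.png"))
# {'type': 'flux_kontext_reference_image', 'image_name': 'style_reference.png'}
```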

# Core parameters
curl -X POST http://localhost:9090/api/v1/recall/default \
  -H "Content-Type: application/json" \
  -d '{
    "positive_prompt": "a cyberpunk city at night",
    "negative_prompt": "dark, unclear",
    "model": "sd-1.5",
    "steps": 30
  }'

# Just the seed
curl -X POST http://localhost:9090/api/v1/recall/default \
  -H "Content-Type: application/json" \
  -d '{"seed": 99999}'
curl -X POST http://localhost:9090/api/v1/recall/default \
  -H "Content-Type: application/json" \
  -d '{
    "loras": [
      {"model_name": "add-detail-xl", "weight": 0.8, "is_enabled": true},
      {"model_name": "sd_xl_offset_example-lora_1.0", "weight": 0.5}
    ]
  }'
curl -X POST http://localhost:9090/api/v1/recall/default \
  -H "Content-Type: application/json" \
  -d '{
    "control_layers": [
      {
        "model_name": "controlnet-canny-sdxl-1.0",
        "image_name": "my_control_image.png",
        "weight": 0.75,
        "begin_step_percent": 0.0,
        "end_step_percent": 0.8,
        "control_mode": "balanced"
      }
    ]
  }'
curl -X POST http://localhost:9090/api/v1/recall/default \
  -H "Content-Type: application/json" \
  -d '{
    "ip_adapters": [
      {
        "model_name": "ip-adapter-plus-face_sd15",
        "image_name": "reference_face.png",
        "weight": 0.7,
        "method": "composition"
      }
    ]
  }'

Model-free reference images (FLUX.2 Klein / FLUX Kontext / Qwen Image Edit)

curl -X POST http://localhost:9090/api/v1/recall/default \
  -H "Content-Type: application/json" \
  -d '{
    "model": "FLUX.2 Klein",
    "reference_images": [
      {"image_name": "style_reference.png"}
    ]
  }'
curl -X POST http://localhost:9090/api/v1/recall/default \
  -H "Content-Type: application/json" \
  -d '{
    "positive_prompt": "masterpiece, detailed photo with specific style",
    "negative_prompt": "blurry, low quality",
    "model": "FLUX Schnell",
    "steps": 25,
    "cfg_scale": 8.0,
    "width": 1024,
    "height": 768,
    "seed": 42,
    "loras": [
      {"model_name": "add-detail-xl", "weight": 0.6}
    ],
    "control_layers": [
      {
        "model_name": "controlnet-depth-sdxl-1.0",
        "image_name": "depth_map.png",
        "weight": 1.0,
        "end_step_percent": 0.7
      }
    ],
    "ip_adapters": [
      {
        "model_name": "ip-adapter-plus-face_sd15",
        "image_name": "style_reference.png",
        "weight": 0.5,
        "method": "style"
      }
    ]
  }'
import requests

API_URL = "http://localhost:9090/api/v1/recall/default"

params = {
    "positive_prompt": "a serene forest",
    "negative_prompt": "people, buildings",
    "steps": 25,
    "cfg_scale": 7.0,
    "seed": 42,
}

response = requests.post(API_URL, json=params)
result = response.json()
print(f"Status: {result['status']}")
print(f"Updated {result['updated_count']} parameters")
const API_URL = 'http://localhost:9090/api/v1/recall/default';

fetch(API_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    positive_prompt: 'a beautiful sunset',
    steps: 20,
    width: 768,
    height: 768,
    seed: 12345,
  }),
})
  .then((res) => res.json())
  .then((data) => console.log(data));
{
  "status": "success",
  "queue_id": "default",
  "updated_count": 15,
  "parameters": {
    "positive_prompt": "...",
    "steps": 25,
    "loras": [
      {"model_key": "abc123...", "weight": 0.6, "is_enabled": true}
    ],
    "control_layers": [
      {
        "model_key": "controlnet-xyz...",
        "weight": 1.0,
        "image": {"image_name": "depth_map.png", "width": 1024, "height": 768}
      }
    ],
    "ip_adapters": [
      {
        "model_key": "ip-adapter-xyz...",
        "weight": 0.5,
        "image": {"image_name": "style_reference.png", "width": 1024, "height": 1024}
      }
    ],
    "reference_images": [
      {"image": {"image_name": "style_reference.png", "width": 1024, "height": 1024}}
    ]
  }
}

Parameter updates emit a recall_parameters_updated event to the queue room. Connected frontend clients automatically:

  1. Apply standard parameters (prompts, steps, dimensions, etc.).
  2. Load and add LoRAs to the LoRA list.
  3. Apply control-layer configurations.
  4. Apply IP Adapter / FLUX Redux configurations with their images.
  5. Append model-free reference images, using the config flavor that matches the currently-selected main model.

Error responses:

  • 400 Bad Request — invalid parameters or parameter values.
  • 500 Internal Server Error — server-side storage or retrieval failure.

Errors include detailed messages. Missing images and unresolved model names are not errors — they are logged and the remaining parameters are still applied.

Backend log examples:

INFO: Resolved ControlNet model name 'controlnet-canny-sdxl-1.0' to key 'controlnet-xyz...'
INFO: Found image file: depth_map.png (1024x768)
INFO: Updated 12 recall parameters for queue default
INFO: Resolved 1 LoRA(s)
INFO: Resolved 1 control layer(s)
INFO: Resolved 1 IP adapter(s)
INFO: Resolved 1 reference image(s)

Set localStorage.ROARR_FILTER = 'debug' in the browser to see all debug messages under the events namespace.

INFO: Applied 5 recall parameters to store
INFO: Applied 1 IP adapter(s), replacing existing list
INFO: Applied 1 model-free reference image(s)
DEBUG: Built IP adapter ref image state: ip-adapter-xyz... (weight: 0.7)
DEBUG: IP adapter image: outputs/images/depth_map.png (1024x768)

Implementation notes:

  • Parameters are stored in the client state persistence service under recall_* keys, scoped to the queue_id.
  • Numeric validation runs at the FastAPI layer (e.g. steps ≥ 1, width ≥ 64).
  • Only non-null parameters are processed, stored, and broadcast.
  • Model-key resolution runs after the raw parameters are stored, so an unresolvable model name simply drops out of the broadcast but does not corrupt the persisted state.
  • The broadcast payload contains resolved model keys and image metadata (width/height) so the frontend can populate its store without extra round-trips.

If you see “Image file not found” in the logs:

  1. Verify the filename matches exactly (case-sensitive).
  2. Ensure the image is in {INVOKEAI_ROOT}/outputs/images/.
  3. Check that the filename does not include the outputs/images/ prefix.
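A common fix for this error is stripping any accidental directory prefix down to the bare filename the API expects. A hypothetical helper:

```python
import os


def to_bare_filename(path: str) -> str:
    """Hypothetical fix-up for the most common 'Image file not found'
    cause: a path or outputs/images/ prefix where a bare filename is
    expected. Note this does not fix case mismatches in the name."""
    return os.path.basename(path)


assert to_bare_filename("outputs/images/example.png") == "example.png"
assert to_bare_filename("/full/path/to/example.png") == "example.png"
assert to_bare_filename("example.png") == "example.png"
```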

If you see “Could not find model”:

  1. Verify the model name matches exactly (case-sensitive).
  2. Ensure the model is installed.
  3. Check the name via the Models Manager panel.

If parameter updates are not applied in the frontend:

  1. Check the browser console for socket connection errors.
  2. Verify the queue_id matches the frontend’s queue (usually default).
  3. Check backend logs for event emission errors.

Current limitations:

  • Model availability — models referenced in the payload must be installed.
  • Image availability — images must exist in outputs/images; remote URLs are not supported.
  • Canvas auto-layer creation — control layers and IP adapters with images populate the recall state, but creating a canvas layer from them still happens through the UI.

Potential improvements not yet implemented:

  1. Auto-create canvas layers from control-layer images in the payload.
  2. Auto-create reference-image layers from IP Adapter images in the payload.
  3. Support remote image URLs in addition to local outputs/images filenames.
  4. Image upload capability (accept base64 or file upload directly via the API).
  5. Batch operations that target multiple queue_ids in a single request.