{"version":1,"count":18,"entries":[{"code":"missing_authorization","type":"authentication_error","http_status":401,"title":"Authorization header is missing","description":"All `/v1/*` and `/c1/*` endpoints require a Bearer token.","remediation":"Add `Authorization: Bearer <RUST_API_BEARER>` to the request.","typical_param":"headers.authorization"},{"code":"invalid_authorization","type":"authentication_error","http_status":401,"title":"Authorization header is invalid","description":"The Bearer token did not match the configured value.","remediation":"Check `RUST_API_BEARER` in your environment matches the server. The comparison is constant-time (timing-attack safe).","typical_param":"headers.authorization"},{"code":"rate_limit_exceeded","type":"rate_limit_exceeded","http_status":429,"title":"Rate limit exceeded","description":"Per-token rate limits are enforced (default 1 request/sec, 30 burst). The bucket is keyed on the SHA-256 hash of the bearer token, not the IP.","remediation":"Back off and retry. Tune `RATE_LIMIT_PER_SECOND` and `RATE_LIMIT_BURST` env vars on the server if your workload needs more.","typical_param":null},{"code":"body_too_large","type":"invalid_request_error","http_status":413,"title":"Request body exceeds size limit","description":"Bodies are capped at `MAX_BODY_BYTES` (default 32 MB). Triggered most often by very large base64-encoded image inputs.","remediation":"Resize the image, switch to a smaller model, or split the conversation. 
For very large media, consider `gpt-image` workflow patterns.","typical_param":null},{"code":"invalid_json","type":"invalid_request_error","http_status":400,"title":"Request body is not valid JSON","description":"The body could not be parsed as JSON or did not match the expected schema.","remediation":"Validate your JSON; check the OpenAPI schema at `/openapi.json` for the endpoint.","typical_param":null},{"code":"missing_field","type":"invalid_request_error","http_status":400,"title":"Required field is missing","description":"The endpoint schema requires a field that was absent.","remediation":"See `param` for which field; consult `/openapi.json`.","typical_param":"<varies>"},{"code":"invalid_field","type":"invalid_request_error","http_status":400,"title":"Field value is invalid","description":"A field's value did not match the expected type, range, or enum.","remediation":"See `param` for which field; check `/openapi.json` for valid values.","typical_param":"<varies>"},{"code":"model_not_in_allowlist","type":"invalid_request_error","http_status":400,"title":"Requested model is not allowed","description":"The `model` field references an alias that the server does not whitelist. The allowlist is hard-coded in `rust-api/src/config.rs::ALLOWED_MODELS`.","remediation":"Use one of the models from `/v1/info` or `/v1/models`.","typical_param":"model"},{"code":"max_tokens_below_minimum","type":"invalid_request_error","http_status":400,"title":"max_tokens is below the model's minimum","description":"Some upstream providers reject low values: OpenAI requires `max_output_tokens >= 16`, reasoning-capable models need `>= 200` so they have room to think before producing visible output.\n\nThe rust-api auto-floors `max_tokens` to the model's documented minimum *silently* and adds a response header `x-rust-api-applied: max_tokens_floored=<N>`. 
Only when even the floor would still be invalid does this error surface.","remediation":"Set `max_tokens` to at least the value listed under `constraints.min_max_tokens` for the chosen model in `/v1/info`. For reasoning models (gpt-5.5-pro, gemini-3.1-pro, flagship), `200` is a safe default.","typical_param":"max_tokens"},{"code":"image_url_not_supported","type":"invalid_request_error","http_status":400,"title":"This model does not accept image URLs (use base64 data URI)","description":"Cloud-side multimodal providers (Anthropic, Google, OpenAI, xAI, OpenRouter) refuse `image_url.url` values that are `http://` or `https://` URLs. They block on anti-bot, TLS probing, or size limits.\n\nThe rust-api validates against `constraints.accepts_image_url` from the model catalog and rejects up-front rather than letting the upstream reject with an opaque 400.\n\nThe local Llama-4-Scout vLLM backend behaves the same way and is configured identically.","remediation":"Encode your image as a base64 data URI: `data:image/jpeg;base64,/9j/4AAQ...`. Example with `curl`:\n```sh\nB64=$(base64 -w 0 image.jpg)\ncurl ... -d \"{\\\"messages\\\":[{\\\"role\\\":\\\"user\\\",\\\"content\\\":[{\\\"type\\\":\\\"text\\\",\\\"text\\\":\\\"Describe this\\\"},{\\\"type\\\":\\\"image_url\\\",\\\"image_url\\\":{\\\"url\\\":\\\"data:image/jpeg;base64,$B64\\\"}}]}]}\"\n```","typical_param":"messages[].content[].image_url.url"},{"code":"image_decode_error","type":"invalid_request_error","http_status":400,"title":"Could not decode the supplied image data","description":"The base64-encoded data URI could not be parsed, the MIME type was missing, or the decoded bytes were not a valid image.","remediation":"Verify the data URI format `data:image/<jpeg|png|webp|gif>;base64,<data>`. 
Re-encode with `base64 -w 0` (no line wrapping).","typical_param":"messages[].content[].image_url.url"},{"code":"conversation_not_found","type":"not_found","http_status":404,"title":"Conversation does not exist","description":"Used by `/c1/*`. The supplied `conversation_id` was never persisted (or was deleted, or belongs to a different `user_id`).","remediation":"Omit `conversation_id` to start a fresh conversation (the server returns the new id in the `x-conversation-id` response header). Or list existing conversations via `GET /c1/conversations`.","typical_param":"conversation_id"},{"code":"route_not_found","type":"not_found","http_status":404,"title":"Route not found","description":"Path/method combination does not exist on this server.","remediation":"Check `/openapi.json` for the list of routes.","typical_param":null},{"code":"upstream_error","type":"upstream_error","http_status":502,"title":"Upstream provider returned an error","description":"LiteLLM, vLLM, or a cloud provider answered with a non-2xx response. The server includes the upstream message inline in `message` for debugging.","remediation":"Read the inline upstream message. Common causes: model rejected an oversize prompt, content moderation flag, provider-side outage. Retry with adjusted input or wait.","typical_param":null},{"code":"upstream_timeout","type":"upstream_error","http_status":504,"title":"Upstream provider timed out","description":"Reqwest hit the configured `HTTP_TOTAL_TIMEOUT_SECS` (default 600). Most likely cause is a slow image-generation model — `gpt-image` regularly takes 100-180 s.","remediation":"Increase your client timeout; check `constraints.typical_response_seconds` per model in `/v1/info`. 
For `gpt-image`, set client timeout ≥ 240 s.","typical_param":null},{"code":"upstream_unavailable","type":"upstream_error","http_status":503,"title":"Upstream provider is unavailable","description":"Could not establish a TCP/TLS connection to LiteLLM (`litellm:4000`) or vLLM (`vllm:8000`).","remediation":"Check `/readyz` to see which backend is unreachable, then `docker ps`/`docker logs` on the failing service.","typical_param":null},{"code":"internal_error","type":"internal_error","http_status":500,"title":"An internal error occurred","description":"Server-side bug, panic, or unexpected condition. The server logs the full detail with `tracing::error!`; the response intentionally omits internals.","remediation":"Retry. If reproducible, capture request id from `x-request-id` and check server logs.","typical_param":null},{"code":"storage_error","type":"internal_error","http_status":500,"title":"Conversation storage error","description":"SQLite read/write failed for `/c1/*` endpoints. Most common cause: the volume mount `data/sqlite/` is owned by the wrong UID (rust-api runs as 65532).","remediation":"On the server: `sudo chown -R 65532:65532 /home/dietmar/dgx-llm/data/sqlite && docker compose restart rust-api`.","typical_param":null}]}
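Clients can drive their retry policy directly off the `code` values in this catalog: `authentication_error` and `invalid_request_error` codes will fail identically on every retry, while the rate-limit and upstream codes are transient. A minimal client-side sketch (Python); the retryable set follows the catalog, but the backoff base, cap, and attempt limit are illustrative choices, not part of the API:

```python
# Codes from the catalog worth retrying automatically. Everything else
# (auth errors, invalid_request_error, not_found) fails the same way twice.
RETRYABLE_CODES = {
    "rate_limit_exceeded",    # 429: bucket refills at 1 request/sec
    "upstream_error",         # 502: provider hiccup may be transient
    "upstream_timeout",       # 504: slow model; also raise the client timeout
    "upstream_unavailable",   # 503: backend container may be restarting
}

def backoff_seconds(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff: 1, 2, 4, ... seconds, capped at `cap`."""
    return min(cap, base * (2 ** attempt))

def should_retry(error_code: str, attempt: int, max_attempts: int = 5) -> bool:
    """Retry only transient codes, up to `max_attempts` tries."""
    return attempt < max_attempts and error_code in RETRYABLE_CODES
```

On `rate_limit_exceeded` this waits 1 s, 2 s, 4 s, ... while the per-token bucket (default 1 request/sec, 30 burst) refills; non-transient codes such as `invalid_json` fail fast so the caller can fix the request instead of looping.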