API Overview
What is the MultiRoute API?
The MultiRoute API provides a unified interface to multiple AI providers and models. It standardizes request and response formats so that your backend can call a single API while MultiRoute handles routing, fallback, and provider-specific details.
Use this overview to understand the base URL, versioning, environments, and the major endpoint families.
Base URL
The canonical production base URL for management endpoints (configs, API keys) is:
https://api.multiroute.ai/v1
OpenAI-compatible inference endpoints (chat completions, responses, images) are under:
https://api.multiroute.ai/openai/v1
All examples in this documentation use the appropriate base URL for each endpoint family.
Versioning
The MultiRoute API uses a path-based versioning strategy:
- The /v1 prefix indicates the current stable version of the API.
- Backwards-incompatible changes will be released under a new prefix (for example, /v2), while /v1 continues to function for a deprecation period.
When constructing requests, always include the version prefix in the path (for example, POST /openai/v1/chat/completions).
Environments
MultiRoute supports different environments for development and production:
- Production (recommended for live traffic)
  - Base URL: https://api.multiroute.ai/v1
  - OpenAI inference: https://api.multiroute.ai/openai/v1
- Local / development (for testing with a locally running instance of this service)
  - Base URL (example): http://localhost:8000/v1
  - OpenAI inference (example): http://localhost:8000/openai/v1
Your actual local base URL may vary depending on how you run the service (Docker, Kubernetes, etc.).
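One way to handle the two environments is to resolve the base URL from a single setting. The sketch below is purely illustrative (the names ENV_BASES and resolve_base are not part of any MultiRoute SDK, and the localhost port is just the example from this page):

```python
# Sketch: resolve the base URL from an environment name. "production" uses the
# canonical URLs above; "local" uses the example localhost URLs, which may
# differ depending on how you run the service (Docker, Kubernetes, etc.).

ENV_BASES = {
    "production": "https://api.multiroute.ai",
    "local": "http://localhost:8000",
}

def resolve_base(env: str, openai_compatible: bool = False) -> str:
    """Return the versioned base URL for the given environment."""
    base = ENV_BASES[env]  # raises KeyError for unknown environments
    return f"{base}/openai/v1" if openai_compatible else f"{base}/v1"

print(resolve_base("production"))                      # https://api.multiroute.ai/v1
print(resolve_base("local", openai_compatible=True))   # http://localhost:8000/openai/v1
```

Keeping the version prefix in one helper also means a future move to /v2 is a one-line change.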
Authentication (high level)
All requests to the MultiRoute API must be authenticated. The primary mechanism is an API key passed in the Authorization header:
Authorization: Bearer <your-api-key>
For some administrative endpoints (such as configuration and key management), a JWT-based flow may also be available:
Authorization: Bearer <access-token>
See Authentication for complete details, including how to obtain keys, token lifetimes, and best practices for secure storage.
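To make the header format concrete, here is a minimal sketch of building an authenticated request with Python's standard library. The key value and model name are placeholders, and build_request is a hypothetical helper, not an official client:

```python
import json
import urllib.request

# Placeholder key for illustration only. Store real keys in an environment
# variable or secret manager, never in source code.
API_KEY = "mr-example-key"

def build_request(url: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request (constructed, not yet sent)."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    "https://api.multiroute.ai/openai/v1/chat/completions",
    {"model": "example-model", "messages": [{"role": "user", "content": "Hi"}]},
)
print(req.get_header("Authorization"))  # Bearer mr-example-key
```

The same Authorization header shape applies whether the bearer value is an API key or a JWT access token.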
Major endpoints
The table below summarizes the primary endpoint families:
| Endpoint family | Example path | Methods | Description |
|---|---|---|---|
| Chat completions | /openai/v1/chat/completions | POST | Text and chat-based completions, with support for streaming and non-streaming responses. |
| Responses | /openai/v1/responses | POST | Higher-level response API that handles prompt formatting and response shaping on top of underlying model calls. |
| Images (beta) | /openai/v1/images/generations | POST | Generate images from text prompts. This endpoint is experimental and may change. |
| Configs | /v1/configs | GET, POST, PUT, PATCH, DELETE | Manage provider configs: list, get, create, update, delete. Use the Providers page in the app or the config API. |
| API keys | /v1/api-keys | GET, POST, DELETE | Manage API keys: list, create, revoke, and rotate keys. |
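Because the inference endpoints are OpenAI-compatible, request bodies follow the familiar OpenAI chat completions shape. The payload below is a minimal, hypothetical example (the model id is a placeholder; consult the dedicated endpoint pages for the full schema):

```python
import json

# Minimal OpenAI-compatible chat completions body. "example-model" is a
# placeholder, not a real MultiRoute model id.
payload = {
    "model": "example-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the MultiRoute API in one line."},
    ],
    "stream": False,  # set to True to receive a streamed response
}

body = json.dumps(payload)  # serialized JSON, ready to POST to /openai/v1/chat/completions
```

The stream flag selects between the streaming and non-streaming modes noted in the table above.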
For detailed request and response schemas, see the dedicated pages: