Documentation
Everything you need to use the enowxai proxy
API Reference
enowxai exposes an OpenAI-compatible API. All endpoints are served at http://localhost:1430.
GET /v1/models
List all available models. Returns an OpenAI-compatible model list.
Response
{
  "object": "list",
  "data": [
    {"id": "claude-sonnet-4", "object": "model", "owned_by": "enowxlabs"},
    {"id": "gemini-2.5-pro", "object": "model", "owned_by": "enowxlabs"},
    ...
  ]
}
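A minimal client sketch for this endpoint, using only the Python standard library. It assumes the proxy is reachable at http://localhost:1430 and requires no API key; the helper names `model_ids` and `fetch_models` are illustrative, not part of enowxai itself.

```python
import json
from urllib.request import urlopen

def model_ids(payload: dict) -> list:
    """Pull the model IDs out of an OpenAI-style model list response."""
    return [m["id"] for m in payload.get("data", [])]

def fetch_models(base_url: str = "http://localhost:1430") -> list:
    """GET /v1/models from the proxy (assumes it is running locally)."""
    with urlopen(base_url + "/v1/models") as resp:
        return model_ids(json.load(resp))
```

For example, `fetch_models()` against the response shown above would return `["claude-sonnet-4", "gemini-2.5-pro"]`.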
POST /v1/chat/completions
Create a chat completion. Supports both streaming and non-streaming responses.
Request Body
{
  "model": "claude-sonnet-4",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "stream": true,
  "temperature": 0.7,
  "max_tokens": 4096
}
model (required) — Model ID from /v1/models
messages (required) — Array of message objects with role and content
stream (optional) — Enable SSE streaming (default: false)
GET /health
Health check endpoint. Returns 200 if the proxy is running.
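A small liveness probe, sketched with the standard library under the same assumption that the proxy listens on localhost:1430; the function name `is_healthy` is illustrative.

```python
from urllib.request import urlopen

def is_healthy(base_url: str = "http://localhost:1430",
               timeout: float = 2.0) -> bool:
    """Return True if GET /health answers with HTTP 200, False otherwise."""
    try:
        with urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, etc.
        return False
```

This is handy as a readiness gate in scripts: poll `is_healthy()` before sending any completion requests.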