Translation Proxy

The translation proxy is the core of Claudex. It sits between Claude Code and your AI providers, transparently converting between the Anthropic Messages API and the OpenAI Chat Completions API (or Responses API).

```
Claude Code → Anthropic Messages API request
└── Claudex Proxy (127.0.0.1:13456)
    ├── DirectAnthropic provider → forward with headers
    ├── OpenAICompatible provider
    │   ├── Translate request: Anthropic → OpenAI Chat Completions
    │   ├── Apply query_params, strip_params, custom_headers
    │   ├── Forward to provider
    │   └── Translate response: OpenAI → Anthropic
    └── OpenAIResponses provider
        ├── Translate request: Anthropic → OpenAI Responses API
        ├── Forward to provider
        └── Translate response: Responses → Anthropic
```

Claudex uses a ProviderAdapter trait to handle differences between provider APIs. Three adapters are implemented:

| Adapter | Translation | Used By |
| --- | --- | --- |
| DirectAnthropic | None (passthrough) | Anthropic, MiniMax, Vertex AI |
| ChatCompletions | Full Anthropic ↔ OpenAI translation | Grok, OpenAI, DeepSeek, Kimi, GLM, OpenRouter, Groq, Mistral, Together AI, Perplexity, Cerebras, Azure OpenAI, GitHub Copilot, GitLab Duo, Ollama, vLLM, LM Studio |
| Responses | Anthropic ↔ OpenAI Responses API | ChatGPT/Codex subscriptions |

Request Translation (Anthropic → OpenAI)

| Anthropic | OpenAI |
| --- | --- |
| system field | System message in messages array |
| messages[].content blocks (text, image, tool_use, tool_result) | messages[].content + tool_calls |
| tools array (JSON Schema with input_schema) | tools array (function format with parameters) |
| tool_choice (auto, any, {name}) | tool_choice (auto, required, {function: {name}}) |
| max_tokens | max_tokens (capped by the max_tokens profile setting, if set) |
| temperature, top_p | Direct mapping (stripped if strip_params matches) |
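The request mapping above can be sketched as follows. This is an illustrative Python sketch, not Claudex's actual implementation; it covers the text, tool, and max_tokens rows and leaves out image and tool_result blocks:

```python
import json

def anthropic_to_openai(req, max_tokens_cap=None):
    """Map an Anthropic Messages request onto an OpenAI Chat Completions
    request (simplified: no image or tool_result blocks)."""
    messages = []
    if "system" in req:
        # Anthropic's top-level system field becomes a system message.
        messages.append({"role": "system", "content": req["system"]})
    for msg in req["messages"]:
        content = msg["content"]
        if isinstance(content, str):
            messages.append({"role": msg["role"], "content": content})
            continue
        text, tool_calls = "", []
        for block in content:
            if block["type"] == "text":
                text += block["text"]
            elif block["type"] == "tool_use":
                # tool_use blocks become entries in OpenAI's tool_calls array.
                tool_calls.append({
                    "id": block["id"],
                    "type": "function",
                    "function": {"name": block["name"],
                                 "arguments": json.dumps(block["input"])},
                })
        entry = {"role": msg["role"], "content": text or None}
        if tool_calls:
            entry["tool_calls"] = tool_calls
        messages.append(entry)
    out = {"model": req["model"], "messages": messages}
    if "tools" in req:
        # The JSON Schema moves from input_schema to function.parameters.
        out["tools"] = [{"type": "function",
                         "function": {"name": t["name"],
                                      "description": t.get("description", ""),
                                      "parameters": t["input_schema"]}}
                        for t in req["tools"]]
    if "max_tokens" in req:
        cap = max_tokens_cap or req["max_tokens"]
        out["max_tokens"] = min(req["max_tokens"], cap)
    return out
```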

Response Translation (OpenAI → Anthropic)

| OpenAI | Anthropic |
| --- | --- |
| choices[0].message.content | content blocks (type: text) |
| choices[0].message.tool_calls | content blocks (type: tool_use) |
| finish_reason: stop | stop_reason: end_turn |
| finish_reason: tool_calls | stop_reason: tool_use |
| usage.prompt_tokens / completion_tokens | usage.input_tokens / output_tokens |
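The reverse direction follows the same table. Again a hedged sketch, not the actual implementation:

```python
import json

def openai_to_anthropic(resp):
    """Map an OpenAI Chat Completions response onto an Anthropic Messages
    response, per the table above (simplified)."""
    choice = resp["choices"][0]
    msg = choice["message"]
    content = []
    if msg.get("content"):
        content.append({"type": "text", "text": msg["content"]})
    for call in msg.get("tool_calls") or []:
        # tool_calls entries become tool_use content blocks.
        content.append({
            "type": "tool_use",
            "id": call["id"],
            "name": call["function"]["name"],
            "input": json.loads(call["function"]["arguments"]),
        })
    stop_map = {"stop": "end_turn", "tool_calls": "tool_use"}
    usage = resp.get("usage", {})
    return {
        "type": "message",
        "role": "assistant",
        "content": content,
        "stop_reason": stop_map.get(choice.get("finish_reason"), "end_turn"),
        "usage": {"input_tokens": usage.get("prompt_tokens", 0),
                  "output_tokens": usage.get("completion_tokens", 0)},
    }
```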

Tool Name Truncation

Claude Code can generate tool names longer than 64 characters (e.g., mcp__server-name__very-long-tool-name-that-exceeds-the-limit), but OpenAI and many other providers enforce a 64-character limit on tool names.

Claudex automatically:

  1. Truncates names exceeding 64 characters in outgoing requests
  2. Builds a mapping table of truncated → original names
  3. Restores original names in provider responses

This roundtrip is fully transparent.
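The roundtrip can be sketched as below. This is illustrative only; in particular, the hash-suffix scheme used here to keep truncated names unique is an assumption, not necessarily what Claudex does:

```python
import hashlib

def truncate_tool_names(tools, limit=64):
    """Shorten over-long tool names for the provider and build the
    truncated -> original mapping table for the return trip."""
    mapping, out = {}, []
    for tool in tools:
        name = tool["name"]
        if len(name) > limit:
            # Hypothetical uniqueness scheme: 8-char hash suffix.
            suffix = hashlib.sha1(name.encode()).hexdigest()[:8]
            short = name[: limit - 9] + "_" + suffix
            mapping[short] = name
            tool = {**tool, "name": short}
        out.append(tool)
    return out, mapping

def restore_tool_name(name, mapping):
    """Restore the original name in a provider response."""
    return mapping.get(name, name)
```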

Streaming

Claudex fully supports SSE (Server-Sent Events) streaming, translating OpenAI stream chunks into Anthropic stream events in real time:

| OpenAI SSE | Anthropic SSE |
| --- | --- |
| First chunk | message_start + content_block_start |
| choices[0].delta.content | content_block_delta (text_delta) |
| choices[0].delta.tool_calls | content_block_delta (input_json_delta) |
| finish_reason present | content_block_stop + message_delta + message_stop |

The streaming translator maintains a state machine to properly handle tool call accumulation and content block boundaries.
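A heavily simplified sketch of that state machine, handling only text deltas (tool-call accumulation omitted), might look like this:

```python
def translate_stream(chunks):
    """Yield Anthropic-style SSE events for a stream of OpenAI chunks,
    following the event mapping in the table above (text-only sketch)."""
    started = False
    for chunk in chunks:
        choice = chunk["choices"][0]
        if not started:
            # The first chunk opens the message and its first content block.
            yield {"type": "message_start"}
            yield {"type": "content_block_start", "index": 0}
            started = True
        delta = choice.get("delta", {})
        if delta.get("content"):
            yield {"type": "content_block_delta", "index": 0,
                   "delta": {"type": "text_delta", "text": delta["content"]}}
        if choice.get("finish_reason"):
            # A finish_reason closes the block and the message.
            yield {"type": "content_block_stop", "index": 0}
            yield {"type": "message_delta", "delta": {"stop_reason": "end_turn"}}
            yield {"type": "message_stop"}
```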

Azure OpenAI

Azure OpenAI uses a different authentication and URL scheme:

  • Authentication: api-key header instead of Authorization: Bearer
  • URL format: https://{resource}.openai.azure.com/openai/deployments/{deployment}
  • API version: Required via query_params

Claudex auto-detects Azure by checking if base_url contains openai.azure.com and adjusts authentication accordingly.
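The detection logic reduces to a substring check on the base URL; a minimal sketch:

```python
def auth_headers(base_url, api_key):
    """Pick the auth header style: Azure endpoints take an api-key header,
    all other providers a standard Bearer token."""
    if "openai.azure.com" in base_url:
        return {"api-key": api_key}
    return {"Authorization": f"Bearer {api_key}"}
```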

Supported Providers

| Provider | Type | Base URL |
| --- | --- | --- |
| Anthropic | DirectAnthropic | https://api.anthropic.com |
| MiniMax | DirectAnthropic | https://api.minimax.io/anthropic |
| Google Vertex AI | DirectAnthropic | https://REGION-aiplatform.googleapis.com/v1/projects/... |
| OpenRouter | OpenAICompatible | https://openrouter.ai/api/v1 |
| Grok (xAI) | OpenAICompatible | https://api.x.ai/v1 |
| OpenAI | OpenAICompatible | https://api.openai.com/v1 |
| DeepSeek | OpenAICompatible | https://api.deepseek.com |
| Kimi/Moonshot | OpenAICompatible | https://api.moonshot.ai/v1 |
| GLM (Zhipu) | OpenAICompatible | https://api.z.ai/api/paas/v4 |
| Groq | OpenAICompatible | https://api.groq.com/openai/v1 |
| Mistral AI | OpenAICompatible | https://api.mistral.ai/v1 |
| Together AI | OpenAICompatible | https://api.together.xyz/v1 |
| Perplexity | OpenAICompatible | https://api.perplexity.ai |
| Cerebras | OpenAICompatible | https://api.cerebras.ai/v1 |
| Azure OpenAI | OpenAICompatible | https://{resource}.openai.azure.com/... |
| GitHub Copilot | OpenAICompatible | https://api.githubcopilot.com |
| GitLab Duo | OpenAICompatible | https://gitlab.com/api/v4/ai/llm/proxy |
| Ollama | OpenAICompatible | http://localhost:11434/v1 |
| vLLM | OpenAICompatible | http://localhost:8000/v1 |
| LM Studio | OpenAICompatible | http://localhost:1234/v1 |
| ChatGPT/Codex sub | OpenAIResponses | https://chatgpt.com/backend-api/codex |

Model Discovery

The proxy exposes a /v1/models endpoint that lists all enabled profiles. Each entry includes custom fields:

  • x-claudex-profile: profile name
  • x-claudex-provider: provider type (anthropic, openai-compatible, openai-responses)

Claude Code queries this endpoint to discover available models.
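An entry in the listing might look like the following. The overall shape is assumed to follow the OpenAI models-list format, and the profile and model names are illustrative:

```json
{
  "object": "list",
  "data": [
    {
      "id": "deepseek-chat",
      "object": "model",
      "x-claudex-profile": "deepseek",
      "x-claudex-provider": "openai-compatible"
    }
  ]
}
```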

Manage the proxy from the command line:

```sh
# Start proxy as a daemon
claudex proxy start -d

# Check proxy status
claudex proxy status

# Stop proxy daemon
claudex proxy stop

# Start on a custom port
claudex proxy start -p 8080
```

When you run claudex run <profile>, Claudex automatically starts the proxy in the background if it is not already running.