
Configuration

Claudex searches for config files in this order:

  1. $CLAUDEX_CONFIG environment variable
  2. ./claudex.toml (current directory)
  3. ./.claudex/config.toml (current directory)
  4. Parent directories (up to 10 levels), checking both patterns
  5. ~/.config/claudex/config.toml (XDG — checked before platform-specific paths)
# Show loaded config path and search order
claudex config
# Create a local config in the current directory
claudex config --init
# Path to claude binary (default: "claude" from PATH)
# claude_binary = "/usr/local/bin/claude"
# Proxy settings
proxy_port = 13456
proxy_host = "127.0.0.1"
# Log level: trace, debug, info, warn, error
log_level = "info"
# Model aliases (shorthand → full model name)
[model_aliases]
grok3 = "grok-3-beta"
gpt4o = "gpt-4o"
ds3 = "deepseek-chat"

Each profile represents an AI provider connection. There are three provider types:

DirectAnthropic

For providers that natively support the Anthropic Messages API. Requests are forwarded with minimal modification.

[[profiles]]
name = "anthropic"
provider_type = "DirectAnthropic"
base_url = "https://api.anthropic.com"
api_key = "sk-ant-..."
default_model = "claude-sonnet-4-20250514"
priority = 100
enabled = true

Compatible providers: Anthropic, MiniMax

OpenAICompatible

For providers using the OpenAI Chat Completions API. Claudex automatically translates between the Anthropic and OpenAI protocols.

[[profiles]]
name = "grok"
provider_type = "OpenAICompatible"
base_url = "https://api.x.ai/v1"
api_key = "xai-..."
default_model = "grok-3-beta"
backup_providers = ["deepseek"]
priority = 100
enabled = true

Compatible providers: Grok (xAI), OpenAI, DeepSeek, Kimi/Moonshot, GLM (Zhipu), OpenRouter, Ollama, vLLM, LM Studio
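The protocol translation for OpenAI-compatible providers can be illustrated with a heavily simplified sketch (hypothetical code, not Claudex's implementation; the real translation also covers tools, images, and streaming):

```python
def anthropic_to_openai(req: dict) -> dict:
    """Heavily simplified sketch of translating an Anthropic Messages
    request into an OpenAI Chat Completions request."""
    messages = []
    if "system" in req:
        # Anthropic carries the system prompt as a top-level field;
        # the Chat Completions API expects it as the first message.
        messages.append({"role": "system", "content": req["system"]})
    messages.extend(
        {"role": m["role"], "content": m["content"]} for m in req["messages"]
    )
    return {
        "model": req["model"],
        "max_tokens": req.get("max_tokens", 1024),
        "messages": messages,
    }
```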

OpenAIResponses

For providers using the OpenAI Responses API (e.g., ChatGPT/Codex subscriptions). Claudex translates between the Anthropic Messages API and the OpenAI Responses API.

[[profiles]]
name = "codex-sub"
provider_type = "OpenAIResponses"
base_url = "https://chatgpt.com/backend-api/codex"
default_model = "gpt-4o"
auth_type = "oauth"
oauth_provider = "openai"

Compatible providers: ChatGPT/Codex subscriptions (via Codex CLI)

| Field | Default | Description |
|---|---|---|
| name | required | Unique profile identifier |
| provider_type | DirectAnthropic | DirectAnthropic, OpenAICompatible, or OpenAIResponses |
| base_url | required | Provider API endpoint |
| api_key | "" | API key (plaintext) |
| api_key_keyring | | Read the API key from the OS keychain instead |
| default_model | required | Default model to use |
| auth_type | "api-key" | "api-key" or "oauth" |
| oauth_provider | | OAuth provider (claude, openai, google, qwen, kimi, github); required when auth_type = "oauth" |
| backup_providers | [] | Failover profile names |
| custom_headers | {} | Extra HTTP headers |
| extra_env | {} | Extra environment variables for Claude |
| priority | 100 | Priority for smart routing |
| enabled | true | Whether this profile is active |
| [profiles.models] | | Model slot mapping table (haiku, sonnet, opus fields) |
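Putting several of the optional fields together, a profile might look like this (a hypothetical example; the extra header, environment variable, and priority values are purely illustrative):

```toml
[[profiles]]
name = "deepseek"
provider_type = "OpenAICompatible"
base_url = "https://api.deepseek.com"
api_key_keyring = "deepseek-api-key"       # read from the OS keychain
default_model = "deepseek-chat"
backup_providers = []                      # no failover targets
priority = 50                              # lower than the default of 100
custom_headers = { "X-Example-Header" = "demo" }  # illustrative extra header
extra_env = { "EXAMPLE_FLAG" = "1" }              # illustrative env var for Claude

[profiles.models]
haiku = "deepseek-chat"
sonnet = "deepseek-chat"
opus = "deepseek-reasoner"
```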

The easiest way to add a profile is the interactive wizard:

claudex profile add

It guides you through provider selection, API key entry (with optional keyring storage), model selection, and connectivity testing.

Store API keys securely in your OS keychain instead of plaintext config:

[[profiles]]
name = "grok"
api_key_keyring = "grok-api-key" # reads from OS keychain

Supported backends:

  • macOS: Keychain
  • Linux: Secret Service (GNOME Keyring / KDE Wallet)

Instead of API keys, you can authenticate with your existing provider subscription via OAuth. This is useful if you have a Claude Pro/Team, ChatGPT Plus, or other subscription plan.

  1. Set the profile’s auth_type to "oauth" and specify the oauth_provider:
[[profiles]]
name = "chatgpt-oauth"
provider_type = "OpenAICompatible"
base_url = "https://api.openai.com/v1"
default_model = "gpt-4o"
auth_type = "oauth"
oauth_provider = "openai"
  2. Log in with the auth command:
claudex auth login openai
  3. Check your auth status:
claudex auth status

| Provider | oauth_provider | Token Source |
|---|---|---|
| Claude | claude | Reads from ~/.claude (Claude Code's native config) |
| OpenAI | openai | Reads from ~/.codex/auth.json (Codex CLI) |
| Google | google | OAuth device code flow |
| Qwen | qwen | OAuth device code flow |
| Kimi | kimi | OAuth device code flow |
| GitHub | github | OAuth device code flow |

When using OAuth profiles, Claudex sets ANTHROPIC_AUTH_TOKEN (not ANTHROPIC_API_KEY) when launching Claude Code. This prevents conflicts with Claude Code’s own subscription login mechanism, which uses ANTHROPIC_API_KEY internally.

The proxy automatically refreshes OAuth tokens before they expire. You can also manually refresh with:

claudex auth refresh openai

Claude Code has a built-in /model switcher with three slots: haiku, sonnet, and opus. By default, these map to Anthropic models, but with Claudex you can map them to any provider’s models.

Add a [profiles.models] table to any profile:

[[profiles]]
name = "grok"
provider_type = "OpenAICompatible"
base_url = "https://api.x.ai/v1"
api_key = "xai-..."
default_model = "grok-3-beta"
[profiles.models]
haiku = "grok-3-mini-beta"
sonnet = "grok-3-beta"
opus = "grok-3-beta"

When you type /model sonnet inside Claude Code, Claudex translates the request to use grok-3-beta; /model haiku maps to grok-3-mini-beta, and so on.

# OpenAI model mapping
[profiles.models]
haiku = "gpt-4o-mini"
sonnet = "gpt-4o"
opus = "o1"
# DeepSeek model mapping
[profiles.models]
haiku = "deepseek-chat"
sonnet = "deepseek-chat"
opus = "deepseek-reasoner"
# Google Gemini model mapping
[profiles.models]
haiku = "gemini-2.0-flash"
sonnet = "gemini-2.5-pro"
opus = "gemini-2.5-pro"

Some providers (notably OpenAI) enforce a 64-character limit on tool (function) names. Claude Code can generate tool names that exceed this limit.

Claudex automatically truncates tool names longer than 64 characters when sending requests to OpenAI-compatible providers and restores the original names when processing responses. The roundtrip is transparent and requires no configuration.
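The technique can be sketched as follows (a hypothetical scheme, not Claudex's actual code: a hash suffix keeps truncated names unique, and a reverse map restores them on the response path):

```python
import hashlib

MAX_TOOL_NAME = 64  # OpenAI's limit on function names

def truncate_tool_names(names):
    """Map tool names to <=64-char equivalents, returning the truncated
    list plus a reverse map for restoring names from responses."""
    reverse = {}
    out = []
    for name in names:
        if len(name) <= MAX_TOOL_NAME:
            short = name
        else:
            # An 8-char hash suffix keeps truncated names distinct even
            # when two long names share the same 55-char prefix.
            suffix = hashlib.sha256(name.encode()).hexdigest()[:8]
            short = name[:MAX_TOOL_NAME - 9] + "_" + suffix
        reverse[short] = name
        out.append(short)
    return out, reverse

def restore_tool_name(short, reverse):
    """Restore the original name when processing a provider response."""
    return reverse.get(short, short)
```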

Claudex supports one-shot (non-interactive) execution for use in CI/CD pipelines, scripts, and automation:

# Print response and exit
claudex run grok "Explain this codebase" --print
# Skip all permission prompts (for fully automated pipelines)
claudex run grok "Fix lint errors" --print --dangerously-skip-permissions

In non-interactive mode, logs are written to per-instance log files at ~/Library/Caches/claudex/proxy-{timestamp}-{pid}.log instead of stderr, keeping the stdout output clean for piping and automation.

Each Claudex proxy instance writes logs to its own file:

~/Library/Caches/claudex/proxy-{timestamp}-{pid}.log

This avoids log interleaving when running multiple instances and keeps stderr clean during claudex run (especially in non-interactive mode). Logs include proxy startup, request translation, OAuth token refresh events, and circuit breaker state changes.

See config.example.toml for a complete configuration file with all providers and options.

For step-by-step setup instructions for each provider (including API key links and OAuth flows), see the Provider Setup Guide.