# Configuration Reference
## Config File Location

Claudex searches for config files in this order:

1. `$CLAUDEX_CONFIG` environment variable
2. `./claudex.toml` (current directory)
3. `./.claudex/config.toml` (current directory)
4. Parent directories (up to 10 levels), checking both patterns
5. `~/.config/claudex/config.toml` (XDG — checked before platform-specific paths)
See Configuration for full details.
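Putting the search order together, a minimal project-local config might look like the sketch below; the profile values are illustrative placeholders drawn from later sections, not defaults:

```toml
# ./.claudex/config.toml — minimal working sketch
log_level = "info"

[[profiles]]
name = "grok"
provider_type = "OpenAICompatible"
base_url = "https://api.x.ai/v1"
api_key = "xai-..."
default_model = "grok-3-beta"
```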
## Global Settings

```toml
# Path to claude binary (default: "claude" from PATH)
claude_binary = "claude"

# Proxy server bind port
proxy_port = 13456

# Proxy server bind address
proxy_host = "127.0.0.1"

# Log level: trace, debug, info, warn, error
log_level = "info"
```

| Field | Type | Default | Description |
|---|---|---|---|
| `claude_binary` | string | `"claude"` | Path to the Claude Code CLI binary |
| `proxy_port` | integer | `13456` | Port the translation proxy listens on |
| `proxy_host` | string | `"127.0.0.1"` | Address the proxy binds to |
| `log_level` | string | `"info"` | Minimum log level |
## Model Aliases

Define shorthand names for model identifiers:

```toml
[model_aliases]
grok3 = "grok-3-beta"
gpt4o = "gpt-4o"
ds3 = "deepseek-chat"
claude = "claude-sonnet-4-20250514"
```

Use aliases with `-m`:

```sh
claudex run grok -m grok3
```

## Profile Configuration
```toml
[[profiles]]
name = "grok"
provider_type = "OpenAICompatible"
base_url = "https://api.x.ai/v1"
api_key = "xai-..."
# api_key_keyring = "grok-api-key"
default_model = "grok-3-beta"
auth_type = "api-key"          # "api-key" (default) or "oauth"
# oauth_provider = "openai"    # required when auth_type = "oauth"
backup_providers = ["deepseek"]
custom_headers = {}
extra_env = {}
priority = 100
enabled = true

# Model slot mapping (optional)
[profiles.models]
haiku = "grok-3-mini-beta"
sonnet = "grok-3-beta"
opus = "grok-3-beta"
```

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | Unique profile identifier |
| `provider_type` | string | `"DirectAnthropic"` | `"DirectAnthropic"`, `"OpenAICompatible"`, or `"OpenAIResponses"` |
| `base_url` | string | required | Provider API endpoint URL |
| `api_key` | string | `""` | API key in plaintext |
| `api_key_keyring` | string | — | OS keychain entry name (overrides `api_key`) |
| `default_model` | string | required | Model identifier to use by default |
| `auth_type` | string | `"api-key"` | Authentication method: `"api-key"` or `"oauth"` |
| `oauth_provider` | string | — | OAuth provider name (required when `auth_type = "oauth"`). One of: `claude`, `openai`, `google`, `qwen`, `kimi`, `github` |
| `backup_providers` | string[] | `[]` | Profile names for failover, tried in order |
| `custom_headers` | map | `{}` | Additional HTTP headers sent with every request |
| `extra_env` | map | `{}` | Environment variables set when launching Claude |
| `priority` | integer | `100` | Priority weight for smart routing (higher = preferred) |
| `enabled` | boolean | `true` | Whether this profile is active |
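As a sketch of the less common fields, the profile below reads its key from the OS keychain and populates `custom_headers` and `extra_env`; the specific header and variable names here are illustrative assumptions, not values any provider requires:

```toml
[[profiles]]
name = "openrouter"
provider_type = "OpenAICompatible"
base_url = "https://openrouter.ai/api/v1"
api_key_keyring = "openrouter-api-key"   # read from the OS keychain instead of api_key
default_model = "anthropic/claude-sonnet-4"
custom_headers = { "HTTP-Referer" = "https://example.com" }   # illustrative header
extra_env = { "NO_PROXY" = "localhost" }                      # illustrative env var
priority = 50
```

Because `api_key_keyring` overrides `api_key`, the plaintext field can be omitted entirely when the keychain entry exists.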
### Model Slot Mapping

The optional `[profiles.models]` table maps Claude Code’s `/model` switcher slots to provider-specific model names. When you switch models inside Claude Code (e.g., `/model opus`), Claudex translates the request to the mapped model.

```toml
[profiles.models]
haiku = "grok-3-mini-beta"   # maps /model haiku
sonnet = "grok-3-beta"       # maps /model sonnet
opus = "grok-3-beta"         # maps /model opus
```

| Field | Type | Description |
|---|---|---|
| `haiku` | string | Model to use when Claude Code selects `haiku` |
| `sonnet` | string | Model to use when Claude Code selects `sonnet` |
| `opus` | string | Model to use when Claude Code selects `opus` |
## Provider Examples

```toml
# Anthropic (DirectAnthropic — no translation)
[[profiles]]
name = "anthropic"
provider_type = "DirectAnthropic"
base_url = "https://api.anthropic.com"
api_key = "sk-ant-..."
default_model = "claude-sonnet-4-20250514"
```
```toml
# MiniMax (DirectAnthropic — no translation)
[[profiles]]
name = "minimax"
provider_type = "DirectAnthropic"
base_url = "https://api.minimax.io/anthropic"
api_key = "..."
default_model = "claude-sonnet-4-20250514"
backup_providers = ["anthropic"]
```
```toml
# OpenRouter (OpenAICompatible — needs translation)
[[profiles]]
name = "openrouter"
provider_type = "OpenAICompatible"
base_url = "https://openrouter.ai/api/v1"
api_key = "..."
default_model = "anthropic/claude-sonnet-4"
```
```toml
# Grok (OpenAICompatible — needs translation)
[[profiles]]
name = "grok"
provider_type = "OpenAICompatible"
base_url = "https://api.x.ai/v1"
api_key = "xai-..."
default_model = "grok-3-beta"
backup_providers = ["deepseek"]
```
```toml
# OpenAI (OpenAICompatible — needs translation)
[[profiles]]
name = "chatgpt"
provider_type = "OpenAICompatible"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."
default_model = "gpt-4o"
```
```toml
# DeepSeek (OpenAICompatible — needs translation)
[[profiles]]
name = "deepseek"
provider_type = "OpenAICompatible"
base_url = "https://api.deepseek.com"
api_key = "..."
default_model = "deepseek-chat"
backup_providers = ["grok"]
```
```toml
# Kimi / Moonshot (OpenAICompatible — needs translation)
[[profiles]]
name = "kimi"
provider_type = "OpenAICompatible"
base_url = "https://api.moonshot.cn/v1"
api_key = "..."
default_model = "moonshot-v1-128k"
```
```toml
# GLM / 智谱 (OpenAICompatible — needs translation)
[[profiles]]
name = "glm"
provider_type = "OpenAICompatible"
base_url = "https://open.bigmodel.cn/api/paas/v4"
api_key = "..."
default_model = "glm-4-plus"
```
```toml
# Ollama (local, no API key needed)
[[profiles]]
name = "local-qwen"
provider_type = "OpenAICompatible"
base_url = "http://localhost:11434/v1"
api_key = ""
default_model = "qwen2.5:72b"
enabled = false
```
```toml
# vLLM / LM Studio (local)
[[profiles]]
name = "local-llama"
provider_type = "OpenAICompatible"
base_url = "http://localhost:8000/v1"
api_key = ""
default_model = "llama-3.3-70b"
enabled = false
```
```toml
# ChatGPT/Codex subscription (OpenAIResponses — Responses API translation)
[[profiles]]
name = "codex-sub"
provider_type = "OpenAIResponses"
base_url = "https://chatgpt.com/backend-api/codex"
default_model = "gpt-4o"
auth_type = "oauth"
oauth_provider = "openai"
```

## OAuth Profile Examples
```toml
# OpenAI via OAuth (reads token from Codex CLI ~/.codex/auth.json)
[[profiles]]
name = "chatgpt-oauth"
provider_type = "OpenAICompatible"
base_url = "https://api.openai.com/v1"
default_model = "gpt-4o"
auth_type = "oauth"
oauth_provider = "openai"

[profiles.models]
haiku = "gpt-4o-mini"
sonnet = "gpt-4o"
opus = "o1"
```
```toml
# Claude subscription (skips proxy, uses Claude's native OAuth from ~/.claude)
[[profiles]]
name = "claude-sub"
provider_type = "DirectAnthropic"
base_url = "https://api.anthropic.com"
default_model = "claude-sonnet-4-20250514"
auth_type = "oauth"
oauth_provider = "claude"

[profiles.models]
haiku = "claude-haiku-4-20250514"
sonnet = "claude-sonnet-4-20250514"
opus = "claude-opus-4-20250514"
```
```toml
# Google Gemini via OAuth
[[profiles]]
name = "gemini"
provider_type = "OpenAICompatible"
base_url = "https://generativelanguage.googleapis.com/v1beta/openai"
default_model = "gemini-2.5-pro"
auth_type = "oauth"
oauth_provider = "google"
```
```toml
# Kimi via OAuth
[[profiles]]
name = "kimi-oauth"
provider_type = "OpenAICompatible"
base_url = "https://api.moonshot.cn/v1"
default_model = "moonshot-v1-128k"
auth_type = "oauth"
oauth_provider = "kimi"
```
```toml
# Qwen via OAuth
[[profiles]]
name = "qwen-oauth"
provider_type = "OpenAICompatible"
base_url = "https://chat.qwenlm.ai/api/chat/v1"
default_model = "qwen-max"
auth_type = "oauth"
oauth_provider = "qwen"
```
```toml
# GitHub Copilot via OAuth
[[profiles]]
name = "github-copilot"
provider_type = "OpenAICompatible"
base_url = "https://api.githubcopilot.com"
default_model = "gpt-4o"
auth_type = "oauth"
oauth_provider = "github"
```
```toml
# ChatGPT/Codex subscription via OAuth (OpenAIResponses)
[[profiles]]
name = "codex-sub"
provider_type = "OpenAIResponses"
base_url = "https://chatgpt.com/backend-api/codex"
default_model = "gpt-4o"
auth_type = "oauth"
oauth_provider = "openai"

[profiles.models]
haiku = "gpt-4o-mini"
sonnet = "gpt-4o"
opus = "o1-pro"
```

## Smart Router

```toml
[router]
enabled = false
profile = "local-qwen"   # reuse a profile's base_url + api_key
model = "qwen2.5:3b"     # override model (optional)
```

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable smart routing |
| `profile` | string | `""` | Profile name to reuse for classification (uses its `base_url` + `api_key`) |
| `model` | string | `""` | Model override for classification (defaults to the profile’s `default_model`) |
### Routing Rules

```toml
[router.rules]
code = "deepseek"
analysis = "grok"
creative = "chatgpt"
search = "kimi"
math = "deepseek"
default = "grok"
```

| Key | Description |
|---|---|
| `code` | Profile for coding tasks |
| `analysis` | Profile for analysis and reasoning |
| `creative` | Profile for creative writing |
| `search` | Profile for search and research |
| `math` | Profile for math and logic |
| `default` | Fallback when intent is unclassified |
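Taken together, an end-to-end router setup might look like the sketch below, assuming profiles named `local-qwen`, `deepseek`, and `grok` are defined elsewhere in the same file:

```toml
[router]
enabled = true
profile = "local-qwen"   # classification requests reuse this profile's base_url + api_key
model = "qwen2.5:3b"     # small, fast model just for intent classification

[router.rules]
code = "deepseek"        # coding requests go to the deepseek profile
math = "deepseek"
default = "grok"         # anything unclassified falls back to grok
```

Pointing the classifier at a small local model keeps routing cheap, while the rules map each detected intent to a full-sized provider profile.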
## Context Engine

### Compression

```toml
[context.compression]
enabled = false
threshold_tokens = 50000
keep_recent = 10
profile = "local-qwen"   # reuse a profile's base_url + api_key
model = "qwen2.5:3b"     # override model (optional)
```

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable conversation compression |
| `threshold_tokens` | integer | `50000` | Compress when the token count exceeds this |
| `keep_recent` | integer | `10` | Always keep the last N messages uncompressed |
| `profile` | string | `""` | Profile name to reuse for summarization (uses its `base_url` + `api_key`) |
| `model` | string | `""` | Model override for summarization (defaults to the profile’s `default_model`) |
### Cross-Profile Sharing

```toml
[context.sharing]
enabled = false
max_context_size = 2000
```

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable cross-profile context sharing |
| `max_context_size` | integer | `2000` | Max tokens to inject from other profiles |
### Local RAG

```toml
[context.rag]
enabled = false
index_paths = ["./src", "./docs"]
profile = "local-qwen"       # reuse a profile's base_url + api_key
model = "nomic-embed-text"   # embedding model
chunk_size = 512
top_k = 5
```

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable local RAG |
| `index_paths` | string[] | `[]` | Directories to index |
| `profile` | string | `""` | Profile name to reuse for embeddings (uses its `base_url` + `api_key`) |
| `model` | string | `""` | Embedding model name (defaults to the profile’s `default_model`) |
| `chunk_size` | integer | `512` | Text chunk size in tokens |
| `top_k` | integer | `5` | Number of results to inject |