
Configuration

Claudex uses figment for layered configuration. Sources are merged in this order (later sources override earlier ones):

  1. Programmatic defaults (built-in fallbacks)
  2. Global config (~/.config/claudex/config.toml or config.yaml)
  3. Project config (claudex.toml or claudex.yaml in the current directory or parent directories up to 10 levels, or $CLAUDEX_CONFIG)
  4. Environment variables (CLAUDEX_ prefix, __ as separator)
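For example, with the CLAUDEX_ prefix and __ separator, a top-level key and a nested key could be overridden like this (the nested model_aliases mapping is a sketch inferred from the separator rule, not a documented example):

```shell
# Override the top-level proxy_port key
export CLAUDEX_PROXY_PORT=9000
# Override a nested key, model_aliases.grok3 ("__" separates table and key)
export CLAUDEX_MODEL_ALIASES__GROK3="grok-3-beta"
```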

Both TOML and YAML formats are supported. The file format is detected by extension (.toml or .yaml/.yml).
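As a sketch, the same settings shown in the TOML examples on this page could equivalently live in a claudex.yaml:

```yaml
# claudex.yaml: same settings as the TOML examples on this page
proxy_port: 13456
proxy_host: "127.0.0.1"
log_level: "info"
model_aliases:
  grok3: "grok-3-beta"
```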

# Show loaded config path and all search locations
claudex config show
# Show just the config file path
claudex config path
# Create a new config in the current directory
claudex config init
# Recreate config from config.example.toml
claudex config recreate
# Open config in your $EDITOR
claudex config edit
# Validate config syntax and profile references
claudex config validate
# Get a specific config value
claudex config get proxy_port
# Set a specific config value
claudex config set proxy_port 8080
# Export current config to stdout
claudex config export

Example config.toml:
# Path to claude binary (default: "claude" from PATH)
# claude_binary = "/usr/local/bin/claude"
# Proxy settings
proxy_port = 13456
proxy_host = "127.0.0.1"
# Log level: trace, debug, info, warn, error
log_level = "info"
# Terminal hyperlinks (OSC 8): "auto" | true | false
# "auto" detects terminal support; true/false force on/off
hyperlinks = "auto"
# Model aliases (shorthand → full model name)
[model_aliases]
grok3 = "grok-3-beta"
gpt4o = "gpt-4o"
ds3 = "deepseek-chat"

Each profile represents an AI provider connection. There are three provider types:

DirectAnthropic: for providers that natively support the Anthropic Messages API. Requests are forwarded with minimal modification.

[[profiles]]
name = "anthropic"
provider_type = "DirectAnthropic"
base_url = "https://api.anthropic.com"
api_key = "sk-ant-..."
default_model = "claude-sonnet-4-20250514"
priority = 100
enabled = true

Compatible providers: Anthropic, MiniMax, Google Vertex AI

OpenAICompatible: for providers using the OpenAI Chat Completions API. Claudex automatically translates between the Anthropic and OpenAI protocols.

[[profiles]]
name = "grok"
provider_type = "OpenAICompatible"
base_url = "https://api.x.ai/v1"
api_key = "xai-..."
default_model = "grok-3-beta"
backup_providers = ["deepseek"]
priority = 100
enabled = true

Compatible providers: Grok (xAI), OpenAI, DeepSeek, Kimi/Moonshot, GLM (Zhipu), OpenRouter, Groq, Mistral, Together AI, Perplexity, Cerebras, Azure OpenAI, GitHub Copilot, GitLab Duo, Ollama, vLLM, LM Studio
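As a sketch, a local Ollama instance could be configured through its OpenAI-compatible /v1 endpoint; the api_key value is a placeholder (local Ollama typically ignores it), and the model name is only an example:

```toml
[[profiles]]
name = "ollama-local"
provider_type = "OpenAICompatible"
base_url = "http://localhost:11434/v1"
api_key = "ollama"          # placeholder; local Ollama normally ignores the key
default_model = "llama3.1"  # example model name; use whatever you have pulled
priority = 50
enabled = true
```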

OpenAIResponses: for providers using the OpenAI Responses API (e.g., ChatGPT/Codex subscriptions). Claudex translates between the Anthropic Messages API and the OpenAI Responses API.

[[profiles]]
name = "codex-sub"
provider_type = "OpenAIResponses"
base_url = "https://chatgpt.com/backend-api/codex"
default_model = "gpt-5.3-codex"
auth_type = "oauth"
oauth_provider = "openai"

Compatible providers: ChatGPT/Codex subscriptions (via Codex CLI)

Profile fields (defaults in parentheses):

  • name (required): Unique profile identifier
  • provider_type (DirectAnthropic): DirectAnthropic, OpenAICompatible, or OpenAIResponses
  • base_url (required): Provider API endpoint
  • api_key (""): API key in plaintext
  • api_key_keyring: Read the API key from the OS keychain instead
  • default_model (required): Default model to use
  • auth_type ("api-key"): "api-key" or "oauth"
  • oauth_provider: OAuth provider (claude, openai, google, qwen, kimi, github, gitlab); required when auth_type = "oauth"
  • backup_providers ([]): Failover profile names
  • custom_headers ({}): Extra HTTP headers
  • extra_env ({}): Extra environment variables for Claude
  • priority (100): Priority for smart routing
  • enabled (true): Whether this profile is active
  • max_tokens (optional): Cap on max output tokens sent to the provider
  • strip_params ("auto"): "auto", "none", or a list such as ["temperature", "top_p"]; "auto" detects unsupported params (e.g., the ChatGPT Codex endpoint)
  • [profiles.query_params] ({}): URL query parameters (e.g., Azure api-version)
  • [profiles.models]: Model slot mapping table (haiku, sonnet, opus fields)
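Pulling the optional fields together, a hypothetical profile might look like this (the provider URL, header, and model names are illustrative, not recommendations):

```toml
[[profiles]]
name = "example-full"
provider_type = "OpenAICompatible"
base_url = "https://api.example.com/v1"
api_key_keyring = "example-api-key"      # read the key from the OS keychain
default_model = "example-model"
backup_providers = ["deepseek"]          # fail over to another profile by name
custom_headers = { "X-Team" = "platform" }
extra_env = { "NO_PROXY" = "localhost" }
priority = 80
max_tokens = 8192                        # cap output tokens sent to the provider
strip_params = "auto"
enabled = true

[profiles.models]
haiku = "example-model-small"
sonnet = "example-model"
opus = "example-model-large"
```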

The easiest way to add a profile is the interactive wizard:

claudex profile add

It guides you through provider selection, API key entry (with optional keyring storage), model selection, and connectivity testing.

Store API keys securely in your OS keychain instead of plaintext config:

[[profiles]]
name = "grok"
api_key_keyring = "grok-api-key" # reads from OS keychain

Supported backends:

  • macOS: Keychain
  • Linux: Secret Service (GNOME Keyring / KDE Wallet)

Instead of API keys, you can authenticate with your existing provider subscription via OAuth. This is useful if you have a Claude Pro/Team, ChatGPT Plus, or other subscription plan.

  1. Set the profile’s auth_type to "oauth" and specify the oauth_provider:
[[profiles]]
name = "codex-sub"
provider_type = "OpenAIResponses"
base_url = "https://chatgpt.com/backend-api/codex"
default_model = "gpt-5.3-codex"
auth_type = "oauth"
oauth_provider = "openai"
  2. Log in with the auth command:

claudex auth login openai
  3. Check your auth status:

claudex auth status

Supported providers (oauth_provider value in parentheses):

  • Claude (claude): Reads from ~/.claude/.credentials.json (Claude Code’s native config)
  • ChatGPT (openai): Browser PKCE or Device Code flow; falls back to ~/.codex/auth.json (Codex CLI)
  • Google (google): Reads from Gemini CLI credentials
  • Qwen (qwen): Device Code flow
  • Kimi (kimi): Reads from Kimi CLI credentials
  • GitHub (github): Device Code flow; falls back to ~/.config/github-copilot/
  • GitLab (gitlab): GITLAB_TOKEN environment variable

For details on each provider’s OAuth flow, see OAuth Subscriptions.

When using OAuth profiles, Claudex sets ANTHROPIC_AUTH_TOKEN (not ANTHROPIC_API_KEY) when launching Claude Code. This prevents conflicts with Claude Code’s own subscription login mechanism, which uses ANTHROPIC_API_KEY internally.

The proxy automatically refreshes OAuth tokens before they expire. You can also manually refresh with:

claudex auth refresh openai

Some providers require URL query parameters (e.g., Azure OpenAI’s api-version). Use the [profiles.query_params] table:

[[profiles]]
name = "azure-openai"
provider_type = "OpenAICompatible"
base_url = "https://YOUR_RESOURCE.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT"
api_key = "YOUR_AZURE_KEY"
default_model = "gpt-4o"
[profiles.query_params]
api-version = "2024-12-01-preview"

Claudex appends these parameters to every request URL for the profile. Azure OpenAI is auto-detected by base_url containing openai.azure.com and uses the api-key header for authentication instead of Authorization: Bearer.

Some providers do not support certain parameters (e.g., ChatGPT Codex endpoint rejects temperature, top_p). The strip_params field controls which parameters are removed before sending:

strip_params = "auto" # auto-detect and strip unsupported params (default)
strip_params = "none" # send all params as-is
strip_params = ["temperature", "top_p", "top_k"] # strip specific params

When set to "auto", Claudex detects known endpoints (e.g., chatgpt.com) and strips parameters that would cause errors.

Claude Code has a built-in /model switcher with three slots: haiku, sonnet, and opus. See Model Slot Mapping for details.

Some providers (notably OpenAI) enforce a 64-character limit on tool (function) names. Claude Code can generate tool names that exceed this limit.

Claudex automatically truncates tool names longer than 64 characters when sending requests to OpenAI-compatible providers and restores the original names when processing responses, so the roundtrip is transparent to Claude Code.

Claudex supports one-shot (non-interactive) execution for use in CI/CD pipelines, scripts, and automation:

# Print response and exit
claudex run grok "Explain this codebase" --print
# Skip all permission prompts (for fully automated pipelines)
claudex run grok "Fix lint errors" --print --dangerously-skip-permissions

In non-interactive mode, logs are written to per-instance log files (e.g., ~/Library/Caches/claudex/proxy-{timestamp}-{pid}.log on macOS) instead of stderr, keeping stdout clean for piping and automation.

Claudex supports OSC 8 clickable hyperlinks in terminal output. See Terminal Hyperlinks for details.

# "auto" detects terminal support; true/false force on/off
hyperlinks = "auto"

Install reusable bundles of rules, skills, and MCP servers. See Configuration Sets for details.

claudex sets add ./my-set
claudex sets list

See config.example.toml for a complete configuration file with all providers and options.

For step-by-step setup instructions for each provider (including API key links and OAuth flows), see the Provider Setup Guide.