Configuration Reference
Claudex uses figment for layered configuration. Sources are merged in the following order (later entries override earlier ones):
- Built-in defaults
- Global config (`~/.config/claudex/config.toml` or `.yaml`)
- Project config (`claudex.toml` or `.claudex/config.toml` in the CWD or a parent directory, or `$CLAUDEX_CONFIG`)
- Environment variables (`CLAUDEX_` prefix, `__` separator)

Both TOML and YAML are supported.
See Configuration.
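The precedence above can be sketched with two overlapping files (hypothetical values). The project file wins for every key it sets, and an environment variable such as `CLAUDEX_LOG_LEVEL=trace` would override both:

```toml
# ~/.config/claudex/config.toml (global)
log_level = "info"
proxy_port = 13456

# ./claudex.toml (project; overrides the global file for the keys it sets)
log_level = "debug"
```

With these two files, the effective configuration is `log_level = "debug"` and `proxy_port = 13456`.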
```toml
# Path to the claude binary (default: "claude" on PATH)
claude_binary = "claude"

# Port the proxy server binds to
proxy_port = 13456

# Address the proxy server binds to
proxy_host = "127.0.0.1"

# Log level: trace, debug, info, warn, error
log_level = "info"

# Terminal hyperlinks (OSC 8): "auto" | true | false
hyperlinks = "auto"
```

| Field | Type | Default | Description |
|---|---|---|---|
| `claude_binary` | string | `"claude"` | Path to the Claude Code CLI binary |
| `proxy_port` | integer | `13456` | Port the translation proxy listens on |
| `proxy_host` | string | `"127.0.0.1"` | Address the proxy binds to |
| `log_level` | string | `"info"` | Minimum log level |
| `hyperlinks` | string/bool | `"auto"` | Terminal hyperlinks: `"auto"` (detect), `true` (force on), `false` (force off) |
Define short names for model identifiers:

```toml
[model_aliases]
grok3 = "grok-3-beta"
gpt4o = "gpt-4o"
ds3 = "deepseek-chat"
claude = "claude-sonnet-4-20250514"
```

Use an alias with `-m`:

```shell
claudex run grok -m grok3
```

Profile Settings
```toml
[[profiles]]
name = "grok"
provider_type = "OpenAICompatible"
base_url = "https://api.x.ai/v1"
api_key = "xai-..."
# api_key_keyring = "grok-api-key"
default_model = "grok-3-beta"
auth_type = "api-key"  # "api-key" (default) or "oauth"
# oauth_provider = "openai"  # required when auth_type = "oauth"
backup_providers = ["deepseek"]
custom_headers = {}
extra_env = {}
priority = 100
enabled = true
max_tokens = 16384     # optional: cap output token count
strip_params = "auto"  # "auto" | "none" | ["temperature", "top_p"]

# URL query parameters (e.g. Azure api-version)
[profiles.query_params]
# api-version = "2024-12-01-preview"

# Model slot mapping (optional)
[profiles.models]
haiku = "grok-3-mini-beta"
sonnet = "grok-3-beta"
opus = "grok-3-beta"
```

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | Unique profile identifier |
| `provider_type` | string | `"DirectAnthropic"` | `"DirectAnthropic"`, `"OpenAICompatible"`, or `"OpenAIResponses"` |
| `base_url` | string | required | Provider API endpoint URL |
| `api_key` | string | `""` | Plaintext API key |
| `api_key_keyring` | string | — | OS keyring entry name (overrides `api_key`) |
| `default_model` | string | required | Model identifier used by default |
| `auth_type` | string | `"api-key"` | Authentication method: `"api-key"` or `"oauth"` |
| `oauth_provider` | string | — | OAuth provider name. One of: `claude`, `openai`, `google`, `qwen`, `kimi`, `github`, `gitlab` |
| `backup_providers` | string[] | `[]` | Profile names for failover, tried in order |
| `custom_headers` | map | `{}` | Extra HTTP headers attached to every request |
| `extra_env` | map | `{}` | Environment variables set when launching Claude |
| `priority` | integer | `100` | Priority weight for smart routing (higher wins) |
| `enabled` | boolean | `true` | Whether this profile is active |
| `max_tokens` | integer | — | Cap on output tokens sent to the provider. When set, it overrides the request's `max_tokens` |
| `strip_params` | string/array | `"auto"` | Parameters removed from requests. `"auto"` detects known endpoints; `"none"` sends everything; an array removes the listed parameters |
```toml
[profiles.query_params]
api-version = "2024-12-01-preview"
```

| Field | Type | Description |
|---|---|---|
| `query_params` | map | Key/value pairs appended to every request URL |
Mainly intended for Azure OpenAI (`api-version`), but it works with any provider.
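As a sketch (with the placeholder resource and deployment names from the Azure example, and assuming the proxy forwards OpenAI-compatible requests to the standard `chat/completions` path), the pair above is appended to each outgoing request URL:

```
https://YOUR_RESOURCE.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT/chat/completions?api-version=2024-12-01-preview
```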
Model Slot Mapping

The optional `[profiles.models]` table maps Claude Code's `/model` switcher slots to provider-specific model names. When you switch models in Claude Code (e.g. `/model opus`), Claudex translates the request to the mapped model.

```toml
[profiles.models]
haiku = "grok-3-mini-beta"  # maps /model haiku
sonnet = "grok-3-beta"      # maps /model sonnet
opus = "grok-3-beta"        # maps /model opus
```

| Field | Type | Description |
|---|---|---|
| `haiku` | string | Model used when Claude Code selects haiku |
| `sonnet` | string | Model used when Claude Code selects sonnet |
| `opus` | string | Model used when Claude Code selects opus |
```toml
# Anthropic (DirectAnthropic — no translation needed)
[[profiles]]
name = "anthropic"
provider_type = "DirectAnthropic"
base_url = "https://api.anthropic.com"
api_key = "sk-ant-..."
default_model = "claude-sonnet-4-20250514"

# MiniMax (DirectAnthropic — no translation needed)
[[profiles]]
name = "minimax"
provider_type = "DirectAnthropic"
base_url = "https://api.minimax.io/anthropic"
api_key = "..."
default_model = "claude-sonnet-4-20250514"
backup_providers = ["anthropic"]

# OpenRouter (OpenAICompatible — requires translation)
[[profiles]]
name = "openrouter"
provider_type = "OpenAICompatible"
base_url = "https://openrouter.ai/api/v1"
api_key = "..."
default_model = "anthropic/claude-sonnet-4"

# Grok (OpenAICompatible — requires translation)
[[profiles]]
name = "grok"
provider_type = "OpenAICompatible"
base_url = "https://api.x.ai/v1"
api_key = "xai-..."
default_model = "grok-3-beta"
backup_providers = ["deepseek"]

# OpenAI (OpenAICompatible — requires translation)
[[profiles]]
name = "chatgpt"
provider_type = "OpenAICompatible"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."
default_model = "gpt-4o"

# DeepSeek (OpenAICompatible — requires translation)
[[profiles]]
name = "deepseek"
provider_type = "OpenAICompatible"
base_url = "https://api.deepseek.com"
api_key = "..."
default_model = "deepseek-chat"
backup_providers = ["grok"]

# Kimi / Moonshot (OpenAICompatible — requires translation)
[[profiles]]
name = "kimi"
provider_type = "OpenAICompatible"
base_url = "https://api.moonshot.ai/v1"
api_key = "..."
default_model = "kimi-k2-0905-preview"

# GLM / Zhipu (OpenAICompatible — requires translation)
[[profiles]]
name = "glm"
provider_type = "OpenAICompatible"
base_url = "https://api.z.ai/api/paas/v4"
api_key = "..."
default_model = "glm-4.6"

# Groq (OpenAICompatible — fast inference)
[[profiles]]
name = "groq"
provider_type = "OpenAICompatible"
base_url = "https://api.groq.com/openai/v1"
api_key = "gsk_..."
default_model = "llama-3.3-70b-versatile"

# Mistral AI (OpenAICompatible — requires translation)
[[profiles]]
name = "mistral"
provider_type = "OpenAICompatible"
base_url = "https://api.mistral.ai/v1"
api_key = "..."
default_model = "mistral-large-latest"

# Together AI (OpenAICompatible — requires translation)
[[profiles]]
name = "together"
provider_type = "OpenAICompatible"
base_url = "https://api.together.xyz/v1"
api_key = "..."
default_model = "meta-llama/Llama-3.3-70B-Instruct-Turbo"

# Perplexity (OpenAICompatible — online search + LLM)
[[profiles]]
name = "perplexity"
provider_type = "OpenAICompatible"
base_url = "https://api.perplexity.ai"
api_key = "pplx-..."
default_model = "sonar-pro"

# Cerebras (OpenAICompatible — fast inference)
[[profiles]]
name = "cerebras"
provider_type = "OpenAICompatible"
base_url = "https://api.cerebras.ai/v1"
api_key = "..."
default_model = "llama-3.3-70b"

# Azure OpenAI (OpenAICompatible — api-key header + query_params)
[[profiles]]
name = "azure-openai"
provider_type = "OpenAICompatible"
base_url = "https://YOUR_RESOURCE.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT"
api_key = "YOUR_AZURE_KEY"
default_model = "gpt-4o"
[profiles.query_params]
api-version = "2024-12-01-preview"

# Google Vertex AI (DirectAnthropic)
[[profiles]]
name = "vertex-ai"
provider_type = "DirectAnthropic"
base_url = "https://us-east5-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT/locations/us-east5/publishers/anthropic/models"
api_key = "YOUR_GCLOUD_TOKEN"
default_model = "claude-sonnet-4@20250514"

# Ollama (local, no API key needed)
[[profiles]]
name = "local-qwen"
provider_type = "OpenAICompatible"
base_url = "http://localhost:11434/v1"
api_key = ""
default_model = "qwen2.5:72b"
enabled = false

# vLLM / LM Studio (local)
[[profiles]]
name = "local-llama"
provider_type = "OpenAICompatible"
base_url = "http://localhost:8000/v1"
api_key = ""
default_model = "llama-3.3-70b"
enabled = false
```

OAuth Profile Examples
```toml
# Claude Max (bypasses the proxy; uses Claude's native OAuth read from ~/.claude)
[[profiles]]
name = "claude-max"
provider_type = "DirectAnthropic"
base_url = "https://api.claude.ai"
default_model = "claude-sonnet-4-20250514"
auth_type = "oauth"
oauth_provider = "claude"

[profiles.models]
haiku = "claude-haiku-4-20250514"
sonnet = "claude-sonnet-4-20250514"
opus = "claude-opus-4-20250514"

# ChatGPT/Codex subscription (OpenAIResponses)
[[profiles]]
name = "codex-sub"
provider_type = "OpenAIResponses"
base_url = "https://chatgpt.com/backend-api/codex"
default_model = "gpt-5.3-codex"
auth_type = "oauth"
oauth_provider = "openai"

[profiles.models]
haiku = "codex-mini-latest"
sonnet = "gpt-5.3-codex"
opus = "gpt-5.3-codex"

# Google Gemini via OAuth
[[profiles]]
name = "gemini-sub"
provider_type = "OpenAICompatible"
base_url = "https://generativelanguage.googleapis.com/v1beta/openai"
default_model = "gemini-2.5-pro"
auth_type = "oauth"
oauth_provider = "google"

# Kimi via OAuth
[[profiles]]
name = "kimi-oauth"
provider_type = "OpenAICompatible"
base_url = "https://api.moonshot.cn/v1"
default_model = "moonshot-v1-128k"
auth_type = "oauth"
oauth_provider = "kimi"

# Qwen via OAuth
[[profiles]]
name = "qwen-oauth"
provider_type = "OpenAICompatible"
base_url = "https://chat.qwen.ai/api"
default_model = "qwen3-235b-a22b"
auth_type = "oauth"
oauth_provider = "qwen"

# GitHub Copilot via OAuth
[[profiles]]
name = "copilot"
provider_type = "OpenAICompatible"
base_url = "https://api.githubcopilot.com"
default_model = "gpt-4o"
auth_type = "oauth"
oauth_provider = "github"

# GitLab Duo via GITLAB_TOKEN
[[profiles]]
name = "gitlab-duo"
provider_type = "OpenAICompatible"
base_url = "https://gitlab.com/api/v4/ai/llm/proxy"
default_model = "claude-sonnet-4-20250514"
auth_type = "oauth"
oauth_provider = "gitlab"
```

```toml
[router]
enabled = false
profile = "local-qwen"  # reuse a profile's base_url + api_key
model = "qwen2.5:3b"    # model override (optional)
```

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable smart routing |
| `profile` | string | `""` | Profile name used for classification (uses its `base_url` + `api_key`) |
| `model` | string | `""` | Model override for classification (defaults to the profile's `default_model`) |
```toml
[router.rules]
code = "deepseek"
analysis = "grok"
creative = "chatgpt"
search = "kimi"
math = "deepseek"
default = "grok"
```

| Key | Description |
|---|---|
| `code` | Profile for programming tasks |
| `analysis` | Profile for analysis and reasoning tasks |
| `creative` | Profile for creative writing |
| `search` | Profile for search and research |
| `math` | Profile for math and logic |
| `default` | Fallback when the intent is unclassified |
```toml
[context.compression]
enabled = false
threshold_tokens = 50000
keep_recent = 10
profile = "local-qwen"  # reuse a profile's base_url + api_key
model = "qwen2.5:3b"    # model override (optional)
```

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable conversation compression |
| `threshold_tokens` | integer | `50000` | Compress once the token count exceeds this value |
| `keep_recent` | integer | `10` | Always keep the last N messages uncompressed |
| `profile` | string | `""` | Profile name used for summarization |
| `model` | string | `""` | Model override for summarization |
Cross-Profile Sharing

```toml
[context.sharing]
enabled = false
max_context_size = 2000
```

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable cross-profile context sharing |
| `max_context_size` | integer | `2000` | Maximum tokens injected from other profiles |
Local RAG

```toml
[context.rag]
enabled = false
index_paths = ["./src", "./docs"]
profile = "local-qwen"      # reuse a profile's base_url + api_key
model = "nomic-embed-text"  # embedding model
chunk_size = 512
top_k = 5
```

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable local RAG |
| `index_paths` | string[] | `[]` | Directories to index |
| `profile` | string | `""` | Profile name used for embeddings |
| `model` | string | `""` | Embedding model name |
| `chunk_size` | integer | `512` | Text chunk size in tokens |
| `top_k` | integer | `5` | Number of results to inject |