Providers
Supported providers and how to connect them.
Arctic supports 130+ AI providers in one unified interface, making it the most comprehensive multi-provider AI coding CLI available.
Quick Start
OAuth/Device Login (Coding Plans)
arctic auth login
Choose your provider and follow the browser flow.
API Keys
Set environment variables and launch Arctic:
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
arctic
For detailed credential management, see Authentication.
Provider Categories
Coding Plans (Subscription-Based)
Premium coding-focused subscriptions with OAuth authentication (see the login example after this list):
- Codex (ChatGPT)
- Gemini CLI (Google)
- Antigravity
- GitHub Copilot
- GitHub Copilot Enterprise
- Claude Code (Anthropic)
- Z.AI
- Kimi for Coding
- Amp Code
- Qwen Code (Alibaba)
- MiniMax Coding Plan
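For example, connecting one of these plans uses the same browser-based OAuth flow. A minimal sketch; the provider id below is illustrative, and running arctic auth login with no argument lets you pick from the interactive list instead:
# Device/OAuth login for a specific coding-plan provider (id is illustrative)
arctic auth login anthropic
# Follow the browser flow to authorize the plan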
API Providers
Pay-per-use API access with flexible authentication (see the example after this list):
- OpenAI — OPENAI_API_KEY
- Anthropic — ANTHROPIC_API_KEY
- Google (Gemini API) — GOOGLE_API_KEY
- Amazon Bedrock — AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION
- Azure OpenAI — AZURE_API_KEY, AZURE_RESOURCE_NAME
- Google Vertex AI — GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION
- Google Vertex Anthropic — GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION
- Perplexity — PERPLEXITY_API_KEY
- OpenRouter — OPENROUTER_API_KEY
- Ollama — Configure host and port
- Groq — GROQ_API_KEY
- Together AI — TOGETHER_API_KEY
- DeepSeek — DEEPSEEK_API_KEY
- Cerebras — CEREBRAS_API_KEY
- Mistral AI — MISTRAL_API_KEY
- Cohere — COHERE_API_KEY
- xAI (Grok) — XAI_API_KEY
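For example, a pay-per-use provider only needs its key in the environment before launching. A minimal sketch, assuming Groq; the model id is illustrative and may differ from what your account exposes:
# Export the provider key listed above, then reference a model as provider/model-id
export GROQ_API_KEY="..."
arctic run --model groq/llama-3.3-70b-versatile "Explain this error"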
Multiple Accounts Per Provider
Arctic supports connecting multiple accounts for the same provider:
# Add a second account
arctic auth login anthropic --name work
# List all accounts
arctic auth list
# Use specific account
arctic run --model anthropic:work/claude-sonnet-4-5 "..."
Benefits:
- Separate personal and work accounts
- Independent usage tracking
- Easy switching between accounts
- Isolated rate limits
Custom Providers (OpenAI-Compatible)
Add any OpenAI-compatible endpoint:
Basic Configuration
{
"provider": {
"my-provider": {
"api": "https://api.example.com/v1",
"npm": "@ai-sdk/openai-compatible",
"env": ["MY_PROVIDER_KEY"],
"models": {
"my-model": {
"id": "my-model-id",
"name": "My Custom Model",
"temperature": true,
"reasoning": false,
"attachment": true,
"tool_call": true,
"limit": {
"context": 128000,
"output": 4096
},
"cost": {
"input": 0.5,
"output": 1.5
}
}
}
}
}
}
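Once defined, the model key from the config (my-model above) is what you reference on the command line. A minimal sketch, assuming the config key rather than the upstream model id is used in the usual provider/model form:
# Set the key named in "env", then address the custom provider's model
export MY_PROVIDER_KEY="..."
arctic run --model my-provider/my-model "Hello"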
Advanced Configuration
{
"provider": {
"my-provider": {
"api": "https://api.example.com/v1",
"npm": "@ai-sdk/openai-compatible",
"options": {
"apiKey": "$MY_PROVIDER_KEY",
"headers": {
"X-Custom-Header": "value"
},
"timeout": 60000,
"baseURL": "https://api.example.com/v1"
},
"models": {
"fast-model": {
"id": "fast",
"name": "Fast Model",
"temperature": true,
"reasoning": false,
"attachment": false,
"tool_call": true,
"limit": {
"context": 32000,
"output": 2048
}
}
}
}
}
}
Local Providers
{
"provider": {
"local-llm": {
"api": "http://localhost:8080/v1",
"npm": "@ai-sdk/openai-compatible",
"options": {
"apiKey": "not-needed"
}
}
}
}
Provider Configuration
Enable/Disable Providers
{
"enabled_providers": ["openai", "anthropic", "google"],
"disabled_providers": ["provider-to-disable"]
}
Model Filtering
{
"provider": {
"openai": {
"whitelist": ["gpt-4o", "gpt-4-turbo"],
"blacklist": ["gpt-3.5-turbo"]
}
}
}
Provider Options
{
"provider": {
"openai": {
"options": {
"timeout": 120000,
"headers": {
"X-Custom": "value"
}
}
}
}
}
Switching Providers
In TUI
Press Ctrl+X M to open the model picker and select any model from any provider.
In CLI
# Use specific model
arctic run --model openai/gpt-4o "..."
# Switch mid-conversation
arctic run --continue --model anthropic/claude-sonnet-4-5 "..."
Default Model
Set in config:
{
"model": "anthropic/claude-sonnet-4-5",
"small_model": "openai/gpt-4o-mini"
}
Troubleshooting
Provider Not Showing
- Check authentication: arctic auth list
- Verify environment variables
- Check enabled/disabled lists in config
- Refresh models: arctic models --refresh
Model Not Available
- Verify provider is authenticated
- Check model whitelist/blacklist
- Ensure model is supported by provider
- Check for experimental model flag
- Refresh models: arctic models --refresh
Refreshing Models
When providers release new models, refresh your local model list:
arctic models --refresh
This fetches the latest available models from all configured providers. Run this after:
- New model releases from your provider
- Provider adds new model tiers
- Configuration changes to custom providers
You can also refresh models for a specific provider:
arctic models --refresh --provider anthropic
In the TUI, use /models and press r to refresh.
Rate Limits
- View usage: arctic usage or /usage in TUI
- Switch to a different provider
- Use multiple accounts (see the example after this list)
- Wait for limit reset
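For example, combining the --continue flag with a second account (see Multiple Accounts Per Provider above) lets you keep a conversation going after one account is throttled:
# Resume the current session on the "work" account instead of the rate-limited one
arctic run --continue --model anthropic:work/claude-sonnet-4-5 "..."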
Connection Issues
- Check internet connection
- Verify the API endpoint is accessible (see the check below)
- Check for firewall/proxy issues
- Try different provider
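To rule out network problems independently of Arctic, a plain HTTP request to the provider's base URL is often enough; the URL and key below are placeholders taken from the custom-provider example above:
# OpenAI-compatible servers expose a model listing at /v1/models
curl -sS -H "Authorization: Bearer $MY_PROVIDER_KEY" https://api.example.com/v1/models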
Token Refresh Failed
- Re-authenticate: arctic auth logout && arctic auth login
- Check token expiration
- Verify OAuth credentials
Full list
All supported providers:
- aihubmix
- alibaba
- alibaba-cn
- amazon-bedrock
- azure
- azure-cognitive-services
- bailing
- baseten
- cerebras
- chutes
- cloudflare-ai-gateway
- cloudflare-workers-ai
- cohere
- cortecs
- deepinfra
- deepseek
- fastrouter
- fireworks-ai
- github-models
- google-vertex
- google-vertex-anthropic
- groq
- helicone
- huggingface
- iflowcn
- inception
- inference
- io-net
- kimi-for-coding
- llama
- lmstudio
- lucidquery
- minimax
- minimax-cn
- mistral
- modelscope
- moonshotai
- moonshotai-cn
- morph
- nebius
- nvidia
- ollama
- ollama-cloud
- opencode
- ovhcloud
- perplexity
- poe
- requesty
- sap-ai-core
- scaleway
- siliconflow
- siliconflow-cn
- submodel
- synthetic
- togetherai
- upstage
- v0
- venice
- vercel
- vultr
- wandb
- xai
- xiaomi
- zai-coding-plan
- zenmux
- zhipuai
- zhipuai-coding-plan