Arctic

Providers

Supported providers and how to connect them.

Arctic connects to 130+ AI providers through one unified interface, covering subscription coding plans, pay-per-use APIs, and local, self-hosted models from a single CLI.

Quick Start

OAuth/Device Login (Coding Plans)

arctic auth login

Choose your provider and follow the browser flow.

API Keys

Set environment variables and launch Arctic:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
arctic

For detailed credential management, see Authentication.

Provider Categories

Coding Plans (Subscription-Based)

Premium coding-focused subscriptions with OAuth authentication:

  • Codex (ChatGPT)
  • Gemini CLI (Google)
  • Antigravity
  • GitHub Copilot
  • GitHub Copilot Enterprise
  • Claude Code (Anthropic)
  • Z.AI
  • Kimi for Coding
  • Amp Code
  • Qwen Code (Alibaba)
  • MiniMax Coding Plan

API Providers

Pay-per-use API access with flexible authentication:

  • OpenAI: OPENAI_API_KEY
  • Anthropic: ANTHROPIC_API_KEY
  • Google (Gemini API): GOOGLE_API_KEY
  • Amazon Bedrock: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION (see example below)
  • Azure OpenAI: AZURE_API_KEY, AZURE_RESOURCE_NAME
  • Google Vertex AI: GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION
  • Google Vertex Anthropic: GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION
  • Perplexity: PERPLEXITY_API_KEY
  • OpenRouter: OPENROUTER_API_KEY
  • Ollama: Configure host and port
  • Groq: GROQ_API_KEY
  • Together AI: TOGETHER_API_KEY
  • DeepSeek: DEEPSEEK_API_KEY
  • Cerebras: CEREBRAS_API_KEY
  • Mistral AI: MISTRAL_API_KEY
  • Cohere: COHERE_API_KEY
  • xAI (Grok): XAI_API_KEY
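
Some providers need more than one variable. A minimal sketch for Amazon Bedrock, using the standard AWS credential variables listed above (the region value is only an example):

export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
# Launch Arctic once all three variables are set
arctic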

Multiple Accounts Per Provider

Arctic supports connecting multiple accounts for the same provider:

# Add a second account
arctic auth login anthropic --name work

# List all accounts
arctic auth list

# Use specific account
arctic run --model anthropic:work/claude-sonnet-4-5 "..."

Benefits:

  • Separate personal and work accounts
  • Independent usage tracking
  • Easy switching between accounts
  • Isolated rate limits

Custom Providers (OpenAI-Compatible)

Add any OpenAI-compatible endpoint:

Basic Configuration

{
  "provider": {
    "my-provider": {
      "api": "https://api.example.com/v1",
      "npm": "@ai-sdk/openai-compatible",
      "env": ["MY_PROVIDER_KEY"],
      "models": {
        "my-model": {
          "id": "my-model-id",
          "name": "My Custom Model",
          "temperature": true,
          "reasoning": false,
          "attachment": true,
          "tool_call": true,
          "limit": {
            "context": 128000,
            "output": 4096
          },
          "cost": {
            "input": 0.5,
            "output": 1.5
          }
        }
      }
    }
  }
}
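
Once configured, the custom model can be used like any built-in one. A sketch, assuming the key is exported and reusing the my-provider and my-model names from the config above:

# Export the key named in the "env" array
export MY_PROVIDER_KEY="..."

# Reference the model with the provider/model syntax used elsewhere on this page
arctic run --model my-provider/my-model "..."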

Advanced Configuration

{
  "provider": {
    "my-provider": {
      "api": "https://api.example.com/v1",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "apiKey": "$MY_PROVIDER_KEY",
        "headers": {
          "X-Custom-Header": "value"
        },
        "timeout": 60000,
        "baseURL": "https://api.example.com/v1"
      },
      "models": {
        "fast-model": {
          "id": "fast",
          "name": "Fast Model",
          "temperature": true,
          "reasoning": false,
          "attachment": false,
          "tool_call": true,
          "limit": {
            "context": 32000,
            "output": 2048
          }
        }
      }
    }
  }
}

Local Providers

{
  "provider": {
    "local-llm": {
      "api": "http://localhost:8080/v1",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "apiKey": "not-needed"
      }
    }
  }
}
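
To use the local endpoint, reference a model that the local server actually serves. A sketch, assuming the server at localhost:8080 exposes a model with the ID llama-3 (substitute whatever your server reports):

# The local server must already be running on localhost:8080
arctic run --model local-llm/llama-3 "..."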

Provider Configuration

Enable/Disable Providers

{
  "enabled_providers": ["openai", "anthropic", "google"],
  "disabled_providers": ["provider-to-disable"]
}

Model Filtering

{
  "provider": {
    "openai": {
      "whitelist": ["gpt-4o", "gpt-4-turbo"],
      "blacklist": ["gpt-3.5-turbo"]
    }
  }
}

Provider Options

{
  "provider": {
    "openai": {
      "options": {
        "timeout": 120000,
        "headers": {
          "X-Custom": "value"
        }
      }
    }
  }
}

Switching Providers

In TUI

Press Ctrl+X M to open the model picker and select any model from any provider.

In CLI

# Use specific model
arctic run --model openai/gpt-4o "..."

# Switch mid-conversation
arctic run --continue --model anthropic/claude-sonnet-4-5 "..."

Default Model

Set in config:

{
  "model": "anthropic/claude-sonnet-4-5",
  "small_model": "openai/gpt-4o-mini"
}

Troubleshooting

Provider Not Showing

  1. Check authentication: arctic auth list
  2. Verify environment variables
  3. Check enabled/disabled lists in config
  4. Refresh models: arctic models --refresh
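
These checks can be combined into a quick shell pass; ANTHROPIC_API_KEY below stands in for whichever variable your provider expects:

arctic auth list                 # 1. confirm the provider is authenticated
echo "$ANTHROPIC_API_KEY"        # 2. confirm the environment variable is set
# 3. review enabled_providers / disabled_providers in your config by hand
arctic models --refresh          # 4. refresh the local model list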

Model Not Available

  1. Verify provider is authenticated
  2. Check model whitelist/blacklist
  3. Ensure model is supported by provider
  4. Check for experimental model flag
  5. Refresh models: arctic models --refresh

Refreshing Models

When providers release new models, refresh your local model list:

arctic models --refresh

This fetches the latest available models from all configured providers. Run this after:

  • New model releases from your provider
  • Provider adds new model tiers
  • Configuration changes to custom providers

You can also refresh models for a specific provider:

arctic models --refresh --provider anthropic

In the TUI, use /models and press r to refresh.

Rate Limits

  1. View usage: arctic usage or /usage in TUI
  2. Switch to different provider
  3. Use multiple accounts
  4. Wait for limit reset
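
For example, check usage and then continue the same conversation on a different provider (the model name is illustrative):

arctic usage
arctic run --continue --model openai/gpt-4o "..."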

Connection Issues

  1. Check internet connection
  2. Verify API endpoint is accessible
  3. Check for firewall/proxy issues
  4. Try different provider

Token Refresh Failed

  1. Re-authenticate: arctic auth logout && arctic auth login
  2. Check token expiration
  3. Verify OAuth credentials

Full list

All supported provider identifiers:
  • aihubmix
  • alibaba
  • alibaba-cn
  • amazon-bedrock
  • azure
  • azure-cognitive-services
  • bailing
  • baseten
  • cerebras
  • chutes
  • cloudflare-ai-gateway
  • cloudflare-workers-ai
  • cohere
  • cortecs
  • deepinfra
  • deepseek
  • fastrouter
  • fireworks-ai
  • github-models
  • google-vertex
  • google-vertex-anthropic
  • groq
  • helicone
  • huggingface
  • iflowcn
  • inception
  • inference
  • io-net
  • kimi-for-coding
  • llama
  • lmstudio
  • lucidquery
  • minimax
  • minimax-cn
  • mistral
  • modelscope
  • moonshotai
  • moonshotai-cn
  • morph
  • nebius
  • nvidia
  • ollama
  • ollama-cloud
  • opencode
  • ovhcloud
  • perplexity
  • poe
  • requesty
  • sap-ai-core
  • scaleway
  • siliconflow
  • siliconflow-cn
  • submodel
  • synthetic
  • togetherai
  • upstage
  • v0
  • venice
  • vercel
  • vultr
  • wandb
  • xai
  • xiaomi
  • zai-coding-plan
  • zenmux
  • zhipuai
  • zhipuai-coding-plan
