Providers Overview
Tarsk supports 29 model providers. Each provider requires its own API key (or OAuth for some). You can enable multiple providers simultaneously and switch between models per thread.
Direct Providers
These providers host their own models. Bring an API key from the provider’s website.
Anthropic
Models: Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5
Get a key: console.anthropic.com
Claude models are strong at coding, reasoning, and following complex instructions. Claude Sonnet 4.6 offers an excellent balance of capability and speed.
OpenAI
Models: GPT-5, GPT-4.1, GPT-4o, GPT-4, o3-mini, o4-mini, Codex series
Get a key: platform.openai.com
Google
Models: Gemini 3 Pro Preview, Gemini 3 Flash Preview, Gemini 2.5 Pro, Gemini 2.5 Flash
Get a key: aistudio.google.com
Gemini 2.5 Flash is a fast, cost-effective option with a large context window.
xAI
Models: Grok-4, Grok-4-1-Fast, Grok-Code-Fast-1
Get a key: console.x.ai
DeepSeek
Models: DeepSeek Chat (DeepSeek-V3), DeepSeek Reasoner (R1)
Get a key: platform.deepseek.com
DeepSeek R1 is a strong reasoning model. DeepSeek V3 is competitive with frontier models for coding tasks.
Groq
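DeepSeek exposes an OpenAI-compatible chat-completions API, so any OpenAI-style client can talk to it by swapping the base URL. The sketch below builds such a request with the standard library without sending it; the endpoint and model name follow DeepSeek’s published API shape, and the key is a placeholder.

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.deepseek.com/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # key from platform.deepseek.com
        },
        method="POST",
    )

# "deepseek-chat" maps to DeepSeek-V3; "deepseek-reasoner" maps to R1.
req = build_chat_request("sk-...", "deepseek-chat", "Hello")
```

Sending the request is then a matter of `urllib.request.urlopen(req)` (or any HTTP client) with a real key.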
Models: Kimi K2 Instruct, GPT-OSS 120B
Get a key: console.groq.com
Groq provides very fast inference via custom hardware.
Cerebras
Models: GPT-OSS 120B, ZAI-GLM-4.7, Qwen-3-235B
Get a key: cloud.cerebras.ai
MoonshotAI
Models: Kimi K2.5, Kimi K2 Thinking
Variants: moonshotai (international), moonshotai-cn (China endpoint)
Get a key: platform.moonshot.cn
MiniMax
Models: MiniMax M2, M2.1, M2.5
Variants: minimax (international), minimax-cn (China endpoint)
ZhipuAI
Models: GLM-4.5 through GLM-4.7
Variants: zhipuai, zhipuai-coding-plan
Alibaba
Models: Qwen Coder
Get a key: dashscope.aliyuncs.com
NVIDIA
Models: MiniMax M2.1, Kimi K2, GPT-OSS 120B, Qwen3 Coder
Get a key: build.nvidia.com
HuggingFace
Models: Qwen3 Coder, Kimi K2.5, GLM-4.7
Get a key: huggingface.co/settings/tokens
VolcEngine
Models: DeepSeek V3.1, Doubao Seed
ByteDance’s cloud inference platform.
Xiaomi
Models: Mimo V2 Flash
Aggregator Providers
Aggregators route requests to multiple underlying models through a single API key. They are useful for accessing many models without managing separate keys.
OpenRouter
Models: Hundreds of models from Anthropic, OpenAI, Google, Meta, and more
Get a key: openrouter.ai
Credits: Balance visible in Settings
OpenRouter is the most comprehensive aggregator. A single key accesses almost every major model. Some models are free; paid models are charged per-token. See the dedicated OpenRouter guide for setup details.
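The value of an aggregator is that one key and one endpoint reach models from many vendors. OpenRouter’s API is OpenAI-compatible and identifies models with `vendor/model` slugs. The sketch below serializes request bodies for several vendors against the single OpenRouter endpoint; the slugs shown are illustrative examples, not a canonical list.

```python
import json

# One endpoint for every model behind OpenRouter.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def openrouter_payload(model_slug: str, prompt: str) -> str:
    """Serialize an OpenAI-style request body for OpenRouter.

    Model slugs take the form 'vendor/model', e.g. 'openai/gpt-4o'.
    """
    return json.dumps({
        "model": model_slug,
        "messages": [{"role": "user", "content": prompt}],
    })

# The same key and endpoint serve models from different vendors:
for slug in ["anthropic/claude-sonnet-4.5", "openai/gpt-4o", "google/gemini-2.5-flash"]:
    body = openrouter_payload(slug, "Hello")
```

Only the `model` field changes per request; authentication and transport stay identical, which is what makes aggregators convenient.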
AIHubMix
Models: DeepSeek, Claude, Kimi, and more via a single API
Get a key: aihubmix.com
Credits: Balance visible in Settings
SiliconFlow
Models: Qwen, DeepSeek, Kimi
Variants: siliconflow (international), siliconflow-cn (China endpoint)
ModelScope
Models: Qwen, GLM, DeepSeek
Models: DeepSeek R1, DeepSeek V3, GLM series, Kimi, Qwen3
CanopyWave
Models: DeepSeek, MiniMax, Kimi, GPT-OSS
OpenCode
Models: Multi-provider aggregator
ModelWatch
Models: Multi-provider aggregator
ZenMux
Models: Multi-provider aggregator
Poe
Models: Claude, GPT, Gemini, Grok (via Poe subscription)
Get a key: poe.com
OAuth Providers
These providers use OAuth rather than static API keys.
GitHub Copilot
Models: Claude, Gemini, GPT-5, Grok
Auth: Connect via your GitHub account — no separate API key needed if you have a Copilot subscription
Codex
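Connecting via a GitHub account typically uses GitHub’s OAuth device flow: the app requests a device code, shows the user a short code to enter in the browser, then polls for a token. The sketch below builds the two request bodies for GitHub’s documented device-flow endpoints without sending them; the client ID is a placeholder, and whether Tarsk uses exactly this flow is an assumption.

```python
import urllib.parse

# GitHub's device-flow endpoints (see docs.github.com, OAuth device flow).
DEVICE_CODE_URL = "https://github.com/login/device/code"
ACCESS_TOKEN_URL = "https://github.com/login/oauth/access_token"
GRANT_TYPE = "urn:ietf:params:oauth:grant-type:device_code"

def device_code_body(client_id: str, scope: str = "read:user") -> str:
    # Step 1: ask GitHub for a device code plus a user verification URL.
    return urllib.parse.urlencode({"client_id": client_id, "scope": scope})

def poll_body(client_id: str, device_code: str) -> str:
    # Step 2: poll for an access token while the user approves in the browser.
    return urllib.parse.urlencode({
        "client_id": client_id,
        "device_code": device_code,
        "grant_type": GRANT_TYPE,
    })
```

No API key is ever typed in: the token comes back from the poll once the user approves, tied to their existing Copilot subscription.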
Models: GPT-5.1 Codex variants
Auth: OAuth via your OpenAI account
Specialised Providers
| Provider | Models | Notes |
|---|---|---|
| zai-coding-plan | GLM-4.5 through GLM-5 | Planning-focused variants |
| kimi-coding-plan | Kimi for Coding | Specialised coding mode |
| zhipuai-coding-plan | GLM coding variants | Coding + planning split |