Providers Overview

Tarsk supports 29 model providers. Each provider requires its own API key (or OAuth for some). You can enable multiple providers simultaneously and switch between models per thread.

Direct Providers

These providers host their own models. Bring an API key from the provider's website.
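Most provider SDKs and tools read API keys from conventional, provider-named environment variables. A minimal sketch, assuming Tarsk follows the same convention (the exact variable names Tarsk reads may differ; check its settings documentation):

```shell
# Conventional API-key environment variables. The names below are the
# defaults used by each provider's own SDK; the values are placeholders.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-placeholder"
export GEMINI_API_KEY="placeholder"
export DEEPSEEK_API_KEY="placeholder"
```

Keys set this way are picked up per shell session; for a persistent setup, add the exports to your shell profile.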

Anthropic

Models: Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5
Get a key: console.anthropic.com

Claude models are strong at coding, reasoning, and following complex instructions. Claude Sonnet 4.6 offers an excellent balance of capability and speed.

OpenAI

Models: GPT-5, GPT-4.1, GPT-4o, GPT-4, o3-mini, o4-mini, Codex series
Get a key: platform.openai.com

Google

Models: Gemini 3 Pro Preview, Gemini 3 Flash Preview, Gemini 2.5 Pro, Gemini 2.5 Flash
Get a key: aistudio.google.com

Gemini 2.5 Flash is a fast, cost-effective option with a large context window.

xAI

Models: Grok-4, Grok-4-1-Fast, Grok-Code-Fast-1
Get a key: console.x.ai

DeepSeek

Models: DeepSeek Chat (DeepSeek-V3), DeepSeek Reasoner (R1)
Get a key: platform.deepseek.com

DeepSeek R1 is a strong reasoning model. DeepSeek V3 is competitive with frontier models for coding tasks.

Groq

Models: Kimi K2 Instruct, GPT-OSS 120B
Get a key: console.groq.com

Groq provides very fast inference via custom hardware.

Cerebras

Models: GPT-OSS 120B, ZAI-GLM-4.7, Qwen-3-235B
Get a key: cloud.cerebras.ai

Moonshot AI

Models: Kimi K2.5, Kimi K2 Thinking
Variants: moonshotai (international), moonshotai-cn (China endpoint)
Get a key: platform.moonshot.cn

MiniMax

Models: MiniMax M2, M2.1, M2.5
Variants: minimax (international), minimax-cn (China endpoint)

Zhipu AI

Models: GLM-4.5 through GLM-4.7
Variants: zhipuai, zhipuai-coding-plan

Alibaba Cloud (DashScope)

Models: Qwen Coder
Get a key: dashscope.aliyuncs.com

NVIDIA

Models: MiniMax M2.1, Kimi K2, GPT-OSS 120B, Qwen3 Coder
Get a key: build.nvidia.com

Hugging Face

Models: Qwen3 Coder, Kimi K2.5, GLM-4.7
Get a key: huggingface.co/settings/tokens

Volcengine

Models: DeepSeek V3.1, Doubao Seed
ByteDance's cloud inference platform

Xiaomi

Models: Mimo V2 Flash


Aggregators

Aggregators route requests to multiple underlying models through a single API key. They are useful for accessing many models without managing separate keys.

OpenRouter

Models: Hundreds of models from Anthropic, OpenAI, Google, Meta, and more
Get a key: openrouter.ai
Credits: Balance visible in Settings

OpenRouter is the most comprehensive aggregator. A single key accesses almost every major model. Some models are free; paid models are charged per token. See the dedicated OpenRouter guide for setup details.
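OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so one key and one request shape cover every model it routes to. A minimal sketch of building such a request (the model slug and prompt are illustrative):

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str):
    """Return the (url, headers, json_body) for an OpenRouter chat completion.

    OpenRouter follows the OpenAI chat-completions request shape, so the
    same body works for any model it routes to.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        # Model slugs are vendor-prefixed, e.g. "deepseek/deepseek-chat"
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return OPENROUTER_URL, headers, json.dumps(body)
```

Sending the request is then a single POST with any HTTP client (e.g. `urllib.request` or `requests`); the response follows the OpenAI chat-completions shape as well.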

AIHubMix

Models: DeepSeek, Claude, Kimi, and more via a single API
Get a key: aihubmix.com
Credits: Balance visible in Settings

SiliconFlow

Models: Qwen, DeepSeek, Kimi
Variants: siliconflow (international), siliconflow-cn (China endpoint)

Models: Qwen, GLM, DeepSeek

Models: DeepSeek R1, DeepSeek V3, GLM series, Kimi, Qwen3

Models: DeepSeek, MiniMax, Kimi, GPT-OSS

Models: Multi-provider aggregator

Models: Multi-provider aggregator

Models: Multi-provider aggregator

Poe

Models: Claude, GPT, Gemini, Grok (via Poe subscription)
Get a key: poe.com


OAuth Providers

These providers use OAuth rather than static API keys.

GitHub Copilot

Models: Claude, Gemini, GPT-5, Grok
Auth: Connect via your GitHub account — no separate API key needed if you have a Copilot subscription

OpenAI Codex

Models: GPT-5.1 Codex variants
Auth: OAuth via your OpenAI account


Coding Plans

Provider               Models                  Notes
zai-coding-plan        GLM-4.5 through GLM-5   Planning-focused variants
kimi-coding-plan       Kimi for Coding         Specialised coding mode
zhipuai-coding-plan    GLM coding variants     Coding + planning split

Next: Configure providers and enable models →