# Models & Providers

tama uses a role-based model system that decouples agent definitions from specific model choices.

Instead of hardcoding model names in agents, you define roles:

```yaml
# in AGENT.md
call:
  model:
    role: thinker
```
Then map roles to models:

```sh
# env var (runtime)
export TAMA_MODEL_THINKER=anthropic:claude-opus-4-6
```

```toml
# tama.toml (project defaults)
[models]
thinker = "anthropic:claude-opus-4-6"
writer = "anthropic:claude-sonnet-4-6"
fast = "anthropic:claude-haiku-4-5"
```

Environment variables take priority over `tama.toml`.
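The lookup order can be sketched as follows — a minimal illustration, not tama's actual implementation (`resolve_model` is a hypothetical name):

```python
import os

# A minimal sketch of the lookup order (resolve_model is a
# hypothetical helper, not part of tama's API): an environment
# variable, if set, overrides the [models] table from tama.toml.
def resolve_model(role, toml_models):
    env_key = "TAMA_MODEL_" + role.upper().replace("-", "_")
    return os.environ.get(env_key) or toml_models.get(role)

toml_models = {"thinker": "anthropic:claude-opus-4-6"}

# With no env var set, the tama.toml default applies.
os.environ.pop("TAMA_MODEL_THINKER", None)
print(resolve_model("thinker", toml_models))  # anthropic:claude-opus-4-6

# An env var for the same role takes priority.
os.environ["TAMA_MODEL_THINKER"] = "openai:gpt-4o"
print(resolve_model("thinker", toml_models))  # openai:gpt-4o
```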

Skip roles and specify a model directly in an agent:

```yaml
call:
  model:
    name: anthropic:claude-opus-4-6
```

When both `role` and `name` are present, `name` wins.
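That precedence can be sketched in a few lines — illustrative only (`pick_model` is a hypothetical helper, not tama's API):

```python
# Illustrative precedence (pick_model is a hypothetical helper,
# not part of tama): an explicit name wins over a role lookup.
def pick_model(model_cfg, role_map):
    if "name" in model_cfg:
        return model_cfg["name"]
    return role_map[model_cfg["role"]]

role_map = {"thinker": "anthropic:claude-opus-4-6"}

print(pick_model({"role": "thinker"}, role_map))
# anthropic:claude-opus-4-6

print(pick_model({"name": "openai:gpt-4o", "role": "thinker"}, role_map))
# openai:gpt-4o
```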

All models use the `provider:model-id` format:

```
anthropic:claude-opus-4-6
openai:gpt-4o
google:gemini-2.0-flash
```
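Splitting such a string is straightforward — a sketch, assuming the first colon separates provider from model id (`parse_model` is a hypothetical name, not tama's parser):

```python
# Split on the first colon only: the provider name never contains
# one, so everything after it belongs to the model id.
def parse_model(spec):
    provider, sep, model_id = spec.partition(":")
    if not sep or not model_id:
        raise ValueError(f"expected provider:model-id, got {spec!r}")
    return provider, model_id

print(parse_model("anthropic:claude-opus-4-6"))
# ('anthropic', 'claude-opus-4-6')
```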
### Anthropic

```sh
export ANTHROPIC_API_KEY=sk-ant-...
```

| Model | ID |
| --- | --- |
| Claude Opus 4.6 | `anthropic:claude-opus-4-6` |
| Claude Sonnet 4.6 | `anthropic:claude-sonnet-4-6` |
| Claude Haiku 4.5 | `anthropic:claude-haiku-4-5` |
### OpenAI

```sh
export OPENAI_API_KEY=sk-...
```

| Model | ID |
| --- | --- |
| GPT-4o | `openai:gpt-4o` |
| GPT-4o mini | `openai:gpt-4o-mini` |
### Google

```sh
export GEMINI_API_KEY=...
```

| Model | ID |
| --- | --- |
| Gemini 2.0 Flash | `google:gemini-2.0-flash` |
| Gemini 2.0 Pro | `google:gemini-2.0-pro` |

Model parameters can be configured at the agent level or per-step:

```yaml
call:
  model:
    role: writer
    temperature: 0.9 # 0.0 (deterministic) to 1.0 (creative)
    max_tokens: 4096
```

In a step file:

```yaml
---
call:
  model:
    role: thinker
    temperature: 0.0 # deterministic for this step only
---
```

Role names with hyphens map to env vars with underscores:

| Role | Env var |
| --- | --- |
| `thinker` | `TAMA_MODEL_THINKER` |
| `my-fast` | `TAMA_MODEL_MY_FAST` |
| `code-writer` | `TAMA_MODEL_CODE_WRITER` |
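The mapping above amounts to uppercasing and swapping hyphens for underscores — an illustrative one-liner (`role_env_var` is not part of tama):

```python
# Uppercase the role name and replace hyphens with underscores to
# obtain the corresponding environment variable name.
def role_env_var(role):
    return "TAMA_MODEL_" + role.upper().replace("-", "_")

print(role_env_var("code-writer"))  # TAMA_MODEL_CODE_WRITER
```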

These role names aren’t required, but they make multi-model projects easier to manage:

| Role | Intended use | Example model |
| --- | --- | --- |
| `thinker` | Complex reasoning, main agents | `claude-opus-4-6` |
| `writer` | Creative writing, drafting | `claude-sonnet-4-6` |
| `fast` | Simple tasks, step functions | `claude-haiku-4-5` |
| `judge` | Evaluation and selection | `claude-opus-4-6` |