# Installation
## Prerequisites

- Rust 1.76+ (for building from source)
- Docker (for `tama brew`)
- An API key for at least one supported LLM provider
## Build from source

```sh
git clone https://github.com/your-org/tama
cd tama
cargo build --release
```

This produces two binaries:

- `target/release/tama` — developer tool
- `target/release/tamar` — runtime
Add them to your PATH:

```sh
export PATH="$PATH:/path/to/tama/target/release"
```

## Verify

```sh
tama --help
tama run --help
```

## Environment variables

tama uses environment variables to configure LLM providers and model roles.
### API keys

| Variable | Provider |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic (Claude) |
| `OPENAI_API_KEY` | OpenAI (GPT-4, etc.) |
| `GEMINI_API_KEY` | Google (Gemini) |
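As a quick sanity check, you can list which of the keys from the table above are present in your environment. This is a small sketch, not part of tama itself:

```sh
# Check which provider API keys (from the table above) are currently set.
configured=""
for var in ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY; do
  [ -n "$(printenv "$var")" ] && configured="$configured $var"
done
echo "Configured providers:${configured:- none}"
```

If none are set, tama has no provider to call, so at least one key from the table is required.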
### Model roles

tama uses a role-based model system. Instead of hardcoding a model name in each agent, you assign roles (such as `thinker`, `writer`, or `fast`) and map them to models at runtime:

```sh
export TAMA_MODEL_THINKER="anthropic:claude-opus-4-6"
export TAMA_MODEL_WRITER="anthropic:claude-sonnet-4-6"
export TAMA_MODEL_FAST="anthropic:claude-haiku-4-5"
```

Agents reference roles:

```yaml
call:
  model:
    role: thinker
```

This lets you swap models without editing any agent files.
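Because agents reference roles rather than concrete model IDs, switching providers is a one-line change in the environment. A sketch, using the model IDs from this page as examples:

```sh
# Initial mapping: the thinker role points at an Anthropic model.
export TAMA_MODEL_THINKER="anthropic:claude-opus-4-6"

# Swap the thinker role to an OpenAI model — no agent files change.
export TAMA_MODEL_THINKER="openai:gpt-4o"

echo "$TAMA_MODEL_THINKER"
```

Every agent that declares `role: thinker` now resolves to the new model on its next run.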
## Supported providers

| Provider | Format | Example |
|---|---|---|
| Anthropic | `anthropic:model-id` | `anthropic:claude-sonnet-4-6` |
| OpenAI | `openai:model-id` | `openai:gpt-4o` |
| Google | `google:model-id` | `google:gemini-2.0-flash` |
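The `provider:model-id` strings above split at the first colon. If you build scripts around these references, shell parameter expansion is enough to separate the two parts; this is an illustrative sketch, not a tama command:

```sh
# Split a model reference into its provider and model-id parts
# at the first colon, using shell parameter expansion.
ref="anthropic:claude-sonnet-4-6"
provider="${ref%%:*}"   # everything before the first colon
model="${ref#*:}"       # everything after the first colon
echo "provider=$provider model=$model"
```

Splitting at the *first* colon keeps any colons inside the model ID intact.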