Installation

Prerequisites:

  • Rust 1.76+ (for building from source)
  • Docker (for `tama brew`)
  • An API key for at least one supported LLM provider
Clone the repository and build from source:

```sh
git clone https://github.com/your-org/tama
cd tama
cargo build --release
```

This produces two binaries:

  • `target/release/tama` — developer tool
  • `target/release/tamar` — runtime

Add them to your PATH:

```sh
export PATH="$PATH:/path/to/tama/target/release"
```
Verify the installation:

```sh
tama --help
tama run --help
```

tama uses environment variables to configure LLM providers and model roles.

| Variable            | Provider           |
| ------------------- | ------------------ |
| `ANTHROPIC_API_KEY` | Anthropic (Claude) |
| `OPENAI_API_KEY`    | OpenAI (GPT-4, etc.) |
| `GEMINI_API_KEY`    | Google (Gemini)    |
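Exporting one of these keys is enough to get started. The value below is a placeholder, not a real key:

```sh
# Placeholder value -- substitute your actual Anthropic key.
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
```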

tama uses a role-based model system. Instead of hardcoding a model name in each agent, you assign roles (like `thinker`, `writer`, `fast`) and map them to models at runtime:

```sh
export TAMA_MODEL_THINKER="anthropic:claude-opus-4-6"
export TAMA_MODEL_WRITER="anthropic:claude-sonnet-4-6"
export TAMA_MODEL_FAST="anthropic:claude-haiku-4-5"
```

Agents reference roles:

```yaml
call:
  model:
    role: thinker
```

This lets you swap models without editing any agent files.
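For instance, re-pointing the `thinker` role at a different provider is a one-line change (the OpenAI model ID here is illustrative):

```sh
# Re-point the thinker role to OpenAI; no agent files are edited.
export TAMA_MODEL_THINKER="openai:gpt-4o"
```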

| Provider  | Format              | Example                        |
| --------- | ------------------- | ------------------------------ |
| Anthropic | `anthropic:model-id` | `anthropic:claude-sonnet-4-6` |
| OpenAI    | `openai:model-id`    | `openai:gpt-4o`               |
| Google    | `google:model-id`    | `google:gemini-2.0-flash`     |
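The `provider:model-id` convention splits cleanly at the first colon with plain shell parameter expansion. This is an illustrative snippet, not part of tama itself:

```sh
# Split a "provider:model-id" spec at the first colon (illustrative only).
spec="anthropic:claude-sonnet-4-6"
provider="${spec%%:*}"   # text before the first ":"
model="${spec#*:}"       # text after the first ":"
echo "$provider"   # anthropic
echo "$model"      # claude-sonnet-4-6
```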