You write agents as `.md` files. `tama run` runs them. `tama brew` ships them.
No frameworks. No orchestration runtime to manage. No Python glue.
13 built-in patterns
From a single LLM call (oneshot) to multi-agent debates and plan-execute loops. Declare the pattern — tama implements it.
Full observability
Every agent step, every LLM call, every artifact traced to DuckDB. Diff runs, replay with identical input, debug interactively.
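Since traces land in a plain DuckDB file, any SQL client can inspect them. A sketch of what such a query might look like (the table and column names below are illustrative assumptions, not tama's actual trace schema):

```sql
-- Hypothetical schema: table and column names are assumptions.
SELECT run_id, step, tool, duration_ms
FROM steps
WHERE run_id = 'latest'
ORDER BY step;
```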
Skills as Markdown
Tools are .md files following the Anthropic Agent Skills spec. Human-readable, git-diffable, composable.
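A minimal skill might look like the sketch below, following the Agent Skills layout of a `SKILL.md` with YAML frontmatter (`name`, `description`) and an instruction body; the file path and body text here are illustrative, not taken from tama's docs:

```markdown
# skills/search-web/SKILL.md
---
name: search-web
description: Search the web and return the top results as Markdown.
---

Given a query, fetch search results and return the title, URL, and a
one-line summary for each hit.
```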
Lean production deploys
tama brew compiles your agents into a distroless Docker image (~8MB). One command from laptop to cloud.
```shell
# Scaffold
tama init my-project
cd my-project
tama add react my-agent
tama add skill search-web
```
```shell
# Iterate
ANTHROPIC_API_KEY=sk-... tama run "research fusion energy trends"
```
```shell
# Ship
tama brew
docker push my-project:latest
```

```markdown
# agents/my-agent/AGENT.md
---
name: my-agent
description: Research agent with web search.
version: "1.0.0"
pattern: react
call:
  model:
    role: thinker
uses:
  - search-web
---

You are a research assistant. Use `search-web` to find information.
When done, call finish with key="done" and a comprehensive summary.
```