tamar (runtime)

tamar is the runtime binary that executes your agent graph. It reads tama.toml, builds the agent graph, and runs the entrypoint agent with the provided input.

```sh
tamar "<task input>"
tamar --debug "<task input>"
```

Run from the project root (where tama.toml lives).

| Argument | Description |
| --- | --- |
| `<task>` | The input string passed to the entrypoint agent's `start()` |

| Flag | Description |
| --- | --- |
| `--debug` | Enable the interactive step-through debugger |

| Variable | Description |
| --- | --- |
| `TAMA_ENTRYPOINT_AGENT` | Override the entrypoint from `tama.toml` |
| `TAMA_MODEL_<ROLE>` | Set the model for a role (e.g. `TAMA_MODEL_THINKER`) |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `OPENAI_API_KEY` | OpenAI API key |
| `GEMINI_API_KEY` | Google API key |
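As a sketch of how these variables combine, a single run could override both the entrypoint and one role's model inline. The agent name `researcher` and the model string here are illustrative placeholders, not values defined by tamar:

```sh
# Override the entrypoint agent and the thinker role's model for one run.
# "researcher" and the model name are placeholders -- use the names from
# your own tama.toml and provider.
TAMA_ENTRYPOINT_AGENT=researcher \
TAMA_MODEL_THINKER=claude-sonnet-4 \
tamar "summarize recent fusion energy papers"
```

Inline assignments like this apply only to that invocation, so your shell environment stays unchanged.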

tamar writes the final agent output to stdout. All trace/debug output goes to stderr.

```sh
# capture the final output only
tamar "research fusion energy" > output.txt

# see trace output in real time while saving everything
tamar "research fusion energy" 2>&1 | tee full-output.txt
```
```sh
tamar --debug "research fusion energy"
```

The debugger pauses before each LLM call and lets you:

- Inspect the current input and context
- Edit the system prompt before the call is made
- Press Enter to continue with the call
- After each agent completes, choose to proceed or to retry the entire agent
|  | tama | tamar |
| --- | --- | --- |
| Purpose | Developer tool | Runtime |
| Commands | `init`, `add`, `lint`, `brew` | (takes the task as its argument) |
| Runs agents? | No | Yes |
| In Docker image? | No | Yes |

tamar is the binary that runs in production Docker images. tama stays on your development machine.
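Put together, a minimal development-to-run loop might look like the sketch below. The exact arguments to `init` and `add` are assumptions here; only the command names come from the table above:

```sh
# Development machine: scaffold and check the agent graph
# (arguments to init/add are illustrative)
tama init
tama lint

# Run the graph -- the same invocation used in the production Docker image
tamar "research fusion energy"
```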

Every tamar run produces a trace in .tama/runs.duckdb. View runs with the web UI:

```sh
tama runs   # not yet implemented; use the web UI
```

Or query directly:

```sh
duckdb .tama/runs.duckdb
```

```sql
SELECT * FROM spans ORDER BY started_at DESC LIMIT 20;
```
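For one-off inspection without opening an interactive shell, the duckdb CLI (like sqlite3) accepts a query as a trailing argument, runs it, and exits:

```sh
# Run a single query against the trace database and print the result
duckdb .tama/runs.duckdb \
  "SELECT * FROM spans ORDER BY started_at DESC LIMIT 20;"
```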

See Tracing & Observability for details.