
Core Concepts

Everything in tama is built from two kinds of file:

| File | What it is |
| --- | --- |
| `AGENT.md` | An execution unit. Has a pattern + system prompt. LLM-driven. |
| `SKILL.md` | A tool. Does one thing. Deterministic. No LLM. |
```
my-project/
├── agents/
│   ├── researcher/AGENT.md   ← LLM-driven, pattern: react
│   └── summarizer/AGENT.md   ← LLM-driven, pattern: oneshot
└── skills/
    ├── search-web/SKILL.md   ← tool: DuckDuckGo search
    └── fetch-url/SKILL.md    ← tool: HTTP fetch
```
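Combining the two file kinds, a minimal `agents/researcher/AGENT.md` could look like the sketch below. Only `pattern` and the `call.model` block (shown later on this page) come from these docs; the overall layout, YAML frontmatter followed by the system prompt, is an assumption based on common frontmatter conventions:

```markdown
---
pattern: react        # declared pattern; tama implements the control flow
call:
  model:
    role: thinker     # model role, resolved via TAMA_MODEL_THINKER
---
You are a research assistant. Use the available skills to gather
sources, then call finish with your findings.
```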

A pattern describes the control flow of an agent. You declare the pattern in `AGENT.md` frontmatter — tama implements it.

There are 13 patterns, from the simplest:

| Pattern | What it does |
| --- | --- |
| `oneshot` | Single LLM call. Input → LLM → output. |
| `react` | Tool-use loop. Runs until the model calls `finish`. |

…to compositions:

| Pattern | What it does |
| --- | --- |
| `critic` | draft → critique → refine |
| `scatter` | map → parallel workers → reduce |
| `debate` | position-a → position-b → judge |
| `fsm` | User-defined state machine |

See Patterns overview for the full list.

Agents are stateless. Data flows between them through exactly two mechanisms: `start()` and `finish()`.

Every agent’s first action is to call `start()` to receive its input. What `start()` returns depends on where the agent sits:

| Agent position | `start()` returns |
| --- | --- |
| Entry agent | The CLI input from `tama run "..."` |
| FSM non-initial state | The value from the previous agent’s `finish` |
| Scatter worker | One item from the map phase’s output array |
| Parallel worker | The same input all workers received |

To complete, an agent calls `finish(key, value)`:

- `key` — routing word used by FSM to select the next state
- `value` — the data passed to the next agent via `start()`
```
agent-a calls: finish(key="approved", value="Here is the result...")
FSM routes "approved" → agent-b
agent-b calls: start() → "Here is the result..."
```
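For the `fsm` pattern, this routing could in principle be declared as a state map. The `states` schema below is purely hypothetical; only the key-based routing behavior comes from these docs:

```yaml
# Hypothetical fsm declaration — the real schema may differ.
pattern: fsm
states:
  agent-a:                 # initial state
    approved: agent-b      # finish(key="approved", ...) routes here
    rejected: agent-a      # finish(key="rejected", ...) retries
```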

Skills follow a two-level disclosure model:

**Level 1 — always visible:** the skill’s name and description are injected into the system prompt. The agent sees what’s available.

**Level 2 — on demand:** the agent calls `read_skill("skill-name")` to load the full instructions. This also unlocks any built-in runtime tools the skill declares.

This keeps the context window lean — agents only load the full instructions for skills they actually decide to use.
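Assuming `SKILL.md` mirrors this two-level model, a sketch might look like the following. The frontmatter keys and body are illustrative, not tama's confirmed format:

```markdown
---
name: search-web                            # Level 1: always in the prompt
description: Search the web via DuckDuckGo  # Level 1: always in the prompt
---
Level 2 body, loaded only after read_skill("search-web"):
give the search tool a short query and cite the top results.
```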

Every `react` agent has access to three built-in tools:

| Tool | Purpose |
| --- | --- |
| `start()` | Get the input assigned to this agent |
| `finish(key, value)` | Signal completion and pass output |
| `read_skill(name)` | Load a skill’s full instructions and unlock its tools |

Additional tools come exclusively through skills; agents are never given tools by default.
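Put together, a single `react` run might trace like this. The `search` tool and the `done` key are hypothetical; only the three built-ins above are guaranteed:

```
agent calls: start() → "Find recent work on topic X"
agent calls: read_skill("search-web") → instructions loaded, search tool unlocked
agent calls: search("topic X") → results      ← hypothetical tool from the skill
agent calls: finish(key="done", value="Summary of findings...")
```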

When you run `tama run "task"`, tama looks up the entrypoint in `tama.toml` and starts there:

```toml
[project]
name = "my-project"
entrypoint = "researcher"
```

Override at runtime:

```sh
TAMA_ENTRYPOINT_AGENT=summarizer tama run "summarize this text..."
```

Agents reference models by role, not by name:

```yaml
call:
  model:
    role: thinker
```

Roles map to actual models via environment variables:

```sh
export TAMA_MODEL_THINKER=anthropic:claude-opus-4-6
```

This decouples agent definitions from model choices — swap models without editing any agent files. You can also override a specific agent:

```yaml
call:
  model:
    name: anthropic:claude-haiku-4-5   # direct override
    temperature: 0.3
    max_tokens: 1024
```