
# oneshot

oneshot is the simplest possible pattern: one LLM call, no tools, no loop.

The agent’s system prompt comes from the AGENT.md body. The input is passed directly as the user message. The LLM response is the output.

oneshot is a good fit for:

- Text transformations (summarize, translate, reformat)
- Classification or labeling
- Generating short content (titles, descriptions, tags)
- Any task that needs a single, focused LLM response
A minimal summarizer AGENT.md:

```markdown
---
name: summarizer
description: Summarizes text concisely.
version: "1.0.0"
pattern: oneshot
call:
  model:
    role: thinker
---
You are a concise summarizer. Given any text, return a clear 2-3 sentence summary.
Focus on the key insights. Do not pad the response.
```
Scaffold a new oneshot agent with:

```sh
tama add oneshot my-agent
```
At runtime:

1. Input is received as the user message.
2. The system prompt is the AGENT.md body.
3. A single LLM call is made.
4. The LLM response is returned with key `"result"`.

No `start()`, no `finish()`, no tools. The runtime handles everything.
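
The whole runtime loop reduces to one chat completion. A minimal sketch in Python, using hypothetical names (`run_oneshot`, `call_model`) rather than tama's actual API:

```python
def run_oneshot(system_prompt: str, user_input: str, call_model) -> dict:
    """One LLM call: the AGENT.md body is the system prompt, the input
    is the user message, and the completion is the final output."""
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
    # No tools, no loop: the single response is returned directly,
    # always under the fixed key "result".
    return {"result": call_model(messages)}
```

Here `call_model` stands in for whatever client actually issues the completion; the pattern itself is just the message construction and the fixed output key.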

The output key is always `"result"`. When used as a step inside an `fsm`, the FSM transitions unconditionally from oneshot agents, since they always produce the same key.
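
Because the key is fixed, an FSM wrapper never needs to branch on a oneshot step's output. A toy illustration (hypothetical, not tama's transition syntax):

```python
def next_state(transitions: dict, output_key: str) -> str:
    # Choose the next FSM state from a step's output key.
    return transitions[output_key]

# A oneshot step always emits "result", so one transition entry suffices:
next_state({"result": "critic"}, "result")  # -> "critic"
```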

When used as a step inside `critic`, `reflexion`, or other multi-step patterns, each step file can declare oneshot behavior implicitly — the step is a single LLM call with the step file's body as the system prompt.

For short, deterministic outputs, tune the call in the frontmatter:

```yaml
call:
  model:
    role: thinker # use role-based model selection
  temperature: 0.0 # deterministic output
  max_tokens: 512 # short response
```
Another complete example, a title generator:

```markdown
---
name: title-generator
description: Generates a compelling title for an essay.
version: "1.0.0"
pattern: oneshot
call:
  model:
    role: writer
  temperature: 0.9
---
Generate a single compelling, specific title for the following essay.
Return only the title, no explanation, no quotes.
```

Run:

```sh
tama run "Rust's borrow checker enforces memory safety at compile time..."
```

Output:

```
The Compile-Time Memory Safety Guarantee: How Rust's Borrow Checker Changed Systems Programming
```