Anti-Pattern: Single-Step Agents

Anti-pattern where an agent performs only one step and does not use iteration.
On this page
  1. Idea In 30 Seconds
  2. Anti-Pattern Example
  3. Why It Happens And What Goes Wrong
  4. Correct Approach
  5. Quick Test
  6. How It Differs From Other Anti-Patterns
  7. Agent Everywhere Problem vs Single-Step Agents
  8. No Stop Conditions vs Single-Step Agents
  9. Tool Calling for Everything vs Single-Step Agents
  10. Self-Check: Do You Have This Anti-Pattern?
  11. FAQ
  12. What Next

Idea In 30 Seconds

Single-step execution becomes an anti-pattern when a single model call handles tasks that involve tools, side effects, or ambiguity.

As a result, there is no room for validation, recovery, or controlled stopping. For tasks with tools or side effects (state changes), this quickly becomes fragile in production.

For safe read-only cases, a single step can be fine, but tasks with tools or side effects need a bounded loop with an explicit stop_reason.


Anti-Pattern Example

The team builds a support agent that should search data and execute actions in external systems.

But the implementation performs only one step: a single action decision followed by a single execution.

PYTHON
decision = agent.decide(user_message)
result = run_tool(decision.tool, decision.args)
return result

In this setup, there is no re-evaluation after the tool result:

PYTHON
# no validate_output(...)
# no no_progress(...)
# no stop_reason
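The missing checks do not have to be large. As a hedged sketch, assuming the illustrative helper names used throughout this article (validate_output and no_progress are not part of any specific library):

```python
# Hypothetical helpers sketching the checks a single-step design skips.

def validate_output(output: dict) -> bool:
    # Minimal format check: required fields present and non-empty.
    required = ("answer", "status")
    return all(output.get(field) for field in required)

def no_progress(history: list, result: dict) -> bool:
    # Treat an exact repeat of the previous tool result as lack of progress.
    return bool(history) and history[-1] == result
```

Even checks this small only help if there is a loop around them: a single-step design has no place to call them from.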

For this case, you need a controlled loop with boundaries:

PYTHON
for step in range(MAX_STEPS):
    decision = agent.next_step(state)
    ...

Without that loop, the single-step approach introduces:

  • early risk of wrong action
  • no recovery after tool failure
  • weak quality control of the final answer
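By contrast, even a minimal bounded retry gives the run a place to recover from a failed tool call. A hedged sketch, where run_tool, the attempt limit, and the result shape are all illustrative assumptions:

```python
# Sketch: retrying a failing tool inside a bounded loop instead of
# letting the first failure end the run. All names are illustrative.

MAX_ATTEMPTS = 3

def call_tool_with_recovery(run_tool, tool, args):
    last_error = None
    for attempt in range(MAX_ATTEMPTS):  # bounded: never retries forever
        try:
            return {"ok": True, "result": run_tool(tool, args)}
        except Exception as exc:
            last_error = str(exc)  # recorded so the caller can re-plan
    return {"ok": False, "stop_reason": "tool_failed", "error": last_error}
```

The point is not the retry policy itself but the structure: failure produces a state the agent can reason about instead of ending the run.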

Why It Happens And What Goes Wrong

This anti-pattern often appears when the team optimizes for "keeping it simple" and reduces the agent system to a single model call.

Typical causes:

  • desire to reduce latency at any cost
  • confusing a "single LLM call" with a "full agent"
  • no requirements for stop reasons and output validation
  • hope that the prompt alone will cover recovery scenarios

As a result, teams face:

  • no recovery loop - after a tool error, the agent has nowhere to move
  • premature write risk - side effect can happen before validation
  • fragile output - no stage to check "is the task actually closed"
  • hard debugging - run ends without a transparent reason
  • production instability - one bad step immediately becomes an incident

Unlike No Stop Conditions, there is no controlled loop here at all: the issue starts earlier, in a design that says "one step and done".

Typical production signals that single-step is already dangerous:

  • tasks with tools and side effects run without max_steps/stop_reason
  • a failed tool-call ends the run immediately without safe recovery attempts
  • one routing error leads to an external action without additional validation
  • the team cannot explain why this exact action was chosen in this run

Keep in mind that every agent step is an LLM inference. In a single-step design, you effectively let a single inference make a critical decision without verification.

Correct Approach

Start with a minimal bounded loop for every scenario that involves tools or side effects. Keep single-step execution only for truly safe, read-only, low-risk cases with no tool calls.

Practical framework:

  • split routes: read_only_single_step and loop_required
  • for the loop path, define max_steps, timeout, stop_reason
  • add validate_output and no_progress checks
  • execute write actions only after explicit policy checks

PYTHON
MAX_STEPS = 6

def run_support_flow(user_message: str):
    route = classify_intent(user_message)  # simple classifier or rules

    if route == "read_only_faq":
        return run_single_step_answer(user_message)  # no tools, no side effects

    state = init_state(user_message)

    for step in range(MAX_STEPS):  # hard limit for unsafe loops
        decision = agent.next_step(state)

        if decision.type == "final_answer":
            if validate_output(decision.output):  # format and required fields
                return decision.output
            return stop("invalid_output")

        result = run_tool(decision.tool, decision.args)
        if no_progress(state, result):  # repeated pattern or no meaningful state change
            return stop("no_progress")
        state.append(result)

    return stop("max_steps_exceeded")

In this setup, risks become manageable: the system either closes the task or stops with a transparent reason.
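The stop(...) helper used above is not defined in the snippet; a minimal sketch, where the shape of the returned structure is an assumption:

```python
# Hypothetical stop() helper: always end a run with a structured result
# carrying an explicit stop_reason, instead of raising or returning None.

def stop(reason, details=None):
    return {
        "status": "stopped",
        "stop_reason": reason,  # e.g. "max_steps_exceeded", "no_progress"
        "details": details or {},
    }
```

Logging this structure for every run is what makes "why did this run end" answerable during debugging and incident review.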

Quick Test

If the answer to any of these questions is "yes", you are at risk of the Single-Step Agents anti-pattern:

  • Are tasks with tools and side effects executed via one model call?
  • After a failed tool result, does the run end without a recovery step?
  • Is there no explicit stop_reason when the scenario is not closed correctly?

How It Differs From Other Anti-Patterns

Agent Everywhere Problem vs Single-Step Agents

Agent Everywhere Problem:
  • Main problem: an agent is used even for deterministic tasks.
  • When it appears: when a simple workflow is replaced with agent reasoning.

Single-Step Agents:
  • Main problem: even when an agent is already needed, it is executed in one step without a loop.
  • When it appears: when tasks with tools/write run without recovery and stop logic.

In short: Agent Everywhere Problem is about choosing an agent unnecessarily, while Single-Step Agents is about an unsafe way of executing the agent.

No Stop Conditions vs Single-Step Agents

No Stop Conditions:
  • Main problem: there is a loop, but no clear completion conditions.
  • When it appears: when a run enters infinite or very long repeats.

Single-Step Agents:
  • Main problem: there is no loop at all, so there is no room for controlled recovery.
  • When it appears: when one wrong step immediately ends the scenario or triggers an unwanted action.

In short: No Stop Conditions is about an uncontrolled loop, while Single-Step Agents is about missing a loop where it is required.

Tool Calling for Everything vs Single-Step Agents

Tool Calling for Everything:
  • Main problem: unnecessary tool-calls even in simple scenarios.
  • When it appears: when tool-call becomes the default route.

Single-Step Agents:
  • Main problem: a critical tool-call is executed in one step without result validation.
  • When it appears: when there is no loop after tool-result for validation or correction.

In short: Tool Calling for Everything increases unnecessary call count, while Single-Step Agents increases the risk of one uncontrolled critical call.

Self-Check: Do You Have This Anti-Pattern?

Quick check for the anti-pattern Single-Step Agents.

Move simple steps into a workflow and keep the agent only for complex decisions.

FAQ

Q: Is single-step always bad?
A: No. It fits safe read-only scenarios without tools and side effects. The problem starts when it is used for tasks that require recovery and control.

Q: How do we know we need to switch to a loop?
A: If there are tool-calls, external actions, ambiguous output, or high-impact failure risk, you need a bounded loop with explicit stop_reason.
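That decision can be made with a simple deterministic check before any agent code runs. A hedged sketch, where the task field names are assumptions for illustration:

```python
# Sketch: route to a bounded loop whenever the task involves tools,
# side effects, or high-impact failure risk. Field names are illustrative.

def needs_loop(task: dict) -> bool:
    return any((
        task.get("uses_tools", False),
        task.get("has_side_effects", False),
        task.get("high_impact", False),
    ))
```

Keeping this check outside the model (plain rules or a small classifier) makes the routing itself auditable.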

Q: Will latency increase a lot after moving to a loop?
A: It can increase, but this is managed with budget limits. In production, a controlled and safe result is usually more important than "fast but fragile".


What Next

Related anti-patterns:

What to build instead:

⏱️ 7 min read • Updated March 17, 2026 • Difficulty: ★★★
Safe defaults for tool permissions + write gating.
# onceonly guardrails (concept)
version: 1
tools:
  default_mode: read_only
  allowlist:
    - search.read
    - kb.read
    - http.get
writes:
  enabled: false
  require_approval: true
  idempotency: true
controls:
  kill_switch: { enabled: true, mode: disable_writes }
audit:
  enabled: true
Add guardrails to tool-calling agents
Ship this pattern with governance:
  • Budgets (steps / spend caps)
  • Tool permissions (allowlist / blocklist)
  • Kill switch & incident stop
  • Idempotency & dedupe
  • Audit logs & traceability
Integrated mention: OnceOnly is a control layer for production agent systems.

Author

Nick — engineer building infrastructure for production AI agents.

Focus: agent patterns, failure modes, runtime control, and system reliability.

πŸ”— GitHub: https://github.com/mykolademyanov


Editorial note

This documentation is AI-assisted, with human editorial responsibility for accuracy, clarity, and production relevance.

Content is grounded in real-world failures, post-mortems, and operational incidents in deployed AI agent systems.