Build Your First AI Agent

We write the simplest agent that actually works - no magic, no frameworks.
On this page
  1. Imagine an agent like a child
  2. Task for the agent
  3. Code: agent without LLM
  4. What is happening here
  5. Model vs Agent
  6. Why is this already an agent, not just a function?
  7. What if we connect a real LLM?
  8. In short
  9. FAQ
  10. What is next
  11. Want to run this yourself?

Until this point, we have talked about an agent as a system that:

  • has a goal
  • tries to act
  • checks the result
  • and tries again if it fails

But this still sounds like theory.

So let's write the simplest agent that actually works.

No frameworks. No memory. No complex logic.

Just a loop.


Imagine an agent like a child


A child wants to open a door.

They:

  • try the handle -> it does not open
  • try harder -> still does not work
  • try one more time -> it opens!

An agent works the same way. It does not "think" in the human sense.

It simply:

-> Tries
-> Looks at what happened
-> Changes the action
-> Tries again


Task for the agent

Let's give the agent a simple task:

Write a number greater than 10

But let's make it tricky. Instead of a model, we start with random numbers, so we can see the mechanics without extra noise.

Sometimes the agent gets 3, sometimes 7, sometimes 15. It must tell the difference and either stop or try again.


Code: agent without LLM

PYTHON
import random

goal = 10
max_steps = 5

for step in range(max_steps):
    print(f"\n🤖 Step {step + 1}: Agent is trying...")

    # "Model" generates an answer
    number = random.randint(1, 20)
    print(f"💬 Generated: {number}")

    if number > goal:
        print(f"✅ Goal reached! {number} > {goal}")
        break
    else:
        print(f"❌ Not enough. {number} ≤ {goal}. Trying again...")
else:
    print("\n⚠️ Max steps reached without success")

Run it a few times. Watch how the agent decides on its own whether to continue or stop.


What is happening here

  1. The agent gets a goal - find a number > 10
  2. Tries - "generates" an answer
  3. Checks - is the goal reached?
  4. If not - tries again (up to 5 times)
  5. If yes - stops

That is the whole loop:

Goal -> Act -> Check -> Retry -> Stop

Model vs Agent

                          Model   Agent
Generates an answer         ✅      ❌
Checks the result           ❌      ✅
Decides what to do next     ❌      ✅

The model is responsible for Act.
The agent is responsible for Check -> Retry -> Stop.
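One way to make this split explicit is to pull the "model" out into its own function and keep everything else in the agent. A sketch, keeping the random stand-in from the example above:

```python
import random

def model():
    """The "model": it only generates an answer. No goal, no checking."""
    return random.randint(1, 20)

def agent(goal, max_steps=5):
    """The agent: it owns the goal, checks results, retries, and stops."""
    for _ in range(max_steps):
        number = model()   # Act (delegated to the model)
        if number > goal:  # Check
            return number  # Stop: goal reached
        # Retry: loop continues
    return None            # Stop: steps exhausted without success

print(agent(10))
```

Notice that `model()` never sees `goal`. Everything in the comparison table's "Agent" column lives in `agent()`.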


Why is this already an agent, not just a function?

A function would make one attempt and stop.

An agent:

  • has a goal
  • checks the result
  • can act again without your participation

You gave the task, and it works on its own. Even when it makes mistakes.


What if we connect a real LLM?

Replacing random.randint() with an AI API call is one change.

The agent loop stays exactly the same.

This is the core point: an agent is not about a "smart model". It is about structure: goal -> action -> check -> repeat.
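As a sketch of that single change: here the random call is hidden behind a hypothetical `ask_llm()` wrapper (stubbed with random so the example runs offline); with a real provider, the API call would go inside that one function and nothing else would move.

```python
import random

def ask_llm(prompt):
    """Stand-in for a real LLM call (e.g. an HTTP request to your provider).
    Stubbed with random so the example runs without an API key."""
    return random.randint(1, 20)

goal = 10
max_steps = 5

for step in range(max_steps):
    # The only line that changes when you plug in a real model:
    number = ask_llm("Write a number greater than 10")
    if number > goal:
        print(f"Goal reached! {number} > {goal}")
        break
else:
    print("Max steps reached without success")
```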


In short

Quick take

You just learned the basic agent loop:

  • Goal - find a number > 10
  • Act - generates an answer
  • Check - is the goal reached?
  • Retry - if needed
  • Stop - when the goal is reached or steps are exhausted

This is the foundation. Everything else is a complication of this pattern.
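The five bullets above can be folded into one small reusable loop. A sketch, with the Act and Check steps passed in as functions:

```python
import random

def run_agent(act, check, max_steps=5):
    """Generic agent loop: Act -> Check -> Retry -> Stop."""
    for _ in range(max_steps):
        result = act()      # Act
        if check(result):   # Check
            return result   # Stop: goal reached
        # Retry on the next iteration
    return None             # Stop: steps exhausted

# Same task as above: find a number > 10
winner = run_agent(
    act=lambda: random.randint(1, 20),
    check=lambda n: n > 10,
)
print(winner)
```

Every "complication" of the pattern (memory, tools, planning) changes what `act` and `check` do, not the shape of the loop.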


FAQ

Q: Why is max_steps = 5 used here instead of an infinite loop?
A: An agent that does not stop by itself is dangerous: it can spend money on API calls, get stuck retrying, or run forever if the goal is unreachable. max_steps is a safety guard.

Q: Why did we start with random instead of using an LLM right away?
A: To see the agent mechanics without extra noise. The model is only one detail. The loop itself is more important.

Q: Why does the agent not know that the number is "bad" before checking?
A: The model just generates. It does not know the goal. The goal is the agent's responsibility, not the model's.


What is next

Did you notice max_steps = 5?

This is not accidental. An agent that does not stop by itself can:

  • run forever
  • spend money on API calls
  • get stuck in a loop if the goal is unreachable

That is why every agent must have boundaries.

-> Read next: When an agent needs boundaries


Want to run this yourself?

If you want to see a full implementation with a real LLM, split into modules and ready to run, it is here:

-> First AI Agent - Python (full implementation)

⏱️ 4 min read • Updated Mar 2026 • Difficulty: ★☆☆
Practical continuation

Pattern implementation examples

Continue with example projects that implement this pattern.

Integrated: production control with OnceOnly
Add guardrails to tool-calling agents
Ship this pattern with governance:
  • Budgets (steps / spend caps)
  • Tool permissions (allowlist / blocklist)
  • Kill switch & incident stop
  • Idempotency & dedupe
  • Audit logs & traceability
OnceOnly is a control layer for production agent systems.
Author

This documentation is curated and maintained by engineers who ship AI agents in production.

The content is AI-assisted, with human editorial responsibility for accuracy, clarity, and production relevance.

Patterns and recommendations are grounded in post-mortems, failure modes, and operational incidents in deployed systems, including during the development and operation of governance infrastructure for agents at OnceOnly.