Restricting Tool Access in Python: Full Example

Full runnable example with tool allowlist, action allowlist, and blocked forbidden calls.
On this page
  1. What this example demonstrates
  2. Project structure
  3. How to run
  4. What we build in code
  5. Code
  6. tools.py — real tools
  7. gateway.py — policy gateway (key layer)
  8. llm.py — model and tool schemas
  9. main.py — agent loop with policy checks
  10. requirements.txt
  11. Example output
  12. Why this is a production approach
  13. Where to dig next
  14. Full code on GitHub

This is the full implementation of the example from the article How to Restrict Tool Access.

If you haven't read the article yet, start there. The focus here is code: how the restrictions work at runtime.


What this example demonstrates

  • Level 1 (tool access): which tools the agent can call at all
  • Level 2 (action access): which actions are allowed inside an accessible tool
  • Policy gateway: centralized request checks before execution
  • Fallback behavior: the agent gets an error and chooses a safe next step

Project structure

TEXT
foundations/
└── tool-calling/
    └── python/
        ├── main.py           # agent loop
        ├── llm.py            # model + tool schemas
        ├── gateway.py        # policy checks + execution
        ├── tools.py          # tools (system actions)
        └── requirements.txt

This split is important: the model proposes an action, and gateway.py decides whether that action will be executed at all.


How to run

1. Clone the repository and go to the folder:

BASH
git clone https://github.com/AgentPatterns-tech/agentpatterns.git
cd agentpatterns/foundations/tool-calling/python

2. Install dependencies:

BASH
pip install -r requirements.txt

3. Set the API key:

BASH
export OPENAI_API_KEY="sk-..."

4. Run:

BASH
python main.py

What we build in code

We build a careful robot that is not allowed to do everything blindly:

  • the AI can request any action
  • a special "guard" (the gateway) checks whether that action is allowed
  • if it is forbidden, the robot does nothing dangerous and explains a safe alternative

This works like a locked door: without permission, a command does not pass.


Code

tools.py — real tools

PYTHON
from typing import Any

CUSTOMERS = {
    101: {"id": 101, "name": "Anna", "tier": "free", "email": "anna@gmail.com"},
    202: {"id": 202, "name": "Max", "tier": "pro", "email": "max@company.local"},
}


def customer_db(action: str, customer_id: int, new_tier: str | None = None) -> dict[str, Any]:
    customer = CUSTOMERS.get(customer_id)
    if not customer:
        return {"ok": False, "error": f"customer {customer_id} not found"}

    if action == "read":
        return {"ok": True, "customer": customer}

    if action == "update_tier":
        if not new_tier:
            return {"ok": False, "error": "new_tier is required"}
        customer["tier"] = new_tier
        return {"ok": True, "customer": customer}

    return {"ok": False, "error": f"unknown action '{action}'"}


def email_service(to: str, subject: str, body: str) -> dict[str, Any]:
    return {
        "ok": True,
        "status": "queued",
        "to": to,
        "subject": subject,
        "preview": body[:80],
    }

These are ordinary Python functions. Risk starts when the agent can call them without control.
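To see the shape these tools return, you can call them directly, with no agent involved. The snippet below is a condensed, standalone copy of email_service from tools.py above (the demo version performs no real sending); note how body[:80] keeps only a short preview:

```python
from typing import Any

# Condensed copy of email_service from tools.py above (demo version: no real sending)
def email_service(to: str, subject: str, body: str) -> dict[str, Any]:
    return {
        "ok": True,
        "status": "queued",
        "to": to,
        "subject": subject,
        "preview": body[:80],  # only the first 80 characters are kept as a preview
    }

result = email_service("anna@gmail.com", "Welcome", "x" * 200)
print(result["status"], len(result["preview"]))  # queued 80
```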


gateway.py — policy gateway (key layer)

PYTHON
import json
from typing import Any

from tools import customer_db, email_service

TOOL_REGISTRY = {
    "customer_db": customer_db,
    "email_service": email_service,
}

# Level 1: which tools the agent may call at all
ALLOWED_TOOLS = {"customer_db"}

# Level 2: which actions are allowed inside each tool
ALLOWED_ACTIONS = {
    "customer_db": {"read"},  # update_tier is blocked
}


def execute_tool_call(tool_name: str, arguments_json: str) -> dict[str, Any]:
    if tool_name not in ALLOWED_TOOLS:
        return {"ok": False, "error": f"tool '{tool_name}' is not allowed"}

    tool = TOOL_REGISTRY.get(tool_name)
    if tool is None:
        return {"ok": False, "error": f"tool '{tool_name}' not found"}

    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return {"ok": False, "error": "invalid JSON arguments"}

    if tool_name == "customer_db":
        action = args.get("action")
        if action not in ALLOWED_ACTIONS["customer_db"]:
            return {
                "ok": False,
                "error": f"action '{action}' is not allowed for tool '{tool_name}'",
            }

    try:
        result = tool(**args)
    except TypeError as exc:
        return {"ok": False, "error": f"invalid arguments: {exc}"}

    return {"ok": True, "tool": tool_name, "result": result}

Rules are enforced right here. The model cannot bypass this layer through a prompt.


llm.py — model and tool schemas

PYTHON
import os
from openai import OpenAI

api_key = os.environ.get("OPENAI_API_KEY")

if not api_key:
    raise EnvironmentError(
        "OPENAI_API_KEY is not set.\n"
        "Run: export OPENAI_API_KEY='sk-...'"
    )

client = OpenAI(api_key=api_key)

SYSTEM_PROMPT = """
You are a support agent.
Use tools when data is missing.
If a tool or action is blocked, do not argue; suggest a safe manual next step.
Reply briefly in English.
""".strip()

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "customer_db",
            "description": "Customer data operations: read or update tier",
            "parameters": {
                "type": "object",
                "properties": {
                    "action": {"type": "string", "enum": ["read", "update_tier"]},
                    "customer_id": {"type": "integer"},
                    "new_tier": {"type": "string"},
                },
                "required": ["action", "customer_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "email_service",
            "description": "Sends an email to the customer",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "subject", "body"],
            },
        },
    },
]


def ask_model(messages: list[dict]):
    completion = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *messages],
        tools=TOOLS,
        tool_choice="auto",
    )
    return completion.choices[0].message

Note: llm.py may show more tools to the model, but gateway.py still blocks disallowed ones.
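If you want visibility and enforcement to match, one option (not implemented in this repo, shown only as a sketch) is to filter the schema list by the same allowlist before passing it to the model:

```python
# Sketch: hide disallowed tools from the model entirely (defense in depth).
# ALLOWED_TOOLS would come from gateway.py; TOOLS is abbreviated here.
ALLOWED_TOOLS = {"customer_db"}

TOOLS = [
    {"type": "function", "function": {"name": "customer_db"}},
    {"type": "function", "function": {"name": "email_service"}},
]

# Keep only the schemas whose names pass the Level 1 allowlist
VISIBLE_TOOLS = [t for t in TOOLS if t["function"]["name"] in ALLOWED_TOOLS]
print([t["function"]["name"] for t in VISIBLE_TOOLS])  # ['customer_db']
```

Passing VISIBLE_TOOLS instead of TOOLS to client.chat.completions.create keeps the model from even proposing blocked tools, while the gateway still catches anything that slips through.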


main.py — agent loop with policy checks

PYTHON
import json

from gateway import execute_tool_call
from llm import ask_model

MAX_STEPS = 6

TASK = (
    "For customer_id=101, check the profile, upgrade tier to pro, "
    "and send a confirmation email to anna@gmail.com. "
    "If any action is blocked, explain the safe manual next step."
)


def to_assistant_message(message) -> dict:
    # The API rejects an assistant message with an empty tool_calls list,
    # so include the key only when the model actually made tool calls.
    result: dict = {"role": "assistant", "content": message.content or ""}
    if message.tool_calls:
        result["tool_calls"] = [
            {
                "id": tc.id,
                "type": "function",
                "function": {
                    "name": tc.function.name,
                    "arguments": tc.function.arguments,
                },
            }
            for tc in message.tool_calls
        ]
    return result


def run():
    messages: list[dict] = [{"role": "user", "content": TASK}]

    for step in range(1, MAX_STEPS + 1):
        print(f"\n=== STEP {step} ===")
        assistant = ask_model(messages)
        messages.append(to_assistant_message(assistant))

        if assistant.content and assistant.content.strip():
            print("Assistant:", assistant.content.strip())

        tool_calls = assistant.tool_calls or []
        if not tool_calls:
            print("\nDone: model finished the task.")
            return

        for tc in tool_calls:
            print(f"Tool call: {tc.function.name}({tc.function.arguments})")
            execution = execute_tool_call(
                tool_name=tc.function.name,
                arguments_json=tc.function.arguments,
            )
            print("Gateway result:", execution)

            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": json.dumps(execution, ensure_ascii=False),
                }
            )

    print("\nStop: MAX_STEPS reached.")


if __name__ == "__main__":
    run()

You can clearly see the boundary here: model proposes, gateway decides whether an action happens at all.


requirements.txt

TEXT
openai>=1.0.0

Example output

TEXT
=== STEP 1 ===
Tool call: customer_db({"action":"read","customer_id":101})
Gateway result: {'ok': True, 'tool': 'customer_db', 'result': {'ok': True, 'customer': {'id': 101, 'name': 'Anna', 'tier': 'free', 'email': 'anna@gmail.com'}}}

=== STEP 2 ===
Tool call: customer_db({"action":"update_tier","customer_id":101,"new_tier":"pro"})
Gateway result: {'ok': False, 'error': "action 'update_tier' is not allowed for tool 'customer_db'"}

=== STEP 3 ===
Tool call: email_service({"to":"anna@gmail.com","subject":"Tier updated","body":"..."})
Gateway result: {'ok': False, 'error': "tool 'email_service' is not allowed"}

=== STEP 4 ===
Assistant: I can only read the profile. Tier updates and email sending require manual operator action.

Done: model finished the task.

Note: the number of STEP entries may differ between runs.
In one STEP, the model may return several tool_calls, so sometimes you will see 2 steps, and sometimes 4.
This is normal LLM nondeterminism. What matters is that the policy gateway consistently blocks update_tier and email_service.


Why this is a production approach

                                                  Naive tool calling   With policy gateway
The model decides everything by itself                    ✅                    ❌
There is centralized access control                       ❌                    ✅
You can separate read/write actions                       ❌                    ✅
Errors are transformed into controlled fallback           ❌                    ✅

Where to dig next

  • Add ALLOWED_TOOLS_BY_ROLE (for example viewer, operator, admin)
  • Add an approval flow for update_tier instead of a full deny
  • Add max_tool_calls and max_cost next to MAX_STEPS
  • Log tool_name, args_hash, decision, reason for audit

Full code on GitHub

The repository contains the full version of this demo: tool loop, allowlist checks, and controlled fallback.

View full code on GitHub ↗
Updated March 2, 2026

Author

Nick — engineer building infrastructure for production AI agents.

Focus: agent patterns, failure modes, runtime control, and system reliability.

πŸ”— GitHub: https://github.com/mykolademyanov


Editorial note

This documentation is AI-assisted, with human editorial responsibility for accuracy, clarity, and production relevance.

Content is grounded in real-world failures, post-mortems, and operational incidents in deployed AI agent systems.