This is the full implementation of the example from the article *How to Restrict Tool Access*.
If you haven't read the article yet, start there. The focus here is code: how the restrictions work at runtime.
## What this example demonstrates
- Level 1 (tool access): which tools the agent can call at all
- Level 2 (action access): which actions are allowed inside an accessible tool
- Policy gateway: centralized request checks before execution
- Fallback behavior: the agent gets an error and chooses a safe next step
## Project structure

```
foundations/
└── tool-calling/
    └── python/
        ├── main.py            # agent loop
        ├── llm.py             # model + tool schemas
        ├── gateway.py         # policy checks + execution
        ├── tools.py           # tools (system actions)
        └── requirements.txt
```
This split is important: the model proposes an action, and `gateway.py` decides whether that action will be executed at all.
## How to run

1. Clone the repository and go to the folder:

   ```shell
   git clone https://github.com/AgentPatterns-tech/agentpatterns.git
   cd agentpatterns/foundations/tool-calling/python
   ```

2. Install dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Set the API key:

   ```shell
   export OPENAI_API_KEY="sk-..."
   ```

4. Run:

   ```shell
   python main.py
   ```
## What we build in code

We build a careful robot that is not allowed to do everything blindly:

- the AI can request any action
- a special "guard" (the `gateway`) checks whether that action is allowed
- if it is forbidden, the robot does nothing dangerous and explains a safe alternative

This is like a locked door: without permission, a command does not pass.
## Code

### `tools.py`: real tools

```python
from typing import Any

CUSTOMERS = {
    101: {"id": 101, "name": "Anna", "tier": "free", "email": "anna@gmail.com"},
    202: {"id": 202, "name": "Max", "tier": "pro", "email": "max@company.local"},
}

def customer_db(action: str, customer_id: int, new_tier: str | None = None) -> dict[str, Any]:
    customer = CUSTOMERS.get(customer_id)
    if not customer:
        return {"ok": False, "error": f"customer {customer_id} not found"}
    if action == "read":
        return {"ok": True, "customer": customer}
    if action == "update_tier":
        if not new_tier:
            return {"ok": False, "error": "new_tier is required"}
        customer["tier"] = new_tier
        return {"ok": True, "customer": customer}
    return {"ok": False, "error": f"unknown action '{action}'"}

def email_service(to: str, subject: str, body: str) -> dict[str, Any]:
    return {
        "ok": True,
        "status": "queued",
        "to": to,
        "subject": subject,
        "preview": body[:80],
    }
```
These are ordinary Python functions. Risk starts when the agent can call them without control.
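To make the risk concrete, here is a standalone sketch (data and function re-declared inline so it runs on its own) showing that a bare function call mutates state with no policy consulted:

```python
# Minimal stand-in for tools.py, inlined so this snippet is self-contained.
CUSTOMERS = {101: {"id": 101, "name": "Anna", "tier": "free"}}

def customer_db(action, customer_id, new_tier=None):
    customer = CUSTOMERS.get(customer_id)
    if action == "read":
        return {"ok": True, "customer": customer}
    if action == "update_tier":
        customer["tier"] = new_tier
        return {"ok": True, "customer": customer}
    return {"ok": False, "error": f"unknown action '{action}'"}

# Called directly, nothing stops a write: one line silently changes customer data.
result = customer_db("update_tier", 101, new_tier="pro")
```

After this call, `CUSTOMERS[101]["tier"]` is already `"pro"`; no one ever asked whether the write was allowed. That is exactly the gap the gateway closes.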
### `gateway.py`: policy gateway (the key layer)

```python
import json
from typing import Any

from tools import customer_db, email_service

TOOL_REGISTRY = {
    "customer_db": customer_db,
    "email_service": email_service,
}

# Level 1: which tools are visible to the agent
ALLOWED_TOOLS = {"customer_db"}

# Level 2: which actions are allowed inside each tool
ALLOWED_ACTIONS = {
    "customer_db": {"read"},  # update_tier is blocked
}

def execute_tool_call(tool_name: str, arguments_json: str) -> dict[str, Any]:
    if tool_name not in ALLOWED_TOOLS:
        return {"ok": False, "error": f"tool '{tool_name}' is not allowed"}
    tool = TOOL_REGISTRY.get(tool_name)
    if tool is None:
        return {"ok": False, "error": f"tool '{tool_name}' not found"}
    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return {"ok": False, "error": "invalid JSON arguments"}
    if tool_name == "customer_db":
        action = args.get("action")
        if action not in ALLOWED_ACTIONS["customer_db"]:
            return {
                "ok": False,
                "error": f"action '{action}' is not allowed for tool '{tool_name}'",
            }
    try:
        result = tool(**args)
    except TypeError as exc:
        return {"ok": False, "error": f"invalid arguments: {exc}"}
    return {"ok": True, "tool": tool_name, "result": result}
```
Rules are enforced right here. The model cannot bypass this layer through a prompt.
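The two allowlist checks can be exercised in isolation. The sketch below re-declares the policy tables inline (tool execution stubbed out) so it runs standalone; it illustrates the same check order, not the repository code itself:

```python
import json

# Same shape as in gateway.py, re-declared here so the snippet is self-contained.
ALLOWED_TOOLS = {"customer_db"}
ALLOWED_ACTIONS = {"customer_db": {"read"}}

def check(tool_name: str, arguments_json: str) -> dict:
    # Level 1: is the tool itself allowed?
    if tool_name not in ALLOWED_TOOLS:
        return {"ok": False, "error": f"tool '{tool_name}' is not allowed"}
    args = json.loads(arguments_json or "{}")
    # Level 2: is this action allowed inside the tool?
    action = args.get("action")
    if action not in ALLOWED_ACTIONS[tool_name]:
        return {"ok": False, "error": f"action '{action}' is not allowed"}
    return {"ok": True}

# A read passes both levels; a write fails level 2; an unlisted tool fails level 1.
assert check("customer_db", '{"action": "read", "customer_id": 101}')["ok"]
assert not check("customer_db", '{"action": "update_tier", "customer_id": 101}')["ok"]
assert not check("email_service", '{"to": "x", "subject": "s", "body": "b"}')["ok"]
```

Note the ordering: the level-1 check runs before any arguments are parsed, so a disallowed tool is rejected even with malformed JSON.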
### `llm.py`: model and tool schemas

```python
import os

from openai import OpenAI

api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise EnvironmentError(
        "OPENAI_API_KEY is not set.\n"
        "Run: export OPENAI_API_KEY='sk-...'"
    )

client = OpenAI(api_key=api_key)

SYSTEM_PROMPT = """
You are a support agent.
Use tools when data is missing.
If a tool or action is blocked, do not argue; suggest a safe manual next step.
Reply briefly in English.
""".strip()

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "customer_db",
            "description": "Customer data operations: read or update tier",
            "parameters": {
                "type": "object",
                "properties": {
                    "action": {"type": "string", "enum": ["read", "update_tier"]},
                    "customer_id": {"type": "integer"},
                    "new_tier": {"type": "string"},
                },
                "required": ["action", "customer_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "email_service",
            "description": "Sends an email to the customer",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "subject", "body"],
            },
        },
    },
]

def ask_model(messages: list[dict]):
    completion = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *messages],
        tools=TOOLS,
        tool_choice="auto",
    )
    return completion.choices[0].message
```
Note: `llm.py` may expose more tools to the model than the policy allows, but `gateway.py` still blocks the disallowed ones at execution time.
### `main.py`: agent loop with policy checks

```python
import json

from gateway import execute_tool_call
from llm import ask_model

MAX_STEPS = 6

TASK = (
    "For customer_id=101, check the profile, upgrade tier to pro, "
    "and send a confirmation email to anna@gmail.com. "
    "If any action is blocked, explain the safe manual next step."
)

def to_assistant_message(message) -> dict:
    msg: dict = {"role": "assistant", "content": message.content or ""}
    tool_calls = [
        {
            "id": tc.id,
            "type": "function",
            "function": {
                "name": tc.function.name,
                "arguments": tc.function.arguments,
            },
        }
        for tc in message.tool_calls or []
    ]
    # Attach tool_calls only when present: the API rejects an empty list here.
    if tool_calls:
        msg["tool_calls"] = tool_calls
    return msg

def run():
    messages: list[dict] = [{"role": "user", "content": TASK}]
    for step in range(1, MAX_STEPS + 1):
        print(f"\n=== STEP {step} ===")
        assistant = ask_model(messages)
        messages.append(to_assistant_message(assistant))
        if assistant.content and assistant.content.strip():
            print("Assistant:", assistant.content.strip())
        tool_calls = assistant.tool_calls or []
        if not tool_calls:
            print("\nDone: model finished the task.")
            return
        for tc in tool_calls:
            print(f"Tool call: {tc.function.name}({tc.function.arguments})")
            execution = execute_tool_call(
                tool_name=tc.function.name,
                arguments_json=tc.function.arguments,
            )
            print("Gateway result:", execution)
            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": json.dumps(execution, ensure_ascii=False),
                }
            )
    print("\nStop: MAX_STEPS reached.")

if __name__ == "__main__":
    run()
```
You can clearly see the boundary here: the model proposes, and the gateway decides whether an action happens at all.
### `requirements.txt`

```
openai>=1.0.0
```
## Example output

```
=== STEP 1 ===
Tool call: customer_db({"action":"read","customer_id":101})
Gateway result: {'ok': True, 'tool': 'customer_db', 'result': {'ok': True, 'customer': {'id': 101, 'name': 'Anna', 'tier': 'free', 'email': 'anna@gmail.com'}}}

=== STEP 2 ===
Tool call: customer_db({"action":"update_tier","customer_id":101,"new_tier":"pro"})
Gateway result: {'ok': False, 'error': "action 'update_tier' is not allowed for tool 'customer_db'"}

=== STEP 3 ===
Tool call: email_service({"to":"anna@gmail.com","subject":"Tier updated","body":"..."})
Gateway result: {'ok': False, 'error': "tool 'email_service' is not allowed"}

=== STEP 4 ===
Assistant: I can only read the profile. Tier updates and email sending require manual operator action.

Done: model finished the task.
```
Note: the number of `STEP` entries may differ between runs. In one step the model may return several `tool_calls`, so sometimes you will see 2 steps and sometimes 4. This is normal LLM nondeterminism. What matters is that the policy gateway consistently blocks `update_tier` and `email_service`.
## Why this is a production approach

| | Naive tool calling | With policy gateway |
|---|---|---|
| The model decides everything by itself | ✅ | ❌ |
| There is centralized access control | ❌ | ✅ |
| You can separate read/write actions | ❌ | ✅ |
| Errors are transformed into a controlled fallback | ❌ | ✅ |
## Where to dig next

- Add `ALLOWED_TOOLS_BY_ROLE` (for example `viewer`, `operator`, `admin`)
- Add an approval flow for `update_tier` instead of a full deny
- Add `max_tool_calls` and `max_cost` limits next to `MAX_STEPS`
- Log `tool_name`, `args_hash`, `decision`, and `reason` for audit
## Full code on GitHub
The repository contains the full version of this demo: the tool loop, the allowlist checks, and the controlled fallback.

View full code on GitHub →