This is the complete implementation of the example from the article "How to limit tool access".
If you have not read the article yet, start there. Here the focus is on the code: how the limits work exactly at runtime.
What this example shows
- Level 1 (tool access): which tools the agent can call at all
- Level 2 (action access): which actions are allowed inside an accessible tool
- Policy gateway: centralized checking of requests before execution
- Fallback behavior: the agent receives an error and chooses a safe next step
Structure du projet
foundations/
└── tool-calling/
└── python/
├── main.py # agent loop
├── llm.py # model + tool schemas
├── gateway.py # policy checks + execution
├── tools.py # tools (system actions)
└── requirements.txt
This separation matters: the model proposes an action, and gateway.py decides whether that action actually runs.
Running the project
1. Clone the repository and enter the directory:
git clone https://github.com/AgentPatterns-tech/agentpatterns.git
cd agentpatterns/foundations/tool-calling/python
2. Install the dependencies:
pip install -r requirements.txt
3. Set the API key:
export OPENAI_API_KEY="sk-..."
4. Run:
python main.py
What we are building in the code
We are building a cautious robot that cannot do everything unchecked.
- The AI can request any action
- a special "guard" (the gateway) checks whether it is allowed
- if it is forbidden, the robot does nothing dangerous and explains a safe option
It is like a door with a lock: without authorization, the command does not go through.
Code
tools.py — real tools
```python
from typing import Any

CUSTOMERS = {
    101: {"id": 101, "name": "Anna", "tier": "free", "email": "anna@gmail.com"},
    202: {"id": 202, "name": "Max", "tier": "pro", "email": "max@company.local"},
}

def customer_db(action: str, customer_id: int, new_tier: str | None = None) -> dict[str, Any]:
    customer = CUSTOMERS.get(customer_id)
    if not customer:
        return {"ok": False, "error": f"customer {customer_id} not found"}
    if action == "read":
        return {"ok": True, "customer": customer}
    if action == "update_tier":
        if not new_tier:
            return {"ok": False, "error": "new_tier is required"}
        customer["tier"] = new_tier
        return {"ok": True, "customer": customer}
    return {"ok": False, "error": f"unknown action '{action}'"}

def email_service(to: str, subject: str, body: str) -> dict[str, Any]:
    return {
        "ok": True,
        "status": "queued",
        "to": to,
        "subject": subject,
        "preview": body[:80],
    }
```
These are ordinary Python functions. The risk starts when the agent can call them without any control.
gateway.py — policy gateway (the key layer)
```python
import json
from typing import Any

from tools import customer_db, email_service

TOOL_REGISTRY = {
    "customer_db": customer_db,
    "email_service": email_service,
}

# Level 1: which tools are visible to the agent
ALLOWED_TOOLS = {"customer_db"}

# Level 2: which actions are allowed inside each tool
ALLOWED_ACTIONS = {
    "customer_db": {"read"},  # update_tier is blocked
}

def execute_tool_call(tool_name: str, arguments_json: str) -> dict[str, Any]:
    if tool_name not in ALLOWED_TOOLS:
        return {"ok": False, "error": f"tool '{tool_name}' is not allowed"}
    tool = TOOL_REGISTRY.get(tool_name)
    if tool is None:
        return {"ok": False, "error": f"tool '{tool_name}' not found"}
    try:
        args = json.loads(arguments_json or "{}")
    except json.JSONDecodeError:
        return {"ok": False, "error": "invalid JSON arguments"}
    if tool_name == "customer_db":
        action = args.get("action")
        if action not in ALLOWED_ACTIONS["customer_db"]:
            return {
                "ok": False,
                "error": f"action '{action}' is not allowed for tool '{tool_name}'",
            }
    try:
        result = tool(**args)
    except TypeError as exc:
        return {"ok": False, "error": f"invalid arguments: {exc}"}
    return {"ok": True, "tool": tool_name, "result": result}
```
This is where the rules are enforced. The model cannot bypass this layer through prompting.
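The two levels are easy to exercise in isolation. This sketch reproduces the gateway's allowlists and check order inline (so it runs standalone, without the tool functions): level 1 rejects an unlisted tool, level 2 rejects a forbidden action inside an allowed tool.

```python
import json
from typing import Any

# Inline copy of the gateway's two allowlists.
ALLOWED_TOOLS = {"customer_db"}
ALLOWED_ACTIONS = {"customer_db": {"read"}}

def check(tool_name: str, arguments_json: str) -> dict[str, Any]:
    # Level 1: is the tool itself allowed?
    if tool_name not in ALLOWED_TOOLS:
        return {"ok": False, "error": f"tool '{tool_name}' is not allowed"}
    args = json.loads(arguments_json or "{}")
    # Level 2: is this action allowed inside the tool?
    action = args.get("action")
    if action not in ALLOWED_ACTIONS[tool_name]:
        return {"ok": False, "error": f"action '{action}' is not allowed for tool '{tool_name}'"}
    return {"ok": True}

print(check("email_service", "{}"))                       # blocked at level 1
print(check("customer_db", '{"action": "update_tier"}'))  # blocked at level 2
print(check("customer_db", '{"action": "read"}'))         # passes both checks
```

Note the check order: the cheap tool-level check runs before arguments are even parsed for action-level rules.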
llm.py — model and tool schemas
```python
import os

from openai import OpenAI

api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise EnvironmentError(
        "OPENAI_API_KEY is not set.\n"
        "Run: export OPENAI_API_KEY='sk-...'"
    )

client = OpenAI(api_key=api_key)

SYSTEM_PROMPT = """
You are a support agent.
Use tools when data is missing.
If a tool or action is blocked, do not argue; suggest a safe manual next step.
Reply briefly in English.
""".strip()

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "customer_db",
            "description": "Customer data operations: read or update tier",
            "parameters": {
                "type": "object",
                "properties": {
                    "action": {"type": "string", "enum": ["read", "update_tier"]},
                    "customer_id": {"type": "integer"},
                    "new_tier": {"type": "string"},
                },
                "required": ["action", "customer_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "email_service",
            "description": "Sends an email to the customer",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "subject", "body"],
            },
        },
    },
]

def ask_model(messages: list[dict]):
    completion = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *messages],
        tools=TOOLS,
        tool_choice="auto",
    )
    return completion.choices[0].message
```
Note: llm.py may expose more tools to the model, but gateway.py still blocks the ones that are not allowed.
main.py — agent loop with policy checks
```python
import json

from gateway import execute_tool_call
from llm import ask_model

MAX_STEPS = 6

TASK = (
    "For customer_id=101, check the profile, upgrade tier to pro, "
    "and send a confirmation email to anna@gmail.com. "
    "If any action is blocked, explain the safe manual next step."
)

def to_assistant_message(message) -> dict:
    tool_calls = []
    for tc in message.tool_calls or []:
        tool_calls.append(
            {
                "id": tc.id,
                "type": "function",
                "function": {
                    "name": tc.function.name,
                    "arguments": tc.function.arguments,
                },
            }
        )
    return {
        "role": "assistant",
        "content": message.content or "",
        "tool_calls": tool_calls,
    }

def run():
    messages: list[dict] = [{"role": "user", "content": TASK}]
    for step in range(1, MAX_STEPS + 1):
        print(f"\n=== STEP {step} ===")
        assistant = ask_model(messages)
        messages.append(to_assistant_message(assistant))
        if assistant.content and assistant.content.strip():
            print("Assistant:", assistant.content.strip())
        tool_calls = assistant.tool_calls or []
        if not tool_calls:
            print("\nDone: model finished the task.")
            return
        for tc in tool_calls:
            print(f"Tool call: {tc.function.name}({tc.function.arguments})")
            execution = execute_tool_call(
                tool_name=tc.function.name,
                arguments_json=tc.function.arguments,
            )
            print("Gateway result:", execution)
            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": json.dumps(execution, ensure_ascii=False),
                }
            )
    print("\nStop: MAX_STEPS reached.")

if __name__ == "__main__":
    run()
```
Here the boundary is clear: the model proposes, and the gateway decides whether the action really happens.
requirements.txt
openai>=1.0.0
Example output
=== STEP 1 ===
Tool call: customer_db({"action":"read","customer_id":101})
Gateway result: {'ok': True, 'tool': 'customer_db', 'result': {'ok': True, 'customer': {'id': 101, 'name': 'Anna', 'tier': 'free', 'email': 'anna@gmail.com'}}}
=== STEP 2 ===
Tool call: customer_db({"action":"update_tier","customer_id":101,"new_tier":"pro"})
Gateway result: {'ok': False, 'error': "action 'update_tier' is not allowed for tool 'customer_db'"}
=== STEP 3 ===
Tool call: email_service({"to":"anna@gmail.com","subject":"Tier updated","body":"..."})
Gateway result: {'ok': False, 'error': "tool 'email_service' is not allowed"}
=== STEP 4 ===
Assistant: I can only read the profile. Tier updates and email sending require manual operator action.
Done: model finished the task.
Note: the number of `STEP`s can vary between runs.
Within a single `STEP`, the model may return several `tool_calls`, so sometimes you will see 2 steps, sometimes 4.
That is normal LLM non-determinism. What matters is that the policy gateway blocks `update_tier` and `email_service` reliably.
Why this is a production approach
| | Naive tool calling | With a policy gateway |
|---|---|---|
| The model decides everything on its own | ✅ | ❌ |
| There is centralized access control | ❌ | ✅ |
| Read/write actions can be separated | ❌ | ✅ |
| Errors become a controlled fallback | ❌ | ✅ |
Where to dig next
- Add `ALLOWED_TOOLS_BY_ROLE` (for example `viewer`, `operator`, `admin`)
- Add an approval flow for `update_tier` instead of a full block
- Add `max_tool_calls` and `max_cost` alongside `MAX_STEPS`
- Log `tool_name`, `args_hash`, `decision`, `reason` for auditing
Complete code on GitHub
The repository contains the full version of this demo: the tool loop, the allowlist checks, and the controlled fallback.
See the full code on GitHub ↗