This section explains how OpenTools exposes its tools to large language models. It bundles tools using the correct provider specifications, whether that provider is OpenRouter, OpenAI, Anthropic, Gemini, or Ollama. The model adapter is also responsible for running the tool loop, exposed through the import:
from opentools.adapters.models.openai import run_with_tools
This loop executes tools when the model asks for them, feeds structured results back into the model, and continues until a final answer is produced. Simply put, tools define what can be done; the model adapter defines how it gets done.
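The loop described above can be sketched in a few lines. This is an illustration of the general pattern only, not OpenTools' actual implementation; the stubbed model and the message shapes are assumptions made for the example.

```python
# Stand-in for a real LLM call: request a tool on the first turn,
# then produce a final answer once a tool result is in the history.
def fake_model(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"content": "The weather in Paris is sunny.", "tool_call": None}
    return {"content": None,
            "tool_call": {"name": "get_weather", "arguments": {"city": "Paris"}}}

TOOLS = {"get_weather": lambda city: f"sunny in {city}"}

def run_tool_loop(model, messages, tools):
    while True:
        reply = model(messages)
        call = reply["tool_call"]
        if call is None:
            return reply["content"]  # final answer: stop looping
        result = tools[call["name"]](**call["arguments"])
        # Feed the structured tool result back and continue the loop.
        messages.append({"role": "tool", "name": call["name"], "content": result})

answer = run_tool_loop(fake_model,
                       [{"role": "user", "content": "Weather in Paris?"}],
                       TOOLS)
```

In a real adapter, `fake_model` is replaced by the provider's chat API and the message shapes follow that provider's wire format; the control flow stays the same.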

What is a model adapter?

A model adapter connects OpenTools to a specific LLM provider. It handles the integration concerns: how tools must be declared, how tool calls are represented, and how responses and errors are interpreted. Internally, this logic lives in the provider adapter's chat.py, which manages both the tool-call format and the execution loop. OpenTools hides these differences behind a shared interface, so your tools, schemas, and application logic remain unchanged; switching between models becomes a configuration change rather than a rewrite. Currently, OpenTools supports the following model providers using this shared pattern: OpenAI, OpenRouter, Anthropic, Gemini, and Ollama.
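To make the "same tool, different wire formats" point concrete, here is one illustrative tool declared in both the OpenAI and the Anthropic tool formats. The conversion helpers are a sketch of what an adapter does, not OpenTools' actual code; the tool name and schema are invented for the example.

```python
# One JSON Schema describing the tool's input, shared across providers.
json_schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}

def to_openai(name, description, schema):
    # OpenAI nests the schema under function.parameters.
    return {"type": "function",
            "function": {"name": name,
                         "description": description,
                         "parameters": schema}}

def to_anthropic(name, description, schema):
    # Anthropic takes the same schema as top-level input_schema.
    return {"name": name, "description": description, "input_schema": schema}

openai_tool = to_openai("get_weather", "Look up current weather.", json_schema)
anthropic_tool = to_anthropic("get_weather", "Look up current weather.", json_schema)
```

Because only the envelope differs while the schema is shared, an adapter can translate one tool definition into every provider's format mechanically.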

Models vs Frameworks

The quickstart demonstrates direct model usage, where OpenTools manages the tool loop itself. This approach is ideal when you want minimal dependencies, full control, and a lightweight agent. For more complex systems, OpenTools also integrates with frameworks that manage their own orchestration, memory, and control flow. In those cases, OpenTools focuses on exposing tools and schemas, while the framework owns the loop and handles integration with the model provider.
Even when using a framework, be sure to supply the model: "name_of_model" argument, as it is required for bundling.

Next steps