Agents#

In the previous recipe on tool calling, we manually implemented the workflow that enables LLMs to use tools:

  1. Send a message to the model

  2. Check if the model wants to call a tool

  3. Execute the tool and send the result back

  4. Repeat until the model provides a final answer

This manual loop works, but it requires us to write the orchestration code ourselves. When we automate this process, i.e., handle the tool trigger → execution → result loop automatically, we create an Agent. This allows the model to reason and act iteratively until the task is complete.
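To make the loop concrete, here is a minimal sketch of the orchestration an agent automates. The "model" here is just a scripted list of turns standing in for a real LLM, and the function and variable names (`run_agent_loop`, `scripted_turns`) are illustrative, not part of any library:

```python
# The tool trigger → execution → result loop, with a scripted stand-in
# for the model. A real agent gets each turn from an LLM instead.

def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

TOOLS = {"add": add, "multiply": multiply}

# What the model would return, turn by turn: two tool calls, then an answer.
scripted_turns = [
    {"tool": "add", "args": {"a": 5, "b": 3}},
    {"tool": "multiply", "args": {"a": 8, "b": 2}},
    {"answer": "The result is 16."},
]

def run_agent_loop(turns):
    """Repeat: check for a tool call, execute it, record the result."""
    transcript = []
    for turn in turns:
        if "tool" in turn:  # model wants a tool -> execute and feed back
            result = TOOLS[turn["tool"]](**turn["args"])
            transcript.append((turn["tool"], result))
        else:  # model produced a final answer -> stop looping
            return turn["answer"], transcript

answer, calls = run_agent_loop(scripted_turns)
print(answer)  # The result is 16.
print(calls)   # [('add', 8), ('multiply', 16)]
```

An agent wraps exactly this kind of loop around a live model, so we never have to write it ourselves.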

Hint

Make sure you’re familiar with tool calling before diving into agents, as agents build directly on that foundation.

from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())
True

Defining Tools#

Let’s reuse the calculator tools from the tool calling recipe. These simple tools will help us clearly see how agents handle multi-step reasoning:

from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Add two numbers together. Always use this tool when trying to add numbers."""
    return a + b


@tool
def subtract(a: int, b: int) -> int:
    """Subtract the second number from the first. Always use this tool when trying to subtract numbers."""
    return a - b


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers. Always use this tool when trying to multiply numbers."""
    return a * b


@tool
def divide(a: float, b: float) -> float | str:
    """Divide the first number by the second. Always use this tool when trying to divide numbers."""
    if b == 0:
        # Return a message instead of raising, so the model can see the error and recover
        return "Error: Division by zero"
    return a / b


tools = [add, subtract, multiply, divide]
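The `@tool` decorator works because a function's name, docstring, and type hints already describe it completely; the decorator turns them into a schema the model can read. As a rough illustration of what gets extracted (a simplified sketch using the standard library, not LangChain's actual internals):

```python
# Sketch: the kind of schema a tool decorator can derive from a plain
# Python function. This mirrors the idea, not LangChain's implementation.
import inspect

def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

sig = inspect.signature(add)
schema = {
    "name": add.__name__,
    "description": add.__doc__,
    "args": {name: param.annotation.__name__ for name, param in sig.parameters.items()},
}
print(schema)
# {'name': 'add', 'description': 'Add two numbers together.', 'args': {'a': 'int', 'b': 'int'}}
```

This is why clear docstrings matter: the description is the model's only guidance on when to use each tool.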

Creating an Agent#

LangChain provides the create_agent() function to wrap a model and tools into an agent. The agent manages the entire tool calling loop automatically:

from langchain.agents import create_agent
from langchain_dartmouth.llms import ChatDartmouth

llm = ChatDartmouth(model_name="openai.gpt-oss-120b")

agent = create_agent(
    model=llm,
    tools=tools,
)

That’s it! The agent is now ready to use. Notice how simple this is compared to the manual loop we wrote in the tool calling recipe.

Agent in Action#

Let’s test our agent with a query that requires multiple tool calls to solve. This will demonstrate how the agent autonomously chains together operations:

result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "What is (5 + 3) multiplied by 2?",
            }
        ]
    }
)

Let’s examine the conversation to see how the agent worked through the problem:

for msg in result["messages"]:
    msg.pretty_print()
================================ Human Message =================================

What is (5 + 3) multiplied by 2?
================================== Ai Message ==================================
Tool Calls:
  add (chatcmpl-tool-81ca8e00ac847fbe)
 Call ID: chatcmpl-tool-81ca8e00ac847fbe
  Args:
    a: 5
    b: 3
================================= Tool Message =================================
Name: add

8
================================== Ai Message ==================================
Tool Calls:
  multiply (chatcmpl-tool-965497b31f511195)
 Call ID: chatcmpl-tool-965497b31f511195
  Args:
    a: 8
    b: 2
================================= Tool Message =================================
Name: multiply

16
================================== Ai Message ==================================

The result is **16**.

Notice the sequence of events:

  1. Human Message: The user’s question

  2. AI Message: The model decides to call add(5, 3) first

  3. Tool Message: The result 8 is returned

  4. AI Message: The model then calls multiply(8, 2)

  5. Tool Message: The result 16 is returned

  6. AI Message: The final answer incorporating both results

The agent automatically handled this multi-step calculation. We didn’t need to implement the loop ourselves! This is the key benefit of agents: they automate the reasoning and tool execution cycle.
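Often we only need part of this transcript, such as the final answer or the intermediate tool results. A sketch of that kind of post-processing, with the messages mocked as plain dicts (with real LangChain messages you would inspect message types and attributes instead):

```python
# Mocked transcript mirroring the conversation above; real message
# objects carry the same information as typed attributes.
transcript = [
    {"type": "human", "content": "What is (5 + 3) multiplied by 2?"},
    {"type": "ai", "tool_calls": [{"name": "add", "args": {"a": 5, "b": 3}}]},
    {"type": "tool", "name": "add", "content": "8"},
    {"type": "ai", "tool_calls": [{"name": "multiply", "args": {"a": 8, "b": 2}}]},
    {"type": "tool", "name": "multiply", "content": "16"},
    {"type": "ai", "content": "The result is **16**."},
]

# Collect every tool result, and grab the model's final answer.
tool_results = [(m["name"], m["content"]) for m in transcript if m["type"] == "tool"]
final_answer = transcript[-1]["content"]
print(tool_results)  # [('add', '8'), ('multiply', '16')]
print(final_answer)  # The result is **16**.
```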

Looking Ahead: Multi-Agent Workflows#

The simple loop automation we’ve seen here is the foundation for much more sophisticated patterns. In real-world applications, you might want:

  • Multiple agents that can hand off tasks to each other

  • Conditional branching based on intermediate results

  • Human-in-the-loop approval for certain actions

  • State management across complex multi-step workflows

For these advanced use cases, orchestration frameworks like LangGraph provide the structure needed to build reliable, production-ready agentic systems. LangGraph extends the agent concept with explicit state management, cycles, and controllability.

Note

The single-agent pattern covered in this recipe is powerful for many tasks. Consider LangGraph when you need coordination between multiple agents or more complex control flow.

Summary#

In this recipe, we learned how agents automate the tool calling workflow.

  • Agents automate the tool trigger → execution → result loop that we manually coded in the tool calling recipe

  • create_agent() wraps a model and tools into an agent that handles the orchestration automatically

  • Agents can make multiple tool calls iteratively to solve problems that require multi-step reasoning

  • For complex multi-agent workflows, orchestration frameworks like LangGraph provide additional structure and control