A Comprehensive Guide to Using Function Calling with LangChain
Function calling is revolutionizing AI applications, empowering large language models (LLMs) to dynamically interact with APIs, databases, and custom logic. By leveraging LangChain, developers can create intelligent agents that manage complex workflows with ease. This guide offers a deep dive into building function-calling agents using LangChain, complete with practical steps and code examples.
Introduction to LangChain
LangChain is an open-source framework designed to simplify the integration of LLMs with external tools. It provides robust support for chaining prompts, managing conversational memory, and building agents capable of function calls.
Why Choose LangChain for Function Calling?
- Modularity: Easy integration of tools, chains, and LLMs.
- Flexibility: Supports a wide range of use cases.
- Scalability: Handles evolving workflows efficiently.
- Community Support: Backed by extensive documentation and active development.
Getting Started with LangChain
Installation and Setup
- Install LangChain and Dependencies:
pip install langchain openai
- Set Up OpenAI API Key: Obtain an API key from OpenAI and set it as an environment variable:
export OPENAI_API_KEY='your_api_key'
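Alternatively, you can set the key from Python, which is convenient in notebooks (just avoid hard-coding real keys in committed code):
import os

# Set the key before creating any LangChain LLM objects
os.environ["OPENAI_API_KEY"] = "your_api_key"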
- Verify Installation:
Test that LangChain and the OpenAI client are installed correctly by running a simple script.
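For example, this minimal check confirms both libraries work end to end (it assumes OPENAI_API_KEY is set and makes one small paid API call):
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()
print(llm.predict("Say hello!"))  # any short reply confirms the setup works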
Key Concepts
- Tools: Represent functions the agent can invoke.
- Chains: Sequential workflows for combining multiple prompts or tools.
- Agents: LLMs that interact with tools based on user queries.
Building Your First Function Calling Agent
We’ll create an agent to fetch weather data and expand its functionality with additional tools.
Step 1: Define Your Functions
Start by creating the function to fetch weather data. For simplicity, we’ll use a mock function.
import requests  # used later when we switch to a real API call

def get_weather(city):
    # Replace with a real API call if needed
    return f"The weather in {city} is sunny with a high of 25°C."
Step 2: Wrap Functions as Tools
LangChain uses the Tool class to expose functions to agents.
from langchain.tools import Tool

weather_tool = Tool(
    name="get_weather",
    func=get_weather,
    description="Fetches weather information for a given city."
)
Step 3: Initialize the Agent
Set up the agent with the necessary tools and an LLM.
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
# Initialize the language model
llm = ChatOpenAI(temperature=0)
# Add tools to the agent
tools = [weather_tool]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
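The agent string here is the value of the AgentType.ZERO_SHOT_REACT_DESCRIPTION enum; importing the enum avoids typos:
from langchain.agents import AgentType

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)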
Step 4: Test the Agent
Interact with the agent to see it in action.
response = agent.run("What is the weather in New York?")
print(response)
Expected output:
The weather in New York is sunny with a high of 25°C.
Advanced Use Case: Chaining Multiple Tools
Next, let’s enhance the agent by adding a tool to detect a user’s location based on their IP address.
Step 5: Add More Tools
Define an additional function to determine a user’s location:
def get_location(ip):
    # Mock function for IP-to-location mapping
    return "New York"

location_tool = Tool(
    name="get_location",
    func=get_location,
    description="Fetches location based on IP address."
)
Step 6: Update the Agent
Include the new tool and update the agent configuration:
tools = [weather_tool, location_tool]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
response = agent.run("What is the weather at my location with IP 123.45.67.89?")
print(response)
Adding Contextual Memory to Agents
To build more conversational agents, integrate memory to retain context across interactions.
Step 7: Add Conversational Memory
LangChain supports short-term memory using ConversationBufferMemory.
from langchain.memory import ConversationBufferMemory
# Initialize memory
memory = ConversationBufferMemory(memory_key="chat_history")
# Update the agent to include memory; the conversational agent type is designed to use chat history
agent = initialize_agent(tools, llm, agent="conversational-react-description", memory=memory, verbose=True)
# Test the agent with memory
response = agent.run("What is the weather in Paris?")
print(response)
response = agent.run("Can you remind me what I just asked?")
print(response)
Output with Memory:
The weather in Paris is sunny with a high of 22°C.
You just asked about the weather in Paris.
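ConversationBufferMemory stores the full transcript, so the prompt grows with every turn. For longer sessions, a windowed variant keeps only the most recent exchanges:
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last 5 exchanges in the prompt
memory = ConversationBufferWindowMemory(memory_key="chat_history", k=5)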
Handling API Calls and Error Management
In real-world applications, you'll integrate with APIs. Let's refactor the get_weather function to call a real weather API and handle errors gracefully.
def get_weather(city):
    try:
        response = requests.get(
            f"https://api.weatherapi.com/v1/current.json?key=your_api_key&q={city}",
            timeout=10,  # avoid hanging the agent on a slow network
        )
        response.raise_for_status()  # surface HTTP errors instead of failing on bad JSON
        data = response.json()
        return (
            f"The weather in {city} is {data['current']['condition']['text']} "
            f"with a temperature of {data['current']['temp_c']}°C."
        )
    except Exception as e:
        return f"An error occurred: {str(e)}"
Scaling with Custom Chains
LangChain allows you to create custom chains for more complex workflows.
Step 8: Build a Custom Chain
Define a custom chain that combines multiple tools and logic. Note that a plain LLMChain only formats a prompt and calls the LLM; it cannot invoke tools by itself. To chain the two functions deterministically, wrap each one in a TransformChain and compose them with SimpleSequentialChain:
from langchain.chains import SimpleSequentialChain, TransformChain

# Wrap each function as a single-input, single-output chain
location_chain = TransformChain(
    input_variables=["ip"], output_variables=["city"],
    transform=lambda inputs: {"city": get_location(inputs["ip"])},
)
weather_chain = TransformChain(
    input_variables=["city"], output_variables=["weather"],
    transform=lambda inputs: {"weather": get_weather(inputs["city"])},
)

# Pipe the detected city into the weather lookup
custom_chain = SimpleSequentialChain(chains=[location_chain, weather_chain])

# Execute the chain
response = custom_chain.run("123.45.67.89")
print(response)
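If you want the agent to use this pipeline, you can expose the whole chain back to it as a single tool (weather_by_ip is an illustrative name):
chain_tool = Tool(
    name="weather_by_ip",
    func=custom_chain.run,
    description="Given an IP address, returns the weather at that location."
)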
Conclusion
LangChain’s function-calling capabilities empower developers to build powerful, scalable AI agents. From chaining tools to integrating memory and handling API calls, LangChain simplifies complex workflows for diverse applications.
Cohorte Team
January 6, 2025