Skip to content

Instantly share code, notes, and snippets.

Show Gist options
  • Save roychri/4dc963ccae6c8c37eea6a786193229e6 to your computer and use it in GitHub Desktop.
Save roychri/4dc963ccae6c8c37eea6a786193229e6 to your computer and use it in GitHub Desktop.
Google Cloud Big Query AI Agent for LLM (Claude)
BUILD WITH CLAUDE
Tool use (function calling)
Claude is capable of interacting with external client-side tools and functions, allowing you to equip Claude with your own custom tools to perform a wider variety of tasks.
Learn everything you need to master tool use with Claude via our new comprehensive tool use course! Please continue to share your ideas and suggestions using this form.
Here’s an example of how to provide tools to Claude using the Messages API:
Shell
Python
import anthropic
client = anthropic.Anthropic()
response = client.messages.create(
model="claude-3-5-sonnet-20240620",
max_tokens=1024,
tools=[
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
}
},
"required": ["location"],
},
}
],
messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
)
print(response)
How tool use works
Integrate external tools with Claude in these steps:
1
Provide Claude with tools and a user prompt
Define tools with names, descriptions, and input schemas in your API request.
Include a user prompt that might require these tools, e.g., “What’s the weather in San Francisco?”
2
Claude decides to use a tool
Claude assesses if any tools can help with the user’s query.
If yes, Claude constructs a properly formatted tool use request.
The API response has a stop_reason of tool_use, signaling Claude’s intent.
3
Extract tool input, run code, and return results
On your end, extract the tool name and input from Claude’s request.
Execute the actual tool code client-side.
Continue the conversation with a new user message containing a tool_result content block.
4
Claude uses tool result to formulate a response
Claude analyzes the tool results to craft its final response to the original user prompt.
Note: Steps 3 and 4 are optional. For some workflows, Claude’s tool use request (step 2) might be all you need, without sending results back to Claude.
All tools are user-provided
It’s important to note that Claude does not have access to any built-in server-side tools. All tools must be explicitly provided by you, the user, in each API request. This gives you full control and flexibility over the tools Claude can use.
How to implement tool use
Choosing a model
Generally, use Claude 3 Opus for complex tools and ambiguous queries; it handles multiple tools better and seeks clarification when needed.
Use Haiku for straightforward tools, but note it may infer missing parameters.
Specifying tools
Tools are specified in the tools top-level parameter of the API request. Each tool definition includes:
Parameter Description
name The name of the tool. Must match the regex ^[a-zA-Z0-9_-]{1,64}$.
description A detailed plaintext description of what the tool does, when it should be used, and how it behaves.
input_schema A JSON Schema object defining the expected parameters for the tool.
Example simple tool definition
JSON
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either 'celsius' or 'fahrenheit'"
}
},
"required": ["location"]
}
}
This tool, named get_weather, expects an input object with a required location string and an optional unit string that must be either “celsius” or “fahrenheit”.
Best practices for tool definitions
To get the best performance out of Claude when using tools, follow these guidelines:
Provide extremely detailed descriptions. This is by far the most important factor in tool performance. Your descriptions should explain every detail about the tool, including:
What the tool does
When it should be used (and when it shouldn’t)
What each parameter means and how it affects the tool’s behavior
Any important caveats or limitations, such as what information the tool does not return if the tool name is unclear. The more context you can give Claude about your tools, the better it will be at deciding when and how to use them. Aim for at least 3-4 sentences per tool description, more if the tool is complex.
Prioritize descriptions over examples. While you can include examples of how to use a tool in its description or in the accompanying prompt, this is less important than having a clear and comprehensive explanation of the tool’s purpose and parameters. Only add examples after you’ve fully fleshed out the description.
Example of a good tool description
JSON
{
"name": "get_stock_price",
"description": "Retrieves the current stock price for a given ticker symbol. The ticker symbol must be a valid symbol for a publicly traded company on a major US stock exchange like NYSE or NASDAQ. The tool will return the latest trade price in USD. It should be used when the user asks about the current or most recent price of a specific stock. It will not provide any other information about the stock or company.",
"input_schema": {
"type": "object",
"properties": {
"ticker": {
"type": "string",
"description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
}
},
"required": ["ticker"]
}
}
Example poor tool description
JSON
{
"name": "get_stock_price",
"description": "Gets the stock price for a ticker.",
"input_schema": {
"type": "object",
"properties": {
"ticker": {
"type": "string"
}
},
"required": ["ticker"]
}
}
The good description clearly explains what the tool does, when to use it, what data it returns, and what the ticker parameter means. The poor description is too brief and leaves Claude with many open questions about the tool’s behavior and usage.
Controlling Claude’s output
Forcing tool use
In some cases, you may want Claude to use a specific tool to answer the user’s question, even if Claude thinks it can provide an answer without using a tool. You can do this by specifying the tool in the tool_choice field like so:
tool_choice = {"type": "tool", "name": "get_weather"}
When working with the tool_choice parameter, we have three possible options:
auto allows Claude to decide whether to call any provided tools or not. This is the default value.
any tells Claude that it must use one of the provided tools, but doesn’t force a particular tool.
tool allows us to force Claude to always use a particular tool.
This diagram illustrates how each option works:
Note that when you have tool_choice as any or tool, we will prefill the assistant message to force a tool to be used. This means that the models will not emit a chain-of-thought text content block before tool_use content blocks, even if explicitly asked to do so.
Our testing has shown that this should not reduce performance. If you would like to keep chain-of-thought (particularly with Opus) while still requesting that the model use a specific tool, you can use {"type": "auto"} for tool_choice (the default) and add explicit instructions in a user message. For example: What's the weather like in London? Use the get_weather tool in your response.
JSON output
Tools do not necessarily need to be client-side functions — you can use tools anytime you want the model to return JSON output that follows a provided schema. For example, you might use a record_summary tool with a particular schema. See tool use examples for a full working example.
Chain of thought
When using tools, Claude will often show its “chain of thought”, i.e. the step-by-step reasoning it uses to break down the problem and decide which tools to use. The Claude 3 Opus model will do this if tool_choice is set to auto (this is the default value, see Forcing tool use), and Sonnet and Haiku can be prompted into doing it.
For example, given the prompt “What’s the weather like in San Francisco right now, and what time is it there?”, Claude might respond with:
JSON
{
"role": "assistant",
"content": [
{
"type": "text",
"text": "<thinking>To answer this question, I will: 1. Use the get_weather tool to get the current weather in San Francisco. 2. Use the get_time tool to get the current time in the America/Los_Angeles timezone, which covers San Francisco, CA.</thinking>"
},
{
"type": "tool_use",
"id": "toolu_01A09q90qw90lq917835lq9",
"name": "get_weather",
"input": {"location": "San Francisco, CA"}
}
]
}
This chain of thought gives insight into Claude’s reasoning process and can help you debug unexpected behavior.
With the Claude 3 Sonnet model, chain of thought is less common by default, but you can prompt Claude to show its reasoning by adding something like "Before answering, explain your reasoning step-by-step in tags." to the user message or system prompt.
It’s important to note that while the <thinking> tags are a common convention Claude uses to denote its chain of thought, the exact format (such as what this XML tag is named) may change over time. Your code should treat the chain of thought like any other assistant-generated text, and not rely on the presence or specific formatting of the <thinking> tags.
Handling tool use and tool result content blocks
When Claude decides to use one of the tools you’ve provided, it will return a response with a stop_reason of tool_use and one or more tool_use content blocks in the API response that include:
id: A unique identifier for this particular tool use block. This will be used to match up the tool results later.
name: The name of the tool being used.
input: An object containing the input being passed to the tool, conforming to the tool’s input_schema.
Example API response with a `tool_use` content block
JSON
{
"id": "msg_01Aq9w938a90dw8q",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "tool_use",
"role": "assistant",
"content": [
{
"type": "text",
"text": "<thinking>I need to use the get_weather, and the user wants SF, which is likely San Francisco, CA.</thinking>"
},
{
"type": "tool_use",
"id": "toolu_01A09q90qw90lq917835lq9",
"name": "get_weather",
"input": {"location": "San Francisco, CA", "unit": "celsius"}
}
]
}
When you receive a tool use response, you should:
Extract the name, id, and input from the tool_use block.
Run the actual tool in your codebase corresponding to that tool name, passing in the tool input.
[optional] Continue the conversation by sending a new message with the role of user, and a content block containing the tool_result type and the following information:
tool_use_id: The id of the tool use request this is a result for.
content: The result of the tool, as a string (e.g. "content": "15 degrees") or list of nested content blocks (e.g. "content": [{"type": "text", "text": "15 degrees"}]). These content blocks can use the text or image types.
is_error (optional): Set to true if the tool execution resulted in an error.
Example of successful tool result
JSON
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01A09q90qw90lq917835lq9",
"content": "15 degrees"
}
]
}
Example of tool result with images
JSON
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01A09q90qw90lq917835lq9",
"content": [
{"type": "text", "text": "15 degrees"},
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/jpeg",
"data": "/9j/4AAQSkZJRg...",
}
}
]
}
]
}
Example of empty tool result
JSON
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01A09q90qw90lq917835lq9",
}
]
}
After receiving the tool result, Claude will use that information to continue generating a response to the original user prompt.
Differences from other APIs
Unlike APIs that separate tool use or use special roles like tool or function, Anthropic’s API integrates tools directly into the user and assistant message structure.
Messages contain arrays of text, image, tool_use, and tool_result blocks. user messages include client-side content and tool_result, while assistant messages contain AI-generated content and tool_use.
Troubleshooting errors
There are a few different types of errors that can occur when using tools with Claude:
Tool execution error
If the tool itself throws an error during execution (e.g. a network error when fetching weather data), you can return the error message in the content along with "is_error": true:
JSON
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01A09q90qw90lq917835lq9",
"content": "ConnectionError: the weather service API is not available (HTTP 500)",
"is_error": true
}
]
}
Claude will then incorporate this error into its response to the user, e.g. “I’m sorry, I was unable to retrieve the current weather because the weather service API is not available. Please try again later.”
Max tokens exceeded
If Claude’s response is cut off due to hitting the max_tokens limit, and the truncated response contains an incomplete tool use block, you’ll need to retry the request with a higher max_tokens value to get the full tool use.
Invalid tool name
If Claude’s attempted use of a tool is invalid (e.g. missing required parameters), it usually means that the there wasn’t enough information for Claude to use the tool correctly. Your best bet during development is to try the request again with more-detailed description values in your tool definitions.
However, you can also continue the conversation forward with a tool_result that indicates the error, and Claude will try to use the tool again with the missing information filled in:
JSON
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01A09q90qw90lq917835lq9",
"content": "Error: Missing required 'location' parameter",
"is_error": true
}
]
}
If a tool request is invalid or missing parameters, Claude will retry 2-3 times with corrections before apologizing to the user.
<search_quality_reflection> tags
To prevent Claude from reflecting on search quality with <search_quality_reflection> tags, add “Do not reflect on the quality of the returned search results in your response” to your prompt.
Tool use examples
Here are a few code examples demonstrating various tool use patterns and techniques. For brevity’s sake, the tools are simple tools, and the tool descriptions are shorter than would be ideal to ensure best performance.
Single tool example
Shell
Python
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20240620",
"max_tokens": 1024,
"tools": [{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either \"celsius\" or \"fahrenheit\""
}
},
"required": ["location"]
}
}],
"messages": [{"role": "user", "content": "What is the weather like in San Francisco?"}]
}'
Claude will return a response similar to:
JSON
{
"id": "msg_01Aq9w938a90dw8q",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "tool_use",
"role": "assistant",
"content": [
{
"type": "text",
"text": "<thinking>I need to call the get_weather function, and the user wants SF, which is likely San Francisco, CA.</thinking>"
},
{
"type": "tool_use",
"id": "toolu_01A09q90qw90lq917835lq9",
"name": "get_weather",
"input": {"location": "San Francisco, CA", "unit": "celsius"}
}
]
}
You would then need to execute the get_weather function with the provided input, and return the result in a new user message:
Shell
Python
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20240620",
"max_tokens": 1024,
"tools": [
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either \"celsius\" or \"fahrenheit\""
}
},
"required": ["location"]
}
}
],
"messages": [
{
"role": "user",
"content": "What is the weather like in San Francisco?"
},
{
"role": "assistant",
"content": [
{
"type": "text",
"text": "<thinking>I need to use get_weather, and the user wants SF, which is likely San Francisco, CA.</thinking>"
},
{
"type": "tool_use",
"id": "toolu_01A09q90qw90lq917835lq9",
"name": "get_weather",
"input": {
"location": "San Francisco, CA",
"unit": "celsius"
}
}
]
},
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01A09q90qw90lq917835lq9",
"content": "15 degrees"
}
]
}
]
}'
This will print Claude’s final response, incorporating the weather data:
JSON
{
"id": "msg_01Aq9w938a90dw8q",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "stop_sequence",
"role": "assistant",
"content": [
{
"type": "text",
"text": "The current weather in San Francisco is 15 degrees Celsius (59 degrees Fahrenheit). It's a cool day in the city by the bay!"
}
]
}
Multiple tool example
You can provide Claude with multiple tools to choose from in a single request. Here’s an example with both a get_weather and a get_time tool, along with a user query that asks for both.
Shell
Python
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20240620",
"max_tokens": 1024,
"tools": [{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either 'celsius' or 'fahrenheit'"
}
},
"required": ["location"]
}
},
{
"name": "get_time",
"description": "Get the current time in a given time zone",
"input_schema": {
"type": "object",
"properties": {
"timezone": {
"type": "string",
"description": "The IANA time zone name, e.g. America/Los_Angeles"
}
},
"required": ["timezone"]
}
}],
"messages": [{
"role": "user",
"content": "What is the weather like right now in New York? Also what time is it there?"
}]
}'
In this case, Claude will most likely try to use two separate tools, one at a time — get_weather and then get_time — in order to fully answer the user’s question. However, it will also occasionally output two tool_use blocks at once, particularly if they are not dependent on each other. You would need to execute each tool and return their results in separate tool_result blocks within a single user message.
Missing information
If the user’s prompt doesn’t include enough information to fill all the required parameters for a tool, Claude 3 Opus is much more likely to recognize that a parameter is missing and ask for it. Claude 3 Sonnet may ask, especially when prompted to think before outputting a tool request. But it may also do its best to infer a reasonable value.
For example, using the get_weather tool above, if you ask Claude “What’s the weather?” without specifying a location, Claude, particularly Claude 3 Sonnet, may make a guess about tools inputs:
JSON
{
"type": "tool_use",
"id": "toolu_01A09q90qw90lq917835lq9",
"name": "get_weather",
"input": {"location": "New York, NY", "unit": "fahrenheit"}
}
This behavior is not guaranteed, especially for more ambiguous prompts and for models less intelligent than Claude 3 Opus. If Claude 3 Opus doesn’t have enough context to fill in the required parameters, it is far more likely respond with a clarifying question instead of making a tool call.
Sequential tools
Some tasks may require calling multiple tools in sequence, using the output of one tool as the input to another. In such a case, Claude will call one tool at a time. If prompted to call the tools all at once, Claude is likely to guess parameters for tools further downstream if they are dependent on tool results for tools further upstream.
Here’s an example of using a get_location tool to get the user’s location, then passing that location to the get_weather tool:
Shell
Python
curl https://api.anthropic.com/v1/messages \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--header "content-type: application/json" \
--data \
'{
"model": "claude-3-5-sonnet-20240620",
"max_tokens": 1024,
"tools": [
{
"name": "get_location",
"description": "Get the current user location based on their IP address. This tool has no parameters or arguments.",
"input_schema": {
"type": "object",
"properties": {}
}
},
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature, either 'celsius' or 'fahrenheit'"
}
},
"required": ["location"]
}
}
],
"messages": [{
"role": "user",
"content": "What is the weather like where I am?"
}]
}'
In this case, Claude would first call the get_location tool to get the user’s location. After you return the location in a tool_result, Claude would then call get_weather with that location to get the final answer.
The full conversation might look like:
Role Content
User What’s the weather like where I am?
Assistant <thinking>To answer this, I first need to determine the user’s location using the get_location tool. Then I can pass that location to the get_weather tool to find the current weather there.</thinking>[Tool use for get_location]
User [Tool result for get_location with matching id and result of San Francisco, CA]
Assistant [Tool use for get_weather with the following input]{ “location”: “San Francisco, CA”, “unit”: “fahrenheit” }
User [Tool result for get_weather with matching id and result of “59°F (15°C), mostly cloudy”]
Assistant Based on your current location in San Francisco, CA, the weather right now is 59°F (15°C) and mostly cloudy. It’s a fairly cool and overcast day in the city. You may want to bring a light jacket if you’re heading outside.
This example demonstrates how Claude can chain together multiple tool calls to answer a question that requires gathering data from different sources. The key steps are:
Claude first realizes it needs the user’s location to answer the weather question, so it calls the get_location tool.
The user (i.e. the client code) executes the actual get_location function and returns the result “San Francisco, CA” in a tool_result block.
With the location now known, Claude proceeds to call the get_weather tool, passing in “San Francisco, CA” as the location parameter (as well as a guessed unit parameter, as unit is not a required parameter).
The user again executes the actual get_weather function with the provided arguments and returns the weather data in another tool_result block.
Finally, Claude incorporates the weather data into a natural language response to the original question.
Chain of thought tool use
By default, Claude 3 Opus is prompted to think before it answers a tool use query to best determine whether a tool is necessary, which tool to use, and the appropriate parameters. Claude 3 Sonnet and Claude 3 Haiku are prompted to try to use tools as much as possible and are more likely to call an unnecessary tool or infer missing parameters. To prompt Sonnet or Haiku to better assess the user query before making tool calls, the following prompt can be used:
Chain of thought prompt
Answer the user's request using relevant tools (if they are available). Before calling a tool, do some analysis within \<thinking>\</thinking> tags. First, think about which of the provided tools is the relevant tool to answer the user's request. Second, go through each of the required parameters of the relevant tool and determine if the user has directly provided or given enough information to infer a value. When deciding if the parameter can be inferred, carefully consider all the context to see if it supports a specific value. If all of the required parameters are present or can be reasonably inferred, close the thinking tag and proceed with the tool call. BUT, if one of the values for a required parameter is missing, DO NOT invoke the function (not even with fillers for the missing params) and instead, ask the user to provide the missing parameters. DO NOT ask for more information on optional parameters if it is not provided.
JSON mode
You can use tools to get Claude produce JSON output that follows a schema, even if you don’t have any intention of running that output through a tool or function.
When using tools in this way:
You usually want to provide a single tool
You should set tool_choice (see Forcing tool use) to instruct the model to explicitly use that tool
Remember that the model will pass the input to the tool, so the name of the tool and description should be from the model’s perspective.
The following uses a record_summary tool to describe an image following a particular format.
Shell
Python
#!/bin/bash
IMAGE_URL="https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
IMAGE_MEDIA_TYPE="image/jpeg"
IMAGE_BASE64=$(curl "$IMAGE_URL" | base64)
curl https://api.anthropic.com/v1/messages \
--header "content-type: application/json" \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header "anthropic-version: 2023-06-01" \
--data \
'{
"model": "claude-3-sonnet-20240229",
"max_tokens": 1024,
"tools": [{
"name": "record_summary",
"description": "Record summary of an image using well-structured JSON.",
"input_schema": {
"type": "object",
"properties": {
"key_colors": {
"type": "array",
"items": {
"type": "object",
"properties": {
"r": { "type": "number", "description": "red value [0.0, 1.0]" },
"g": { "type": "number", "description": "green value [0.0, 1.0]" },
"b": { "type": "number", "description": "blue value [0.0, 1.0]" },
"name": { "type": "string", "description": "Human-readable color name in snake_case, e.g. \"olive_green\" or \"turquoise\"" }
},
"required": [ "r", "g", "b", "name" ]
},
"description": "Key colors in the image. Limit to less then four."
},
"description": {
"type": "string",
"description": "Image description. One to two sentences max."
},
"estimated_year": {
"type": "integer",
"description": "Estimated year that the images was taken, if is it a photo. Only set this if the image appears to be non-fictional. Rough estimates are okay!"
}
},
"required": [ "key_colors", "description" ]
}
}],
"tool_choice": {"type": "tool", "name": "record_summary"},
"messages": [
{"role": "user", "content": [
{"type": "image", "source": {
"type": "base64",
"media_type": "'$IMAGE_MEDIA_TYPE'",
"data": "'$IMAGE_BASE64'"
}},
{"type": "text", "text": "Describe this image."}
]}
]
}'
Pricing
Tool use requests are priced the same as any other Claude API request, based on the total number of input tokens sent to the model (including in the tools parameter) and the number of output tokens generated.”
The additional tokens from tool use come from:
The tools parameter in API requests (tool names, descriptions, and schemas)
tool_use content blocks in API requests and responses
tool_result content blocks in API requests
When you use tools, we also automatically include a special system prompt for the model which enables tool use. The number of tool use tokens required for each model are listed below (excluding the additional tokens listed above):
Model Tool choice Tool use system prompt token count
Claude 3.5 Sonnet auto
any, tool 294 tokens
261 tokens
Claude 3 Opus auto
any, tool 530 tokens
281 tokens
Claude 3 Sonnet auto
any, tool 159 tokens
235 tokens
Claude 3 Haiku auto
any, tool 264 tokens
340 tokens
These token counts are added to your normal input and output tokens to calculate the total cost of a request. Refer to our models overview table for current per-model prices.
When you send a tool use prompt, just like any other API request, the response will output both input and output token counts as part of the reported usage metrics.
Creating a Customer Service Agent with Client-Side Tools
In this recipe, we'll demonstrate how to create a customer service chatbot using Claude 3 plus client-side tools. The chatbot will be able to look up customer information, retrieve order details, and cancel orders on behalf of the customer. We'll define the necessary tools and simulate synthetic responses to showcase the chatbot's capabilities.
Step 1: Set up the environment
First, let's install the required libraries and set up the Anthropic API client.
%pip install anthropic
import anthropic
client = anthropic.Client()
MODEL_NAME = "claude-3-opus-20240229"
Step 2: Define the client-side tools
Next, we'll define the client-side tools that our chatbot will use to assist customers. We'll create three tools: get_customer_info, get_order_details, and cancel_order.
tools = [
{
"name": "get_customer_info",
"description": "Retrieves customer information based on their customer ID. Returns the customer's name, email, and phone number.",
"input_schema": {
"type": "object",
"properties": {
"customer_id": {
"type": "string",
"description": "The unique identifier for the customer."
}
},
"required": ["customer_id"]
}
},
{
"name": "get_order_details",
"description": "Retrieves the details of a specific order based on the order ID. Returns the order ID, product name, quantity, price, and order status.",
"input_schema": {
"type": "object",
"properties": {
"order_id": {
"type": "string",
"description": "The unique identifier for the order."
}
},
"required": ["order_id"]
}
},
{
"name": "cancel_order",
"description": "Cancels an order based on the provided order ID. Returns a confirmation message if the cancellation is successful.",
"input_schema": {
"type": "object",
"properties": {
"order_id": {
"type": "string",
"description": "The unique identifier for the order to be cancelled."
}
},
"required": ["order_id"]
}
}
]
Step 3: Simulate synthetic tool responses
Since we don't have real customer data or order information, we'll simulate synthetic responses for our tools. In a real-world scenario, these functions would interact with your actual customer database and order management system.
def get_customer_info(customer_id):
# Simulated customer data
customers = {
"C1": {"name": "John Doe", "email": "[email protected]", "phone": "123-456-7890"},
"C2": {"name": "Jane Smith", "email": "[email protected]", "phone": "987-654-3210"}
}
return customers.get(customer_id, "Customer not found")
def get_order_details(order_id):
# Simulated order data
orders = {
"O1": {"id": "O1", "product": "Widget A", "quantity": 2, "price": 19.99, "status": "Shipped"},
"O2": {"id": "O2", "product": "Gadget B", "quantity": 1, "price": 49.99, "status": "Processing"}
}
return orders.get(order_id, "Order not found")
def cancel_order(order_id):
# Simulated order cancellation
if order_id in ["O1", "O2"]:
return True
else:
return False
Step 4: Process tool calls and return results
We'll create a function to process the tool calls made by Claude and return the appropriate results.
def process_tool_call(tool_name, tool_input):
if tool_name == "get_customer_info":
return get_customer_info(tool_input["customer_id"])
elif tool_name == "get_order_details":
return get_order_details(tool_input["order_id"])
elif tool_name == "cancel_order":
return cancel_order(tool_input["order_id"])
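The if/elif chain above works, but as the tool list grows a table-driven dispatch keeps the router closed to modification: adding a tool becomes a one-line change. A minimal sketch of the same idea, with trivial stand-ins for the simulated helpers (the stub return values here are illustrative, not the cookbook's data):

```python
# Trivial stand-ins for the simulated helpers defined earlier.
def get_customer_info(customer_id):
    return {"name": "John Doe"} if customer_id == "C1" else "Customer not found"

def get_order_details(order_id):
    return {"id": order_id, "status": "Shipped"} if order_id == "O1" else "Order not found"

def cancel_order(order_id):
    return order_id in ["O1", "O2"]

# Map each tool name to a handler that unpacks its input.
TOOL_HANDLERS = {
    "get_customer_info": lambda tool_input: get_customer_info(tool_input["customer_id"]),
    "get_order_details": lambda tool_input: get_order_details(tool_input["order_id"]),
    "cancel_order": lambda tool_input: cancel_order(tool_input["order_id"]),
}

def process_tool_call(tool_name, tool_input):
    handler = TOOL_HANDLERS.get(tool_name)
    if handler is None:
        return f"Unknown tool: {tool_name}"
    return handler(tool_input)

print(process_tool_call("get_order_details", {"order_id": "O1"}))
```

The dict also gives you a natural place to handle unknown tool names, which the if/elif version silently returns None for.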
Step 5: Interact with the chatbot
Now, let's create a function to interact with the chatbot. We'll send a user message, process any tool calls made by Claude, and return the final response to the user.
import json
def chatbot_interaction(user_message):
print(f"\n{'='*50}\nUser Message: {user_message}\n{'='*50}")
messages = [
{"role": "user", "content": user_message}
]
response = client.messages.create(
model=MODEL_NAME,
max_tokens=4096,
tools=tools,
messages=messages
)
print(f"\nInitial Response:")
print(f"Stop Reason: {response.stop_reason}")
print(f"Content: {response.content}")
while response.stop_reason == "tool_use":
tool_use = next(block for block in response.content if block.type == "tool_use")
tool_name = tool_use.name
tool_input = tool_use.input
print(f"\nTool Used: {tool_name}")
print(f"Tool Input:")
print(json.dumps(tool_input, indent=2))
tool_result = process_tool_call(tool_name, tool_input)
print(f"\nTool Result:")
print(json.dumps(tool_result, indent=2))
messages = [
{"role": "user", "content": user_message},
{"role": "assistant", "content": response.content},
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": tool_use.id,
"content": str(tool_result),
}
],
},
]
response = client.messages.create(
model=MODEL_NAME,
max_tokens=4096,
tools=tools,
messages=messages
)
print(f"\nResponse:")
print(f"Stop Reason: {response.stop_reason}")
print(f"Content: {response.content}")
final_response = next(
(block.text for block in response.content if hasattr(block, "text")),
None,
)
print(f"\nFinal Response: {final_response}")
return final_response
Extracting Structured JSON using Claude and Tool Use
In this cookbook, we'll explore various examples of using Claude and the tool use feature to extract structured JSON data from different types of input. We'll define custom tools that prompt Claude to generate well-structured JSON output for tasks such as summarization, entity extraction, sentiment analysis, and more.
If you want to get structured JSON data without using tools, take a look at our "How to enable JSON mode" cookbook.
Set up the environment
First, let's install the required libraries and set up the Anthropic API client.
%pip install anthropic requests beautifulsoup4
from anthropic import Anthropic
import requests
from bs4 import BeautifulSoup
import json
client = Anthropic()
MODEL_NAME = "claude-3-haiku-20240307"
Example 1: Article Summarization
In this example, we'll use Claude to generate a JSON summary of an article, including fields for the author, topics, summary, coherence score, persuasion score, and a counterpoint.
tools = [
{
"name": "print_summary",
"description": "Prints a summary of the article.",
"input_schema": {
"type": "object",
"properties": {
"author": {"type": "string", "description": "Name of the article author"},
"topics": {
"type": "array",
"items": {"type": "string"},
"description": 'Array of topics, e.g. ["tech", "politics"]. Should be as specific as possible, and can overlap.'
},
"summary": {"type": "string", "description": "Summary of the article. One or two paragraphs max."},
"coherence": {"type": "integer", "description": "Coherence of the article's key points, 0-100 (inclusive)"},
"persuasion": {"type": "number", "description": "Article's persuasion score, 0.0-1.0 (inclusive)"}
},
"required": ['author', 'topics', 'summary', 'coherence', 'persuasion', 'counterpoint']
}
}
]
url = "https://www.anthropic.com/news/third-party-testing"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
article = " ".join([p.text for p in soup.find_all("p")])
query = f"""
<article>
{article}
</article>
Use the `print_summary` tool.
"""
response = client.messages.create(
model=MODEL_NAME,
max_tokens=4096,
tools=tools,
messages=[{"role": "user", "content": query}]
)
json_summary = None
for content in response.content:
if content.type == "tool_use" and content.name == "print_summary":
json_summary = content.input
break
if json_summary:
print("JSON Summary:")
print(json.dumps(json_summary, indent=2))
else:
print("No JSON summary found in the response.")
JSON Summary:
{
"author": "Anthropic",
"topics": [
"AI policy",
"AI safety",
"third-party testing"
],
"summary": "The article argues that the AI sector needs effective third-party testing for frontier AI systems to avoid societal harm, whether deliberate or accidental. It discusses what third-party testing looks like, why it's needed, and the research Anthropic has done to arrive at this policy position. The article states that such a testing regime is necessary because frontier AI systems like large-scale generative models don't fit neatly into use-case and sector-specific frameworks, and can pose risks of serious misuse or AI-caused accidents. Though Anthropic and other organizations have implemented self-governance systems, the article argues that industry-wide third-party testing is ultimately needed to be broadly trusted. The article outlines key components of an effective third-party testing regime, including identifying national security risks, and discusses how it could be accomplished by a diverse ecosystem of organizations. Anthropic plans to advocate for greater funding and public sector infrastructure for AI testing and evaluation, as well as developing tests for specific capabilities.",
"coherence": 90,
"persuasion": 0.8
}
Example 2: Named Entity Recognition
In this example, we'll use Claude to perform named entity recognition on a given text and return the entities in a structured JSON format.
tools = [
{
"name": "print_entities",
"description": "Prints extract named entities.",
"input_schema": {
"type": "object",
"properties": {
"entities": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string", "description": "The extracted entity name."},
"type": {"type": "string", "description": "The entity type (e.g., PERSON, ORGANIZATION, LOCATION)."},
"context": {"type": "string", "description": "The context in which the entity appears in the text."}
},
"required": ["name", "type", "context"]
}
}
},
"required": ["entities"]
}
}
]
text = "John works at Google in New York. He met with Sarah, the CEO of Acme Inc., last week in San Francisco."
query = f"""
<document>
{text}
</document>
Use the print_entities tool.
"""
response = client.messages.create(
model=MODEL_NAME,
max_tokens=4096,
tools=tools,
messages=[{"role": "user", "content": query}]
)
json_entities = None
for content in response.content:
if content.type == "tool_use" and content.name == "print_entities":
json_entities = content.input
break
if json_entities:
print("Extracted Entities (JSON):")
print(json_entities)
else:
print("No entities found in the response.")
Extracted Entities (JSON):
{'entities': [{'name': 'John', 'type': 'PERSON', 'context': 'John works at Google in New York.'}, {'name': 'Google', 'type': 'ORGANIZATION', 'context': 'John works at Google in New York.'}, {'name': 'New York', 'type': 'LOCATION', 'context': 'John works at Google in New York.'}, {'name': 'Sarah', 'type': 'PERSON', 'context': 'He met with Sarah, the CEO of Acme Inc., last week in San Francisco.'}, {'name': 'Acme Inc.', 'type': 'ORGANIZATION', 'context': 'He met with Sarah, the CEO of Acme Inc., last week in San Francisco.'}, {'name': 'San Francisco', 'type': 'LOCATION', 'context': 'He met with Sarah, the CEO of Acme Inc., last week in San Francisco.'}]}
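Once the tool input is extracted, post-processing it is ordinary dictionary work. For instance, grouping the entities by type takes a few lines with collections.defaultdict; here json_entities is filled in with the sample output above:

```python
from collections import defaultdict

# Sample tool output, as returned by Claude above.
json_entities = {"entities": [
    {"name": "John", "type": "PERSON", "context": "John works at Google in New York."},
    {"name": "Google", "type": "ORGANIZATION", "context": "John works at Google in New York."},
    {"name": "New York", "type": "LOCATION", "context": "John works at Google in New York."},
    {"name": "Sarah", "type": "PERSON", "context": "He met with Sarah, the CEO of Acme Inc., last week in San Francisco."},
    {"name": "Acme Inc.", "type": "ORGANIZATION", "context": "He met with Sarah, the CEO of Acme Inc., last week in San Francisco."},
    {"name": "San Francisco", "type": "LOCATION", "context": "He met with Sarah, the CEO of Acme Inc., last week in San Francisco."},
]}

# Bucket entity names under their type.
by_type = defaultdict(list)
for entity in json_entities["entities"]:
    by_type[entity["type"]].append(entity["name"])

print(dict(by_type))
# {'PERSON': ['John', 'Sarah'], 'ORGANIZATION': ['Google', 'Acme Inc.'], 'LOCATION': ['New York', 'San Francisco']}
```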
Example 3: Sentiment Analysis
In this example, we'll use Claude to perform sentiment analysis on a given text and return the sentiment scores in a structured JSON format.
tools = [
{
"name": "print_sentiment_scores",
"description": "Prints the sentiment scores of a given text.",
"input_schema": {
"type": "object",
"properties": {
"positive_score": {"type": "number", "description": "The positive sentiment score, ranging from 0.0 to 1.0."},
"negative_score": {"type": "number", "description": "The negative sentiment score, ranging from 0.0 to 1.0."},
"neutral_score": {"type": "number", "description": "The neutral sentiment score, ranging from 0.0 to 1.0."}
},
"required": ["positive_score", "negative_score", "neutral_score"]
}
}
]
text = "The product was okay, but the customer service was terrible. I probably won't buy from them again."
query = f"""
<text>
{text}
</text>
Use the print_sentiment_scores tool.
"""
response = client.messages.create(
model=MODEL_NAME,
max_tokens=4096,
tools=tools,
messages=[{"role": "user", "content": query}]
)
json_sentiment = None
for content in response.content:
if content.type == "tool_use" and content.name == "print_sentiment_scores":
json_sentiment = content.input
break
if json_sentiment:
print("Sentiment Analysis (JSON):")
print(json.dumps(json_sentiment, indent=2))
else:
print("No sentiment analysis found in the response.")
Sentiment Analysis (JSON):
{
"negative_score": 0.6,
"neutral_score": 0.3,
"positive_score": 0.1
}
Example 4: Text Classification
In this example, we'll use Claude to classify a given text into predefined categories and return the classification results in a structured JSON format.
tools = [
{
"name": "print_classification",
"description": "Prints the classification results.",
"input_schema": {
"type": "object",
"properties": {
"categories": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string", "description": "The category name."},
"score": {"type": "number", "description": "The classification score for the category, ranging from 0.0 to 1.0."}
},
"required": ["name", "score"]
}
}
},
"required": ["categories"]
}
}
]
text = "The new quantum computing breakthrough could revolutionize the tech industry."
query = f"""
<document>
{text}
</document>
Use the print_classification tool. The categories can be Politics, Sports, Technology, Entertainment, Business.
"""
response = client.messages.create(
model=MODEL_NAME,
max_tokens=4096,
tools=tools,
messages=[{"role": "user", "content": query}]
)
json_classification = None
for content in response.content:
if content.type == "tool_use" and content.name == "print_classification":
json_classification = content.input
break
if json_classification:
print("Text Classification (JSON):")
print(json.dumps(json_classification, indent=2))
else:
print("No text classification found in the response.")
Text Classification (JSON):
{
"categories": [
{
"name": "Politics",
"score": 0.1
},
{
"name": "Sports",
"score": 0.1
},
{
"name": "Technology",
"score": 0.7
},
{
"name": "Entertainment",
"score": 0.1
},
{
"name": "Business",
"score": 0.5
}
]
}
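Note that the per-category scores are independent and need not sum to 1 (here Technology and Business both score highly), so a simple threshold filter is a reasonable way to turn them into labels. A small sketch using the sample output above:

```python
# Sample tool output, as returned by Claude above.
json_classification = {"categories": [
    {"name": "Politics", "score": 0.1},
    {"name": "Sports", "score": 0.1},
    {"name": "Technology", "score": 0.7},
    {"name": "Entertainment", "score": 0.1},
    {"name": "Business", "score": 0.5},
]}

def labels_above(classification, threshold=0.5):
    """Return category names whose score meets the threshold, highest score first."""
    hits = [c for c in classification["categories"] if c["score"] >= threshold]
    return [c["name"] for c in sorted(hits, key=lambda c: c["score"], reverse=True)]

print(labels_above(json_classification))  # → ['Technology', 'Business']
```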
These examples demonstrate how you can use Claude and the tool use feature to extract structured JSON data for various natural language processing tasks. By defining custom tools with specific input schemas, you can guide Claude to generate well-structured JSON output that can be easily parsed and utilized in your applications.
Python Client for Google BigQuery
Querying massive datasets can be time consuming and expensive without the right hardware and infrastructure. Google BigQuery solves this problem by enabling super-fast, SQL queries against append-mostly tables, using the processing power of Google’s infrastructure.
Client Library Documentation
Product Documentation
Quick Start
In order to use this library, you first need to go through the following steps:
Select or create a Cloud Platform project.
Enable billing for your project.
Enable the Google Cloud BigQuery API.
Setup Authentication.
Installation
Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.
With virtualenv, it’s possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.
Supported Python Versions
Python >= 3.7
Unsupported Python Versions
Python == 2.7, Python == 3.5, Python == 3.6.
The last version of this library compatible with Python 2.7 and 3.5 is google-cloud-bigquery==1.28.0.
Mac/Linux
pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-cloud-bigquery
Windows
pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-cloud-bigquery
Example Usage
Perform a query
from google.cloud import bigquery
client = bigquery.Client()
# Perform a query.
QUERY = (
'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '
'WHERE state = "TX" '
'LIMIT 100')
query_job = client.query(QUERY) # API request
rows = query_job.result() # Waits for query to finish
for row in rows:
print(row.name)
Instrumenting With OpenTelemetry
This application uses OpenTelemetry to output tracing data from API calls to BigQuery. To enable OpenTelemetry tracing in the BigQuery client the following PyPI packages need to be installed:
pip install google-cloud-bigquery[opentelemetry] opentelemetry-exporter-gcp-trace
After installation, OpenTelemetry can be used in the BigQuery client and in BigQuery jobs. First, however, an exporter must be configured to determine where the trace data will be sent. For example, to export traces to Google Cloud Trace:
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(tracer_provider)
You are a Python software developer with extensive experience writing software that uses GenAI such as Claude.
When designing software, it’s important to think about scalability and maintainability as the application grows.
Single Responsibility Principle (SRP)
The Single Responsibility Principle advocates for a class or module to have only one reason to change. In simpler terms, it should do one thing and do it well. By adhering to SRP, your code becomes more modular, making it easier to understand and maintain.
Open-Closed Principle (OCP)
The Open-Closed Principle states that software entities should be open for extension but closed for modification. This means that you should be able to extend a class’s behavior without modifying it.
Liskov Substitution Principle (LSP)
The Liskov Substitution Principle states that objects in a program should be replaceable with instances of their subtypes without altering the correctness of the program. In other words, a subclass should be able to replace its parent class without breaking the code.
Interface Segregation Principle (ISP)
The Interface Segregation Principle states that clients should not be forced to depend on methods they do not use. This means that you should not have to implement methods that you do not need.
Dependency Inversion Principle (DIP)
The Dependency Inversion Principle states that high-level modules should not depend on low-level modules, but both should depend on abstractions. This means that you should not have to change your code when you change the implementation of a module.
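As a concrete sketch of DIP in this context, a high-level chatbot can depend on an abstract completion interface rather than a specific SDK, so swapping providers never touches the chatbot itself. The class and method names below are illustrative, not part of any real library:

```python
from abc import ABC, abstractmethod

class CompletionClient(ABC):
    """Abstraction that both the chatbot and concrete clients depend on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoClient(CompletionClient):
    """A stand-in implementation; a real one might wrap the Anthropic SDK."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class Chatbot:
    # High-level module: depends on the abstraction, not on any concrete client.
    def __init__(self, client: CompletionClient):
        self.client = client

    def ask(self, question: str) -> str:
        return self.client.complete(question)

bot = Chatbot(EchoClient())
print(bot.ask("hello"))  # → echo: hello
```

This also makes the chatbot trivially testable: unit tests can inject a fake client instead of calling a live API.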
Using a Calculator Tool with Claude
In this recipe, we'll demonstrate how to provide Claude with a simple calculator tool that it can use to perform arithmetic operations based on user input. We'll define the calculator tool and show how Claude can interact with it to solve mathematical problems.
Step 1: Set up the environment
First, let's install the required libraries and set up the Anthropic API client.
%pip install anthropic
from anthropic import Anthropic
client = Anthropic()
MODEL_NAME = "claude-3-opus-20240229"
Step 2: Define the calculator tool
We'll define a simple calculator tool that can perform basic arithmetic operations. The tool will take a mathematical expression as input and return the result.
Note that we call eval on the model's expression. Calling eval on untrusted input is unsafe and should be avoided in production code; we use it here purely for demonstration.
import re
def calculate(expression):
# Remove any non-digit or non-operator characters from the expression
expression = re.sub(r'[^0-9+\-*/().]', '', expression)
try:
# Evaluate the expression using the built-in eval() function
result = eval(expression)
return str(result)
except (SyntaxError, ZeroDivisionError, NameError, TypeError, OverflowError):
return "Error: Invalid expression"
tools = [
{
"name": "calculator",
"description": "A simple calculator that performs basic arithmetic operations.",
"input_schema": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "The mathematical expression to evaluate (e.g., '2 + 3 * 4')."
}
},
"required": ["expression"]
}
}
]
In this example, we define a calculate function that takes a mathematical expression as input, removes any non-digit or non-operator characters using a regular expression, and then evaluates the expression using the built-in eval() function. If the evaluation is successful, the result is returned as a string. If an error occurs during evaluation, an error message is returned.
We then define the calculator tool with an input schema that expects a single expression property of type string.
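If you want to avoid eval entirely, one common alternative is to parse the expression with the standard-library ast module and evaluate only a whitelist of arithmetic nodes. This is a sketch of that technique, not the cookbook's code:

```python
import ast
import operator

# Whitelist of permitted operators.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg, ast.UAdd: operator.pos,
}

def safe_calculate(expression):
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("Unsupported expression")
    try:
        return str(_eval(ast.parse(expression, mode="eval").body))
    except (SyntaxError, ValueError, ZeroDivisionError):
        return "Error: Invalid expression"

print(safe_calculate("2 + 3 * 4"))         # → 14
print(safe_calculate("__import__('os')"))  # → Error: Invalid expression
```

Anything outside the whitelist (names, calls, attribute access) raises and is reported as an invalid expression, so the model cannot smuggle in arbitrary code.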
Step 3: Interact with Claude
Now, let's see how Claude can interact with the calculator tool to solve mathematical problems.
def process_tool_call(tool_name, tool_input):
if tool_name == "calculator":
return calculate(tool_input["expression"])
def chat_with_claude(user_message):
print(f"\n{'='*50}\nUser Message: {user_message}\n{'='*50}")
message = client.messages.create(
model=MODEL_NAME,
max_tokens=4096,
messages=[{"role": "user", "content": user_message}],
tools=tools,
)
print(f"\nInitial Response:")
print(f"Stop Reason: {message.stop_reason}")
print(f"Content: {message.content}")
if message.stop_reason == "tool_use":
tool_use = next(block for block in message.content if block.type == "tool_use")
tool_name = tool_use.name
tool_input = tool_use.input
print(f"\nTool Used: {tool_name}")
print(f"Tool Input: {tool_input}")
tool_result = process_tool_call(tool_name, tool_input)
print(f"Tool Result: {tool_result}")
response = client.messages.create(
model=MODEL_NAME,
max_tokens=4096,
messages=[
{"role": "user", "content": user_message},
{"role": "assistant", "content": message.content},
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": tool_use.id,
"content": tool_result,
}
],
},
],
tools=tools,
)
else:
response = message
final_response = next(
(block.text for block in response.content if hasattr(block, "text")),
None,
)
print(response.content)
print(f"\nFinal Response: {final_response}")
return final_response