
Alibaba Cloud Model Studio: Function calling

Last Updated:May 08, 2025

Large language models (LLMs) may perform poorly with real-time issues or mathematical calculations. Use the function calling feature to equip LLMs with tools to interact with the outside world.

Supported models

The following models are supported:

QwQ and Qwen3 in Thinking mode also support function calling, but their usage differs from the models above; see Function calling instead.
Currently, Qwen multimodal models are not supported.

We recommend Qwen-Plus, which offers a balanced combination of performance, speed, and cost.

  • If you have high requirements for response speed and cost, we recommend the commercial Qwen-Turbo and the open-source Qwen2.5 models with small parameter sizes.

  • If you have high requirements for response accuracy, we recommend the commercial Qwen-Max and the open-source Qwen2.5 models with larger parameter sizes.

Overview

If you directly ask the Qwen API "What is the latest news from Alibaba Cloud?", it cannot answer accurately:

I cannot provide real-time information because my data is only updated until 2021.

A human can help the LLM through these steps:

  1. Choose a tool

    To get information about real-time news, open a browser.

  2. Extract parameters

    Based on the query "What is the latest news from Alibaba Cloud", input "Alibaba Cloud news" in the browser's input box.

  3. Run the tool

    The browser returns various web pages, including "Alibaba Cloud Named a Leader in Gartner® Magic Quadrant™ for Cloud Database Management Systems for Fifth Consecutive Year."

  4. Provide the tool's output to the model

    Include the web page content in the prompt for Qwen: "Here is the information: Alibaba Cloud Named a Leader in Gartner® Magic Quadrant™ for Cloud Database Management Systems for Fifth Consecutive Year ... Please summarize and answer: What is the latest news from Alibaba Cloud". With adequate reference information, the model can provide a relevant response:

The latest news from Alibaba Cloud is that it has been named a Leader in the Gartner® Magic Quadrant™ for Cloud Database Management Systems for the fifth consecutive year. 
...
This recognition highlights Alibaba Cloud's continued excellence and leadership in providing robust and innovative cloud database solutions.

At this point, the model has managed to answer questions regarding real-time news. However, this process necessitates manual intervention, such as tool selection, parameter extraction, and tool execution.

The function calling feature automates this process. After the model receives a question, it automatically selects a tool, extracts parameters, runs the tool, and summarizes the output.

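Under the hood, the automated flow mirrors the manual steps: parse the model's chosen tool and parameters, run the tool locally, then return the output. As a minimal local sketch (the mock weather tool and the tool-call dict below are hypothetical, but the dict mirrors the shape of a tool call in the OpenAI-compatible API):

```python
import json

# Hypothetical mock tool, mirroring the manual steps above
def get_current_weather(arguments):
    return f"{arguments['location']} is sunny."

TOOLS = {"get_current_weather": get_current_weather}

# Suppose the model has already selected a tool and extracted parameters
tool_call = {"name": "get_current_weather", "arguments": '{"location": "Hangzhou"}'}

# Run the tool locally; its output would then be sent back to the model to summarize
output = TOOLS[tool_call["name"]](json.loads(tool_call["arguments"]))
print(output)
```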

The following chart shows how function calling works:

(Image: function calling workflow)

Prerequisites

You must first obtain an API key and set it as an environment variable. To use the OpenAI SDK or DashScope SDK, you must install the SDK. If you are using a sub-workspace, ensure that the Super Admin has authorized the model for the sub-workspace.

How to use

This section details the steps for function calling using the OpenAI SDK, with weather query and time query as examples.

If you are using the DashScope SDK or you want to see the full code, click the link in the following table.

  • DashScope: Python SDK, Java SDK, HTTP

  • OpenAI: Python SDK, Node.js SDK, HTTP

1. Define tools

Tools serve as the interface between the LLM and the external world. To implement function calling, you must first define your tools.

1.1. Define tool functions

Start by defining two tool functions: the weather query tool and the time query tool.

  • Weather query tool

    The weather query tool receives the arguments parameter in the format of {"location": "the location to query"}. The tool outputs a string in the format of "{location} is {weather}.".

    In this topic, the weather query tool is a mock function that simply selects from sunny, cloudy, or rainy at random. In practice, you can replace it with actual weather services.
  • Time query tool

    The time query tool requires no input parameters and outputs a string: "Current time: {queried time}.".

    If you are using Node.js, use the following command to install the tool package date-fns first:
    npm install date-fns
## Step 1: Define tool functions

# Import the random module for the mock weather tool
import random
from datetime import datetime

# Simulate weather query tool. Example return: "Beijing is rainy."
def get_current_weather(arguments):
    # Define a list of possible weather conditions
    weather_conditions = ["sunny", "cloudy", "rainy"]
    # Randomly select a weather condition
    random_weather = random.choice(weather_conditions)
    # Extract location information from JSON
    location = arguments["location"]
    # Return formatted weather information
    return f"{location} is {random_weather}."

# Tool to query current time. Example return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
    # Get current date and time
    current_datetime = datetime.now()
    # Format current date and time
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # Return formatted current time
    return f"Current time: {formatted_time}."

# Test the tool functions and print their output; you can remove the following four lines of test code when running subsequent steps
print("Testing tool output:")
print(get_current_weather({"location": "Shanghai"}))
print(get_current_time())
print("\n")
// Step 1: Define tool functions

// Import time query tool
import { format } from 'date-fns';

function getCurrentWeather(args) {
    // Define a list of possible weather conditions
    const weatherConditions = ["sunny", "cloudy", "rainy"];
    // Randomly select a weather condition
    const randomWeather = weatherConditions[Math.floor(Math.random() * weatherConditions.length)];
    // Extract location information from JSON
    const location = args.location;
    // Return formatted weather information
    return `${location} is ${randomWeather}.`;
}

function getCurrentTime() {
    // Get current date and time
    const currentDatetime = new Date();
    // Format current date and time
    const formattedTime = format(currentDatetime, 'yyyy-MM-dd HH:mm:ss');
    // Return formatted current time
    return `Current time: ${formattedTime}.`;
}

// Test the tool functions and print their output; you can remove the following four lines of test code when running subsequent steps
console.log("Testing tool output:")
console.log(getCurrentWeather({location:"Shanghai"}));
console.log(getCurrentTime());
console.log("\n")

Sample output of the tools:

Testing tool output:
Shanghai is cloudy.
Current time: 2025-01-08 20:21:45.

1.2. Create tools array

To enable accurate tool selection by the LLM, you need to provide tool information in the JSON format, which includes the tool's purpose, scenario, and input parameters.

  • The type field is fixed as "function".

  • The function field is of Object type.

    • name: The name of the custom tool. We recommend that you use the same name as the tool function name, such as get_current_weather or get_current_time.

    • description: Describes the purpose of the tool. This helps the LLM decide whether to use the tool.

    • parameters: Describes the input parameters as an Object. This helps the LLM to extract the input parameters. If no input parameters are needed, the parameters field can be omitted.

      • type: Fixed as "object".

      • properties: Describes each input parameter's name, data type, and purpose as an Object. Each key is a parameter name, and each value is an Object that describes that parameter's type and purpose.

      • required: Lists required parameters as an Array.

The description format for the weather query tool:

{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Very useful when you want to query the weather of a specific city.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City or district, such as Beijing, Hangzhou, Yuhang District, etc.",
                }
            },
            "required": ["location"]
        }
    }
}

Before initiating function calling, pass the tool descriptions through the tools parameter. tools is a JSON array whose elements are the tool description objects. Specify the tools parameter when you initiate function calling.
# Please paste the following code after Step 1 code

## Step 2: Create tools array

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Very useful when you want to know the current time.",
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Very useful when you want to query the weather of a specific city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City or district, such as Beijing, Hangzhou, Yuhang District, etc.",
                    }
                },
                "required": ["location"]
            }
        }
    }
]
tool_name = [tool["function"]["name"] for tool in tools]
print(f"Created {len(tools)} tools: {tool_name}\n")
// Please paste the following code after Step 1 code

// Step 2: Create tools array

const tools = [
    {
      type: "function",
      function: {
        name: "get_current_time",
        description: "Very useful when you want to know the current time.",
      }
    },
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Very useful when you want to query the weather of a specific city.",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "City or district, such as Beijing, Hangzhou, Yuhang District, etc.",
            }
          },
          required: ["location"]
        }
      }
    }
  ];
  
const toolNames = tools.map(tool => tool.function.name);
console.log(`Created ${tools.length} tools: ${toolNames.join(', ')}\n`);

2. Create messages array

Just like a normal conversation with Qwen, you need to maintain a messages array to convey instructions and context to the LLM. Before you initiate function calling, this array should include both a System Message and a User Message.

System message

Although the purpose and scenario of each tool are already described in the tools array, you can highlight when to activate tools in the System Message to improve the LLM's accuracy. This example uses the following System Prompt:

You are a helpful assistant. If the user asks about the weather, please call the 'get_current_weather' function;
If the user asks about the time, please call the 'get_current_time' function.
Please answer questions in a friendly tone.

User message

User Message is used to pass in the user's question. If the user asks "Shanghai weather," the messages array would be:

# Step 3: Create messages array
# Please paste the following code after Step 2 code
messages = [
    {
        "role": "system",
        "content": """You are a helpful assistant. If the user asks about the weather, please call the 'get_current_weather' function;
     If the user asks about the time, please call the 'get_current_time' function.
     Please answer questions in a friendly tone.""",
    },
    {
        "role": "user",
        "content": "Shanghai weather"
    }
]
print("messages array created\n")
// Step 3: Create messages array
// Please paste the following code after Step 2 code
const messages = [
    {
        role: "system",
        content: "You are a helpful assistant. If the user asks about the weather, please call the 'get_current_weather' function; If the user asks about the time, please call the 'get_current_time' function. Please answer questions in a friendly tone.",
    },
    {
        role: "user",
        content: "Shanghai weather"
    }
];

console.log("messages array created\n");
You can also ask about the current time.

3. Initiate function calling

With the tools and messages arrays prepared, use the following code to initiate a function call. The LLM will determine whether to invoke a tool and provide the necessary tool function and input parameters.

# Step 4: Initiate function calling
# Please paste the following code after Step 3 code
from openai import OpenAI
import os

client = OpenAI(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

def function_calling():
    completion = client.chat.completions.create(
        # Using qwen-plus as an example, you can change the model name as needed. Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
        model="qwen-plus",
        messages=messages,
        tools=tools
    )
    print("Return object:")
    print(completion.choices[0].message.model_dump_json())
    print("\n")
    return completion

print("Initiating function calling...")
completion = function_calling()
// Step 4: Initiate function calling
// Please paste the following code after Step 3 code
import OpenAI from "openai";
const openai = new OpenAI(
    {
        // If environment variables are not configured, replace the following line with: apiKey: "sk-xxx",
        apiKey: process.env.DASHSCOPE_API_KEY,
        baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
    }
);

async function functionCalling() {
    const completion = await openai.chat.completions.create({
        model: "qwen-plus",  // Using qwen-plus as an example, you can change the model name as needed. Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
        messages: messages,
        tools: tools
    });
    console.log("Return object:");
    console.log(JSON.stringify(completion.choices[0].message));
    console.log("\n");
    return completion;
}

const completion = await functionCalling();

The LLM indicates the tool function to use in the tool_calls parameter, such as "get_current_weather", and provides the input parameters: "{\"location\": \"Shanghai\"}".

{
    "content": "",
    "refusal": null,
    "role": "assistant",
    "audio": null,
    "function_call": null,
    "tool_calls": [
        {
            "id": "call_6596dafa2a6a46f7a217da",
            "function": {
                "arguments": "{\"location\": \"Shanghai\"}",
                "name": "get_current_weather"
            },
            "type": "function",
            "index": 0
        }
    ]
}

Note that if the LLM decides no tool is required for the question, the tool_calls parameter will not be included in the response. The LLM will give its response directly in the content parameter. For example, when the input is "Hello", the tool_calls parameter is null:

{
    "content": "Hello! How can I help you? I'm particularly good at answering questions about weather or time.",
    "refusal": null,
    "role": "assistant",
    "audio": null,
    "function_call": null,
    "tool_calls": null
}
If the tool_calls parameter is not returned, your program should directly return the content without further steps.
If you want the model to select a specific tool for each call, see Forced tool calling.
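The two cases above can be handled with a simple branch. A hedged sketch, using a plain dict to stand in for completion.choices[0].message:

```python
# Sketch: branch on whether tool_calls was returned.
# `message` stands in for completion.choices[0].message (here as a plain dict).
def dispatch(message):
    if not message.get("tool_calls"):
        # No tool needed: return the content directly, no further steps
        return ("answer", message["content"])
    # A tool is needed: hand its name and raw arguments to the next step
    call = message["tool_calls"][0]["function"]
    return ("tool", call["name"], call["arguments"])

print(dispatch({"content": "Hello!", "tool_calls": None}))
print(dispatch({"content": "", "tool_calls": [
    {"function": {"name": "get_current_weather", "arguments": '{"location": "Shanghai"}'}}
]}))
```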

4. Run tool functions

After you have the tool function name and input parameters, execute the function to obtain its output.

The execution of tool functions occurs within your computing environment, not by the LLM.

Because the LLM outputs only strings, parse the tool function name and input parameters before execution.

  • Tool function

    Create a function_mapper from the tool function name to the tool function entity.

  • Input parameters

    The input parameters are JSON strings. Use a JSON parsing tool to extract the input parameter information as JSON objects.

# Step 5: Run tool functions
# Please paste the following code after Step 4 code
import json

print("Executing tool functions...")
# Get function name and input parameters from the returned result
function_name = completion.choices[0].message.tool_calls[0].function.name
arguments_string = completion.choices[0].message.tool_calls[0].function.arguments

# Use json module to parse parameter string
arguments = json.loads(arguments_string)
# Create a function mapping table
function_mapper = {
    "get_current_weather": get_current_weather,
    "get_current_time": get_current_time
}
# Get function entity
function = function_mapper[function_name]
# If input parameters are empty, call the function directly
if arguments == {}:
    function_output = function()
# Otherwise, call the function with parameters
else:
    function_output = function(arguments)
# Print the tool's output
print(f"Tool function output: {function_output}\n")
// Step 5: Run tool functions
// Please paste the following code after Step 4 code

console.log("Executing tool functions...");
const function_name = completion.choices[0].message.tool_calls[0].function.name;
const arguments_string = completion.choices[0].message.tool_calls[0].function.arguments;

// Use JSON module to parse parameter string
const args = JSON.parse(arguments_string);

// Create a function mapping table
const functionMapper = {
    "get_current_weather": getCurrentWeather,
    "get_current_time": getCurrentTime
};

// Get function entity
const func = functionMapper[function_name];

// If input parameters are empty, call the function directly
let functionOutput;
if (Object.keys(args).length === 0) {
    functionOutput = func();
} else {
    // Otherwise, call the function with parameters
    functionOutput = func(args);
}

// Print the tool's output
console.log(`Tool function output: ${functionOutput}\n`);

Sample response:

Shanghai is cloudy.

You can use the tool function output as the final response. But if you want a more human-like response, use an LLM to summarize the tool function output.

5. Use LLM to summarize tool function output (optional)

If the tool function's output is too rigid, you may want the LLM to generate a more natural response based on the tool output and the user's query. To do this, update the messages array with the tool output and submit it to the LLM for another request.

  1. Add Assistant Message

    After initiating function calling, retrieve the Assistant Message using completion.choices[0].message and then add it to the messages array.

  2. Add Tool Message

    Add the tool's output in the messages array as {"role": "tool", "content": "tool's output","tool_call_id": completion.choices[0].message.tool_calls[0].id}.

    Ensure that the tool's output is in string format.
# Step 6: Submit tool output to the LLM
# Please paste the following code after Step 5 code

messages.append(completion.choices[0].message)
print("Added assistant message")
messages.append({"role": "tool", "content": function_output, "tool_call_id": completion.choices[0].message.tool_calls[0].id})
print("Added tool message\n")
// Step 6: Submit tool output to the LLM
// Please paste the following code after Step 5 code

messages.push(completion.choices[0].message);
console.log("Added assistant message")
messages.push({
    "role": "tool",
    "content": functionOutput,
    "tool_call_id": completion.choices[0].message.tool_calls[0].id
});
console.log("Added tool message\n");

The current messages array:

[
  System Message -- Guides the model's tool calling strategy
  User Message -- User's question
  Assistant Message -- Tool calling information returned by the model
  Tool Message -- Tool's output information (if using parallel tool calling as introduced below, there may be multiple Tool Messages)
]

After updating the messages array, let the LLM summarize the output:

# Step 7: LLM summarizing tool output
# Please paste the following code after Step 6 code
print("Summarizing tool output...")
completion = function_calling()
// Step 7: LLM summarizing tool output
// Please paste the following code after Step 6 code

console.log("Summarizing tool output...");
const completion_1 = await functionCalling();

Sample response:

{
    "content": "The weather in Shanghai is cloudy. If you have any other questions, feel free to ask.",
    "refusal": null,
    "role": "assistant",
    "audio": null,
    "function_call": null,
    "tool_calls": null
}

You have now completed the entire function calling process.

Advanced usage

Stream

To improve user experience and minimize waiting time, you can use the streaming output mode to quickly retrieve the name of the needed tool function:

  • Tool function name: Only appears in the first streamed return object.

  • Input parameter information: Output in a continuous streaming manner.

Streaming output allows for more flexible handling of function calling results. Use the following code to switch function calling to streaming output mode.

def function_calling():
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=messages,
        tools=tools,
        stream=True
    )
    for chunk in completion:
        print(chunk.model_dump_json())

function_calling()
async function functionCalling() {
    const completion = await openai.chat.completions.create({
        model: "qwen-plus",
        messages: messages,
        tools: tools,
        stream: true
    });
    for await (const chunk of completion) {
        console.log(JSON.stringify(chunk))
    }
}

functionCalling();

The tool function name is retrieved from the first returned object, while the input parameter information must be concatenated before you can run the tool function.

{"id":"chatcmpl-3f8155c3-e96f-95bc-a2a6-8e48537a0893","choices":[{"delta":{"content":null,"function_call":null,"refusal":null,"role":"assistant","tool_calls":[{"index":0,"id":"call_5507104cabae4f64a0fdd3","function":{"arguments":"{\"location\":","name":"get_current_weather"},"type":"function"}]},"finish_reason":null,"index":0,"logprobs":null}],"created":1736251532,"model":"qwen-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-3f8155c3-e96f-95bc-a2a6-8e48537a0893","choices":[{"delta":{"content":null,"function_call":null,"refusal":null,"role":null,"tool_calls":[{"index":0,"id":"","function":{"arguments":" \"Shanghai\"}","name":""},"type":"function"}]},"finish_reason":null,"index":0,"logprobs":null}],"created":1736251532,"model":"qwen-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-3f8155c3-e96f-95bc-a2a6-8e48537a0893","choices":[{"delta":{"content":null,"function_call":null,"refusal":null,"role":null,"tool_calls":[{"index":0,"id":"","function":{"arguments":null,"name":null},"type":"function"}]},"finish_reason":"tool_calls","index":0,"logprobs":null}],"created":1736251532,"model":"qwen-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
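The concatenation can be sketched as follows, using plain dicts that mirror choices[0].delta.tool_calls[0] from the chunks above (the id is illustrative). The id and name appear only in the first chunk, while the arguments fragments must be joined in order:

```python
# Fragments as they arrive in the stream
chunks = [
    {"id": "call_xxx", "function": {"arguments": '{"location":', "name": "get_current_weather"}},
    {"id": "", "function": {"arguments": ' "Shanghai"}', "name": ""}},
    {"id": "", "function": {"arguments": None, "name": None}},
]

call_id, name, arguments = "", "", ""
for tc in chunks:
    call_id = call_id or (tc["id"] or "")          # id appears only in the first chunk
    name = name or (tc["function"]["name"] or "")  # name appears only in the first chunk
    arguments += tc["function"]["arguments"] or "" # concatenate argument fragments

print(name, arguments)
```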

If you need the LLM to summarize the tool function output, the Assistant Message you add must be in the following format:

{
    "content": "",
    "refusal": None,
    "role": "assistant",
    "audio": None,
    "function_call": None,
    "tool_calls": [
        {
            "id": "call_xxx",
            "function": {
                "arguments": '{"location": "Shanghai"}',
                "name": "get_current_weather",
            },
            "type": "function",
            "index": 0,
        }
    ],
}

The following elements must be replaced:

  • id

    Replace the id in tool_calls with choices[0].delta.tool_calls[0].id from the first returned object.

  • arguments

    After concatenating the input parameter information, replace arguments in tool_calls.

  • name

    Replace name in tool_calls with choices[0].delta.tool_calls[0].function.name from the first returned object.

Specify calling method

Parallel tool calling

In the preceding sections, the query "Shanghai weather" requires only a single tool call. However, a query may require multiple calls, for example "How is the weather in Beijing, Tianjin, Shanghai, and Chongqing" or "Weather in Hangzhou and the current time". By default, only one tool call is returned after initiating function calling. Take "How is the weather in Beijing, Tianjin, Shanghai, and Chongqing" as an example:

{
    "content": "",
    "refusal": null,
    "role": "assistant",
    "audio": null,
    "function_call": null,
    "tool_calls": [
        {
            "id": "call_61a2bbd82a8042289f1ff2",
            "function": {
                "arguments": "{\"location\": \"Beijing\"}",
                "name": "get_current_weather"
            },
            "type": "function",
            "index": 0
        }
    ]
}

Only Beijing is returned. To solve this problem, set parallel_tool_calls to true when initiating function calling. Then, the returned object will contain all required functions and request parameters.

def function_calling():
    completion = client.chat.completions.create(
        model="qwen-plus",  # Using qwen-plus as an example, you can change the model name as needed
        messages=messages,
        tools=tools,
        # New parameter
        parallel_tool_calls=True
    )
    print("Return object:")
    print(completion.choices[0].message.model_dump_json())
    print("\n")
    return completion

print("Initiating function calling...")
completion = function_calling()
async function functionCalling() {
    const completion = await openai.chat.completions.create({
        model: "qwen-plus",  // Using qwen-plus as an example, you can change the model name as needed
        messages: messages,
        tools: tools,
        parallel_tool_calls: true
    });
    console.log("Return object:");
    console.log(JSON.stringify(completion.choices[0].message));
    console.log("\n");
    return completion;
}

const completion = await functionCalling();

The returned tool_calls array contains the request parameters of all four cities:

{
    "content": "",
    "role": "assistant",
    "tool_calls": [
        {
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Beijing\"}"
            },
            "index": 0,
            "id": "call_c2d8a3a24c4d4929b26ae2",
            "type": "function"
        },
        {
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Tianjin\"}"
            },
            "index": 1,
            "id": "call_dc7f2f678f1944da9194cd",
            "type": "function"
        },
        {
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Shanghai\"}"
            },
            "index": 2,
            "id": "call_55c95dd718d94d9789c7c0",
            "type": "function"
        },
        {
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Chongqing\"}"
            },
            "index": 3,
            "id": "call_98a0cc7fded64b3ba88251",
            "type": "function"
        }
    ]
}
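To consume this result, run each tool call and append one Tool Message per call, each carrying the matching tool_call_id. A sketch using a deterministic stand-in for the Step 1 mock weather tool (the ids below are illustrative):

```python
import json

# Deterministic stand-in for the Step 1 mock weather tool, for illustration
def get_current_weather(arguments):
    return f"{arguments['location']} is sunny."

# Plain dicts mirroring the returned tool_calls array (ids are illustrative)
tool_calls = [
    {"id": "call_1", "function": {"name": "get_current_weather", "arguments": '{"location": "Beijing"}'}},
    {"id": "call_2", "function": {"name": "get_current_weather", "arguments": '{"location": "Tianjin"}'}},
]

# One Tool Message per call, each tagged with the matching tool_call_id
tool_messages = [
    {"role": "tool",
     "content": get_current_weather(json.loads(tc["function"]["arguments"])),
     "tool_call_id": tc["id"]}
    for tc in tool_calls
]
print(tool_messages)
```

Append all of these Tool Messages (after the Assistant Message) before the summary request, so the model can match each output to its call.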

Forced tool calling

Content generated by the LLM can be unpredictable, so the LLM may sometimes call inappropriate tools. To ensure the LLM adheres to a specific strategy for certain questions (such as using a particular tool, using at least one tool, or preventing any tool use), modify the tool_choice parameter.

The default value of tool_choice is "auto", which means the LLM decides which tool to call.
If you need to use the LLM to summarize the tool function output, omit the tool_choice parameter when initiating the summary request. Otherwise, the LLM will continue to provide calling details.
  • Force the use of a specific tool

    If you want to force the calling of a specific tool for certain questions, set the tool_choice parameter to {"type": "function", "function": {"name": "the_function_to_call"}}. This way, the LLM will not participate in tool selection, and will only provide the input parameters.

    For example, if the scenario is limited to weather-related questions, you can modify function_calling to:

    def function_calling():
        completion = client.chat.completions.create(
            model="qwen-plus",
            messages=messages,
            tools=tools,
            tool_choice={"type": "function", "function": {"name": "get_current_weather"}}
        )
        print(completion.model_dump_json())
    
    function_calling()
    async function functionCalling() {
        const response = await openai.chat.completions.create({
            model: "qwen-plus",
            messages: messages,
            tools: tools,
            tool_choice: {"type": "function", "function": {"name": "get_current_weather"}}
        });
        console.log("Return object:");
        console.log(JSON.stringify(response.choices[0].message));
        console.log("\n");
        return response;
    }
    
    const response = await functionCalling();

    No matter what question is asked, the tool function in the return object will always be get_current_weather.

    Make sure that the questions are related to the selected tool to avoid unexpected results.
  • Force no tool usage

    If you want to ensure that no tool is used no matter what the question is (the return object contains content but tool_calls is empty), you can either set the tool_choice parameter to "none" or omit the tools parameter. If you do either of these, the tool_calls parameter in the function's return will always be empty.

    For example, if the scenario always requires no tool, you can modify function_calling to:

    def function_calling():
        completion = client.chat.completions.create(
            model="qwen-plus",
            messages=messages,
            tools=tools,
            tool_choice="none"
        )
        print(completion.model_dump_json())
    
    function_calling()
    async function functionCalling() {
        const completion = await openai.chat.completions.create({
            model: "qwen-plus",
            messages: messages,
            tools: tools,
            tool_choice: "none"
        });
        console.log("Return object:");
        console.log(JSON.stringify(completion.choices[0].message));
        console.log("\n");
        return completion;
    }
    
    const completion = await functionCalling();

Billing details

To initiate function calling, you must specify the tools and messages parameters. You are billed for the tokens in the messages parameter as well as the tokens from the tool descriptions in the tools parameter.

Complete code

OpenAI

You can use the OpenAI SDK or the OpenAI-compatible HTTP method to initiate function calling with Qwen models.

Python

Sample code

from openai import OpenAI
from datetime import datetime
import json
import os
import random

client = OpenAI(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # Fill in DashScope SDK's base_url
)

# Define tool list, the model will refer to the tool's name and description when choosing which tool to use
tools = [
    # Tool 1: Get the current time
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Very useful when you want to know the current time.",
            # Since getting the current time doesn't require input parameters, parameters is an empty dictionary
            "parameters": {},
        },
    },
    # Tool 2: Get the weather for a specified city
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Very useful when you want to query the weather of a specific city.",
            "parameters": {
                "type": "object",
                "properties": {
                    # When querying weather, location needs to be provided, so the parameter is set to location
                    "location": {
                        "type": "string",
                        "description": "City or district, such as Beijing, Hangzhou, Yuhang District, etc.",
                    }
                },
                "required": ["location"],
            },
        },
    },
]


# Simulate weather query tool. Example return: "Beijing is rainy."
def get_current_weather(arguments):
    # Define a list of possible weather conditions
    weather_conditions = ["sunny", "cloudy", "rainy"]
    # Randomly select a weather condition
    random_weather = random.choice(weather_conditions)
    # Extract location information from JSON
    location = arguments["location"]
    # Return formatted weather information
    return f"{location} is {random_weather}."


# Tool to query current time. Example return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
    # Get current date and time
    current_datetime = datetime.now()
    # Format current date and time
    formatted_time = current_datetime.strftime("%Y-%m-%d %H:%M:%S")
    # Return formatted current time
    return f"Current time: {formatted_time}."


# Encapsulate model response function
def get_response(messages):
    completion = client.chat.completions.create(
        model="qwen-plus", 
        messages=messages,
        tools=tools,
    )
    return completion


def call_with_messages():
    print("\n")
    messages = [
        {
            "content": input(
                "Please enter: "
            ),  # Question examples: "What time is it now?" "What time is it in one hour" "How's the weather in Beijing?"
            "role": "user",
        }
    ]
    print("-" * 60)
    # First round of model calling
    i = 1
    first_response = get_response(messages)
    assistant_output = first_response.choices[0].message
    print(f"\nRound {i} model output information: {first_response}\n")
    if assistant_output.content is None:
        assistant_output.content = ""
    messages.append(assistant_output)
    # If no tool call is needed, return the final answer directly
    if assistant_output.tool_calls is None:
        # If the model determines no tool is needed, print the assistant's reply
        # directly without a second round of model calling
        print(f"No need to call tools, I can reply directly: {assistant_output.content}")
        return
    # If tools are needed, perform multiple rounds of model calling until the model determines no tool is needed
    while assistant_output.tool_calls is not None:
        # If the weather query tool is needed, run the weather query tool
        tool_info = {
            "content": "",
            "role": "tool",
            "tool_call_id": assistant_output.tool_calls[0].id,
        }
        if assistant_output.tool_calls[0].function.name == "get_current_weather":
            # Extract location parameter information
            arguments = json.loads(assistant_output.tool_calls[0].function.arguments)
            tool_info["content"] = get_current_weather(arguments)
        # If the time query tool is needed, run the time query tool
        elif assistant_output.tool_calls[0].function.name == "get_current_time":
            tool_info["content"] = get_current_time()
        tool_output = tool_info["content"]
        print(f"Tool output information: {tool_output}\n")
        print("-" * 60)
        messages.append(tool_info)
        assistant_output = get_response(messages).choices[0].message
        if assistant_output.content is None:
            assistant_output.content = ""
        messages.append(assistant_output)
        i += 1
        print(f"Round {i} model output information: {assistant_output}\n")
    print(f"Final answer: {assistant_output.content}")


if __name__ == "__main__":
    call_with_messages()

Sample response

When you enter What time is it?, the program calls the get_current_time tool and prints the current time.

Below are the model's return details during the function call process (round 1). For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.

Input: Hangzhou weather

{
    'id': 'chatcmpl-e2f045fd-2604-9cdb-bb61-37c805ecd15a',
    'choices': [
        {
            'finish_reason': 'tool_calls',
            'index': 0,
            'logprobs': None,
            'message': {
                'content': '',
                'role': 'assistant',
                'function_call': None,
                'tool_calls': [
                    {
                        'id': 'call_7a33ebc99d5342969f4868',
                        'function': {
                            'arguments': '{"location": "Hangzhou"}',
                            'name': 'get_current_weather'
                        },
                        'type': 'function',
                        'index': 0
                    }
                ]
            }
        }
    ],
    'created': 1726049697,
    'model': 'qwen-max',
    'object': 'chat.completion',
    'service_tier': None,
    'system_fingerprint': None,
    'usage': {
        'completion_tokens': 18,
        'prompt_tokens': 217,
        'total_tokens': 235
    }
}

Input: hello

{
    'id': 'chatcmpl-5d890637-9211-9bda-b184-961acf3be38d',
    'choices': [
        {
            'finish_reason': 'stop',
            'index': 0,
            'logprobs': None,
            'message': {
                'content': 'Hello! How can I help you?',
                'role': 'assistant',
                'function_call': None,
                'tool_calls': None
            }
        }
    ],
    'created': 1726049765,
    'model': 'qwen-max',
    'object': 'chat.completion',
    'service_tier': None,
    'system_fingerprint': None,
    'usage': {
        'completion_tokens': 7,
        'prompt_tokens': 216,
        'total_tokens': 223
    }
}
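The two payloads above differ only in whether tool_calls is populated. A minimal sketch of branching on this, using simplified dicts that mirror the two responses shown above (the dispatch helper is illustrative, not part of any SDK):

```python
import json

# Simplified payloads mirroring the two cases shown above
tool_call_response = {
    "choices": [{
        "finish_reason": "tool_calls",
        "message": {
            "content": "",
            "tool_calls": [{
                "id": "call_7a33ebc99d5342969f4868",
                "function": {"name": "get_current_weather",
                             "arguments": '{"location": "Hangzhou"}'},
            }],
        },
    }]
}
plain_response = {
    "choices": [{
        "finish_reason": "stop",
        "message": {"content": "Hello! How can I help you?", "tool_calls": None},
    }]
}

def dispatch(response):
    """Return ("tool", name, args) when the model requests a tool, else ("answer", content)."""
    message = response["choices"][0]["message"]
    tool_calls = message.get("tool_calls")
    if tool_calls:  # .get() plus a truthiness check covers None and a missing key
        call = tool_calls[0]["function"]
        return ("tool", call["name"], json.loads(call["arguments"]))
    return ("answer", message["content"])

print(dispatch(tool_call_response))  # ('tool', 'get_current_weather', {'location': 'Hangzhou'})
print(dispatch(plain_response))      # ('answer', 'Hello! How can I help you?')
```

Checking truthiness rather than key presence is the safer pattern: some responses omit tool_calls entirely while others return it as null.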

Node.js

Sample code

import OpenAI from "openai";
import { format } from 'date-fns';
import readline from 'readline';

function getCurrentWeather(location) {
    return `${location} is rainy.`;
}
function getCurrentTime() {
    // Get current date and time
    const currentDatetime = new Date();
    // Format current date and time
    const formattedTime = format(currentDatetime, 'yyyy-MM-dd HH:mm:ss');
    // Return formatted current time
    return `Current time: ${formattedTime}.`;
}
const openai = new OpenAI(
    {
        // If environment variables are not configured, replace the following line with: apiKey: "sk-xxx",
        apiKey: process.env.DASHSCOPE_API_KEY,
        baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
    }
);
const tools = [
// Tool 1: Get the current time
{
    "type": "function",
    "function": {
        "name": "getCurrentTime",
        "description": "Very useful when you want to know the current time.",
        // Since getting the current time doesn't require input parameters, parameters is empty
        "parameters": {}  
    }
},  
// Tool 2: Get the weather for a specified city
{
    "type": "function",
    "function": {
        "name": "getCurrentWeather",
        "description": "Very useful when you want to query the weather of a specific city.",
        "parameters": {  
            "type": "object",
            "properties": {
                // When querying weather, location needs to be provided, so the parameter is set to location
                "location": {
                    "type": "string",
                    "description": "City or district, such as Beijing, Hangzhou, Yuhang District, etc."
                }
            },
            "required": ["location"]
        }
    }
}
];
async function getResponse(messages) {
    const response = await openai.chat.completions.create({
        model: "qwen-plus",  // Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
        messages: messages,
        tools: tools,
    });
    return response;
}
const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout
});
rl.question("user: ", async (question) => {
    const messages = [{"role": "user","content": question}];
    let i = 1;
    const firstResponse = await getResponse(messages);
    let assistantOutput = firstResponse.choices[0].message;    
    console.log(`Round ${i} model output information: ${JSON.stringify(assistantOutput)}`);
    if (assistantOutput.content == null) {
        assistantOutput.content = "";
    }
    messages.push(assistantOutput);
    if (!assistantOutput.tool_calls) {
        console.log(`No need to call tools, I can reply directly: ${assistantOutput.content}`);
        rl.close();
    } else {
        while (assistantOutput.tool_calls) {
            let toolInfo = {};
            if (assistantOutput.tool_calls[0].function.name == "getCurrentWeather" ) {
                toolInfo = {"role": "tool"};
                let location = JSON.parse(assistantOutput.tool_calls[0].function.arguments)["location"];
                toolInfo["content"] = getCurrentWeather(location);
            } else if (assistantOutput.tool_calls[0].function.name == "getCurrentTime" ) {
                toolInfo = {"role":"tool"};
                toolInfo["content"] = getCurrentTime();
            }
            console.log(`Tool output information: ${JSON.stringify(toolInfo)}`);
            console.log("=".repeat(100));
            messages.push(toolInfo);
            assistantOutput = (await getResponse(messages)).choices[0].message;
            if (assistantOutput.content == null) {
                assistantOutput.content = "";
            }
            messages.push(assistantOutput);
            i += 1;
            console.log(`Round ${i} model output information: ${JSON.stringify(assistantOutput)}`);
        }
        console.log("=".repeat(100));
        console.log(`Final model output information: ${JSON.stringify(assistantOutput.content)}`);
        rl.close();
    }
});

Sample response

Enter How is the weather in Beijing, Tianjin, Shanghai, and Chongqing?, and the program outputs:

Round 1 model output information: {"content":"","role":"assistant","tool_calls":[{"function":{"name":"getCurrentWeather","arguments":"{\"location\": \"Beijing\"}"},"index":0,"id":"call_d2aff21240b24c7291db6d","type":"function"}]}
Tool output information: {"role":"tool","content":"Beijing is rainy."}
====================================================================================================
Round 2 model output information: {"content":"","role":"assistant","tool_calls":[{"function":{"name":"getCurrentWeather","arguments":"{\"location\": \"Tianjin\"}"},"index":0,"id":"call_bdcfa937e69b4eae997b5e","type":"function"}]}
Tool output information: {"role":"tool","content":"Tianjin is rainy."}
====================================================================================================
Round 3 model output information: {"content":"","role":"assistant","tool_calls":[{"function":{"name":"getCurrentWeather","arguments":"{\"location\": \"Shanghai\"}"},"index":0,"id":"call_bbf22d017e8e439e811974","type":"function"}]}
Tool output information: {"role":"tool","content":"Shanghai is rainy."}
====================================================================================================
Round 4 model output information: {"content":"","role":"assistant","tool_calls":[{"function":{"name":"getCurrentWeather","arguments":"{\"location\": \"Chongqing\"}"},"index":0,"id":"call_f4f8e149af01492fb60162","type":"function"}]}
Tool output information: {"role":"tool","content":"Chongqing is rainy."}
====================================================================================================
Round 5 model output information: {"content":"All four municipalities (Beijing, Tianjin, Shanghai, and Chongqing) have rainy weather. Don't forget to bring an umbrella!","role":"assistant"}
====================================================================================================
Final model output information: "All four municipalities (Beijing, Tianjin, Shanghai, and Chongqing) have rainy weather. Don't forget to bring an umbrella!"

HTTP

Sample code

import requests
import os
from datetime import datetime
import json

# Define tool list, the model will refer to the tool's name and description when choosing which tool to use
tools = [
    # Tool 1: Get the current time
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Very useful when you want to know the current time.",
            "parameters": {},  # Since getting the current time doesn't require input parameters, parameters is an empty dictionary
        },
    },
    # Tool 2: Get the weather for a specified city
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Very useful when you want to query the weather of a specific city.",
            "parameters": {  # When querying weather, location needs to be provided, so the parameter is set to location
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City or district, such as Beijing, Hangzhou, Yuhang District, etc.",
                    }
                },
                "required": ["location"],
            },
        },
    },
]


# Simulate weather query tool. Example return: "Beijing is sunny."
def get_current_weather(location):
    return f"{location} is sunny. "


# Tool to query current time. Example return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
    # Get current date and time
    current_datetime = datetime.now()
    # Format current date and time
    formatted_time = current_datetime.strftime("%Y-%m-%d %H:%M:%S")
    # Return formatted current time
    return f"Current time: {formatted_time}."


def get_response(messages):
    api_key = os.getenv("DASHSCOPE_API_KEY")
    url = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions"
    headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}
    body = {"model": "qwen-plus", "messages": messages, "tools": tools}

    response = requests.post(url, headers=headers, json=body)
    return response.json()


def call_with_messages():
    messages = [
        {
            "content": input(
                "Please enter: "
            ),  # Question examples: "What time is it now?" "What time is it in one hour" "How's the weather in Beijing?"
            "role": "user",
        }
    ]

    # First round of model calling
    first_response = get_response(messages)
    print(f"\nFirst round call result: {first_response}")
    assistant_output = first_response["choices"][0]["message"]
    if assistant_output["content"] is None:
        assistant_output["content"] = ""
    messages.append(assistant_output)
    if (
        "tool_calls" not in assistant_output
    ):  # If the model determines no tool is needed, print the assistant's reply directly without a second round of model calling
        print(f"Final answer: {assistant_output['content']}")
        return
    # If the model chooses the get_current_weather tool
    elif assistant_output["tool_calls"][0]["function"]["name"] == "get_current_weather":
        tool_info = {"name": "get_current_weather", "role": "tool"}
        location = json.loads(
            assistant_output["tool_calls"][0]["function"]["arguments"]
        )["location"]
        tool_info["content"] = get_current_weather(location)
    # If the model chooses the get_current_time tool
    elif assistant_output["tool_calls"][0]["function"]["name"] == "get_current_time":
        tool_info = {"name": "get_current_time", "role": "tool"}
        tool_info["content"] = get_current_time()
    print(f"Tool output information: {tool_info['content']}")
    messages.append(tool_info)

    # Second round of model calling, summarizing the tool's output
    second_response = get_response(messages)
    print(f"Second round call result: {second_response}")
    print(f"Final answer: {second_response['choices'][0]['message']['content']}")


if __name__ == "__main__":
    call_with_messages()

Sample response

Enter How is the weather in Hangzhou?, and the program calls the get_current_weather tool and prints the result.

Below are the model's return details during the function call process (round 1). When entering "Hangzhou weather", the model returns the tool_calls parameter; when entering "hello", the model determines no tool is needed and does not return the tool_calls parameter.

Input: Hangzhou weather

{
    'choices': [
        {
            'message': {
                'content': '',
                'role': 'assistant',
                'tool_calls': [
                    {
                        'function': {
                            'name': 'get_current_weather',
                            'arguments': '{"location": "Hangzhou"}'
                        },
                        'index': 0,
                        'id': 'call_416cd81b8e7641edb654c4',
                        'type': 'function'
                    }
                ]
            },
            'finish_reason': 'tool_calls',
            'index': 0,
            'logprobs': None
        }
    ],
    'object': 'chat.completion',
    'usage': {
        'prompt_tokens': 217,
        'completion_tokens': 18,
        'total_tokens': 235
    },
    'created': 1726050222,
    'system_fingerprint': None,
    'model': 'qwen-max',
    'id': 'chatcmpl-61e30855-ee69-93ab-98d5-4194c51a9980'
}

Input: hello

{
    'choices': [
        {
            'message': {
                'content': 'Hello! How can I help you?',
                'role': 'assistant'
            },
            'finish_reason': 'stop',
            'index': 0,
            'logprobs': None
        }
    ],
    'object': 'chat.completion',
    'usage': {
        'prompt_tokens': 216,
        'completion_tokens': 7,
        'total_tokens': 223
    },
    'created': 1726050238,
    'system_fingerprint': None,
    'model': 'qwen-max',
    'id': 'chatcmpl-2f2f86d1-bc4e-9494-baca-aac5b0555091'
}

DashScope

You can use the DashScope SDK or the HTTP method to initiate function calling with Qwen models.

Python

Sample code

import os
from dashscope import Generation
from datetime import datetime
import random
import json
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'


# Define tool list, the model will refer to the tool's name and description when choosing which tool to use
tools = [
    # Tool 1: Get the current time
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Very useful when you want to know the current time.",
            "parameters": {},  # Since getting the current time doesn't require input parameters, parameters is an empty dictionary
        },
    },
    # Tool 2: Get the weather for a specified city
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Very useful when you want to query the weather of a specific city.",
            "parameters": {
                # When querying weather, location needs to be provided, so the parameter is set to location
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City or district, such as Beijing, Hangzhou, Yuhang District, etc.",
                    }
                },
                "required": ["location"],
            },
        },
    },
]


# Simulate weather query tool. Example return: "Beijing is sunny."
def get_current_weather(location):
    return f"{location} is sunny. "


# Tool to query current time. Example return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
    # Get current date and time
    current_datetime = datetime.now()
    # Format current date and time
    formatted_time = current_datetime.strftime("%Y-%m-%d %H:%M:%S")
    # Return formatted current time
    return f"Current time: {formatted_time}."


# Encapsulate model response function
def get_response(messages):
    response = Generation.call(
        # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        model="qwen-plus",  
        messages=messages,
        tools=tools,
        seed=random.randint(
            1, 10000
        ),  # Set random seed, if not set, the default random seed is 1234
        result_format="message",  # Set output to message format
    )
    return response


def call_with_messages():
    print("\n")
    messages = [
        {
            "content": input(
                "Please enter: "
            ),  # Question examples: "What time is it now?" "What time is it in one hour" "How's the weather in Beijing?"
            "role": "user",
        }
    ]

    # First round of model calling
    first_response = get_response(messages)
    assistant_output = first_response.output.choices[0].message
    print(f"\nModel first round output information: {first_response}\n")
    messages.append(assistant_output)
    if (
        "tool_calls" not in assistant_output
    ):  # If the model determines no tool is needed, print the assistant's reply directly without a second round of model calling
        print(f"Final answer: {assistant_output.content}")
        return
    # If the model chooses the get_current_weather tool
    elif assistant_output.tool_calls[0]["function"]["name"] == "get_current_weather":
        tool_info = {"name": "get_current_weather", "role": "tool"}
        location = json.loads(assistant_output.tool_calls[0]["function"]["arguments"])[
            "location"
        ]
        tool_info["content"] = get_current_weather(location)
    # If the model chooses the get_current_time tool
    elif assistant_output.tool_calls[0]["function"]["name"] == "get_current_time":
        tool_info = {"name": "get_current_time", "role": "tool"}
        tool_info["content"] = get_current_time()
    print(f"Tool output information: {tool_info['content']}\n")
    messages.append(tool_info)

    # Second round of model calling, summarizing the tool's output
    second_response = get_response(messages)
    print(f"Model second round output information: {second_response}\n")
    print(f"Final answer: {second_response.output.choices[0].message['content']}")


if __name__ == "__main__":
    call_with_messages()

Sample response

Enter a question and get the response.

Below are the model's return details during the function call process (round 1). For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.

Input: Hangzhou weather

{
  "status_code": 200,
  "request_id": "33cf0a53-ea38-9f47-8fce-b93b55d86573",
  "code": "",
  "message": "",
  "output": {
    "text": null,
    "finish_reason": null,
    "choices": [
      {
        "finish_reason": "tool_calls",
        "message": {
          "role": "assistant",
          "content": "",
          "tool_calls": [
            {
              "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Hangzhou\"}"
              },
              "index": 0,
              "id": "call_9f62f52f3a834a8194f634",
              "type": "function"
            }
          ]
        }
      }
    ]
  },
  "usage": {
    "input_tokens": 217,
    "output_tokens": 18,
    "total_tokens": 235
  }
}

Input: hello

{
  "status_code": 200,
  "request_id": "4818ce03-e7c9-96de-a7bc-781649d98465",
  "code": "",
  "message": "",
  "output": {
    "text": null,
    "finish_reason": null,
    "choices": [
      {
        "finish_reason": "stop",
        "message": {
          "role": "assistant",
          "content": "Hello! How can I help you?"
        }
      }
    ]
  },
  "usage": {
    "input_tokens": 216,
    "output_tokens": 7,
    "total_tokens": 223
  }
}

Java

Sample code

// Copyright (c) Alibaba, Inc. and its affiliates.
// version >= 2.12.0

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import com.alibaba.dashscope.aigc.conversation.ConversationParam.ResultFormat;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationOutput.Choice;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.tools.FunctionDefinition;
import com.alibaba.dashscope.tools.ToolCallBase;
import com.alibaba.dashscope.tools.ToolCallFunction;
import com.alibaba.dashscope.tools.ToolFunction;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.protocol.Protocol;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.github.victools.jsonschema.generator.Option;
import com.github.victools.jsonschema.generator.OptionPreset;
import com.github.victools.jsonschema.generator.SchemaGenerator;
import com.github.victools.jsonschema.generator.SchemaGeneratorConfig;
import com.github.victools.jsonschema.generator.SchemaGeneratorConfigBuilder;
import com.github.victools.jsonschema.generator.SchemaVersion;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Scanner;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class Main {

    public static class GetWeatherTool {
        private String location;

        public GetWeatherTool(String location) {
            this.location = location;
        }

        public String callWeather() {
            // Assume location is a JSON string, e.g., {"location": "Beijing"}
            // Need to extract the value of "location"
            try {
                // Use Jackson library to parse JSON
                ObjectMapper objectMapper = new ObjectMapper();
                JsonNode jsonNode = objectMapper.readTree(location);
                String locationName = jsonNode.get("location").asText();
                return locationName + " is sunny";
            } catch (Exception e) {
                // If parsing fails, return the original string
                return location + " is sunny";
            }
        }
    }
    public static class GetTimeTool {
        public String getCurrentTime() {
            LocalDateTime now = LocalDateTime.now();
            DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
            return "Current time: " + now.format(formatter) + ".";
        }
    }
    private static ObjectNode generateSchema(Class

Sample response

Enter a question and get the response.

Below are the model's return details during the function call process (round 1). For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.

Input: Hangzhou weather

{
    "requestId": "e2faa5cf-1707-973b-b216-36aa4ef52afc",
    "usage": {
        "input_tokens": 254,
        "output_tokens": 19,
        "total_tokens": 273
    },
    "output": {
        "choices": [
            {
                "finish_reason": "tool_calls",
                "message": {
                    "role": "assistant",
                    "content": "",
                    "tool_calls": [
                        {
                            "type": "function",
                            "id": "",
                            "function": {
                                "name": "get_current_weather",
                                "arguments": "{\"location\": \"Hangzhou\"}"
                            }
                        }
                    ]
                }
            }
        ]
    }
}

Input: hello

{
    "requestId": "f6ca3828-3b5f-99bf-8bae-90b4aa88923f",
    "usage": {
        "input_tokens": 253,
        "output_tokens": 7,
        "total_tokens": 260
    },
    "output": {
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": "Hello! How can I help you?"
                }
            }
        ]
    }
}

HTTP

Sample code

import requests
import os
from datetime import datetime
import json

# Define tool list, the model will refer to the tool's name and description when choosing which tool to use
tools = [
    # Tool 1: Get the current time
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Very useful when you want to know the current time.",
            "parameters": {},  # Since getting the current time doesn't require input parameters, parameters is an empty dictionary
        },
    },
    # Tool 2: Get the weather for a specified city
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Very useful when you want to query the weather of a specific city.",
            "parameters": {  # When querying weather, location needs to be provided, so the parameter is set to location
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City or district, such as Beijing, Hangzhou, Yuhang District, etc.",
                    }
                },
                "required": ["location"],
            },
        },
    },
]

# Simulate weather query tool. Example return: "Beijing is sunny."
def get_current_weather(location):
    return f"{location} is sunny. "

# Tool to query current time. Example return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
    # Get current date and time
    current_datetime = datetime.now()
    # Format current date and time
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # Return formatted current time
    return f"Current time: {formatted_time}."


def get_response(messages):
    api_key = os.getenv("DASHSCOPE_API_KEY")
    url = "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation"
    headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}
    body = {
        "model": "qwen-plus",
        "input": {"messages": messages},
        "parameters": {"result_format": "message", "tools": tools},
    }

    response = requests.post(url, headers=headers, json=body)
    return response.json()


def call_with_messages():
    messages = [
            {
                "content": input('Please enter: '),  # Question examples: "What time is it now?" "What time is it in one hour?" "How's the weather in Beijing?"
                "role": "user"
            }
    ]
    
    # First round of model calling
    first_response = get_response(messages)
    print(f"\nFirst round call result: {first_response}")
    assistant_output = first_response['output']['choices'][0]['message']
    messages.append(assistant_output)
    if 'tool_calls' not in assistant_output:  # If the model determines no tool is needed, print the assistant's reply directly without a second round of model calling
        print(f"Final answer: {assistant_output['content']}")
        return
    # If the model chooses the get_current_weather tool
    elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_weather':
        tool_info = {"name": "get_current_weather", "role":"tool"}
        location = json.loads(assistant_output['tool_calls'][0]['function']['arguments'])['location']
        tool_info['content'] = get_current_weather(location)
    # If the model chooses the get_current_time tool
    elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_time':
        tool_info = {"name": "get_current_time", "role":"tool"}
        tool_info['content'] = get_current_time()
    print(f"Tool output information: {tool_info['content']}")
    messages.append(tool_info)

    # Second round of model calling, summarizing the tool's output
    second_response = get_response(messages)
    print(f"Second round call result: {second_response}")
    print(f"Final answer: {second_response['output']['choices'][0]['message']['content']}")

if __name__ == '__main__':
    call_with_messages()
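The `if`/`elif` dispatch in `call_with_messages` handles exactly the two tools defined above; if the model ever named a tool that is not implemented, `tool_info` would be undefined. A more defensive variant is sketched below. This is not part of the official sample: `TOOL_FUNCTIONS` and `run_tool_call` are hypothetical names, and the table simply maps each tool name the model may return to a local Python function.

```python
import json
from datetime import datetime

def get_current_weather(location):
    return f"{location} is sunny. "

def get_current_time():
    return f"Current time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}."

# Hypothetical dispatch table: tool names the model may return,
# mapped to the local functions that implement them.
TOOL_FUNCTIONS = {
    "get_current_weather": get_current_weather,
    "get_current_time": get_current_time,
}

def run_tool_call(tool_call):
    """Run one entry from assistant_output['tool_calls'] and build a tool message."""
    name = tool_call["function"]["name"]
    func = TOOL_FUNCTIONS.get(name)
    if func is None:
        # The model asked for a tool we do not implement; report that back to it.
        content = f"Error: unknown tool {name}."
    else:
        arguments = json.loads(tool_call["function"]["arguments"] or "{}")
        content = func(**arguments)
    return {"role": "tool", "name": name, "content": content}
```

With this table, adding a third tool only requires defining the function and registering it, instead of extending the `elif` chain; the returned dictionary can be appended to `messages` exactly as `tool_info` is in the sample.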
Java

package org.example;
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import org.json.JSONArray;
import org.json.JSONObject;

public class Main {
    private static final String USER_AGENT = "Java-HttpURLConnection/1.0";
    public static void main(String[] args) throws Exception {
        // User input question
        Scanner scanner = new Scanner(System.in);
        System.out.println("Please enter:");
        String userInput = scanner.nextLine();
        // Initialize messages
        JSONArray messages = new JSONArray();
        // Define system message
        JSONObject systemMessage = new JSONObject();
        systemMessage.put("role","system");
        systemMessage.put("content","You are a helpful assistant.");
        // Construct user_message based on user input
        JSONObject userMessage = new JSONObject();
        userMessage.put("role","user");
        userMessage.put("content",userInput);
        // Add system_message and user_message to messages in order
        messages.put(systemMessage);
        messages.put(userMessage);
        // Make the first round of model call and print the result
        JSONObject responseJson = getResponse(messages);
        System.out.println("First round call result:"+responseJson);
        // Get assistant message
        JSONObject assistantMessage = responseJson.getJSONObject("output").getJSONArray("choices").getJSONObject(0).getJSONObject("message");
        // Initialize tool message
        JSONObject toolMessage = new JSONObject();

        // If assistant_message doesn't have tool_calls parameter, print the response information in assistant_message and return
        if (! assistantMessage.has("tool_calls")){
            System.out.println("Final answer:"+assistantMessage.get("content"));
            return;
        }
        // If assistant_message has tool_calls parameter, it means the model determines that a tool needs to be called
        else {
            // Add assistant_message to messages
            messages.put(assistantMessage);
            // If the model determines that the get_current_weather function needs to be called
            if (assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("name").equals("get_current_weather")) {
                // Get arguments information and extract the location parameter
                JSONObject argumentsJson = new JSONObject(assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("arguments"));
                String location = argumentsJson.getString("location");
                // Run the tool function, get the tool's output, and print
                String toolOutput = getCurrentWeather(location);
                System.out.println("Tool output information:"+toolOutput);
                // Construct tool_message information
                toolMessage.put("name","get_current_weather");
                toolMessage.put("role","tool");
                toolMessage.put("content",toolOutput);
            }
            // If the model determines that the get_current_time function needs to be called
            if (assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("name").equals("get_current_time")) {
                // Run the tool function, get the tool's output, and print
                String toolOutput = getCurrentTime();
                System.out.println("Tool output information:"+toolOutput);
                // Construct tool_message information
                toolMessage.put("name","get_current_time");
                toolMessage.put("role","tool");
                toolMessage.put("content",toolOutput);
            }
        }
        // Add tool_message to messages
        messages.put(toolMessage);
        // Make the second round of model call and print the result
        JSONObject secondResponse = getResponse(messages);
        System.out.println("Second round call result:"+secondResponse);
        System.out.println("Final answer:"+secondResponse.getJSONObject("output").getJSONArray("choices").getJSONObject(0).getJSONObject("message").getString("content"));
    }
    // Define the function to get weather
    public static String getCurrentWeather(String location) {
        return location+" is sunny";
    }
    // Define the function to get current time
    public static String getCurrentTime() {
        LocalDateTime now = LocalDateTime.now();
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        String currentTime = "Current time: " + now.format(formatter) + ".";
        return currentTime;
    }
    // Encapsulate model response function, input: messages, output: json formatted http response
    public static JSONObject getResponse(JSONArray messages) throws Exception{
        // Initialize tool library
        JSONArray tools = new JSONArray();
        // Define tool 1: Get current time
        String jsonStringTime = "{\"type\": \"function\", \"function\": {\"name\": \"get_current_time\", \"description\": \"Very useful when you want to know the current time.\", \"parameters\": {}}}";
        JSONObject getCurrentTimeJson = new JSONObject(jsonStringTime);
        // Define tool 2: Get weather for a specified area
        String jsonStringWeather = "{\"type\": \"function\", \"function\": {\"name\": \"get_current_weather\", \"description\": \"Very useful when you want to query the weather of a specific city.\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"City or district, such as Beijing, Hangzhou, Yuhang District, etc.\"}}, \"required\": [\"location\"]}}}";
        JSONObject getCurrentWeatherJson = new JSONObject(jsonStringWeather);
        // Add both tools to the tool library
        tools.put(getCurrentTimeJson);
        tools.put(getCurrentWeatherJson);
        String toolsString = tools.toString();
        // API call URL
        String urlStr = "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation";
        // Get DASHSCOPE_API_KEY from environment variables
        String apiKey = System.getenv("DASHSCOPE_API_KEY");

        URL url = new URL(urlStr);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        // Define request header information
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setRequestProperty("Authorization", "Bearer " + apiKey);
        connection.setDoOutput(true);
        // Define request body information
        String jsonInputString = String.format("{\"model\": \"qwen-max\", \"input\": {\"messages\":%s}, \"parameters\": {\"result_format\": \"message\",\"tools\":%s}}",messages.toString(),toolsString);

        // Get http response
        try (DataOutputStream wr = new DataOutputStream(connection.getOutputStream())) {
            wr.write(jsonInputString.getBytes(StandardCharsets.UTF_8));
            wr.flush();
        }
        StringBuilder response = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                response.append(inputLine);
            }
        }
        connection.disconnect();
        // Return json formatted response
        return new JSONObject(response.toString());
    }
}

Sample response

Enter a question and get the response.

Below are the model's return details during the function call process (round 1). For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.

Input: How is the weather in Hangzhou?

{
    'output': {
        'choices': [
            {
                'finish_reason': 'tool_calls',
                'message': {
                    'role': 'assistant',
                    'tool_calls': [
                        {
                            'function': {
                                'name': 'get_current_weather',
                                'arguments': '{"location": "Hangzhou"}'
                            },
                            'index': 0,
                            'id': 'call_240d6341de4c484384849d',
                            'type': 'function'
                        }
                    ],
                    'content': ''
                }
            }
        ]
    },
    'usage': {
        'total_tokens': 235,
        'output_tokens': 18,
        'input_tokens': 217
    },
    'request_id': '235ed6a4-b6c0-9df0-aa0f-3c6dce89f3bd'
}

Input: Hello

{
    'output': {
        'choices': [
            {
                'finish_reason': 'stop',
                'message': {
                    'role': 'assistant',
                    'content': 'Hello! How can I help you?'
                }
            }
        ]
    },
    'usage': {
        'total_tokens': 223,
        'output_tokens': 7,
        'input_tokens': 216
    },
    'request_id': '42c42853-3caf-9815-96e8-9c950f4c26a0'
}
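In both responses above, the deciding signal is whether the message contains `tool_calls` (with `finish_reason` set to `tool_calls`). A small helper can pull out the requested tool name and its parsed arguments; this is a sketch with a hypothetical name (`extract_tool_request`), assuming only the field layout shown in the samples.

```python
import json

def extract_tool_request(response):
    """Return (tool_name, arguments_dict) if the model requested a tool call, else None.

    Field layout follows the DashScope-style sample responses above:
    response['output']['choices'][0]['message'] holds the assistant message,
    and 'arguments' is a JSON-encoded string that must be parsed.
    """
    choice = response["output"]["choices"][0]
    message = choice["message"]
    if choice.get("finish_reason") != "tool_calls" or "tool_calls" not in message:
        # finish_reason 'stop' means the model answered directly; no second round needed.
        return None
    call = message["tool_calls"][0]["function"]
    return call["name"], json.loads(call["arguments"])
```

For the "How is the weather in Hangzhou?" sample this yields `("get_current_weather", {"location": "Hangzhou"})`; for the "Hello" sample it yields `None`, so the assistant's `content` can be printed as the final answer.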

Error code

If the call fails and an error message is returned, see Error messages.
