MoonshotAI Kimi K2 model not invoking tools despite listed support

Description

The moonshotai/kimi-k2 model is listed as supporting "Tools" and "Tool Choice" on the OpenRouter model page, but when making requests with tools defined, the model responds with regular text instead of invoking the requested tools.

Steps to Reproduce

  1. Make a request to moonshotai/kimi-k2 via the OpenRouter API with tools defined (a minimal raw-request sketch follows this list)
  2. Use the standard OpenAI-compatible tool/function format
  3. Request structured output via tool calling
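
For reference, here is a minimal raw-request sketch of step 1, assuming a direct fetch against OpenRouter's OpenAI-compatible chat completions endpoint; the abbreviated tool schema is illustrative, and the full payload the SDK actually sends appears under Code Example below:

// Minimal reproduction, independent of the Vercel AI SDK.
const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'moonshotai/kimi-k2',
    messages: [{ role: 'user', content: 'Generate a short task list for a web project.' }],
    tools: [{
      type: 'function',
      function: {
        name: 'generateTaskList',
        description: 'Generate structured task data',
        parameters: { type: 'object', properties: { tasks: { type: 'array', items: { type: 'object' } } } }
      }
    }],
    tool_choice: 'auto'
  })
});

const body = await res.json();
// Expected: body.choices[0].message.tool_calls is populated.
// Observed: tool_calls is absent and message.content contains JSON as plain text.
console.log(JSON.stringify(body.choices[0].message, null, 2));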

Expected Behavior

The model should respond with a tool call in the format:

{
  "tool_calls": [{
    "id": "call_123",
    "type": "function",
    "function": {
      "name": "generateTaskList",
      "arguments": "{\"tasks\": [...], \"metadata\": {...}}"
    }
  }]
}

Actual Behavior

The model generates a regular text response containing JSON, but does not invoke the tool. This causes the Vercel AI SDK to throw:

AI_NoObjectGeneratedError: No object generated: the tool was not called.

Technical Details

  • Model ID: moonshotai/kimi-k2
  • API Library: @openrouter/ai-sdk-provider v0.7.2 with Vercel AI SDK
  • Response Details:
    {
      "id": "gen-1752527019-y8LCKGAYKbeVXxCbANSi",
      "timestamp": "2025-07-14T21:05:38.135Z",
      "modelId": "moonshotai/kimi-k2",
      "usage": {
        "promptTokens": 6211,
        "completionTokens": 1827,
        "totalTokens": 8038
      },
      "finishReason": "stop"
    }
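
Note that finishReason is "stop" rather than "tool-calls", consistent with the model ending its turn without ever emitting a tool call.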

Code Example

Here's what the Vercel AI SDK sends under the hood when using generateObject:

// Using Vercel AI SDK with OpenRouter provider
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { generateObject } from 'ai';
import { z } from 'zod';

const taskSchema = z.object({
  tasks: z.array(z.object({
    id: z.number(),
    title: z.string(),
    description: z.string()
  })),
  metadata: z.object({
    projectName: z.string(),
    totalTasks: z.number()
  })
});

const client = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });

// This is what generateObject does internally - it converts the schema to a tool:
const result = await generateObject({
  model: client('moonshotai/kimi-k2'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Generate tasks based on requirements...' }
  ],
  schema: taskSchema,
  mode: 'auto', // This tells the SDK to use tool calling
  maxTokens: 64000,
  temperature: 0.2
});

This gets translated to an OpenAI-compatible API request with:

{
  "model": "moonshotai/kimi-k2",
  "messages": [...],
  "tools": [{
    "type": "function",
    "function": {
      "name": "generateTaskList",
      "description": "Generate structured task data",
      "parameters": {
        "type": "object",
        "properties": {
          "tasks": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "id": { "type": "number" },
                "title": { "type": "string" },
                "description": { "type": "string" }
              }
            }
          },
          "metadata": {
            "type": "object",
            "properties": {
              "projectName": { "type": "string" },
              "totalTasks": { "type": "number" }
            }
          }
        }
      }
    }
  }],
  "tool_choice": "auto"
}

What Actually Happens

Instead of responding with a tool call like:

{
  "choices": [{
    "message": {
      "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
          "name": "generateTaskList",
          "arguments": "{\"tasks\":[{\"id\":1,\"title\":\"Setup\",\"description\":\"...\"}]}"
        }
      }]
    }
  }]
}

The Kimi K2 model responds with:

{
  "choices": [{
    "message": {
      "content": "Here are the tasks:\n\n```json\n{\n  \"tasks\": [...]\n}\n```",
      "role": "assistant"
    }
  }]
}

Additional Context

  • The model successfully processes the request (uses tokens, returns completion)
  • It appears to understand the request and generates valid JSON in the text response
  • Other models through OpenRouter (e.g., openai/gpt-4o-mini, meta-llama/llama-3.3-70b-instruct) work correctly with identical code
  • The OpenRouter documentation shows Kimi K2 supports tools, but it doesn't invoke them in practice

Questions

  1. Is tool/function calling actually supported for the Kimi K2 model?
  2. Does Kimi K2 require a different prompt format to trigger tool usage?
  3. Is there a known issue with Kimi K2's tool calling implementation?
  4. Should the model page be updated to reflect that tool calling is not functional?

Environment

  • Node.js: v22.16.0
  • @openrouter/ai-sdk-provider: 0.7.2
  • ai (Vercel AI SDK): 4.0.12

Workaround

Currently working around this by switching to a different model or by parsing the JSON out of the text response as a fallback; a sketch of that fallback follows below.
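
A minimal sketch of that fallback, reusing the client and taskSchema from the Code Example above (the overall flow, the regex, and the error check are illustrative assumptions, not a definitive implementation):

import { generateObject, generateText, NoObjectGeneratedError } from 'ai';

const messages = [
  { role: 'system' as const, content: 'You are a helpful assistant.' },
  { role: 'user' as const, content: 'Generate tasks based on requirements...' }
];

async function generateTasksWithFallback() {
  try {
    // Preferred path: structured output via tool calling.
    const { object } = await generateObject({
      model: client('moonshotai/kimi-k2'),
      messages,
      schema: taskSchema
    });
    return object;
  } catch (error) {
    // Kimi K2 currently answers in plain text, so generateObject throws.
    if (!NoObjectGeneratedError.isInstance(error)) throw error;

    // Fallback: request plain text and extract the JSON from it.
    const { text } = await generateText({
      model: client('moonshotai/kimi-k2'),
      messages
    });
    const fenced = text.match(/```json\s*([\s\S]*?)```/i);
    const raw = fenced ? fenced[1] : text;
    return taskSchema.parse(JSON.parse(raw));
  }
}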

@thijssmudde

Running into the same issue, do you have something working now?

@ben-vargas (Author)

@thijssmudde - I did implement a workaround but haven't PR'd it into Task Master yet, since I added Groq as a provider instead, which is much faster. You can see the workaround details in this branch though… ben-vargas/ai-claude-task-master@dfa49da
