LongCat API Open Platform Interface Documentation
Overview
The LongCat API Open Platform provides AI model proxy services for the LongCat-Flash-Chat model and is compatible with both the OpenAI and Anthropic API formats. This document follows standard API documentation conventions.
Base URL
Production endpoint: https://api.longcat.chat
Authentication
All API requests must be authenticated with an API key in the Authorization header:
Authorization: Bearer YOUR_API_KEY
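If you call the API directly over HTTP, you can attach the key once to a client session. The following is an illustrative sketch using Python's requests library (our choice of client, not part of any official SDK):
import requests

API_KEY = "your-api-key"  # replace with your LongCat API key

# Attach the bearer token and JSON content type to every request on this session.
session = requests.Session()
session.headers.update({
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
})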
Endpoints
Chat Completions
POST /openai/v1/chat/completions
Creates a chat completion using the OpenAI-compatible format.
Request headers
Authorization: Bearer YOUR_API_KEY (required)
Content-Type: application/json (required)
Request body
Field | Type | Required | Description |
---|---|---|---|
model | string | Yes | Model identifier (only LongCat-Flash-Chat is supported) |
messages | array | Yes | Array of message objects; only text input is allowed |
stream | boolean | No | Whether to stream the response (default: false) |
max_tokens | integer | No | Maximum number of tokens to generate (default: 1024) |
temperature | number | No | Sampling temperature, in the range 0 to 1 |
top_p | number | No | Nucleus sampling parameter |
Message object
Field | Type | Required | Description |
---|---|---|---|
role | string | Yes | Role of the message author. Must be one of: • system - sets the assistant's behavior and context • user - human user message • assistant - AI assistant message (used for conversation history) |
content | string | Yes | Message content, as a plain text string. |
Request example
{
"model": "LongCat-Flash-Chat",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello, how are you?"
}
],
"stream": false,
"max_tokens": 150,
"temperature": 0.7
}
Response (non-streaming)
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "LongCat-Flash-Chat",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 20,
"completion_tokens": 15,
"total_tokens": 35
}
}
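As an illustrative sketch (not part of any official SDK), the request and response above can be exercised directly with Python's requests library; the field paths below follow the response schema shown here:
import requests

url = "https://api.longcat.chat/openai/v1/chat/completions"
headers = {"Authorization": "Bearer your-api-key", "Content-Type": "application/json"}
payload = {
    "model": "LongCat-Flash-Chat",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": False,
}

resp = requests.post(url, headers=headers, json=payload, timeout=60)
resp.raise_for_status()
data = resp.json()

# Pull the assistant reply and token usage out of the documented response shape.
reply = data["choices"][0]["message"]["content"]
total_tokens = data["usage"]["total_tokens"]
print(reply)
print("total tokens:", total_tokens)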
Response (streaming)
When stream: true is set, the response is returned as Server-Sent Events (SSE):
Content-Type: text/event-stream
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"LongCat-Flash-Chat","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"LongCat-Flash-Chat","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"LongCat-Flash-Chat","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}
data: [DONE]
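A minimal sketch of consuming this stream with Python's requests library (our choice of client, not an official one); it reads the data: lines shown above, stops at the [DONE] sentinel, and prints each content delta:
import json
import requests

url = "https://api.longcat.chat/openai/v1/chat/completions"
headers = {"Authorization": "Bearer your-api-key", "Content-Type": "application/json"}
payload = {
    "model": "LongCat-Flash-Chat",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,
}

with requests.post(url, headers=headers, json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)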
Anthropic Messages
POST /anthropic/v1/messages
Creates a message using Anthropic's Claude API format.
Request headers
Authorization: Bearer YOUR_API_KEY (required)
Content-Type: application/json (required)
Request body
Field | Type | Required | Description |
---|---|---|---|
model | string | Yes | Model identifier (only LongCat-Flash-Chat is supported) |
messages | array | Yes | Array of message objects |
max_tokens | integer | No | Maximum number of tokens to generate |
stream | boolean | No | Whether to stream the response (default: false) |
temperature | number | No | Sampling temperature, in the range 0 to 1 |
top_p | number | No | Nucleus sampling parameter |
system | string | No | System message used to set context |
Message object
Field | Type | Required | Description |
---|---|---|---|
role | string | Yes | Role of the message author. Must be one of: • user - human user message • assistant - assistant message (used for conversation history). Note: system messages are passed separately via the system parameter |
content | string | Yes | Message content; only plain text strings are supported |
Request example
{
"model": "LongCat-Flash-Chat",
"max_tokens": 1000,
"messages": [
{
"role": "user",
"content": "Hello, LongCat"
}
],
"stream": false,
"temperature": 0.7
}
Response (non-streaming)
{
"id": "msg_123",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "Hello! How can I help you today?"
}
],
"model": "LongCat-Flash-Chat",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 10,
"output_tokens": 8
}
}
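Because content is a list of blocks rather than a single string, extracting the reply text means joining the text blocks. A small helper sketch based on the schema above:
def extract_text(message: dict) -> str:
    # Join the text of every "text" content block in an Anthropic-style response.
    return "".join(
        block["text"]
        for block in message.get("content", [])
        if block.get("type") == "text"
    )

# Example: extract_text(response_json) -> "Hello! How can I help you today?"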
Response (streaming)
When stream: true is set, the response follows Anthropic's SSE format:
Content-Type: text/event-stream
event: message_start
data: {"type": "message_start", "message": {"id": "msg_123", "type": "message", "role": "assistant", "content": [], "model": "LongCat-Flash-Chat", "stop_reason": null, "stop_sequence": null, "usage": {"input_tokens": 10, "output_tokens": 0}}}
event: content_block_start
data: {"type": "content_block_start", "index": 0, "content_block": {"type": "text", "text": ""}}
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hello"}}
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "!"}}
event: content_block_stop
data: {"type": "content_block_stop", "index": 0}
event: message_delta
data: {"type": "message_delta", "delta": {"stop_reason": "end_turn", "stop_sequence": null}, "usage": {"output_tokens": 8}}
event: message_stop
data: {"type": "message_stop"}
Error Responses
The API uses conventional HTTP status codes to indicate success or failure:
HTTP status codes
Status code | Status name | Description |
---|---|---|
200 | OK | The request succeeded |
400 | Bad Request | Invalid request parameters or malformed JSON |
401 | Unauthorized | Missing or invalid API key |
403 | Forbidden | The API key does not have permission to access the requested resource |
429 | Too Many Requests | Rate limit exceeded |
500 | Internal Server Error | The server encountered an unexpected condition |
502 | Bad Gateway | Invalid response from an upstream server |
503 | Service Unavailable | The server is temporarily unavailable |
Error response format
All errors return a JSON object with the following structure:
{
"error": {
"message": "人类可读的错误描述",
"type": "error_type_identifier",
"code": "specific_error_code",
"param": "parameter_name_if_applicable"
}
}
Error types and codes
Error type | Error code | HTTP status | Description |
---|---|---|---|
authentication_error | invalid_api_key | 401 | The provided API key is invalid |
authentication_error | missing_api_key | 401 | No API key was provided |
permission_error | insufficient_quota | 403 | The API key has insufficient quota |
invalid_request_error | invalid_parameter | 400 | A parameter value is invalid |
invalid_request_error | missing_parameter | 400 | A required parameter is missing |
invalid_request_error | invalid_json | 400 | The request body is not valid JSON |
rate_limit_error | rate_limit_exceeded | 429 | Too many requests in a short period |
server_error | internal_error | 500 | Internal server error |
Error response examples
Invalid API key
{
"error": {
"message": "Invalid API key provided",
"type": "authentication_error",
"code": "invalid_api_key"
}
}
Missing required parameter
{
"error": {
"message": "Missing required parameter: 'messages'",
"type": "invalid_request_error",
"code": "missing_parameter",
"param": "messages"
}
}
Rate limit exceeded
{
"error": {
"message": "Rate limit exceeded. Please try again in 60 seconds",
"type": "rate_limit_error",
"code": "rate_limit_exceeded"
}
}
Rate Limits
Rate limits are enforced per API key. If you exceed the limit, the API returns a 429 status code.
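A sketch of handling 429s with exponential backoff, using Python's requests library (the Retry-After header is an assumption on our part and is not documented for this API):
import time
import requests

url = "https://api.longcat.chat/openai/v1/chat/completions"
headers = {"Authorization": "Bearer your-api-key", "Content-Type": "application/json"}
payload = {"model": "LongCat-Flash-Chat", "messages": [{"role": "user", "content": "Hello!"}]}

data = None
for attempt in range(5):
    resp = requests.post(url, headers=headers, json=payload, timeout=60)
    if resp.status_code == 429:
        # Back off exponentially; honor Retry-After if the server happens to send it
        # (hypothetical here; the header is not documented for this API).
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
        continue
    if resp.status_code != 200:
        err = resp.json().get("error", {})
        raise RuntimeError(f"{err.get('type')}/{err.get('code')}: {err.get('message')}")
    data = resp.json()
    break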
SDK Compatibility
This API is designed to be compatible with:
- The OpenAI Python SDK (for the /openai/ endpoints)
- The Anthropic Python SDK (for the /anthropic/ endpoints)
- Any HTTP client that supports the corresponding API format
Examples
Using the OpenAI Python SDK
from openai import OpenAI

# Configure the client for the LongCat-Flash-Chat API (openai>=1.0 style).
# The base URL points at the documented /openai/v1 prefix.
client = OpenAI(
    base_url="https://api.longcat.chat/openai/v1",
    api_key="your-api-key"
)

response = client.chat.completions.create(
    model="LongCat-Flash-Chat",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
Using the Anthropic Python SDK
import anthropic

# Configure the client for the LongCat-Flash-Chat API; the bearer token is sent
# via a default Authorization header because this API uses Bearer authentication.
client = anthropic.Anthropic(
    api_key="your-api-key",
    base_url="https://api.longcat.chat",
    default_headers={
        "Authorization": "Bearer your-api-key",
    },
)

message = client.messages.create(
    model="LongCat-Flash-Chat",
    max_tokens=150,
    messages=[
        {"role": "user", "content": "Hello, LongCat!"}
    ]
)
Using cURL
# OpenAI-style request
curl -X POST https://api.longcat.chat/openai/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "LongCat-Flash-Chat",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": false
}'
# Anthropic-style request
curl -X POST https://api.longcat.chat/anthropic/v1/messages \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "LongCat-Flash-Chat",
"max_tokens": 1000,
"messages": [{"role": "user", "content": "Hello!"}]
}'
📋 Need help? See our FAQ for frequently asked questions and troubleshooting guidance.