
LongCat API Platform FAQ

General Questions

Q: What services does the LongCat API Platform provide?

A: The LongCat API Platform provides AI model proxy services for the LongCat series models. Because the platform is compatible with the OpenAI and Anthropic API formats, you can use existing SDKs and tools to access the LongCat series models.

Q: What registration/login methods are supported by the LongCat API Platform?

A: Users in Mainland China can register/login using their mobile phone number. Users outside Mainland China can register/login using either their mobile phone number or email.

Q: What should I do if I do not receive the verification code when logging in with my mobile phone number on the LongCat API Platform?

A: Mainland China users who do not receive the verification code can call the customer service hotline at 1010-7888 for assistance. Users outside Mainland China can switch to email login. If you still do not receive the verification code, please contact us by email at longcat-team@meituan.com.

Authentication & API Keys

Q: How do I get an API Key?

A: After successful account registration, the system automatically creates an API Key named "default" for each account. After logging into the open platform, go to the API Keys page to view it.

Q: Can I purchase additional API Key token quota?

A: No, the platform is currently in the public beta phase and does not support paid quota purchases.

Q: What should I do if my token quota is insufficient?

A: You can apply to increase your free token quota on the Usage page. Once your application is approved, your free quota will be increased to 5,000,000 tokens per day. For additional quota, please contact us by email at longcat-team@meituan.com.

Q: Where do I put my API Key?

A: Include your API Key in the Authorization header:

Authorization: Bearer YOUR_API_KEY
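As a sketch using only the Python standard library (YOUR_API_KEY is a placeholder, and build_request is a name invented here; the path is the OpenAI-compatible route), this builds a request with the header attached:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real key
URL = "https://api.longcat.chat/openai/native/chat/completions"

def build_request(payload: dict) -> urllib.request.Request:
    """Attach the Bearer token and JSON body to a POST request."""
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request({
    "model": "LongCat-Flash-Chat",
    "messages": [{"role": "user", "content": "Hello"}],
})
# Sending it (urllib.request.urlopen(req)) would perform the actual API call.
```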

Q: Do I need separate API Keys for different models?

A: No, you only need one LongCat API Platform API Key to access our LongCat series model service.

Q: Can I use the same API Key for both OpenAI and Anthropic endpoints?

A: Yes, the same LongCat API Platform API Key works for both the /openai/ and /anthropic/ endpoints.

Q: My API Key isn't working, what should I check?

A: Verify that:

  • The API Key is correctly formatted with "Bearer " prefix
  • You're using the correct base URL
  • Your account has sufficient quota (500,000 free tokens provided daily, paid recharge not currently available)

Request Format & Parameters

Q: What's the difference between the OpenAI and Anthropic endpoints?

A:

  • OpenAI endpoint (/openai/native/chat/completions) follows OpenAI's format with system/user/assistant roles
  • Anthropic endpoint (/anthropic/v1/messages) follows Claude's format with separate system parameter
  • Choose based on which format you prefer to use
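For illustration, here is the same conversation expressed as a request body in each format (the values are examples, not requirements):

```python
# OpenAI format: the system prompt is just another message in the list.
openai_body = {
    "model": "LongCat-Flash-Chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
}

# Anthropic format: the system prompt is a separate top-level field,
# and max_tokens is required.
anthropic_body = {
    "model": "LongCat-Flash-Chat",
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
}
```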

Q: How do I enable streaming responses?

A: Set "stream": true in your request body. The response will be returned as Server-Sent Events (SSE).
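A minimal parser sketch for the SSE event stream (parse_sse_lines is a name invented here, and the chunk schema assumes OpenAI-style streaming deltas):

```python
import json

def parse_sse_lines(lines):
    """Yield the JSON payload of each `data:` event, stopping at [DONE]."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        yield json.loads(data)

# Example chunks as they might arrive over the wire:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(ev["choices"][0]["delta"]["content"] for ev in parse_sse_lines(sample))
# text == "Hello"
```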

Q: What's the maximum token limit?

A: LongCat-Flash-Chat supports up to 8K output tokens.

Error Handling

Q: What causes a 401 Unauthorized error?

A: This typically means:

  • Missing or invalid API Key
  • Incorrect Authorization header format

Q: What causes a 429 Rate Limit error?

A: You're sending requests too quickly. Implement exponential backoff and retry logic, or reduce your request rate.
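A small sketch of that pattern (backoff_delay and should_retry are hypothetical helpers, not part of any SDK):

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: wait somewhere in [0, min(cap, base * 2^attempt)]."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def should_retry(status_code: int, attempt: int, max_attempts: int = 5) -> bool:
    """Retry rate-limit (429) and transient server errors until the attempt budget is spent."""
    return status_code in (429, 500, 502, 503) and attempt < max_attempts
```

In a request loop, sleep for backoff_delay(attempt) whenever should_retry returns True, then try again.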

Q: What causes a 400 Bad Request error?

A: Check that:

  • Your JSON is properly formatted
  • All required parameters are included
  • Parameter values are within valid ranges
  • Message format matches the endpoint requirements

Q: What does "context_length_exceeded" mean?

A: Your input (messages + max_tokens) exceeds the model's maximum context window. Try:

  • Reducing the number of messages
  • Shortening message content
  • Reducing max_tokens parameter
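A minimal trimming helper along those lines (a sketch; trim_messages is a name invented here, and it assumes string contents with OpenAI-style role fields, using character counts as a cheap proxy for tokens):

```python
def trim_messages(messages: list, max_chars: int) -> list:
    """Drop the oldest non-system turns until the rough size fits under max_chars."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def size(msgs):
        return sum(len(m["content"]) for m in msgs)

    while rest and size(system) + size(rest) > max_chars:
        rest.pop(0)  # discard the oldest turn first
    return system + rest

history = [
    {"role": "system", "content": "sys"},
    {"role": "user", "content": "x" * 50},
    {"role": "user", "content": "recent question"},
]
trimmed = trim_messages(history, max_chars=30)
```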

SDK Integration

Q: Can I use the official OpenAI Python SDK?

A: Yes. With SDK v1 and later, pass the base URL when creating the client (earlier versions set openai.api_base instead):

client = openai.OpenAI(base_url="https://api.longcat.chat/openai/", api_key="YOUR_API_KEY")

Q: Can I use the official Anthropic Python SDK?

A: Yes, configure the base URL:

client = anthropic.Anthropic(base_url="https://api.longcat.chat/anthropic")

Q: Do I need to modify my existing code?

A: Minimal changes are required: usually just updating the base URL and API Key. The request/response formats remain compatible.

Performance

Q: How fast are the responses?

A: Response times depend on:

  • Request length and complexity
  • Server load and geographic location
  • Whether you're using streaming or non-streaming

Q: Is there a rate limit?

A: Yes, rate limits are enforced per API Key. When exceeded, you'll receive a 429 status code.

Q: How do I handle timeouts?

A: Implement proper timeout handling in your client:

  • Set reasonable connection and read timeouts
  • Use exponential backoff for retries
  • Consider using streaming for long responses
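A sketch of such a retry wrapper (call_with_retries and flaky are names invented here; in practice fn would be your HTTP request made with a connection/read timeout set):

```python
import time

def call_with_retries(fn, max_attempts: int = 3, base_delay: float = 1.0):
    """Call fn(); on a timeout, back off exponentially and retry up to max_attempts times."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # budget spent: surface the timeout to the caller
            time.sleep(base_delay * (2 ** attempt))

# Demo with a stand-in for the real API call: fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError
    return "ok"

result = call_with_retries(flaky, base_delay=0.0)  # "ok" after two retries
```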

Troubleshooting

Q: My streaming response was suddenly interrupted. What should I do?

A: Check:

  • Network connectivity issues
  • Client timeout settings
  • Server-side processing limits
  • Proper SSE parsing in your client

Q: How can I debug request/response issues?

A:

  • Enable detailed logging in your HTTP client
  • Check response headers for additional error information
  • Verify request format matches our documentation exactly
  • Test with simple requests first, then add complexity

Common Integration Patterns

Q: How do I monitor API usage?

A: Implement logging and monitoring:

  • Log request/response times
  • Track token usage
  • Monitor error rates
  • Set up alerts for quota limits
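As a sketch of the first two points (record_usage is a hypothetical helper; it assumes the OpenAI-format usage field in responses):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("longcat")

def record_usage(response: dict, elapsed: float, totals: dict) -> None:
    """Accumulate token counts from a response's `usage` field and log the call time."""
    usage = response.get("usage", {})
    for key in ("prompt_tokens", "completion_tokens"):
        totals[key] = totals.get(key, 0) + usage.get(key, 0)
    log.info("call took %.2fs, cumulative usage: %s", elapsed, totals)

totals = {}
record_usage({"usage": {"prompt_tokens": 12, "completion_tokens": 40}},
             elapsed=0.8, totals=totals)
```

Feeding every response through a hook like this gives you the running token totals to compare against your daily quota.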
Last Updated: 9/22/25, 5:33 PM
Contributors: zhuqi09