API reference
Google Gemini proxy
POST /v1/generateContent calls the Google Generative Language API through mintoken. The request/response shape matches Google's own API.
Basic example
curl https://api.mintoken.in/v1/generateContent \
  -H "Authorization: Bearer mt_live_xxxxx" \
  -H "X-Provider-Key: <your-google-ai-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-1.5-flash",
    "contents": [
      {"parts": [{"text": "Explain connection pooling"}]}
    ]
  }'
Model field
Unlike the OpenAI and Anthropic APIs, where the model is named in the request body, Google's native API puts the model in the URL path (/models/gemini-1.5-flash:generateContent). To keep mintoken's interface consistent across providers, mintoken reads the model from the request body's model field and constructs the upstream path for you.
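The body-to-path mapping can be sketched like this. This is an illustrative sketch, not mintoken's actual internals; it assumes Google's v1beta path scheme and a hypothetical helper name, upstream_path.

```python
def upstream_path(body: dict) -> str:
    """Map a mintoken-style body (model in the JSON) to Google's
    path-based routing. Assumes the v1beta generateContent endpoint."""
    model = body["model"]  # e.g. "gemini-1.5-flash"
    return f"/v1beta/models/{model}:generateContent"

print(upstream_path({"model": "gemini-1.5-flash"}))
# /v1beta/models/gemini-1.5-flash:generateContent
```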
System instructions
Gemini's equivalent of a system prompt is systemInstruction. Mintoken prepends its compression rules to the first text part of your systemInstruction; if the request has no systemInstruction, mintoken creates one for you.
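A request body with an explicit systemInstruction looks like the following. The field names follow Google's generateContent schema; the instruction text itself is just an example.

```python
import json

# Request body with an explicit systemInstruction. Mintoken would prepend
# its compression rules to the first text part of this instruction.
body = {
    "model": "gemini-1.5-flash",
    "systemInstruction": {
        "parts": [{"text": "Answer tersely, in bullet points."}]
    },
    "contents": [
        {"parts": [{"text": "Explain connection pooling"}]}
    ],
}
print(json.dumps(body, indent=2))
```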
Supported features
- Tool use (function calling) via tools
- Multi-modal input (images, audio, video via base64/URL parts)
- Structured output via responseMimeType
- Safety settings passed through as-is
Common models
- gemini-1.5-flash: default pick; cheap and fast
- gemini-1.5-pro: higher quality, larger context
- gemini-2.0-flash / gemini-2.5-pro: newer generations; check availability on your project