
o3-mini
Fast reasoning model with efficient output generation.
- Fast Reasoning -- Quick analytical thinking
- Efficient -- Low cost per token
- Low Cost -- Budget-friendly reasoning
- Compact -- Small but capable
API Documentation
View complete API reference with all parameters and examples.
Advanced Features
Streaming
Enable real-time streaming responses with Server-Sent Events.
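With the stream flag set, the endpoint emits Server-Sent Events rather than a single JSON body. A minimal Python sketch of accumulating the streamed text follows; it assumes the OpenAI-style chunk schema (`choices[0].delta.content`, terminated by `data: [DONE]`), and the sample lines are simulated rather than fetched from the API:

```python
import json

def collect_stream_text(sse_lines):
    """Accumulate assistant text from OpenAI-style SSE lines.

    Each event looks like: data: {"choices":[{"delta":{"content":"Hi"}}]}
    and the stream ends with: data: [DONE]
    """
    text = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)

# Two simulated chunks followed by the terminator:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(collect_stream_text(sample))  # prints "Hello"
```

The request body that enables this mode is shown below.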
```json
{
  "model": "o3-mini",
  "stream": true,
  "messages": [...]
}
```
Function Calling (Tools)
Enable the model to use tools and call functions.
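When a tool is defined, the model may reply with a `tool_calls` entry instead of text; your code then executes the call and sends the result back as a `role: "tool"` message. A hedged sketch of that dispatch step (the local `get_weather` implementation and the simulated assistant message are illustrative, not part of the API):

```python
import json

def get_weather(location):
    """Illustrative local implementation backing the get_weather tool."""
    return {"location": location, "forecast": "sunny"}

def run_tool_calls(message):
    """Execute each tool call in an assistant message and build the
    role="tool" follow-up messages the chat API expects."""
    results = []
    for call in message.get("tool_calls", []):
        fn = call["function"]
        if fn["name"] == "get_weather":
            args = json.loads(fn["arguments"])  # arguments arrive as a JSON string
            output = get_weather(**args)
        else:
            output = {"error": f"unknown tool {fn['name']}"}
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(output),
        })
    return results

# Simulated assistant reply requesting the tool:
assistant_msg = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": '{"location": "Tokyo"}'},
    }],
}
print(run_tool_calls(assistant_msg))
```

The tool definition itself goes in the request body, as in the example below.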
```json
{
  "model": "o3-mini",
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get current weather for a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
      }
    }
  }],
  "messages": [{"role": "user", "content": "What's the weather in Tokyo?"}]
}
```
JSON Mode
Get structured JSON responses from the model.
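A sketch of the round trip: build the request with `response_format`, then parse the reply. In `json_object` mode the message content is guaranteed to be parseable JSON, so a plain `json.loads` is safe; the reply string here is simulated:

```python
import json

# Request body asking for JSON output. Note that most providers also
# require the prompt itself to mention JSON in json_object mode.
request = {
    "model": "o3-mini",
    "response_format": {"type": "json_object"},
    "messages": [{"role": "user",
                  "content": "Extract info as JSON: John is 30 years old"}],
}

# Simulated response content from the model:
reply = '{"name": "John", "age": 30}'
info = json.loads(reply)
print(info["name"], info["age"])  # prints: John 30
```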
```json
{
  "model": "o3-mini",
  "response_format": {"type": "json_object"},
  "messages": [{"role": "user", "content": "Extract info as JSON: John is 30 years old"}]
}
```
API Parameters Reference
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier (e.g., o3-mini) |
| messages | array | Yes | Array of message objects with role and content |
| max_tokens | integer | No | Maximum tokens in the response |
| stream | boolean | No | Enable streaming responses (SSE) |
| temperature | number | No | Sampling temperature (0.0 - 2.0) |
| top_p | number | No | Nucleus sampling threshold (0.0 - 1.0) |
| tools | array | No | Function calling tools definition |
| response_format | object | No | Output format (e.g., json_object) |
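Tying the table together, a minimal request body exercising the common optional parameters (the specific values are illustrative, not recommendations):

```python
# Only model and messages are required; everything else is optional.
request = {
    "model": "o3-mini",
    "messages": [{"role": "user", "content": "Summarize SSE in one line."}],
    "max_tokens": 256,
    "temperature": 1.0,
    "top_p": 0.9,
    "stream": False,
}

REQUIRED = {"model", "messages"}
missing = REQUIRED - request.keys()
assert not missing, f"missing required parameters: {missing}"
```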
Pricing
Billing: Cost = (input_tokens * input_price + output_tokens * output_price) / 1,000,000, where input_price and output_price are quoted in USD per 1M tokens.
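A worked example of the billing formula, using the o3-mini rates quoted on this page:

```python
def cost_usd(input_tokens, output_tokens,
             input_price=0.486, output_price=1.942):
    """Apply the billing formula above. Prices are USD per 1M tokens;
    the defaults are the o3-mini rates quoted on this page."""
    return (input_tokens * input_price
            + output_tokens * output_price) / 1_000_000

# 10,000 prompt tokens + 2,000 completion tokens:
print(round(cost_usd(10_000, 2_000), 6))  # prints 0.008744
```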
About o3-mini
o3-mini is a Large Language Model API provided by OpenAI: a fast reasoning model with efficient output generation. Through the API Models platform, you can access this model via a unified API at prices significantly lower than official rates. Current pricing: Input $0.486, Output $1.942 per 1M tokens.
Key Features
- Fast Reasoning -- Quick analytical thinking
- Efficient -- Low cost per token
- Low Cost -- Budget-friendly reasoning
- Compact -- Small but capable
Use Cases
Chatbot & Customer Support
Build intelligent conversational systems to automatically answer user queries and improve service efficiency.
Content Generation
Automatically write articles, emails, ad copy, and other text content to boost productivity.
Code Assistant
Assist with code writing, debugging, and code review to accelerate software development.
Data Analysis
Understand and analyze unstructured data, extract key insights, and generate summary reports.
Why API Models
- Unified API -- One API key to access all models, no need to register on multiple platforms
- Cost Savings -- 60-95% cheaper than official pricing, ideal for indie developers and startups
- Instant Access -- Start using immediately after signup, supports Stripe and Alipay payments
- Full Documentation -- Detailed API docs with code examples in cURL, Python, and Node.js
Frequently Asked Questions
How much does o3-mini cost?
o3-mini is available through API Models at: Input: $0.486, Output: $1.942 per 1M tokens. This is up to 95% cheaper than official pricing.
How to use o3-mini API?
Sign up at API Models, get your API key, and call our unified API endpoint. We provide detailed API documentation with code examples in cURL, Python, and Node.js.
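The steps above can be sketched in Python using only the standard library. The base URL here is a placeholder and the environment variable name is hypothetical; substitute the values from your API Models dashboard. The sketch assumes an OpenAI-compatible chat completions route:

```python
import json
import os
import urllib.request

# Placeholder base URL and hypothetical env var -- replace with the
# endpoint and key from your API Models dashboard.
API_BASE = "https://api.example.com/v1"
API_KEY = os.environ.get("APIMODELS_API_KEY", "sk-...")

def build_request(messages, model="o3-mini"):
    """Build the HTTP request for a chat completion call."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )

def chat(messages, model="o3-mini"):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(messages, model)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# print(chat([{"role": "user", "content": "Hello!"}]))  # needs a live key
```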
What is the difference between API Models and the official OpenAI API?
API Models offers the same o3-mini model at 60-95% lower cost through our aggregation platform. We provide a unified API interface, so you do not need separate accounts for each provider -- one API key gives access to all models.
What payment methods are supported?
We support Stripe (Visa, Mastercard, and other international cards) and Alipay. Credits are available instantly after payment.