GPT-5.2
GPT-5.2 is OpenAI's most advanced frontier model, released on December 11, 2025. It features a massive 400K token context window, 128K max output tokens, and state-of-the-art performance on coding, math, and scientific tasks.
Key Features
- 400K Context Window: Process entire codebases or document collections in a single request
- 128K Max Output: Generate comprehensive responses without truncation
- Enhanced Reasoning: New xhigh reasoning effort level for expert-level problem-solving
- Multimodal: Support for text and image inputs, with text outputs
- Knowledge Cutoff: August 31, 2025
Model Variants
| Variant | Best For | Reasoning Levels |
|---|---|---|
| GPT-5.2 | General tasks, balanced speed/quality | none to xhigh |
| GPT-5.2 Pro | Scientific research, expert-level accuracy | medium, high, xhigh |
Basic Usage
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ohmygpt.com/v1",
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
)

print(response.choices[0].message.content)
```
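The same endpoint can also stream tokens as they are generated. This is a minimal sketch using the standard OpenAI SDK stream=True flag, assuming the gateway passes server-sent events through unchanged:

```python
# Streaming sketch: assumes standard Chat Completions streaming is supported.
stream = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms."}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue  # some providers send housekeeping chunks without choices
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```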
Using Reasoning Mode
Control thinking depth with the reasoning_effort parameter for complex tasks:
```python
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function to find the longest palindromic substring."
        }
    ],
    reasoning_effort="high",
    max_completion_tokens=16384,
)

print(response.choices[0].message.content)
```
Reasoning Effort Levels
| Level | Description | Use Case |
|---|---|---|
| none | No reasoning (default for GPT-5.2) | Fast responses, simple queries |
| minimal | Minimal reasoning | Quick tasks, low latency |
| low | Light reasoning | Balanced speed and accuracy |
| medium | Moderate reasoning | General problem-solving |
| high | Deep reasoning | Complex math, coding, analysis |
| xhigh | Maximum reasoning (new in 5.2) | Scientific research, expert-level tasks |
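To see where the new top level fits, the sketch below pairs xhigh with the Pro variant from the table above. The gpt-5.2-pro model ID is an assumption, so confirm the exact name against your provider's model list:

```python
# Hypothetical model ID: assumes the Pro variant is exposed as "gpt-5.2-pro".
# Reuses the `client` from Basic Usage above.
response = client.chat.completions.create(
    model="gpt-5.2-pro",
    messages=[
        {"role": "user", "content": "Propose an experiment to distinguish two competing models of high-temperature superconductivity."}
    ],
    reasoning_effort="xhigh",  # Pro supports medium, high, xhigh
    max_completion_tokens=32768,
)
print(response.choices[0].message.content)
```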
Vision Example
Analyze images, charts, and documents with multimodal input:
```python
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Analyze this chart and summarize the key trends."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"}
                }
            ]
        }
    ],
)
```
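For local images, the same content format accepts a base64 data URL instead of a remote link. A minimal sketch, assuming a chart.png sits next to the script:

```python
import base64

# Encode a local file as a data URL for the image_url content part.
with open("chart.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```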
Long Context Example
Process entire codebases or large documents with the 400K context window:
```python
# Read a large codebase or document
with open("large_codebase.txt", "r") as f:
    codebase = f.read()

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {
            "role": "system",
            "content": "You are a senior software architect reviewing code."
        },
        {
            "role": "user",
            "content": f"Review this codebase and identify potential security vulnerabilities:\n\n{codebase}"
        }
    ],
    reasoning_effort="high",
    max_completion_tokens=32768,
)
```
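Before sending inputs this large, it can be worth estimating the token count locally so the request stays inside the 400K window. A sketch using tiktoken's o200k_base encoding as an approximation, since the exact GPT-5.2 tokenizer may not be registered in the library:

```python
import tiktoken

# Approximation: o200k_base is the closest registered encoding; the real
# GPT-5.2 tokenizer may count slightly differently.
enc = tiktoken.get_encoding("o200k_base")

with open("large_codebase.txt", "r") as f:
    codebase = f.read()

n_input = len(enc.encode(codebase))
print(f"~{n_input} input tokens")

# Leave headroom for the system prompt, message overhead, and the reply.
if n_input + 32768 > 400_000:
    raise ValueError("Input is too large for the 400K context window")
```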
Best Practices
- Match Reasoning to Task Complexity: Use none/minimal for simple queries and high/xhigh for complex reasoning
- Use max_completion_tokens: Required when using reasoning models with the Chat Completions API
- Leverage Long Context: Process entire codebases without chunking for better coherence
- Consider GPT-5.2 Pro: For scientific research or expert-level accuracy, use the Pro variant
- Cache When Possible: Cached inputs are 10x cheaper; reuse prompt prefixes where applicable (see the sketch below)
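How you structure the prompt determines whether caching can kick in. OpenAI-compatible APIs typically cache on a byte-identical prompt prefix, so keep the large static content first and vary only the final message. A minimal sketch along those lines, assuming prefix-based caching and reusing the client and codebase from the examples above:

```python
# Static prefix: keep these messages byte-identical across requests so the
# provider can serve them from cache (assumes prefix-based prompt caching).
static_prefix = [
    {"role": "system", "content": "You are a senior software architect reviewing code."},
    {"role": "user", "content": f"Here is the codebase:\n\n{codebase}"},
]

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5.2",
        # Only the final message varies; the shared prefix stays cacheable.
        messages=static_prefix + [{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(ask("Which modules have the weakest input validation?"))
print(ask("Summarize the authentication flow."))
```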