GPT-5.1 Codex Max

High-accuracy Codex for large codebases and migrations.

Community Sentiment

Mixed
Based on Reddit reviews

Community Verdict

Best for Agentic Coding
Based on Reddit reviews

Input Modalities

Text, Images

Output Modalities

Text

Price / 1M tokens

Input: $1.25 · Output: $10.00

Best For

Complex, long-run agentic coding (24+ hrs)
Large-scale refactors (millions of tokens)
Deep debugging with long context
Tasks needing very high reasoning
Projects requiring memory compaction
Scores 77.9% on SWE-bench while using 30% fewer tokens

Avoid For

Strict guardrail needs (ignores read-only modes)
Tasks where file changes must be avoided
Workflows needing reliable, uninterrupted runs
Cost-sensitive use
Simple coding (use Codex Mini)
Context window: 200,000 tokens
Max output tokens: 100,000
Knowledge cutoff: Jan 1, 2024
Reasoning: supported

Parameters

While OpenAI documents a unified parameter set for the Chat Completions and Responses APIs, each model supports only a subset of those parameters and values. The table below lists the parameters GPT-5.1 Codex Max supports and their allowed values.
| Parameter | Description | Path | Supported values |
|---|---|---|---|
| Model | Selects the model version for the request | model | gpt-5.1-codex-max |
| Message roles | Defines the role of a message in the input | messages[].role | developer, system, user, assistant |
| Reasoning effort | Controls the depth of internal reasoning used by the model | reasoning_effort | low, medium, high, xhigh |
| Reasoning summary | Controls whether the model produces a concise or detailed reasoning summary | reasoning_summary | auto, detailed |
| Max output tokens | Maximum output tokens the model may generate | max_completion_tokens | 16 – 100,000 |
| Verbosity | Controls how brief or detailed the generated response is | verbosity | medium |
| Output format | Specifies the output format, including structured JSON schemas | response_format | text, json_object, json_schema |
| Temperature | Controls how random or deterministic the output is | temperature | Not supported |
| Top P | Controls how diverse the output tokens are | top_p | Not supported |
| Presence penalty | Encourages the model to introduce new topics | presence_penalty | Not supported |
| Frequency penalty | Reduces repetition of the same words or phrases | frequency_penalty | Not supported |
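As a sketch, a Chat Completions request body using only the parameters listed as supported might look like the following. No network call is made here; the payload is assembled and sanity-checked locally, and the prompt text is illustrative:

```python
# Sketch: assemble a request body for gpt-5.1-codex-max using only the
# parameters this model supports. Sampling knobs (temperature, top_p,
# presence/frequency penalties) are deliberately absent.

ALLOWED_EFFORT = {"low", "medium", "high", "xhigh"}
UNSUPPORTED = {"temperature", "top_p", "presence_penalty", "frequency_penalty"}

def build_request(prompt: str, effort: str = "high") -> dict:
    body = {
        "model": "gpt-5.1-codex-max",
        "messages": [
            {"role": "developer", "content": "You are a careful refactoring agent."},
            {"role": "user", "content": prompt},
        ],
        "reasoning_effort": effort,
        "max_completion_tokens": 100_000,   # allowed range: 16 – 100,000
        "response_format": {"type": "text"},
    }
    # Guard against parameters the table marks "Not supported".
    assert not UNSUPPORTED & body.keys()
    assert body["reasoning_effort"] in ALLOWED_EFFORT
    assert 16 <= body["max_completion_tokens"] <= 100_000
    return body

req = build_request("Rename the legacy module across the repo.", effort="xhigh")
```

Because unsupported sampling parameters cause request errors on reasoning models, keeping them out of the payload entirely is simpler than setting them to defaults.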

Pricing

GPT-5.1 Codex Max API pricing is based on token usage for input and output. Prices are listed per 1M tokens, with lower rates for cached input. Tool-specific features may add per-call fees.
Text tokens
Per 1M tokens
Input$1.25
Cached input$0.13
Output$10.00
Example costs (GPT-5.1 Codex Max)

| Task | Approx. cost |
|---|---|
| Large codebase analysis (100k input tokens) | ~$0.12 |
| Multi-file refactoring session | ~$0.50 – $2.00 |
| 24-hour autonomous coding run | ~$5.00 – $20.00 |
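The example figures above follow directly from the per-1M-token rates. A minimal cost estimator, using only the rates listed on this page:

```python
# Sketch: estimate request cost from this page's per-1M-token rates.
RATES = {"input": 1.25, "cached_input": 0.13, "output": 10.00}  # USD per 1M tokens

def cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return estimated USD cost; cached_tokens is the cached share of input."""
    fresh = input_tokens - cached_tokens
    usd = (fresh * RATES["input"]
           + cached_tokens * RATES["cached_input"]
           + output_tokens * RATES["output"]) / 1_000_000
    return round(usd, 4)

# The "large codebase analysis" row: ~100k input tokens, negligible output.
print(cost(100_000, 0))  # 0.125, i.e. the ~$0.12 figure above
```

Note how output tokens dominate: at $10.00 per 1M they cost eight times as much as fresh input, so long agentic runs are priced mostly by what the model writes.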

Modalities

What the model can accept and produce
TextInput and output
ImagesInput only
AudioNot supported
VideoNot supported

Features

Platform-level capabilities
StreamingSupported
Function callingSupported
Structured outputsSupported
Fine-tuningNot supported
DistillationNot supported
Predicted outputsNot supported
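Since structured outputs are supported, a `json_schema` response_format can pin the shape of the reply. A minimal sketch; the schema name and fields here are invented for illustration, not part of this page:

```python
import json

# Illustrative response_format for structured outputs. The "refactor_plan"
# schema is a hypothetical example; only the outer shape (type: json_schema,
# strict, schema) follows the structured-outputs format.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "refactor_plan",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "files_to_edit": {"type": "array", "items": {"type": "string"}},
                "risk": {"type": "string", "enum": ["low", "medium", "high"]},
            },
            "required": ["files_to_edit", "risk"],
            "additionalProperties": False,
        },
    },
}

# Plain JSON throughout, so it serializes cleanly into a request body.
serialized = json.dumps(response_format)
```

With `strict` schemas the model's reply is constrained to parse as the declared object, which removes the need for ad-hoc output parsing in agent loops.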

Tools

Tools supported by this model when using the Responses API.
Web searchSupported
File searchSupported
Image generationNot supported
Code interpreterSupported
MCPSupported
Computer useNot supported

Snapshots

GPT-5.1 Codex Max model snapshots ensure stable behavior by locking a specific version. See all available snapshots and aliases below.
GPT-5.1 Codex Max
gpt-5.1-codex-max ↪ gpt-5.1-codex-max
