OpenAI Chat Model Cheat Sheet - Decoding the Chaos

By Brady Stroud

April 17, 2025

OpenAI’s model names (e.g., o4, 4o) are confusing and change often. This guide is my quick reference for picking the best model for any task.

This is a living document and I've probably made some mistakes while writing it, so feel free to suggest changes or additions. I'll keep it updated as OpenAI rolls out new models and features 🚀 If you spot a mistake or something missing, please make a PR - this page is stored as markdown in GitHub using TinaCMS 🦙

OpenAI Chat Model Cheat Sheet

Status Legend:

  • 🟢 current & recommended
  • 🟡 active but being phased out
  • 🔴 legacy/deprecated
| Status | Model | Release date | Strengths / Best use-cases | Trade-offs | Naming note |
| --- | --- | --- | --- | --- | --- |
| 🟢 | GPT-4.1 | 14 April 2025 | Flagship long-context (1M tokens), top-tier coding & RAG | Costliest text-only model | “4.1” = incremental rev over 4 |
| 🟢 | GPT-4.1 mini/nano | 14 April 2025 | ~80–90% of 4.1 accuracy at lower price/latency | Slightly less reasoning depth | “mini / nano” denote distillations |
| 🟢 | GPT-4o | 13 May 2024 | Native text + image + audio (“omni”) real-time chat | Pricier than 4o mini | “o” = omni multimodal |
| 🟢 | GPT-4o mini | July 2024 | Cheaper, faster multimodal | Lower accuracy vs 4o | Same naming rule |
| 🟢 | o4 mini / o4 mini high | 16 April 2025 | Efficient reasoning, image-aware; “high” spends more tokens for reliability | Smaller context than 4.1 | “mini” = size; “high” = higher reasoning-effort setting |
| 🟢 | o3 | 16 April 2025 | Deep step-by-step reasoning, code, math, tool use | Slower & pricier than o4 mini | “o-series” = optimized reasoning line |
| 🟢 | o3 mini / o3 mini high | February–March 2025 | Very cheap STEM reasoning; “high” = extra depth | Outclassed by o4 mini on most tasks | Legacy small variant |
| 🟢 | GPT-3.5 Turbo | 30 November 2022 | Budget workhorse for text | Weaker reasoning than the GPT-4 line | “Turbo” = cost/latency-optimized snapshot |
| 🟡 | GPT-4 Turbo | November 2023 | 128K context, text/vision | Sunset mid-2025 | “Turbo” as above |
| 🟡 | GPT-3.5 (base) | March–November 2022 | Early RLHF model; ChatGPT v1 | Small context, dated knowledge | “3.5” signaled RLHF-tuned step between 3 & 4 |
| 🔴 | GPT-4 | 14 March 2023 | High-quality text; vision via separate endpoint | Retired from ChatGPT 30 April 2025 | Plain version number |
| 🔴 | GPT-4.5 preview | February 2025 | Bridge model while 4o rolled out | API removal slated 14 July 2025 | “.5” = half-step; “preview” = experimental |
| 🔴 | GPT-3 | June 2020 | Historic foundation LLM | Lags on reasoning, cut-off 2020 | Generation number only |
| 🔴 | GPT-2 | February 2019 | Historic: text-generation demo | Small context, safety gaps | Generation number only |
| 🔴 | GPT-1 | June 2018 | Proof of concept | 117M params, research only | First use of “GPT” |

Decoding the names

  • GPT = Generative Pre-trained Transformer.
  • Major number (1, 2, 3, 4, 4.1…) → new architecture/train run.
  • .5/.1 → mid-cycle fine-tune refresh.
  • o suffix → omni (multimodal I/O).
  • o-series without “GPT” (o1, o3, o4…) → separate reasoning line; letters are branding, numbers track generations.
  • mini / nano / high → trade speed/cost vs accuracy.
  • Turbo → snapshot tuned for throughput/cost.
  • Date suffix in MMDD format (e.g., gpt-4-0613) → training-snapshot date.
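The rules above are regular enough to sketch in code. Here's a small Python helper (hypothetical, not part of any OpenAI SDK) that roughly decodes a model-name string using those conventions; real model ids have exceptions, so treat it as illustrative only:

```python
import re


def decode_model_name(name: str) -> dict:
    """Roughly decode an OpenAI-style model id per the naming rules above.

    Illustrative sketch only - real model ids have exceptions.
    """
    n = name.lower()
    info = {
        # o-series reasoning line vs GPT line
        "family": "o-series" if n.startswith("o") else "gpt",
        # a "4o" in the name marks the omni multimodal models
        "multimodal_omni": "4o" in n,
        "size_variant": None,          # mini / nano distillations
        "high_effort": n.endswith("high"),  # "high" reasoning-effort setting
        "turbo": "turbo" in n,         # throughput/cost-optimized snapshot
        "snapshot": None,              # MMDD training-snapshot suffix
    }
    for size in ("mini", "nano"):
        if size in n:
            info["size_variant"] = size
    m = re.search(r"-(\d{4})$", n)     # e.g. gpt-4-0613 -> "0613"
    if m:
        info["snapshot"] = m.group(1)
    return info


print(decode_model_name("gpt-4o-mini"))
print(decode_model_name("gpt-4-0613"))
```

Running it on `o4-mini-high` would flag the o-series family, the mini size, and the high-effort setting - which is exactly the mental parse the cheat sheet is teaching.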