OpenAI has introduced two new AI models, GPT-5.4 mini and nano, aimed at making artificial intelligence faster, more affordable, and easier to use for businesses and developers. The new models are smaller versions of the company’s main GPT-5.4 system but are designed to handle tasks more quickly and at lower cost, especially in high-volume environments where speed matters.
GPT-5.4 mini is positioned as a balanced model, offering strong performance across coding, reasoning, and handling both text and images. According to the company, it is more than twice as fast as its previous version while delivering performance close to the larger GPT-5.4 model in many cases. GPT-5.4 nano, on the other hand, is the smallest and most cost-efficient option. It is built for simpler and repetitive tasks such as data sorting, basic coding support, and information extraction, where quick response and low cost are more important than deep analysis.
The launch comes shortly after OpenAI rolled out the model more broadly across its platforms, including ChatGPT and Codex, as part of its efforts to expand AI capabilities for professional and enterprise use. With these new models, the company is taking a practical approach: instead of relying only on large, heavy AI systems, organisations can now combine different models. Larger models can handle complex thinking and decision-making, while smaller models like mini and nano quickly complete routine tasks in the background.
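In practice, this mix-and-match approach often takes the form of a simple routing helper that sends each task to the cheapest model that can handle it. The sketch below is purely illustrative: the model identifiers (`gpt-5.4`, `gpt-5.4-mini`, `gpt-5.4-nano`) and task categories are assumptions for the example, not confirmed API names.

```python
# Illustrative routing sketch: send routine, high-volume work to a
# small model and complex reasoning to the larger one.
# NOTE: the model identifiers below are assumed names, not confirmed
# OpenAI API identifiers.

ROUTINE_TASKS = {"extract", "classify", "sort"}   # nano-style workloads
COMPLEX_TASKS = {"plan", "analyze", "debug"}      # full-model workloads

def pick_model(task_type: str, needs_images: bool = False) -> str:
    """Return an assumed model identifier for a given task type."""
    if task_type in COMPLEX_TASKS:
        return "gpt-5.4"        # deep reasoning and decision-making
    if needs_images or task_type not in ROUTINE_TASKS:
        return "gpt-5.4-mini"   # balanced: fast, handles text and images
    return "gpt-5.4-nano"       # cheapest: simple, repetitive tasks

print(pick_model("extract"))                     # routine -> nano tier
print(pick_model("plan"))                        # complex -> full model
print(pick_model("summarize", needs_images=True))  # images -> mini tier
```

The selected identifier would then be passed as the `model` parameter of a normal API call, so the routing logic stays separate from the request code.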
This approach is expected to help businesses reduce costs, improve speed, and build more responsive applications such as coding assistants, customer support tools, and systems that can understand images in real time. GPT-5.4 mini is now available across OpenAI’s API, ChatGPT, and Codex platforms, while the nano is being offered as a lightweight option for developers handling simpler workloads.