
OpenAI’s new model GPT-4.5 offers more natural conversation, fewer errors

  • Staff Writer
  • Feb 28
  • 2 min read


OpenAI has released GPT-4.5, its most advanced general-purpose large language model (LLM), equipped with a wider knowledge base and a higher EQ, which enables it to perform tasks such as writing, programming, and problem solving significantly better than its predecessor, GPT-4o. OpenAI also claims that GPT-4.5 offers more natural conversations and hallucinates less.


Internally known as Orion, GPT-4.5 is OpenAI’s largest AI model and has been trained using more GPUs and data than any of its predecessors. It was trained on Microsoft’s Azure AI supercomputer, which, according to the Top500 list, is the fourth-fastest supercomputer in the world.


“It is a giant, expensive model. we really wanted to launch it to plus and pro at the same time, but we’ve been growing a lot and are out of GPUs. We will add tens of thousands of GPUs next week and roll it out to the plus tier then,” said OpenAI CEO Sam Altman in a post on X.


GPT-4.5 is currently in preview and available only to ChatGPT’s paid customers. Users of ChatGPT Pro ($200/month) can access it right away on mobile and desktop, while ChatGPT Plus ($20/month) and Team users will get it next week, and Enterprise and Edu users will get access the week after that.


OpenAI said that to enhance the capabilities of GPT-4.5 it focused on scaling two commonly used training paradigms – unsupervised learning and reasoning. 

For pre-training, OpenAI used the same scaling technique it applied to previous models: increasing computing power and data.

Though historically successful, this technique is now being questioned because it requires substantial investment. DeepSeek V3, a Chinese AI model, demonstrated last month that there are more efficient and less resource-intensive methods than scaling data and computing.


Unsupervised learning allows models to identify complex patterns, relationships, and structures within the data without explicit labels. Scaling it improves a model’s ability to understand language and context.
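To make that idea concrete, here is a minimal, purely illustrative Python sketch (a toy example, not OpenAI’s training code, which is not public): it learns which word tends to follow which from raw, unlabeled text, the same “find patterns without labels” principle that large-scale unsupervised pre-training applies with vastly more data and compute.

# Toy sketch of unsupervised learning on raw text (hypothetical example, not OpenAI's method):
# learn which word tends to follow which, with no labels attached to the data.
from collections import Counter, defaultdict

corpus = (
    "openai released a new model . the new model writes code . "
    "the model answers questions . the new model makes fewer errors ."
)

# Count word -> next-word transitions directly from the unlabeled text.
transitions = defaultdict(Counter)
words = corpus.split()
for current, following in zip(words, words[1:]):
    transitions[current][following] += 1

def predict_next(word):
    """Return the most frequent next word learned purely from the raw corpus."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> 'new'   (a pattern picked up without any labels)
print(predict_next("new"))  # -> 'model'

A large language model does essentially this at the level of learned statistical representations rather than raw counts, which is why scaling the data and compute behind it improves its grasp of language and context.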


Similarly, reasoning enables models to draw inferences, apply logic, and solve complex problems. Scaling it also reduces errors and enhances problem-solving. Models trained this way take more time to respond, working through a step-by-step reasoning process before giving an answer.
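As a loose analogy (not a description of how OpenAI’s models work internally), the short Python sketch below contrasts a function that returns an answer immediately with one that spends extra time producing intermediate, checkable steps before committing to its final answer; that is roughly the trade-off described above.

# Illustrative analogy only (not OpenAI's internal method): a direct answer
# versus an answer produced through intermediate, checkable reasoning steps.

def answer_directly(prices):
    """Return the total immediately, with no visible intermediate work."""
    return sum(prices)

def answer_step_by_step(prices):
    """Work through the problem step by step before giving the final answer."""
    steps, running_total = [], 0.0
    for index, price in enumerate(prices, start=1):
        running_total += price
        steps.append(f"step {index}: add {price} -> running total {running_total:.2f}")
    # Each intermediate step can be inspected, so mistakes are easier to catch,
    # at the cost of more work (and time) before the final answer appears.
    return steps, running_total

prices = [19.99, 5.50, 3.25]
print("direct answer:", answer_directly(prices))
steps, total = answer_step_by_step(prices)
print("\n".join(steps))
print("final answer:", round(total, 2))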


Though reasoning is a critical part of training GPT-4.5, OpenAI clarified that it is not a reasoning model like o1 or o3-mini and doesn’t think before responding. GPT-4.5 is a more general-purpose model with reasoning capabilities similar to GPT-4o’s.

“This isn’t a reasoning model and won’t crush benchmarks,” said Altman.


OpenAI’s benchmark results show GPT-4.5 achieving a 62.5% QA accuracy rate, surpassing both GPT-4o’s 38.2% and o1’s 47%. GPT-4.5 also shows a lower hallucination rate of 37.1%, outperforming GPT-4o’s 61.8% and o1’s 44%. A lower hallucination rate means the model is less prone to making things up.


In standard academic benchmark tests, GPT-4.5’s scores were significantly higher than those of GPT-4o. For example, in science, GPT-4.5 scored 71.4%, while GPT-4o managed 53.6%; in maths, GPT-4.5 scored 36.7%, while GPT-4o got 9.3%; and in coding, GPT-4.5 got 32.6%, while GPT-4o scored 23.3%.


According to OpenAI, GPT-4.5 is also search-enabled and has access to the latest information. It supports file and image uploads but doesn’t have multimodal capabilities such as voice mode or video.



Image credit: Pexels
