OpenAI’s GPT-5 is reportedly falling short of expectations
OpenAI’s efforts to develop its next major model, GPT-5, are running behind schedule, with results that do not yet justify the massive costs, according to a new report in The Wall Street Journal.
This echoes a previous report in The Information suggesting that OpenAI is exploring new strategies because GPT-5 may not represent as big a leap forward as previous models did. But the WSJ story includes additional details about the 18-month development of GPT-5, codenamed Orion.
OpenAI has reportedly completed at least two large training runs, which aim to improve a model by training it on enormous quantities of data. An initial run went slower than expected, suggesting that a larger run would be both time-consuming and costly. And while GPT-5 can reportedly perform better than its predecessors, it has not yet advanced enough to justify the cost of keeping the model running.
The Wall Street Journal also reports that, rather than relying solely on publicly available data and licensing deals, OpenAI has hired people to create fresh data by writing code or solving math problems. It is also using synthetic data created by another of its models, o1.
OpenAI did not immediately respond to a request for comment. The company previously said it would not be launching a model codenamed Orion this year.