TMTPOST -- OpenAI is preparing to expand its free artificial intelligence (AI) services powered by cutting-edge models as Chinese startup DeepSeek’s explosive tool upends competition in the sector.
Credit: Xinhua News Agency
OpenAI will release GPT-5 in both ChatGPT and its API, and free ChatGPT users will get “unlimited chat access at the standard intelligence setting,” subject to abuse thresholds, CEO Sam Altman announced in a post on social media platform X on Wednesday. ChatGPT Plus subscribers will be able to run GPT-5 at a higher level of intelligence, and Pro subscribers at an even higher level, according to Altman. He added that these models will incorporate voice, canvas, search, deep research, and more.
GPT-5 is a system that will integrate many of OpenAI’s technologies, including o3, the reasoning model the company previewed in December but has not yet released. OpenAI will no longer ship o3 as a standalone model, Altman said.
Besides GPT-5, the post also revealed OpenAI’s next-generation model GPT-4.5. OpenAI will ship GPT-4.5, internally called Orion, as its “last non-chain-of-thought model.” After that model, OpenAI’s top goal is “to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks,” said Altman.
Altman explained that OpenAI wants to streamline its offerings because its model lineup and products have become increasingly complicated. “We hate the model picker as much as you do and want to return to magic unified intelligence,” Altman wrote. He didn’t specify release dates for GPT-4.5 or GPT-5, but responded to a user’s question with a vague estimate of “weeks/months,” suggesting the former will arrive in weeks and the latter in months.
OpenAI’s launch blueprint came as the free DeepSeek application stunned both Silicon Valley and Wall Street. DeepSeek’s popular chatbot app, powered by its AI models, jumped to the No.1 spot in app stores on January 27, dethroning OpenAI’s ChatGPT as the most downloaded free app in the U.S. on Apple Inc.'s App Store, and has maintained its lead since. DeepSeek also topped the global Apple App Store download charts the same day and has held the No.1 position since, according to Appfigures data that excludes third-party app stores in China. The DeepSeek app was downloaded 16 million times in its first 18 days, surpassing the 9 million downloads recorded by OpenAI's ChatGPT app in the same timeframe, data from Sensor Tower shows. DeepSeek's AI assistant is also the leading free download in Google's app store, a position it has held since January 28.
DeepSeek’s app is backed by highly efficient models that were trained for relatively little money on less-advanced chips. What shocked Silicon Valley is that it took just $5.58 million for DeepSeek to train its V3 large language model (LLM). The startup claimed it used 2,048 Nvidia H800 chips, a downgraded version of Nvidia’s H100 designed to comply with U.S. export restrictions. DeepSeek spent only 2.6 million H800 GPU-hours on a model much better than Meta’s, while Meta’s compute budget for the Llama 3 model family could have trained DeepSeek-V3 at least 15 times over.
On January 20, DeepSeek released the open-source DeepSeek-R1, a reasoning model that it claims delivers performance comparable to leading offerings like OpenAI’s o1 at a fraction of the cost. Several third-party tests have found that DeepSeek actually outperforms OpenAI's latest model. R1 contains 671 billion parameters, and its “distilled” versions range in size from 1.5 billion to 70 billion parameters. The full R1 is available through DeepSeek’s API at prices 90%-95% cheaper than o1.
OpenAI on Thursday published “Sharing the latest Model Spec,” a guideline discussing the methods it employs to shape the desired behavior of its models. The core focus is balancing AI advancement with safety assurances.
OpenAI has reinforced its commitment to customizability, transparency, and knowledge accessibility by updating its guidelines based on the Model Spec’s May 2024 foundation and accumulated experience.
To promote openness, OpenAI has released the new model guidelines under a Creative Commons CC0 license, making them publicly available for developers and researchers to use, adapt, and build upon. Additionally, OpenAI has open-sourced evaluation prompts and plans to release more guideline evaluation and alignment tools on GitHub, regularly updating them.
This signals OpenAI’s intention to expand the use of its open-source technologies.
After the release of DeepSeek, OpenAI CEO Sam Altman acknowledged that the company had previously been on the wrong side of history regarding open-source AI. However, OpenAI still maintains a cautious approach.
On January 31, 2025, OpenAI announced that its o3-mini reasoning model would be freely available to users. However, its core technology remains proprietary, with OpenAI reiterating that open-source is not its top priority. This suggests that OpenAI is unlikely to make significant changes to its open-source strategy in the near future.
Moving forward, OpenAI plans to broaden the scope of its evaluations, incorporating real-world use cases. As AI systems advance, OpenAI will iteratively update its guidelines and seek community feedback, though it will stop publishing a blog post for every update. The ultimate goal is to safely enable new applications while guiding research and innovation, and to encourage public participation in AI development.
At the same time, tech giants like Google and ByteDance are accelerating the iteration of their proprietary AI models.
As DeepSeek's low-cost, high-efficiency model gained traction, Google launched its flagship AI model, Gemini 2.0 Pro Experimental, in early February, alongside Gemini 2.0 Flash Thinking. These moves are seen as efforts to strengthen Google's competitive position in AI.
Gemini 2.0 Pro can now access Google Search and execute code on behalf of users. It also features a 2-million-token context window, allowing it to process around 1.5 million English words in a single prompt—enough to read all seven Harry Potter books with 400,000 words to spare.
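The word-count claim can be sanity-checked with a rough back-of-envelope calculation. Note the assumptions here are not Google's figures: the ~0.75 words-per-token ratio is a common rule of thumb for English text, and the roughly 1.08-million-word total for the seven Harry Potter books is a widely cited estimate.

```python
# Rough check of the 2-million-token context-window claim.
# Assumption: ~0.75 English words per token (a common rule of thumb).
context_tokens = 2_000_000
words_per_token = 0.75
capacity_words = int(context_tokens * words_per_token)

# Widely cited estimate for the combined length of all seven Harry Potter books.
harry_potter_words = 1_084_170
spare = capacity_words - harry_potter_words

print(f"capacity: ~{capacity_words:,} words")  # ~1,500,000 words
print(f"left over: ~{spare:,} words")          # ~415,830, i.e. roughly 400,000
```

The result of about 415,000 leftover words is consistent with the article's "400,000 words to spare" figure, given the imprecision of the token-to-word ratio.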
On February 5, Google CEO Sundar Pichai praised DeepSeek during an earnings call, noting that reducing AI costs would benefit both tech giants and the broader AI ecosystem. He added that Gemini 2.0 Flash models can compete with DeepSeek R1 in terms of efficiency.
On Wednesday, ByteDance's Doubao AI Foundation team unveiled a new UltraMem sparse model architecture, which improves inference speed by 2–6 times compared to traditional Mixture of Experts (MoE) architectures, while reducing costs by up to 83%. This breakthrough has been accepted by ICLR 2025, a top-tier AI conference, offering new insights into improving AI inference efficiency and scalability.
Previously, Doubao AI collaborated with Beijing Jiaotong University and the University of Science and Technology of China to develop VideoWorld, an experimental video-generation model. The model has reached a professional 5-dan level in 9x9 Go and can perform robotic tasks in various environments. The project's code and model have been open-sourced.
The anticipation surrounding GPT-5 comes at a pivotal moment for OpenAI.
This week, Elon Musk proposed a $97.4 billion bid to take control of OpenAI, intending to merge it with xAI. However, OpenAI CEO Sam Altman rejected the offer, saying OpenAI is "not for sale."
Meanwhile, OpenAI is set to finalize a $40 billion funding round, pushing its post-investment valuation to $300 billion. OpenAI plans to fully transition into a for-profit company, potentially accelerating its commercialization efforts.