OpenAI is introducing GPT-4o, a new version of the GPT-4 model that powers its product, ChatGPT. The updated model is "much faster" and improves "capabilities across text, vision, and audio," according to OpenAI Chief Technology Officer Mira Murati. During a livestream announcement on Monday, Murati said GPT-4o will be free for all users, while paid users will still "have up to five times the capacity limits" of free users.
In a company blog post, OpenAI said GPT-4o's capabilities "will be rolled out iteratively," with text and image features beginning to roll out in ChatGPT today.
OpenAI CEO Sam Altman posted that the model is "natively multimodal," meaning it can generate content or understand commands in voice, text, or images. Developers who want to experiment with GPT-4o will have access to the API, which is half the price and twice as fast as GPT-4 Turbo, Altman added on X.
our new model: GPT-4o, is our best model ever. it is smart, it is fast, it is natively multimodal (!), and…
— Sam Altman (@sama) May 13, 2024
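For developers, that API access runs through the same chat completions endpoint used for earlier GPT-4 models. The sketch below, which uses OpenAI's official openai Python package, shows what a multimodal GPT-4o request could look like; the prompt text and image URL are placeholders, and an OPENAI_API_KEY is assumed to be set in the environment.

```python
# Minimal sketch of a GPT-4o call via the openai Python package (v1.x).
# Assumes OPENAI_API_KEY is set in the environment; the image URL below
# is a placeholder, not a real asset.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# GPT-4o accepts mixed text and image inputs in a single user message,
# so the same call shape covers plain-text prompts and vision requests.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},  # placeholder
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```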
New features are coming to ChatGPT's voice mode with the new model. The app will be able to act as a Her-like voice assistant, responding in real time and observing the world around you. The current voice mode is more limited: it responds to only one prompt at a time and works only with what it can hear.
Altman reflected on OpenAI's trajectory in a blog post following the livestream. He noted that the company's original vision was to "create all sorts of benefits for the world," but acknowledged that the vision has changed. OpenAI has faced criticism for not open-sourcing its advanced AI models, and Altman suggested the company's focus has shifted to making those models available to developers through paid APIs and letting third parties do the creating. "Instead, it now looks like we'll create AI and then other people will use it to create all sorts of amazing things that we all benefit from," he wrote.