It’s not GPT-5, but OpenAI is releasing a new chatbot model that promises to be more human-like than ever before. GPT-4o aims for something akin to the movie Her: a voice assistant that talks and responds like a person rather than a software program. This “Voice Mode” will roll out in alpha to ChatGPT Plus users “in the coming weeks.” GPT-4o’s text and image capabilities are rolling out today to the free version of ChatGPT, though Plus users will get up to 5x higher message limits.

“This is the first time we are really making a huge step forward when it comes to the ease of use,” OpenAI CTO Mira Murati said during a live-streamed event on Monday. The company says it has over 100 million users and seems eager to bring in more by making its best tech widely available. Its announcement also comes a day before the Google I/O developers’ conference, where the search giant is expected to share AI updates of its own.
CTO Mira Murati (Credit: OpenAI)
In OpenAI’s case, the new model is designed to match the performance of GPT-4 Turbo, the most powerful model available to those with a $20-per-month ChatGPT Plus subscription. Until now, free users have only had access to GPT-3.5. “GPT-4o provides GPT-4-level intelligence, but it is much faster and it improves on its capabilities across text, vision, and audio,” Murati said.

“With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network,” OpenAI says.
During the event, the OpenAI team performed several impressive live demos, including real-time conversations with ChatGPT in Voice Mode. While OpenAI employees spoke into a phone on stage, ChatGPT accurately understood their requests and responded immediately without the “awkward 2- to 3-second lag as you wait for a response.” That includes language translation; Murati touted improved quality and speed for 50 languages, and demonstrated on stage by speaking to her colleagues in Italian and having ChatGPT translate.
(Credit: OpenAI)
Using the vision capabilities, the employees asked ChatGPT for homework help by showing it a piece of paper with a math equation. Rather than immediately giving them the answer, ChatGPT provided helpful clues as a tutor might. Another demo showed GPT-4o recognizing a dog and responding with appropriate emotion: “Hello cutie, what’s your name, little fluff ball?”
With the new desktop app, announced today, you can screen-share with ChatGPT to show it what you’re working on and get help. In the demo, execs showed ChatGPT some code, got its input, then ran it and discussed the output with ChatGPT, speaking into the phone the entire time. When asked to tell a story in different tones of voice—a dramatic reading, a robot voice, singing—it adjusted without skipping a beat, narrating the story like an expert voice actor.

On his blog, OpenAI CEO Sam Altman said: “The new voice (and video) mode is the best computer interface I’ve ever used. It feels like AI from the movies; and it’s still a bit surprising to me that it’s real. Getting to human-level response times and expressiveness turns out to be a big change.”
Overall, the demo proved far less clunky than the model’s name. According to a company blog post, the “o” in GPT-4o stands for “omni.”

“It’s extremely versatile, fun to play with, and is a step towards a much more natural form of human-computer interaction (and even human-computer-computer interaction),” added Greg Brockman, president and co-founder of OpenAI, in a post on X.
GPT-4o is also available to developers in the API as a text and vision model. It could be a compelling alternative to GPT-4 Turbo, which launched for developers in November 2023, as it offers “five-times higher rate limits,” is two times faster, and is 50% cheaper. It remains to be seen if GPT-4o will perform at the same level as GPT-4, but if it does, it could help ChatGPT compete against rivals like Anthropic’s Claude, Perplexity AI, and the many other AI up-and-comers.

OpenAI didn’t reveal how it developed GPT-4o. But Murati said part of the credit goes to Nvidia, which released a next-generation GPU for AI development in March called Blackwell.