OpenAI has reportedly created a new multimodal AI model that is capable of both talking to you and recognizing objects. According to The Information, the company may show the feature off on Monday, when it plans to unveil something via a live stream at 1 p.m. ET.

The new model will reportedly be able to interpret images and audio faster and more accurately than OpenAI's current models, and could be used by customer service agents to "better understand the intonation of callers." The model could, in theory, help customer service agents recognize when a customer is being sarcastic in their responses.
According to sources familiar with the matter who spoke with The Information, the model will be able to outperform GPT-4 Turbo at answering some types of questions, though there's still a risk that some of those answers might be wrong. OpenAI is also rumored to be working on a way for ChatGPT to make phone calls.

While we don't know exactly what OpenAI plans to announce on Monday, we do know one thing the company won't be announcing: GPT-5.
CEO Sam Altman has been very clear that the announcement has nothing to do with the next generation of the model; however, the company does plan to show off "some new stuff we think people will love [that] feels like magic to me." GPT-5 remains on the company's agenda for 2024 and is expected by the end of the year.