
OpenAI’s GPT-4o recalls the AI assistant in Spike Jonze’s Her

OpenAI's GPT-4o, featuring simultaneous translation in 50 languages and text-image interaction, echoes the AI assistant in Spike Jonze's Her.


GPT-4o AI model by OpenAI demonstrating real-time text and image interaction
© OpenAI

OpenAI has taken a big step forward in AI by introducing GPT-4o. The ‘o’ in the model name stands for ‘omni.’ The new model reminds many people of the AI assistant in Spike Jonze’s movie Her.

GPT-4o stands out for its real-time translation, with the updated version supporting 50 languages. It also enables instant interaction between text and images, and it can serve as a voice assistant or as a meeting and conversation tracker. Moreover, it can be used free of charge, without a ChatGPT Plus subscription; paid members get higher usage limits.

“It accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages,” OpenAI stated.

To access GPT-4o, visit the ChatGPT website and log in with a free or paid account. Paid members can select GPT-4o from the drop-down menu in the upper left corner.

Free accounts get GPT-4o automatically with limited use; once you reach the usage limit, the account falls back to GPT-3.5. Free members with access to GPT-4o can also submit files for analysis, including images, videos, and PDFs, and ask questions about their content.
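Developers can reach the same model programmatically. The sketch below is a minimal, hedged example using OpenAI’s Python SDK and the Chat Completions endpoint: it builds one user message mixing text and an image URL (the URL is a placeholder, not a real resource) and only sends the request if an API key is present in the environment.

```python
# Minimal sketch: asking GPT-4o a question about an image via the
# Chat Completions API. The image URL below is a placeholder.
import json
import os

payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
}

# Sending the request needs an API key; guarded so the sketch
# still runs (and just prints the payload) without one.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(**payload)
    print(response.choices[0].message.content)
else:
    print(json.dumps(payload, indent=2))
```

The same `content` list can carry several text and image parts in one message, which is how a single prompt can reference an uploaded picture alongside a question about it.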

The new model can follow spoken conversations in real time, interpreting and responding without noticeable delay. Like previous GPT-4 models, it handles common text LLM tasks such as summarization, and it processes audio, images, and text at comparable speed. GPT-4o can also remember previous interactions.
