According to OpenAI CTO Mira Murati, who spoke during a livestream announcement, this update will be accessible to all ChatGPT users at no additional cost, while paid users will receive up to five times the capacity limits of free users.
The integration of GPT-4o's features into ChatGPT begins today, focusing initially on text and image capabilities. OpenAI CEO Sam Altman described the model as “natively multimodal,” capable of generating and understanding content across voice, text, and images.
Furthermore, developers interested in exploring GPT-4o will have access to an API priced at half the cost of the previous model, GPT-4 Turbo, while offering double the speed.
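For developers, switching to the new model is largely a matter of changing the model name in an existing API call. The snippet below is a minimal sketch using OpenAI's official Python SDK; the prompt text is illustrative only.

```python
# Minimal sketch: calling GPT-4o through OpenAI's Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set
# in the environment; the prompt is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarise what GPT-4o adds over GPT-4 Turbo."}
    ],
)

print(response.choices[0].message.content)
```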
In visual processing, GPT-4o improves on its predecessor's ability to interact with images. It can now analyse visual content swiftly and accurately, answering queries ranging from code analysis to identifying items within images. Future capabilities are expected to include real-time interpretation of live events such as sports, which would allow ChatGPT to explain game rules as the action unfolds.
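Because the same chat API accepts image inputs alongside text for GPT-4o, asking a question about a picture is a matter of adding an image part to the message. The sketch below builds on the Python SDK example above; the image URL is a placeholder, not a real resource.

```python
# Sketch: asking GPT-4o a question about an image via the chat API.
# The image URL below is a placeholder; any publicly reachable image works.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What items are visible in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```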
GPT-4o also boasts improved multilingual support, enhancing performance across approximately 50 languages, and it comes with higher rate limits within OpenAI’s API on top of the speed and cost gains over GPT-4 Turbo noted above.
Initial access to GPT-4o’s full capabilities, particularly audio processing, will be restricted to a select group of trusted partners to mitigate the risk of misuse, with broader availability to follow.
The rollout includes updates to the ChatGPT user interface and the release of a macOS desktop version that allows enhanced interaction through keyboard shortcuts. Access to OpenAI’s GPT Store and previously exclusive features such as memory capabilities has been extended to free-tier users as well.
Credit: The Daily Star