ChatGPT now understands real-time video, seven months after OpenAI first demoed it


OpenAI has finally released the real-time video capabilities for ChatGPT that it demoed nearly seven months ago.

On Thursday during a livestream, the company said that Advanced Voice Mode, its human-like conversational feature for ChatGPT, is getting vision. Using the ChatGPT app, users subscribed to ChatGPT Plus or Pro can point their smartphones at objects and have ChatGPT respond in near-real-time.

In a recent demo on CBS’s 60 Minutes, OpenAI president Greg Brockman had Advanced Voice Mode with vision quiz Anderson Cooper on his anatomy skills. As Cooper drew body parts on a blackboard, ChatGPT could “understand” what he was drawing.

Image Credits: OpenAI

“The location is spot on,” the assistant said. “The brain is right there in the head. As for the shape, it’s a good start. The brain is more of an oval.”

In that same demo, however, Advanced Voice Mode with vision made a mistake on a geometry problem, suggesting that it remains prone to hallucinating.

Advanced Voice Mode with vision has been delayed multiple times, reportedly in part because OpenAI announced the feature long before it was production-ready. In May, OpenAI promised that Advanced Voice Mode would roll out to users “within a few weeks.” Months later, the company said it needed more time.

When Advanced Voice Mode finally arrived in early fall for some ChatGPT users, it lacked the visual analysis component. In the lead-up to today’s launch, OpenAI has focused most of its attention on bringing the voice-only Advanced Voice Mode experience to additional platforms and users in the EU.


