Wednesday, January 22, 2025

Nvidia’s CEO Explains How Its New AI Models Could Work on Future Smart Glasses

Technology products – whether phones, robots or self-driving cars – will be able to better understand the world around us thanks to artificial intelligence. That message came through loud and clear in 2024, and it will be even louder in 2025. At CES 2025, chipmaker Nvidia unveiled a new AI model for understanding the physical world, as well as a series of large language models intended to drive the AI agents of the future.

Nvidia CEO Jensen Huang positioned these world models as ideal for robotics and self-driving cars. But there’s another class of devices that could benefit from a better understanding of the real world: smart glasses. High-tech eyewear such as Meta’s Ray-Ban glasses is quickly becoming a popular new category of AI product. According to a Counterpoint Research study, shipments of Meta’s glasses surpassed the 1 million mark in November.

Such devices seem like ideal vessels for AI agents or assistants, which use cameras to understand the world around you and process voice and visual input to help you get things done rather than just answer questions.

Huang did not say whether Nvidia-powered smart glasses are imminent. But he did explain how the company’s new models could power future smart glasses if partners adopt the technology for that purpose.

In a press Q&A at CES, responding to a question about whether his models would work with smart glasses, Huang said: “Connecting artificial intelligence with virtual presence technologies like wearables and glasses, all of that is very exciting.”

Read more: Google Android president tells CNET smart glasses will work this time

Watch this: These new smart glasses want to be your next AI companion

Huang noted that cloud processing is an option, meaning queries using Nvidia’s Cosmos model can be processed in the cloud rather than on the device itself. Compact devices such as smartphones often use this approach to reduce the processing load of running demanding AI models. If device makers want to build glasses that run Nvidia’s AI models on the device rather than relying on the cloud, Huang said, Cosmos’ knowledge can be distilled into a smaller model that is less general and optimized for specific tasks.

Nvidia’s new Cosmos model is touted as a platform for gathering physical-world data to train robot and self-driving car models, much as large language models learn to generate text responses after being trained on written media.

“Robotics’ ChatGPT moment is coming,” Huang said in a press release.

Nvidia also announced a new set of AI models built using Meta’s Llama technology, called Llama Nemotron, designed to accelerate the development of AI agents. But it’s also interesting to think about how these AI tools and models could be applied to smart glasses.

A recent Nvidia patent application has fueled speculation about upcoming smart glasses, although the chipmaker has made no announcements about future products in this area. But Nvidia’s new models and Huang’s comments come after Google, Samsung and Qualcomm announced last month that they are building a new mixed-reality platform for smart glasses and headsets called Android XR, hinting that smart glasses may soon become more prominent.

Several new smart glasses were also showcased at CES 2025, such as the RayNeo X3 Pro and the Halliday smart glasses. International Data Corp. also predicted in September that smart glasses shipments would grow 73.1% in 2024. Nvidia’s moves make this another space worth watching.
