Since its debut at the end of last year, Gemini 2.0 has gone on to power a handful of Google products, including a new AI Mode chatbot. Now Google DeepMind is using that same technology for something altogether more interesting. On Wednesday, the AI lab announced two new Gemini-based models it says will "lay the foundation for a new generation of helpful robots."
The first, Gemini Robotics, was designed by DeepMind to facilitate direct control of robots. According to the company, AI systems for robots need to excel at three qualities: generality, interactivity and dexterity.
The first involves a robot's flexibility to adapt to novel situations, including ones not covered by its training. Interactivity, meanwhile, encapsulates a robot's ability to respond to people and the environment. Finally, there's dexterity, which is mostly self-explanatory: a lot of tasks humans can complete without a second thought involve fine motor skills that are difficult for robots to master.
"While our previous work demonstrated progress in these areas, Gemini Robotics represents a substantial step in performance on all three axes, getting us closer to truly general purpose robots," says DeepMind.
For instance, with Gemini Robotics powering it, DeepMind's ALOHA 2 robot is able to fold origami and close a Ziploc bag. The two-armed robot also understands instructions given to it in natural, everyday language. As you can see from the video Google shared, it can even complete tasks despite encountering roadblocks, such as when a researcher moves the Tupperware container he has just asked the robot to place fruit inside.
Google is partnering with Apptronik, the company behind the Apollo bipedal robot, to build the next generation of humanoid robots. At the same time, DeepMind is releasing Gemini Robotics-ER (short for embodied reasoning). The company says this second model will enable roboticists to run their own programs using Gemini's advanced reasoning abilities. DeepMind is giving "trusted testers," including one-time Google subsidiary Boston Dynamics, access to the system.
This article originally appeared on Engadget at https://www.engadget.com/ai/deepminds-latest-ai-model-can-help-robots-fold-origami-and-close-ziploc-bags-151455249.html?src=rss