We just released a very exciting touch sensor that finally simplifies touch sensing for robotics.
Our most exciting result: learned visuotactile policies for precise tasks like inserting USBs and swiping credit cards that work out-of-the-box when you replace skins! To the best of our knowledge, this has never been shown before with any existing tactile sensor.
Why is this important? For the first time, you can collect data and train models on one sensor and expect them to generalize to new copies of the sensor -- opening the door to the kind of large foundation models that have revolutionized vision and language reasoning.
Would love to hear the community's questions, thoughts and comments!
Comments URL: https://news.ycombinator.com/item?id=41603865
Points: 290
# Comments: 54