Princeton Robotics Seminar

The seminar is held on Fridays from 11am to 12pm Eastern Time in the Computer Science Building, Room 105 (unless otherwise noted).

Please fill out the form to subscribe to the robotics-seminar mailing list.

Upcoming Seminars

Apr 19, 2024 - Jeannette Bohg, Stanford

Title: Enabling Cross-Embodiment Learning

Abstract: In this talk, I will investigate the problem of learning manipulation skills across a diverse set of robotic embodiments. Conventionally, manipulation skills are learned separately for every task, environment, and robot. However, in domains like Computer Vision and Natural Language Processing, we have seen that one of the main contributing factors to generalisable models is large amounts of diverse data. If one robot could learn a new task even from data recorded with a different robot, then we could scale up the training data available to each robot embodiment to a much larger degree. In this talk, I will present a new, large-scale dataset that was put together across multiple industry and academic research labs to make it possible to explore cross-embodiment learning in the context of robotic manipulation, alongside experimental results that provide an example of effective cross-robot policies. Given this dataset, I will also present multiple alternative ways to learn cross-embodiment policies. These example approaches include (1) UniGrasp, a model that can synthesise grasps with new hands; (2) XIRL, an approach to automatically discover and learn vision-based reward functions from cross-embodiment demonstration videos; and (3) EquivAct, an approach that leverages equivariance to learn sensorimotor policies that generalise to scenarios that are traditionally out-of-distribution.

Bio: Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, she was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm, where her thesis proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal so that they can provide meaningful feedback for execution and learning. She has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.

Past Seminars

Apr 12, 2024 - Aja Carter, CMU - PaleoPerformance: Connections between Paleobiology and Bioinspired Robots

Apr 5, 2024 - Changliu Liu, CMU - Ensuring Robot Safety Through Safety Index Synthesis

Mar 22, 2024 - Ye Zhao, Georgia Tech - Unlocking Agility, Safety, and Resilience for Legged Navigation: Addressing Real-world Challenges in Uncertain Environments

Mar 8, 2024 - Marin Kobilarov, JHU - High-confidence Robot Motion Planning under Uncertainty

Feb 23, 2024 - Caitlin Mueller, MIT - Towards Robotic Construction of Sustainable Structures

Feb 9, 2024 - Elad Hazan, Princeton - The theory of online control and its application to robotics

Dec 1, 2023 - Sonia Chernova, Georgia Tech - Autonomy in the Human World: Developing Robots that Handle the Diversity of Human Lives

Nov 17, 2023 - Stephanie Gil, Harvard - Resilient Coordination in Networked Multi-Robot Teams

Nov 10, 2023 - Baffour Osei, Princeton - A Reflection on How to Learn, Learning to Help, and Staying out of the Way

Nov 3, 2023 - Russ Tedrake, MIT - Dexterous Manipulation with Diffusion Policies

Nov 3, 2023 - Monroe Kennedy, Stanford - Collaborative Robotics: From Dexterity to Teammate Prediction

Oct 20, 2023 - Henny Admoni, CMU - Robots that Learn From and Collaborate with People

Oct 6, 2023 - Lerrel Pinto, NYU - Three Lessons for Building General-Purpose Robots

Sep 22, 2023 - Pulkit Agrawal, MIT - Physical Intelligence as API

June 9, 2023 - Vikas Sindhwani, Google DeepMind - Large Language Models with Eyes, Arms and Legs

May 12, 2023 - David Fridovich-Keil - Dynamic Game Models for Multi-Agent Interactions: The Role of Information in Designing Efficient Algorithms

April 21, 2023 - Renee Zhao - Multifunctional Origami Robots

April 7, 2023 - Scott Kuindersma - Task Agility: Making Useful Dynamic Behavior Easier to Create

Mar 24, 2023 - Dorsa Sadigh - Learning Representations for Interactive Robotics

Mar 10, 2023 - Greg Chirikjian - Robot Imagination: Affordance-Based Reasoning about Unknown Objects

Feb 28, 2023 - Danica Kragic - Learning perception, action and interaction

Feb 10, 2023 - Ken Goldberg - The New Wave in Robot Grasping

Dec 2, 2022 - Nadia Figueroa - Collaborative Robots in the Wild: Challenges and Future Directions from a Human-Centric Perspective

Nov 11, 2022 - Andy Zeng - Language as Robot Middleware

Oct 28, 2022 - Tomas Lozano-Perez - Generalization in Planning and Learning for Robotic Manipulation

Oct 14, 2022 - Sarah Tang - Data-Centric ML for Autonomous Driving

Sep 30, 2022 - Radhika Nagpal - Towards Collective A.I.

Sep 16, 2022 - Katie Skinner - Learning from Limited Data for Robot Vision in the Wild

April 22, 2022 - Mac Schwager - Reimagining Robot Autonomy with Neural Environment Representations

April 8, 2022 - Stephen Tu - Learning from many trajectories

March 25, 2022 - Karen Leung - Towards the Unification of Autonomous Vehicle Safety Concepts: A Reachability Perspective

March 11, 2022 - Reza Moini - Bio-inspired Design and Additive Manufacturing of Architected Cement-based Materials

Feb 25, 2022 - Aimy Wissa - Bio-Inspired Locomotion Strategies across Mediums: From Feather-Inspired Flow Control to Beetle-Inspired Jumping

Feb 11, 2022 - Jordan Taylor - The steep part of the learning curve: how cognitive strategies shape motor skill acquisition

Dec 3, 2021 - Naomi Leonard - Collective Intelligence and Multi-Robot System

Nov 19, 2021 - Aaron Ames - Safety-Critical Control of Dynamic Robots

Nov 5, 2021 - Chuchu Fan - Building Dependable Autonomous Systems through Learning Certified Decisions and Control

Oct 8, 2021 - Karthik Narasimhan - Language-guided policy learning for better generalization and safety

Sep 24, 2021 - Daniel Cohen - Living microrobots: controlling cellular swarms and the waterbear as a potential microrobot chassis

Sep 17, 2021 - Michael Posa - Contact-Rich Robotics: Learning, Impact-Invariant Control, and Tactile Feedback

May 6, 2021 - Bartolomeo Stellato - Data-Driven Embedded Optimization for Control

April 22, 2021 - Olga Russakovsky and Zhiwei Deng - Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation

April 8, 2021 - Jaime Fernandez Fisac - Safe Robots in the Wild: maintaining safety by planning through uncertainty and interaction

Mar 11, 2021 - Naveen Verma - AI Meets Large-scale Sensing: preserving and exploiting structure of the real world to enhance machine perception

Feb 25, 2021 - Stefana Parascho - Rethinking Architectural Robotics

Feb 11, 2021 - Jia Deng - Optimization Inspired Deep Architectures for Multiview 3D