Dynamic Autonomy and Intelligent Robotics Lab

170B Towne Building, 220 S. 33rd Street, Philadelphia PA, 19104


Welcome to the Dynamic Autonomy and Intelligent Robotics (DAIR) Lab!

Our research centers on control, learning, planning, and analysis of robots as they interact with the world. Whether a robot is assisting in the home or operating in a manufacturing plant, the fundamental promise of robotics requires touching and affecting a complex environment in a safe and controlled fashion. We are focused on developing computationally tractable and data-efficient algorithms that enable robots to operate both dynamically and safely as they quickly maneuver through and interact with their environments.

Right now, we are particularly interested in understanding the interplay between the non-smooth dynamics of contact, machine learning, and numerical optimization, and in testing these techniques on both legged locomotion and robotic manipulation.

For more information on our work, please see Michael’s research statement.

We are proud to be a group within the Penn Engineering GRASP Lab.

Recent Updates

Sep 5, 2024 Congratulations to William Yang, who successfully defended his thesis, titled “Controlling Contact Transitions for Dynamic Robots.” Will demonstrated remarkable new robot capabilities, both in controlling high-speed impact events for bipedal running and jumping and in dynamic, dexterous manipulation that exploits sliding friction and stick-slip transitions. The thesis document itself will be uploaded soon.
Aug 1, 2024 For the upcoming 2024-2025 application cycle, we will be looking to recruit multiple incoming Ph.D. students across all relevant departments (MEAM, ESE, or CIS). We are dedicated to assembling a dynamic and diverse team of researchers, and actively seek individuals with diverse cultural, ethnic, socioeconomic, and academic backgrounds. Get more information and apply here.
Jul 19, 2024 Congratulations to Will Yang! Will’s paper, “Dynamic On-Palm Manipulation via Controlled Sliding,” received the Outstanding Student Paper Award at Robotics: Science and Systems (RSS)!
May 22, 2024 How far can you get by simply specifying that an object move? With contact-implicit MPC, quite far! Will Yang’s paper “Dynamic On-Palm Manipulation via Controlled Sliding,” accepted to RSS 2024, pushes the limits of real-time dexterity. Check out the project website and arXiv preprint.
May 2, 2024 We have another recently published paper in IEEE Transactions on Robotics! Simple models, like inverted pendulums and single rigid-body models, are ubiquitous in legged locomotion. They achieve good performance with only a low-dimensional representation, but can an algorithm do better? We explore the use of numerical optimization to synthesize new simple models, achieving higher performance across a wider range of bipedal walking tasks. This work was led by recent graduate Yu-Ming Chen and supported by the Toyota Research Institute.

Check out the project website, paper, and freely available arXiv version.
Feb 27, 2024 Recently published in IEEE Transactions on Robotics! How much can you accomplish with only a few minutes of data to learn from? Quite a bit! We use 4 minutes of experiential data to learn a model for robust real-time manipulation of a previously unknown object. This work was led by Wanxin Jin and supported by the Toyota Research Institute.

Dexterous manipulation, making and breaking frictional contact, is inherently hybrid, with thousands of possible modes. Fortunately, most of these are unnecessary for control. Here, we’re learning a task-relevant reduced-order hybrid model, limiting the number of hybrid modes. This builds on our recent work on (1) data-efficient learning of multi-contact models (ContactNets and related papers) and (2) real-time MPC through contact. In this paper, we bridge these two by imbuing the model-learning process with task relevance. Check out the project website, paper, and freely available arXiv version.
Feb 27, 2024 We had three papers accepted to ICRA 2024.
  1. Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
  2. Enhancing Task Performance of Learned Simplified Models via Reinforcement Learning
  3. Reinforcement Learning for Reduced-order Models of Legged Robots
Congratulations to Hien, Yu-Ming, Wei-Cheng, Alp and Wanxin!
Oct 14, 2023 For the upcoming 2023-2024 application cycle, we will be looking to recruit multiple incoming Ph.D. students across all relevant departments (MEAM, ESE, or CIS). We are dedicated to assembling a dynamic and diverse team of researchers, and actively seek individuals with diverse cultural, ethnic, socioeconomic, and academic backgrounds. Get more information and apply here.
Oct 9, 2023 It has been a busy stretch for the lab! Two weeks ago, Alp Aydinoglu defended his Ph.D. thesis, titled “Control of Multi-Contact Systems via Local Hybrid Models.” Alp developed a class of algorithms for the control of multi-contact robotic systems, including MPC strategies that, in real time, are able to plan novel contact sequences. Check out the talk!
Sep 25, 2023 Last week, Yu-Ming Chen defended his Ph.D. thesis! Yu-Ming’s thesis, titled “Toward High-performance Simple Models of Legged Locomotion,” explored the use of optimization and machine learning to computationally discover new and improved simple models that enable performant locomotion while remaining low-dimensional and easy to plan with. Check out the talk!
older news...


Lab Wiki (private)