New preprint

ReLU-activated neural networks have a lot in common with non-smooth dynamical systems! Building on our prior work on frictional robotic systems, we analyze the stability of learned neural network control policies using convex optimization, specifically Linear Matrix Inequalities (LMIs). This efficient approach is made possible by a clear connection between these networks and Linear Complementarity Systems. Feedback is welcome! The paper is below, with code to come shortly.

Stability Analysis of Complementarity Systems with Neural Network Controllers
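
For context on the connection: the ReLU function y = max(0, x) is exactly the solution of a simple complementarity problem (y ≥ 0, y − x ≥ 0, and y(y − x) = 0), which is what allows a ReLU network in the feedback loop to be treated as part of a Linear Complementarity System. As a rough illustration of the LMI machinery only (not the conditions derived in the paper, and with an assumed example system matrix), here is a minimal Python/cvxpy sketch that certifies stability of a linear system by searching for a quadratic Lyapunov function:

```python
# Minimal illustrative sketch (assumed example, not the paper's conditions):
# certify stability of x' = A x by finding P > 0 with A^T P + P A < 0.
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])  # assumed closed-loop dynamics, for illustration only

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(2),                  # P positive definite
    A.T @ P + P @ A << -eps * np.eye(2),   # Lyapunov decrease condition
]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status)  # "optimal" means V(x) = x^T P x certifies stability
```

If the problem is feasible, V(x) = xᵀPx is a Lyapunov function; the paper's conditions additionally account for the complementarity constraints introduced by the ReLU controller.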

New preprint: ContactNets

We’re excited to share a new preprint where we learn the dynamics of multi-contact interaction. Contact dynamics are notoriously difficult to model and identify, owing largely to the discontinuous nature of impacts and friction.

Common learning methods implicitly assume that motion is continuous, causing unrealistic predictions (e.g., penetration or floating). We resolve this conflict by learning a smooth, implicit encoding of contact-induced discontinuities, leading to data-efficient identification. Our method can predict realistic impact, non-penetration, and stiction when trained on 60 seconds of real-world data.
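
To make the smooth, implicit encoding concrete, here is a minimal, hypothetical PyTorch sketch (not the actual ContactNets architecture or loss; the network sizes and loss terms are assumptions for illustration): a small network predicts a signed distance phi(q) between bodies, and training penalizes physically inconsistent predictions, such as interpenetration or contact force applied while the bodies are separated, rather than regressing the discontinuous motion directly.

```python
# Hypothetical sketch of an implicit contact loss (not the ContactNets loss):
# learn a signed distance phi(q) and penalize physically inconsistent outputs.
import torch
import torch.nn as nn

phi_net = nn.Sequential(          # assumed: 3-dim configuration q -> signed distance
    nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1)
)

def contact_loss(q, lam):
    """q: batch of configurations, lam: observed normal contact impulses."""
    phi = phi_net(q)
    penetration = torch.relu(-phi).pow(2).mean()             # bodies must not interpenetrate
    complementarity = (torch.relu(phi) * lam).pow(2).mean()  # impulses only when in contact
    return penetration + complementarity

q = torch.randn(32, 3)            # dummy data standing in for real trajectories
lam = torch.rand(32, 1)
loss = contact_loss(q, lam)
loss.backward()                   # trainable end-to-end with standard optimizers
```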

Samuel Pfrommer*, Mathew Halm*, and Michael Posa. ContactNets: Learning Discontinuous Contact Dynamics with Smooth, Implicit Representations.