Alexander L. Burka
aburka (at) seas (dot) upenn (dot) edu
Lee Group Research: Kinematic Trees
Modern robots are equipped with advanced vision hardware, often 3D (using a stereo camera or a Kinect-like device). However, perception in unstructured environments remains very hard. In particular,[1] even if object recognition were a solved problem,[2] there would still be the issue of making sense of objects, especially objects with multiple moving parts (articulated objects). This falls under perception for manipulation.
My research views articulated objects as graphs of parts, where the edges specify the type of joint connecting them (revolute, prismatic, etc.). In practice, most objects are trees, or can be approximated by trees, which makes the analysis tractable. The project focuses on decomposing observations of an object and learning this tree automatically from data.
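To make the tree view concrete, here is a minimal sketch of an articulated object as a tree of parts with joint-typed edges. The class and field names are illustrative, not taken from the actual codebase:

```python
from dataclasses import dataclass, field

# Joint types on the edges of the kinematic tree.
JOINT_TYPES = {"rigid", "prismatic", "revolute", "helical"}

@dataclass
class Part:
    """One rigid part of an articulated object."""
    name: str
    children: list = field(default_factory=list)  # (joint_type, child Part) pairs

    def attach(self, child, joint_type):
        assert joint_type in JOINT_TYPES
        self.children.append((joint_type, child))
        return child

    def edges(self):
        """Yield (parent, joint_type, child) triples, depth-first."""
        for joint_type, child in self.children:
            yield (self.name, joint_type, child.name)
            yield from child.edges()

# Example: a cabinet with a hinged door and a sliding drawer.
cabinet = Part("frame")
cabinet.attach(Part("door"), "revolute")
cabinet.attach(Part("drawer"), "prismatic")
print(list(cabinet.edges()))
```

Because the structure is a tree rather than a general graph, every part has exactly one parent joint, which is what keeps inference over candidate structures tractable.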
Here are the major software components:
- Acquisition: that is, capturing images from a camera and extracting the positions of interesting objects. Currently, we blatantly cheat here, and instrument the world with augmented reality tags (like these).
- Kinematic arboretum (a.k.a. Manip): this is the older project, which takes a recorded trajectory, attempts to fit rigid, prismatic, and revolute joints to every pair of object parts in a unified probabilistic framework, and then selects the best kinematic tree. It also comes with a GUI for designing and simulating kinematic trees. Manip is in a bit of disrepair after being rejected from IROS 2013, but we found it a good therapist and it's working to get back on its feet.
- Helix fitting: this is the newer project, which didn't make it in time for NIPS 2013 but may be headed for publication soon. It focuses on one joint at a time, but makes no distinction between prismatic and revolute linkages. Instead, everything is viewed as a special case of a screw joint (i.e., a helix).
- Git repository (good luck finding anything in here)
- Ready-to-run packages (coming soon...)
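The per-pair joint fitting idea behind the arboretum can be sketched as follows: given the trajectory of one part expressed in another part's frame, fit a rigid model (fixed point), a prismatic model (line), and a revolute model (circle), then pick the winner with a BIC-style score that penalizes model complexity. This is a hedged 2D illustration of the concept; the model details and scoring are mine, not the project's actual math:

```python
import numpy as np

def fit_rigid(P):
    """Fixed-point model: trajectory should collapse to one point."""
    c = P.mean(axis=0)
    return np.mean(np.sum((P - c) ** 2, axis=1)), 2  # (mse, num params)

def fit_prismatic(P):
    """Line model: project onto the dominant direction via SVD."""
    Q = P - P.mean(axis=0)
    _, _, Vt = np.linalg.svd(Q, full_matrices=False)
    d = Vt[0]
    resid = Q - np.outer(Q @ d, d)  # perpendicular distance from best-fit line
    return np.mean(np.sum(resid ** 2, axis=1)), 4

def fit_revolute(P):
    """Circle model via the algebraic (Kasa) fit: x^2 + y^2 = 2ax + 2by + c."""
    A = np.column_stack([2 * P[:, 0], 2 * P[:, 1], np.ones(len(P))])
    rhs = np.sum(P ** 2, axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    resid = np.linalg.norm(P - np.array([a, b]), axis=1) - r
    return np.mean(resid ** 2), 3

def best_joint(P):
    """Select the joint type with the lowest BIC-style score."""
    n = len(P)
    fits = {"rigid": fit_rigid, "prismatic": fit_prismatic, "revolute": fit_revolute}
    def bic(mse, k):
        return n * np.log(mse + 1e-12) + k * np.log(n)
    return min(fits, key=lambda name: bic(*fits[name](P)))

theta = np.linspace(0, np.pi / 2, 50)
door = np.column_stack([np.cos(theta), np.sin(theta)])       # arc: a hinge
drawer = np.column_stack([theta, 0.2 * np.ones_like(theta)]) # line: a slide
print(best_joint(door), best_joint(drawer))
```

Running the same comparison over all part pairs and then extracting the best-scoring spanning tree is the tree-selection step described above.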
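The screw-joint view behind the helix project can be sketched as well: a point on a moving part traces a helix about the screw axis, and a revolute joint is just a helix with zero pitch, while a prismatic joint is a helix with zero radius, so a single model covers both. The parameterization below is illustrative, not the project's actual formulation:

```python
import numpy as np

def screw_trajectory(theta, radius, pitch):
    """Point trajectory for a screw joint about the z axis.

    radius > 0, pitch = 0  ->  circle (revolute joint)
    radius = 0, pitch > 0  ->  line   (prismatic joint)
    both    > 0            ->  helix  (general screw joint)
    """
    return np.column_stack([
        radius * np.cos(theta),
        radius * np.sin(theta),
        pitch * theta / (2 * np.pi),  # axial advance per full turn
    ])

theta = np.linspace(0, 4 * np.pi, 100)
hinge = screw_trajectory(theta, radius=1.0, pitch=0.0)   # planar circle
slide = screw_trajectory(theta, radius=0.0, pitch=0.05)  # straight line
helix = screw_trajectory(theta, radius=1.0, pitch=0.05)  # true screw

# With a known axis, the pitch is the axial advance divided by turns made:
est_pitch = (helix[-1, 2] - helix[0, 2]) / (theta[-1] / (2 * np.pi))
print(est_pitch)
```

The payoff is that a fitter never has to make an up-front prismatic-vs-revolute decision; those cases simply fall out as limits of the estimated screw parameters.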
[1] Not "in particular" as in "this is the biggest problem," not even close, just that I am currently myopically focusing here.
[2] It isn't...