Chelsea Finn and Sergey Levine
UC Berkeley
Robotic Visuomotor Learning
Wednesday, November 4, 2015, 1:00 pm
560 Evans
Policy search methods based on reinforcement learning and optimal control can allow robots to automatically learn a wide range of tasks. However, practical applications of policy search tend to rely on hand-engineered components for perception, state estimation, and low-level control. In this talk, we will present methods for learning policies that map raw, low-level observations, consisting of camera images and joint angles, directly to the torques at the robot’s joints. To do so, we use guided policy search with deep spatial feature representations to efficiently learn policies with only tens of minutes of interaction time. We will show policies learned by a PR2 robot for a number of manipulation tasks that require close coordination between vision and control, including inserting a block into a shape-sorting cube, screwing on a bottle cap, and lifting a bag of rice into a bowl using a spatula. (video)
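To make the idea of a policy mapping raw observations to torques concrete, here is a minimal sketch of what such a visuomotor policy architecture might look like, written in PyTorch. The class names (VisuomotorPolicy, SpatialSoftmax), layer sizes, and joint count are illustrative assumptions, not the speakers' exact network; the spatial-softmax feature points only reflect the "deep spatial feature representations" mentioned in the abstract at a high level.

```python
# Illustrative sketch (not the authors' implementation): convolutional features
# are reduced to 2D "spatial feature points" via a spatial softmax, concatenated
# with the robot's joint angles, and mapped to joint torques.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialSoftmax(nn.Module):
    """Converts each feature map into an expected (x, y) image coordinate."""

    def forward(self, features):                          # features: (B, C, H, W)
        b, c, h, w = features.shape
        softmax = F.softmax(features.view(b, c, -1), dim=-1).view(b, c, h, w)
        ys = torch.linspace(-1.0, 1.0, h, device=features.device)
        xs = torch.linspace(-1.0, 1.0, w, device=features.device)
        # Expected coordinates under each channel's softmax distribution.
        expected_x = (softmax.sum(dim=2) * xs).sum(dim=-1)  # (B, C)
        expected_y = (softmax.sum(dim=3) * ys).sum(dim=-1)  # (B, C)
        return torch.cat([expected_x, expected_y], dim=1)   # (B, 2C)


class VisuomotorPolicy(nn.Module):
    """Maps a camera image and joint angles directly to joint torques."""

    def __init__(self, num_joints=7, num_feature_maps=32):  # sizes are assumptions
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5), nn.ReLU(),
            nn.Conv2d(32, num_feature_maps, kernel_size=5), nn.ReLU(),
        )
        self.spatial_softmax = SpatialSoftmax()
        self.fc = nn.Sequential(
            nn.Linear(2 * num_feature_maps + num_joints, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_joints),                     # one torque per joint
        )

    def forward(self, image, joint_angles):
        points = self.spatial_softmax(self.conv(image))
        return self.fc(torch.cat([points, joint_angles], dim=1))
```

In guided policy search, a network like this would be trained with supervision from trajectory-centric controllers rather than by direct reinforcement learning on pixels, which is what keeps the required robot interaction time to tens of minutes.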
Join Email List
You can subscribe to our weekly seminar email list by sending an email to
majordomo@lists.berkeley.edu that contains the words
subscribe redwood in the body of the message.
(Note: the subject line can be arbitrary and will be ignored.)