Here's a robot video that I found interesting:
This video illustrates our integrated system that enables humanoid robots to autonomously navigate unknown, cluttered environments. Using data from an on-board consumer-grade depth camera, our system estimates the robot's pose to compensate for odometric drift and maintains a heightmap representation of the environment. Based on this model, the system iteratively computes sequences of safe actions, including footsteps and whole-body motions, that lead the robot to target locations. As the video shows, the robot traverses highly challenging passages by building an accurate heightmap from the on-board depth camera data and choosing suitable actions.
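The heightmap idea is simple to sketch: each depth measurement is projected into a 2D grid, and every cell stores the highest observed point. Here is a minimal illustrative version in Python with NumPy; the function name and parameters are my own assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def update_heightmap(heightmap, points, origin, resolution):
    """Fold a batch of 3D points (e.g. from a depth camera) into a
    2D heightmap, keeping the maximum observed height per cell.
    Illustrative sketch only -- not the authors' actual code."""
    # points: (N, 3) array of x, y, z coordinates in the map frame
    ix = ((points[:, 0] - origin[0]) / resolution).astype(int)
    iy = ((points[:, 1] - origin[1]) / resolution).astype(int)
    rows, cols = heightmap.shape
    valid = (ix >= 0) & (ix < cols) & (iy >= 0) & (iy < rows)
    ix, iy, z = ix[valid], iy[valid], points[valid, 2]
    # np.maximum.at handles repeated cell indices correctly,
    # unlike plain fancy-index assignment
    np.maximum.at(heightmap, (iy, ix), z)
    return heightmap
```

A footstep planner can then query this grid to check whether a candidate step lands on a surface of acceptable height and flatness.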
Details can be found in our upcoming IROS 2013 paper:
D. Maier, C. Lutz, M. Bennewitz, "Integrated Perception, Mapping, and Footstep Planning for Humanoid Navigation Among 3D Obstacles"