Although Microsoft doesn't develop or sell robots directly, many of its technologies and products are used in robot systems. Without a doubt, the best-known Microsoft robot technology is the Kinect sensor system, which detects the position and movements of operators (and other objects). Originally developed to enhance the play experience for Xbox users, Kinect was rapidly hacked and adapted for applications Microsoft never anticipated.
At the World Maker Faire, Microsoft demonstrated the possibilities and capabilities of the Kinect system using a pair of life-sized boxing robots. Each robot was equipped with a chest sensor, two arm linkages with boxing gloves, and a head that popped up when the opponent managed to land a preset number of blows on the chest target.
If Rock 'Em Sock 'Em Robots, the kids' robot game developed decades ago and marketed by the Marx toy company, comes to mind, then you're right on target.
Operators, typically kids, lined up for a chance to prove their boxing skill. After a minute or so for the Kinect sensors to find and map the operators, the bouts would begin. The operators' motions were limited to stepping forward or back and punching with their arms to make their robot attack its opponent.
All things considered, it worked fairly well, though there were some tough challenges. For the most part the kids were polite and followed the staff's instructions, with a few exceptions. The biggest challenge, while I was observing, was that the system was set up under a large outdoor tent, with sunlight, often reflected, impacting the sensors at times during the day.
At the end of the day, I'm sure Microsoft achieved its goal of demonstrating the technology while generating interest and enthusiasm among the kids, so that some of them may pursue careers in science and technology, and maybe even become key Microsoft employees in the not-too-distant future.
The Microsoft Innovation Center in Belgium posted a useful article documenting the methodology behind controlling an Aldebaran NAO humanoid robot with the Microsoft Kinect sensor.
Although the article is in French, the mathematics and processes are easy to understand even if you don't speak the language. And, this is one case where Google Translate does a good job of delivering the goods.
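The heart of any Kinect-to-NAO mapping of this kind is recovering joint angles from the tracked skeleton: each Kinect frame gives 3D positions for joints such as the shoulder, elbow, and wrist, and the angle between adjacent limb segments becomes a target for the corresponding NAO joint. A minimal sketch of that step (the point names and coordinates below are illustrative, not taken from the article):

```python
import math

def joint_angle(a, b, c):
    """Angle at point b, in radians, between segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# Hypothetical shoulder/elbow/wrist positions from one skeleton frame (meters)
shoulder = (0.0, 1.4, 2.0)
elbow    = (0.3, 1.2, 2.0)
wrist    = (0.3, 0.9, 2.0)
elbow_angle = joint_angle(shoulder, elbow, wrist)
```

In a real pipeline this angle would be offset and clamped to match the NAO joint's convention and limits before being sent to the motion API.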
While other robot researchers have demonstrated interesting implementations combining the Microsoft Kinect device with robots of various types, Taylor Veltrop has always had a compelling desire to develop an integrated system that will enable him to experience the world inhabited by his robot.
So far, he's been able to get his NAO humanoid robot to follow his upper torso, then his walking direction and turns, and even to dance; he's also incorporated a treadmill into the system, allowing him to exceed the space limitations imposed by the typical lab environment.
Now he's taken the overall system design one step further by adding a head-mounted display integrated with the NAO robot's cameras and the Kinect system so that it tracks his head movements. This allows him not only to see what his robot sees, but also to 'look around' with considerable freedom of movement.
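Head tracking of this sort reduces to forwarding the operator's tracked yaw and pitch to the robot's head joints, clamped so the command never exceeds what the hardware allows. A minimal sketch, assuming approximate NAO head joint limits (check the vendor documentation for exact values):

```python
def clamp(value, lo, hi):
    """Constrain value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

# Approximate NAO head joint limits in radians (assumed, not measured here)
HEAD_YAW_LIMITS   = (-2.08, 2.08)
HEAD_PITCH_LIMITS = (-0.67, 0.51)

def head_command(operator_yaw, operator_pitch):
    """Map tracked operator head angles to safe NAO head joint targets."""
    return (clamp(operator_yaw, *HEAD_YAW_LIMITS),
            clamp(operator_pitch, *HEAD_PITCH_LIMITS))
```

With the display streaming the robot's camera back to the operator, this clamped pass-through is what lets him turn his head and have the robot's gaze follow.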
The guys at Laan Labs came up with a novel way to record 3D data from the Kinect and then play it back as a 3D augmented-reality video on the iPad using the String SDK. It plays to the strengths of both devices: the Kinect's ability to cheaply, easily, and accurately generate 3D data, and the iPad's form factor, which makes it easy to move around in space and view the 3D image from different angles.
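Under the hood, a Kinect depth frame is just a grid of per-pixel distances; back-projecting each pixel through the depth camera's pinhole model yields the cloud of 3D points that gets recorded and replayed. A minimal sketch of that back-projection (the intrinsic values below are illustrative assumptions, not Laan Labs' code):

```python
# Assumed Kinect-style depth-camera intrinsics (illustrative values only)
FX, FY = 585.0, 585.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point, roughly the image center

def depth_to_point(u, v, depth_m):
    """Back-project depth pixel (u, v) with depth in meters to a 3D point."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)
```

Recording a sequence of these point clouds, then re-rendering them from the iPad's tracked pose, is what produces the walk-around augmented-reality playback.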