Funny Face – Robo-One Style (Video)

Face recognition is pretty hard to do reliably outside of ideal conditions. If you have control over the lighting and the subject, you can produce fairly consistent, repeatable results. Take the technology out into the real world, with horrible lighting and unpredictable conditions, and chances are you won't get the best results. So, if you are determined to show off the technology in front of a huge crowd of robot fans, the best strategy might be to package it as a really cute robot resembling Reddy Kilowatt, and throw in a scantily clad pretty girl or two to distract the audience if things happen to go astray...
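To see why lighting matters so much, consider a common preprocessing trick that recognition pipelines use to cope with it: histogram equalization, which stretches a dim, low-contrast image across the full intensity range before any matching happens. (This is just an illustrative sketch of the general technique, not anything we know about KDDI's actual system.)

```python
# Illustrative sketch (not KDDI's method): histogram equalization, a common
# preprocessing step that reduces a recognizer's sensitivity to lighting.

def equalize(pixels, levels=256):
    """Remap grayscale values so intensities spread across the full range."""
    n = len(pixels)
    # Histogram of intensity values.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution of the histogram.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Remap each pixel so the cumulative distribution becomes roughly linear.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A dim, low-contrast "image": every value bunched between 50 and 80.
dim = [50, 55, 60, 60, 65, 70, 75, 80]
bright = equalize(dim)  # now spans the full 0-255 range
```

A controlled demo booth effectively does this for you with good lamps; harsh, high-contrast stage lighting is exactly the case where such normalization starts to break down.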

That’s exactly what happened during Saturday’s Demonstration phase of the Robo-One competition here in Tokyo. I Bee (Jin Sato’s company) and KDDI R&D Laboratories teamed up to combine the Pirkus-R Type 1 biped robot with KDDI’s face recognition technology.

KDDI originally developed the technology for use with their cell phones, and claims the system has advanced to the point where lighting variations and positioning are no longer a major concern. The technology worked well out in the main exhibitors' area, with focused lighting and with the robot sitting still on the tabletop. We even had the chance to play guinea pig ourselves and chat with the KDDI staff.


But in the bright, high-contrast stage lighting of the ring, with the robot bouncing quite a bit as it walked, the results were rather disappointing (see the video clip below). They did get the robot to respond to some movement and start to punch at a hand held out in front of it. And, of course, their primary goal was to generate interest in the technology, not to win the competition. From that perspective, they accomplished what they set out to do.

Besides, the girls were delightful.


One thought on “Funny Face – Robo-One Style (Video)”

  1. The Pirkus-R is a new robot to me. I hadn’t heard about it until last week’s announcement about its “new” facial recognition capabilities. I understood the news to mean that the software had been upgraded to tackle unpredictable lighting, in addition to giving the robot the ability to track its target and position itself for optimal viewing. Was the recognition software actually upgraded, or is this the first version of the robot that has it at all? Just wondering if something got lost in the translation of the article I read. As an AI junkie, I’m excited about the technology but am underwhelmed by its performance in the video. (The quality of the video itself is superb. Great stuff!)
