Robot in the wild project leads to robotic stand-up comedian named Data
A group of developers from Carnegie Mellon University (CMU) is developing a robot capable of telling jokes. Before you skip this article thinking such robots are insignificant, you should know the developers plan to build more effective personalities for everyday human-robot interaction, help machines understand charisma and humor, and explore the applications and impact of having friendly robots in our everyday lives.
Robot in the wild is the name of the project led by Heather Knight, who runs Marilyn Monrobot Labs in NYC and is a PhD student at Carnegie Mellon's Robotics Institute. The developers hope that putting the robot "in the wild" also invites the general public to teach robots in natural and often playful social settings. The platform used in the project is Nao, a humanoid robot developed by Aldebaran Robotics that is widely used as a research platform in academia today.
In August 2010, Knight deployed a version used in a series called "Postcards from New York", publicly displayed to strangers in Washington Square Park. Although the individual comedy sketches were preset, visitors could choose the topic by showing a selected postcard to the robot. After recognizing the postcard, the robot would perform a two-minute sketch relating to its "personal experiences" in that neighborhood.
In December 2010, Knight introduced Data, a new version of her Robot in the wild project capable of more than its predecessor. Using software co-developed with Scott Satkin and Varun Ramakrishna from CMU, the robot gathers audience feedback and tunes its act as the crowd responds. Although the Nao platform already includes various sensors, the developers added an external HD camera and a microphone to improve the resolution of the data collection.
The robot stores the full set of jokes and corresponding animations in its head, awaiting computer commands via Wi-Fi. Off-board, communication between modules is moderated by a central Python script and shared data files. To simplify data collection, red-green indicator paddles were distributed among attendees, giving them an opportunity to express approval or disapproval of each joke.
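To make the paddle feedback concrete, here is a minimal sketch of how raised red and green paddles counted by the camera could be reduced to a single enjoyment score. The function name and the normalization are assumptions for illustration, not the project's actual code:

```python
def paddle_score(green_count: int, red_count: int) -> float:
    """Reduce paddle counts to one score in [-1, 1].

    All green paddles -> 1.0 (full approval), all red -> -1.0 (full
    disapproval), an even split or no paddles at all -> 0.0.
    """
    total = green_count + red_count
    if total == 0:
        return 0.0  # nobody voted; treat the joke as neutral
    return (green_count - red_count) / total
```

A joke that drew 6 green and 2 red paddles would score 0.5, landing on the positive side of the scale described below.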
Online learning techniques are used to label individual jokes with attribute sets such as topic, length, interactivity, movement level, appropriateness, and hilarity. While the robot tells a joke, the software aggregates the sensor data, scoring the audience's total enjoyment at the end of the joke on a scale from -1 (negative) to 1 (positive). The Audience Update module uses that number to re-weight its model of what the audience likes and dislikes based on the attributes present in the last joke. The Joke Selector then finds the best-matching joke given the latest audience model, while also accounting for the currently desired story phase. The process iterates until the show is done.
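The update-and-select loop above can be sketched as follows. This is a hypothetical simplification under assumed names (`update_model`, `select_joke`, string attribute tags): each attribute of the last joke is nudged toward the observed score, and the next joke is the unused one whose attributes sum to the highest weight. The actual system also weighs the desired story phase, which is omitted here:

```python
from typing import Dict, List, Set

Attributes = Set[str]  # e.g. {"topic:nyc", "interactive", "short"}


def update_model(model: Dict[str, float], joke_attrs: Attributes,
                 score: float, rate: float = 0.5) -> None:
    """Audience Update step: move each attribute weight of the last
    joke partway toward the enjoyment score in [-1, 1]."""
    for attr in joke_attrs:
        old = model.get(attr, 0.0)
        model[attr] = old + rate * (score - old)


def select_joke(model: Dict[str, float],
                jokes: List[Attributes], used: Set[int]) -> int:
    """Joke Selector step: return the index of the unused joke whose
    attributes best match the current audience model."""
    best, best_score = -1, float("-inf")
    for i, attrs in enumerate(jokes):
        if i in used:
            continue
        match = sum(model.get(a, 0.0) for a in attrs)
        if match > best_score:
            best, best_score = i, match
    return best
```

For example, if the audience fully approves a short NYC joke, the model's weights for "short" and "topic:nyc" rise, so the selector next favors another short joke over a long one.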
In follow-up projects, the researchers hope to partner with and learn from the arts community. Future versions might see improvements in tone of voice, accent, costuming, props, gestures, timing, LED illumination, and pose.
So we’re supposed to use paddles on a comedy night out?
The visual recognition of our behavior could be helpful, but I would focus on audio recognition.
Perhaps it could group voices by pitch and decide to tailor the jokes toward a female or male audience (yes, I know that's faulty with children).