Your next kitchen appliance may be on the horizon. In a breakthrough combination of robotics and artificial intelligence, researchers have developed a robot which uses visual data to “learn” from how-to cooking videos on YouTube.
Robots have long had the ability to recognize patterns and objects; the ability to interpret and act on that input, however, is relatively new.
Researchers from the University of Maryland recently announced that they have developed a system that allows robots to process and act on data presented to them in "how to" cooking videos on YouTube. The robots demonstrated an ability to grasp and manipulate kitchen utensils and other objects and, with additional assistance, to perform the tasks the videos demonstrated.
“The MSEE program initially focused on sensing, which involves perception and understanding of what’s happening in a visual scene, not simply recognizing and identifying objects. We’ve now taken the next step to execution, where a robot processes visual cues through a manipulation action-grammar module and translates them into actions,” said Reza Ghanadan.
Ghanadan is a program manager in DARPA's Defense Sciences Office; the project was funded by DARPA's Mathematics of Sensing, Exploitation and Execution (MSEE) program.
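To make the "manipulation action-grammar" idea concrete, here is a minimal, purely illustrative sketch, not the researchers' actual system: it takes (verb, object) cues of the kind a vision module might extract from video frames and translates them into an ordered action plan, inserting a grasp step before any object is first manipulated. The grasp table and cue format are assumptions for the example.

```python
# Illustrative sketch only -- not the University of Maryland system.
from dataclasses import dataclass

# Hypothetical grasp types, loosely modeled on the power/precision
# grasp distinction used in robotic manipulation research.
GRASPS = {"knife": "power", "bowl": "power", "spoon": "precision"}

@dataclass
class Action:
    verb: str    # e.g. "grasp", "cut", "stir"
    target: str  # object the action applies to
    grasp: str   # grasp type chosen for the target

def parse_cues(cues):
    """Translate raw (verb, object) visual cues into an action plan.

    A grasp action is inserted before the first manipulation of any
    object the robot is not yet holding.
    """
    plan, held = [], set()
    for verb, obj in cues:
        grasp = GRASPS.get(obj, "precision")
        if obj not in held:
            plan.append(Action("grasp", obj, grasp))
            held.add(obj)
        plan.append(Action(verb, obj, grasp))
    return plan

plan = parse_cues([("cut", "knife"), ("stir", "spoon")])
for a in plan:
    print(a.verb, a.target, a.grasp)
```

The point of the grammar-like structure is that the same small set of rules (grasp before manipulate, grasp type keyed to object class) generalizes across videos the robot has never seen.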
The robots also demonstrated the ability to accumulate and share knowledge they had gained from the cooking videos. Until now, object recognition and sensor systems have lived completely in the moment: they do not retain knowledge for the long term or apply previous experience to a present situation.
“This system allows robots to continuously build on previous learning—such as types of objects and grasps associated with them—which could have a huge impact on teaching and training. Instead of the long and expensive process of programming code to teach robots to do tasks, this research opens the potential for robots to learn much faster, at much lower cost and, to the extent they are authorized to do so, share that knowledge with other robots. This learning-based approach is a significant step towards developing technologies that could have benefits in areas such as military repair and logistics,” said Ghanadan.
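The knowledge-sharing idea Ghanadan describes can be sketched as follows. This is an assumption-laden toy example, not DARPA's implementation: a small knowledge store in which a robot retains object-grasp associations across sessions and can merge in observations shared by another robot.

```python
# Illustrative sketch only -- not DARPA's or UMD's implementation.
class KnowledgeBase:
    def __init__(self):
        # object name -> {grasp type: times observed}
        self.observations = {}

    def learn(self, obj, grasp):
        """Record one observed (object, grasp) pairing."""
        counts = self.observations.setdefault(obj, {})
        counts[grasp] = counts.get(grasp, 0) + 1

    def best_grasp(self, obj):
        """Return the most frequently observed grasp, or None."""
        counts = self.observations.get(obj)
        if not counts:
            return None
        return max(counts, key=counts.get)

    def merge(self, other):
        """Fold in another robot's accumulated observations."""
        for obj, counts in other.observations.items():
            mine = self.observations.setdefault(obj, {})
            for grasp, n in counts.items():
                mine[grasp] = mine.get(grasp, 0) + n

robot_a, robot_b = KnowledgeBase(), KnowledgeBase()
robot_a.learn("knife", "power")
robot_b.learn("knife", "precision")
robot_b.learn("knife", "precision")
robot_a.merge(robot_b)
print(robot_a.best_grasp("knife"))
```

After the merge, robot A prefers the grasp robot B observed twice, even though A never saw it succeed itself, which is the cost-saving Ghanadan points to: learning done once does not have to be repeated on every machine.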
The paper on this latest project, "Robot Learning Manipulation Action Plans by 'Watching' Unconstrained Videos from the World Wide Web," can be downloaded from the University of Maryland.
While having a robot in your kitchen that can learn to prepare your favorite dishes by watching YouTube videos (and maybe a second one that can learn to do the dishes) might sound nice, the potential applications for the technology are almost endless.
The ability of machines to learn by watching instructional videos would have major implications for the future of computer programming as well as robotics and artificial intelligence.
The implications for the fast food industry could also be significant; this could be a major step in the development of automation. In a 2013 report, researchers at the Oxford Martin School estimated that 47% of current US jobs are at high risk of being taken over by robotics and AI within the next two decades.
Robots like this one certainly make that prediction seem more realistic than it may have seemed at the time.