Abstract
Part of this talk is a retrospective on how the RoboCup competition has helped to shape my professional life. I will also discuss why competitions such as RoboCup are vital to the education of our future technical leaders and innovators, and suggest ways in which their appeal can be broadened.
Bio
Spanning academia, business, and the arts, Raffaello D'Andrea's career is built on his ability to bridge theory and practice. He is Professor of Dynamic Systems and Control at the Swiss Federal Institute of Technology (ETH) in Zurich, where his research redefines what autonomous systems are capable of. After leading the Cornell RoboCup team to four world championships, he co-founded Kiva Systems, a robotics and logistics company that develops and deploys intelligent automated warehouse systems, and which was recently acquired by Amazon. In addition, he is an internationally exhibited new media artist, best known for the Robotic Chair (Ars Electronica, ARCO, London Art Fair, National Gallery of Canada) and Flight Assembled Architecture (FRAC Centre, France). Other creations and projects include the Flying Machine Arena, the Distributed Flight Array, the Blind Juggler, the Balancing Cube, and RoboEarth.
Abstract
Our work aims to make robot technology a seamless part of the larger World Wide Web, such as through applications-layer robotics protocols. We posit that the convergence of robotics with Internet and Web technologies will lead to a thriving robotics ecosystem with greater levels of reproducibility, interoperability, and accessibility. Broader populations of users and applications developers will be able to use and create "apps" that address various needs across society, while building on continuing advances in robotics research and development.
Currently, aside from online videos, the impact of robotics' great achievements and capabilities remains largely confined to research labs. Robotics has made great strides in producing a variety of capable and affordable off-the-shelf platforms, combining physically capable hardware, richer perception of unstructured environments, and general-purpose robot middleware. However, these robots lack a common and general format for exchanging information, which limits both interoperability between systems and the accessibility of interfaces for developing applications for broader human use.
In this talk, I will cover our recent work on rosbridge, a lightweight applications-layer protocol for robotics. Assuming only the JSON format and network sockets, rosbridge was originally intended to enable any networked process, independent of any specific operating system or build environment, to access a Robot Operating System (ROS) run-time environment: to exchange topic messages with it and to call the services it provides. More generally, rosbridge is a step towards a larger robotics World Wide Web, in which protocols provide generic messaging and data exchange between robots, and robot middleware provides server-side functionality (analogous to a web server).
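As a rough illustration of what such a protocol can look like, the sketch below shows the kinds of JSON messages a client might exchange with rosbridge. The field names ("op", "topic", "type", "msg", "service", "args") follow the publicly documented rosbridge protocol, and the topic and service names are hypothetical; the exact schema of the version discussed in this talk may differ.

// Subscribe to a ROS topic (topic and type names are hypothetical):
var subscribeMsg = {
  op: "subscribe",
  topic: "/robot/pose",
  type: "geometry_msgs/PoseStamped"
};

// Publish a velocity command to a topic:
var publishMsg = {
  op: "publish",
  topic: "/robot/cmd_vel",
  msg: { linear: { x: 0.5, y: 0.0, z: 0.0 }, angular: { x: 0.0, y: 0.0, z: 0.2 } }
};

// Call a ROS service with JSON-encoded arguments:
var serviceMsg = {
  op: "call_service",
  service: "/reset",
  args: {}
};

// Each message is serialized to JSON and written to the socket, e.g.:
// socket.send(JSON.stringify(subscribeMsg));

Because nothing beyond JSON and a socket is assumed, the same messages can be produced by a browser, a smartphone, or a non-ROS process written in any language.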
I will then describe our robot web applications, implemented purely in JavaScript/HTML on top of rosbridge, for web-scale robot learning and a PR2 Remote Lab. Such applications demonstrate "no-install" interfaces for reaching broader populations of users, as well as platforms for common, decentralized experimentation. I will also discuss examples of improved interoperability in robotics projects enabled by rosbridge.
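As a sketch of what a "no-install" interface can look like, the snippet below opens a WebSocket from a plain web page and subscribes to a pose topic. The host, port, topic, and message fields are illustrative assumptions, and the transport and field names follow the publicly documented rosbridge protocol rather than necessarily the exact version used in these applications.

<script>
  // Connect to a rosbridge server (hypothetical host and port).
  var socket = new WebSocket("ws://robot.example.org:9090");

  socket.onopen = function () {
    // Ask rosbridge to forward messages from a (hypothetical) pose topic.
    socket.send(JSON.stringify({
      op: "subscribe",
      topic: "/robot/pose",
      type: "geometry_msgs/PoseStamped"
    }));
  };

  socket.onmessage = function (event) {
    // Incoming topic messages arrive as JSON; update the page directly.
    var data = JSON.parse(event.data);
    if (data.op === "publish" && data.topic === "/robot/pose") {
      document.title = "last pose x = " + data.msg.pose.position.x;
    }
  };
</script>

A page like this runs in any modern browser with no client-side installation, which is what makes remote labs and broad user studies practical.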
Bio
Odest Chadwicke Jenkins, Ph.D., is an Associate Professor of Computer Science at Brown University. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). Prof. Jenkins was selected as a Sloan Research Fellow in 2009. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in physics-based human tracking from video. He has also received Young Investigator awards from the Office of Naval Research (ONR) for his research in learning dynamical primitives from human motion, the Air Force Office of Scientific Research (AFOSR) for his work in manifold learning and multi-robot coordination, and the National Science Foundation (NSF) for robot learning from multivalued human demonstrations. His research addresses problems in robot learning and human-robot interaction, primarily focused on robot learning from demonstration, as well as topics in computer vision, machine learning, and computer animation.