

Invited Speakers

Charles C. Kemp

Mobile Manipulators for Intelligent Physical Assistance

Since I founded the Healthcare Robotics Lab at Georgia Tech 10 years ago, my research has focused on developing mobile manipulators for intelligent physical assistance. Mobile manipulators are mobile robots with the ability to physically manipulate their surroundings. They offer a number of distinct capabilities compared to other forms of robotic assistance: they can operate independently of the user, they are appropriate for users with diverse needs, and they can assist with a wide variety of tasks, such as object retrieval, hygiene, and feeding. We have worked with hundreds of representative end users, including older adults, nurses, and people with severe motor impairments, to better understand the challenges and opportunities associated with this technology. Among other points, I will provide evidence for the following assertions: 1) many people will be open to assistance from mobile manipulators; 2) assistive mobile manipulation at home is feasible for people with profound motor impairments using off-the-shelf computer access devices; and 3) permitting contact and intelligently controlling forces increase the effectiveness of mobile manipulators.

Charles C. Kemp (Charlie) is an Associate Professor at the Georgia Institute of Technology in the Department of Biomedical Engineering, with adjunct appointments in the School of Interactive Computing and the School of Electrical and Computer Engineering. He earned a doctorate in Electrical Engineering and Computer Science (2005), an MEng, and a BS from MIT. In 2007, he joined the faculty at Georgia Tech, where he directs the Healthcare Robotics Lab. He is an active member of Georgia Tech’s Institute for Robotics & Intelligent Machines (IRIM) and its multidisciplinary Robotics Ph.D. program. He has received a 3M Non-tenured Faculty Award, the Georgia Tech Research Corporation Robotics Award, a Google Faculty Research Award, and an NSF CAREER award. He was a Hesburgh Award Teaching Fellow in 2017. His research has been covered extensively by the popular media, including the New York Times, Technology Review, ABC, and CNN.




Jonathan Lussier

Collaboration and Understanding in Empowering Humanity

Kinova’s arms are well known (and used) in the research community due to their compact, integrated nature. What is required for the next generation of Kinova products? How can we better integrate the research community’s work into our assistive products, and how can our products better support the research community? The question of how best to empower humanity and make people more independent is still open, and we will need to collaborate with researchers on many different human-robot interaction topics. Independence, control interfaces, observation, grasping, risk and trust handling, and context awareness will all be addressed in this presentation.

Jonathan Lussier worked in aerospace systems engineering for 10 years before his career transition to robotics just 6 months ago. Because of this, he has broad knowledge across engineering disciplines (principally mechanical, but also software and electrical). His career switch came about rather quickly: a robotics hobby in his basement (passion and curiosity leading him to work on a low-cost arm) took him to the deep end, Advanced Research at Kinova. He is working on making robots more accessible, more capable, and easier to use.




Hermano Igo Krebs

Performance-Based Adaptive Control for Rehabilitation Robotics

In this talk I will describe our concept of performance-based adaptive robotic therapy, which uses speed, time, or EMG thresholds to initiate robot assistance. We have employed this approach since the 1990s and have completed several clinical studies involving well over 1,000 stroke patients. Research to date has shown that repetitive, task-specific, goal-directed, assist-as-needed robotic therapy is effective in reducing motor impairments and improving motor function in the affected arm after stroke. Our goal is to tailor therapy to each stroke patient, maximizing his or her recovery. I will also discuss how we are expanding the approach to the lower extremity and gait.

Hermano Igo Krebs has been a Principal Research Scientist and Lecturer at MIT’s Mechanical Engineering Department since 1997. He also holds an affiliate position as an Adjunct Professor at the University of Maryland School of Medicine, Department of Neurology, and as a Visiting Professor at Fujita Health University, Department of Physical Medicine and Rehabilitation (Japan); at the University of Newcastle, Institute of Neuroscience (UK); at Osaka University, Mechanical Science and Bioengineering Department (Japan); and at Loughborough University, Rehabilitation Robotics of The Wolfson School of Mechanical, Electrical, and Manufacturing Engineering (UK). He is a member of the Collegio dei Docenti of the PhD programme in Biomedical Engineering of the University Campus Bio-Medico of Rome, Italy (“Dottorato di Ricerca in Ingegneria Biomedica”). He is a Fellow of the IEEE. Dr. Krebs was nominated by two IEEE societies, IEEE-EMBS (Engineering in Medicine & Biology Society) and IEEE-RAS (Robotics and Automation Society), to this distinguished engineering status “for contributions to rehabilitation robotics and the understanding of neuro-rehabilitation.” His work goes beyond stroke and has been extended to cerebral palsy, for which he received “The 2009 Isabelle and Leonard H. Goldenson Technology and Rehabilitation Award” from the Cerebral Palsy International Research Foundation (CPIRF). In 2015, he received the prestigious IEEE-INABA Technical Award for Innovation leading to Production “for contributions to medical technology innovation and translation into commercial applications for Rehabilitation Robotics.” His goal is to revolutionize the way rehabilitation medicine is practiced today by applying robotics and information technology to assist, enhance, and quantify rehabilitation. He was one of the founders, a member of the Board of Directors, and the Chairman of the Board of Directors of Interactive Motion Technologies from 1998 to 2016.
He successfully merged it with Bionik Laboratories, a publicly traded company, where he served as its Chief Science Officer until June 2017 and where he continues to serve as a member of the Board of Directors.



Conor Walsh

Soft Wearable Robots for Restoring Mobility and Manipulation for Patients with Physical Impairments

The goal of this talk is to highlight recent and growing efforts in the field of soft wearable robotics and discuss how these technologies may be used in a variety of contexts. The rapidly emerging field of soft robotics presents an opportunity to develop a new class of wearable assistive technology optimized for the needs of individuals with residual capacity, i.e. where only small to moderate levels of assistance are needed to improve function. The technical requirements for actuation, human interface, and sensors/control needed to realize soft wearable robots are fundamentally different from those for rigid exoskeletons, necessitating fundamental technological development in actuation, sensing, flexible electronics, control, and system integration. This talk will present this technology in two application areas, stroke and spinal cord injury, for both the upper and lower extremities.

Conor Walsh is the John L. Loeb Associate Professor of Engineering and Applied Sciences at the John A. Paulson Harvard School of Engineering and Applied Sciences and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering. He is the founder of the Harvard Biodesign Lab, which brings together researchers from the engineering, industrial design, apparel, clinical, and business communities to develop new disruptive robotic technologies for augmenting and restoring human performance. He is the winner of multiple awards, including the MIT Technology Review Innovator Under 35 Award, the Best Paper Award at the 2015 International Conference on Rehabilitation Robotics, a National Science Foundation CAREER Award, the Robotics Business Review Next Generation Game Changer Award, and the MIT 100K Entrepreneurship Competition Grand Prize.



Paolo Bonato

Improving Clinical Outcomes of Robot-Assisted Gait Training by Focusing on the Human-Robot Interface

Over the past decades, several technologies have gained an important role in rehabilitation medicine, but few of them have been subject to as much debate as rehabilitation robotics. Growing evidence has been brought to the attention of clinicians and researchers in support of the relevance of the intensity and task-specificity of motor training protocols. Robotics has been looked upon as a means to deliver high-intensity, task-specific motor training, with a positive impact on the magnitude of motor gains associated with rehabilitation. An emerging area of interest in the field of rehabilitation robotics is the design of individualized rehabilitation interventions. The interest in individualized protocols is motivated by the observation that different subjects respond differently to a given robot-assisted rehabilitation protocol. To illustrate this concept, we will discuss recent results of robot-assisted gait training in children with Cerebral Palsy (CP). CP is a group of neurological disorders caused by damage to the brain at birth, during infancy, or in early childhood that affects about 1 in 500 newborns. Diminished gait proficiency is one of the main physical disabilities in children with CP. Several studies have demonstrated the beneficial effects of intensive gait training in children with CP. Clinical outcomes of robot-assisted gait training in children with CP are encouraging, but they are marked by high variability across individuals. While very large motor gains are observed in some children, in others the intervention leads to modest or no motor gains. Motor gains are mediated by motor learning, which in turn is the result of adaptations that occur as part of the interaction between the child and the robot. We have hypothesized that children who do not respond to robot-assisted gait training have an impaired ability to generate motor adaptations.
We will discuss robot-based methodologies to quantify motor adaptations, their applicability to children with CP, the relationship between motor adaptations and learning, and the use of emerging techniques to facilitate the generation of motor adaptation strategies.

Paolo Bonato, Ph.D., serves as Director of the Motion Analysis Laboratory at Spaulding Rehabilitation Hospital, Boston, MA. He is an Associate Professor in the Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA; an Adjunct Professor of Biomedical Engineering at the MGH Institute of Health Professions, Boston, MA; an Associate Faculty Member at the Wyss Institute for Biologically Inspired Engineering; and an Adjunct Professor of Electrical and Computer Engineering at Northeastern University. He has held Adjunct Faculty positions at MIT, the University of Ireland Galway, and the University of Melbourne. His research work is focused on the development of rehabilitation technologies, with special emphasis on wearable technology and robotics. Dr. Bonato served as the Founding Editor-in-Chief of the Journal of NeuroEngineering and Rehabilitation. He serves as a Member of the Advisory Board of the IEEE Journal of Biomedical and Health Informatics and as Associate Editor of the IEEE Journal of Translational Engineering in Health and Medicine. Dr. Bonato served as an Elected Member of the IEEE Engineering in Medicine and Biology Society (EMBS) AdCom (2007-2010) and as IEEE EMBS Vice President for Publications (2013-2016). He also served as President of the International Society of Electrophysiology and Kinesiology (2008-2010). He received the M.S. degree in electrical engineering from Politecnico di Torino, Turin, Italy, in 1989 and the Ph.D. degree in biomedical engineering from Università di Roma “La Sapienza” in 1995. Dr. Bonato's work has received more than 7,500 citations (Google Scholar).



Brian Scassellati

Building Robots that Teach

Robots have long been used to provide assistance to individual users through physical interaction, typically by supporting direct physical rehabilitation or by providing a service such as retrieving items or cleaning floors. Socially assistive robotics (SAR) is a comparatively new field of robotics that focuses on developing robots capable of assisting users through social rather than physical interaction. Just as a good coach or teacher can provide motivation, guidance, and support without making physical contact with a student, socially assistive robots attempt to provide the appropriate emotional, cognitive, and social cues to encourage development, learning, or therapy for an individual. In this talk, I will review some of the reasons why physical robots rather than virtual agents are essential to this effort, highlight some of the major research issues within this area, and describe some of our recent results building supportive robots for teaching social skills to children with autism spectrum disorder and for teaching nutrition to typically developing children.

Brian Scassellati is a Professor of Computer Science, Cognitive Science, and Mechanical Engineering at Yale University and Director of the NSF Expedition on Socially Assistive Robotics. His research focuses on building embodied computational models of human social behavior, especially the developmental progression of early social skills. Using computational modeling and socially interactive robots, his research evaluates models of how infants acquire social skills and assists in the diagnosis and quantification of disorders of social development (such as autism). His other interests include humanoid robots, human-robot interaction, artificial intelligence, machine perception, and social learning.




Giulio Sandini

Humanizing Robots

In recent years robot technology has advanced dramatically, producing machines able to move like a human while being faster, stronger, and more resilient than humans are. The variety of humanoid robots being built and, to some extent, commercialized has increased enormously since the first humanoid robot announced by Honda 30 years ago. Since then the complexity and the performance of these robots have been steadily increasing, and nowadays we can claim that more and more sensing and motion abilities of robots are approaching those of humans. Moreover, the computational power of today’s computers and the possibility to process gargantuan amounts of data have created the impression that the science fiction world described by Asimov, where humans and robots co-exist and collaborate, is not very far away. Is this true? Is there some major missing ingredient we have to develop? What is the role of robotics research in this endeavor? Does it still make sense to think of robotics as an engineering activity waiting for the technological solutions required to fulfill Asimov’s dream, or should robotics get involved head-on in actively seeking the knowledge which is still missing? During the talk I will argue that robots interacting with humans in everyday situations, even if motorically and sensorially very skilled and extremely clever in action execution, are still very primitive in their ability to understand actions executed by others, and that this is the major obstacle for the advancement of social robotics. I will argue that the reason why this is happening is rooted in our limited knowledge about ourselves and the way we interact socially.
I will also argue that robotics can serve a very crucial role in advancing this knowledge by joining forces with the communities studying the cognitive aspects of social interaction and by co-designing robots able to establish a mutual communication channel with the human partner to discover and fulfill a shared goal (the distinctive mark of human social interaction).

Giulio Sandini is Director of Research at the Italian Institute of Technology and full professor of bioengineering at the University of Genoa. After his graduation in Electronic Engineering (Bioengineering) at the University of Genova, he was a research fellow and assistant professor at the Scuola Normale Superiore in Pisa, where he investigated aspects of visual processing at the level of single neurons as well as aspects of visual perception in human adults and children. He has been a Visiting Research Associate at the Department of Neurology of the Harvard Medical School in Boston, where he developed diagnostic techniques based on brain electrical activity mapping. After his return to Genova as associate professor in 1984, he founded the LIRA-Lab (Laboratory for Integrated Advanced Robotics) in 1990. In 1996 he was a Visiting Scientist at the Artificial Intelligence Lab of MIT. In July 2006 Giulio Sandini was appointed Director of Research at the Italian Institute of Technology, where he has established and is currently directing the Department of Robotics, Brain and Cognitive Sciences. The RBCS department concentrates on a multidisciplinary approach to human-centered technologies encompassing machine learning and artificial cognition, exploring the brain mechanisms at the basis of motor behavior, learning, multimodal interaction, and sensorimotor integration.




Julie A. Shah

What happens when robots are too good at their jobs?: How to optimize for the human in the human-robot team

Advancements in robotic technology are making it increasingly possible to integrate robots into the human workspace in order to improve productivity and decrease worker strain resulting from the performance of repetitive, arduous physical tasks. While new computational methods have significantly enhanced the ability of people and robots to work flexibly together, there has been little study into the ways in which human factors influence the design of these computational techniques. In particular, collaboration with robots presents unique challenges related to preserving human situational awareness and optimizing workload allocation for human teammates while respecting their workflow preferences. We conducted a series of three human subject experiments to investigate these human factors, and provide design guidelines for the development of intelligent collaborative robots based on our results.

Julie A. Shah is an Associate Professor of Aeronautics and Astronautics at MIT and director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. As a current fellow of Harvard University's Radcliffe Institute for Advanced Study, she is expanding the use of human cognitive models for artificial intelligence. She has translated her work to manufacturing assembly lines, healthcare applications, transportation and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. Her work on industrial human-robot collaboration was also in Technology Review’s 2013 list of 10 Breakthrough Technologies. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT.



Nathan Ratliff

Simplifying Robotics for a New Era of Human-Robot Collaboration

If we want humans and robots working together, robots need to be much more reactive than they currently are. Human worlds are unstructured and unpredictable; they require a level of reaction and adaptation unseen in mainstream systems. But we still need to be able to program robots, to tell them what to do. To balance these competing requirements, we need to give up control and hand off the details of motion and perception, the details we don’t care about, to the robot. Let the robot figure out how to get the arm into configuration with the end-effector poised to pick up the part for packaging, and even what that configuration should be given symmetries or redundancies. And let the robot find and track relevant objects innately. The simpler the programming API, the easier it is for users to be creative. Any programming API should abstract away these sophisticated underlying technologies. And by handing over the details to the robot, these robots--all robots--can achieve unprecedented levels of adaptation in changing, unstructured worlds. Rather than measuring safety by how lightly the robot hits someone, we want systems that maintain industry-level speed and precision while striving never to touch anyone at all. In this talk, I’ll outline our efforts at Lula Robotics to build and deliver such a system to both research and industry. Robots today are incredibly sophisticated machines. With the right software, we can turn these precise, repeatable automata into cognizant, programmable co-workers, enabling creative minds from all backgrounds to build clever new applications across a spectrum of human-centered industries.

Nathan Ratliff has been working with robots for a decade and a half. He received his PhD in Robotics from Carnegie Mellon University’s Robotics Institute in 2009 for his work on Imitation Learning and Motion Optimization. He has been a Research Assistant Professor at the Toyota Technological Institute in Chicago, a Research Scientist at Intel Labs in Pittsburgh, and a software engineer at Amazon and Google. From 2013 to 2015 he worked at the Max Planck Institute for Intelligent Systems and the University of Stuttgart in Germany, teaching and researching optimization and geometric methods for humanoid motion and manipulation. In late 2015, he co-founded Lula Robotics Inc., a company dedicated to building intelligence innately into robotic systems to enable simple access to powerful reactive and adaptive behavior.



John J. Leonard

Paths to Autonomous Driving -- With and Without the Driver

We will describe some of the challenges involved in developing autonomous vehicles, with a focus on interaction with humans both inside and outside the vehicle. We will also discuss creating a Parallel Autonomy system, in which self-driving technology operates in parallel with a human driver, to improve the safety of human driving.

John J. Leonard is Samuel C. Collins Professor of Mechanical and Ocean Engineering in the MIT Department of Mechanical Engineering. He is also a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research addresses the problems of navigation and mapping for autonomous mobile robots. He holds the degrees of B.S.E.E. in Electrical Engineering and Science from the University of Pennsylvania (1987) and D.Phil. in Engineering Science from the University of Oxford (1994). He is an IEEE Fellow (2014). Professor Leonard is currently on sabbatical leave from MIT working on research for active safety and autonomous driving for Toyota Research Institute.




© 2017.