Prof. Cynthia Matuszek named one of AI’s 10 to Watch

UMBC CSEE Professor Cynthia Matuszek was named one of AI’s 10 to Watch by IEEE Intelligent Systems. The designation is given every two years to a group of “10 young stars who have demonstrated outstanding AI achievements”. IEEE Intelligent Systems accepts nominations from around the world, which are then evaluated by the publication’s editorial and advisory boards based on reputation, impact, expert endorsement, and diversity. Dr. Matuszek was recognized for her research that “combined robotics, natural language processing, and machine learning to build systems that nonspecialists can instruct, control, and interact with intuitively and naturally”.

Professor Matuszek joined UMBC in 2014 after receiving her Ph.D. in Computer Science from the University of Washington. At UMBC, she established and leads the Interactive Robotics and Language Lab, which integrates research on robotics and natural language processing with the goal of “bringing the fields together: developing robots that everyday people can talk to, telling them to do tasks or about the world around them”.

Here is how she describes her research in the IEEE Intelligent Systems article.

Robot Learning from Language and Context

As robots become more powerful, capable, and autonomous, they are moving from controlled industrial settings to human-centric spaces such as medical environments, workplaces, and homes. As physical agents, they will soon be able to help with entirely new categories of tasks that require intelligence. Before that can happen, though, robots must be able to interact gracefully with people and the noisy, unpredictable world they occupy.

This undertaking requires insight from multiple areas of AI. Useful robots will need to be flexible in dynamic environments with evolving tasks, meaning they must learn and must also be able to communicate effectively with people. Building advanced intelligent agents that interact robustly with nonspecialists in various domains requires insights from robotics, machine learning, and natural language processing.

My research focuses on developing statistical learning approaches that let robots gain knowledge about the world from multimodal interactions with users, while simultaneously learning to understand the language surrounding novel objects and tasks. Rather than considering these problems separately, we can efficiently handle them concurrently by employing joint learning models that treat language, perception, and task understanding as strongly associated training inputs. This lets each of these channels provide mutually reinforcing inductive bias, constraining an otherwise unmanageable search space and allowing robots to learn from a reasonable number of ongoing interactions.
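To make the idea of treating language and perception as associated training inputs concrete, here is a deliberately tiny, hypothetical sketch, not drawn from Dr. Matuszek’s actual system: each descriptive word a user applies to an object is paired with that object’s perceptual features, and a simple per-word classifier is updated from those pairs, so shared objects let the language and perception channels constrain one another. All function names and the toy data are illustrative assumptions.

```python
# Toy sketch of grounded word learning from paired language/perception examples.
# Each interaction supplies (words the user said, object feature vector, whether
# the words applied), and each word learns a logistic-regression grounding.
from collections import defaultdict
import math

def train_word_groundings(interactions, lr=0.5, epochs=200):
    """interactions: list of (description_words, feature_vector, is_positive)."""
    dim = len(interactions[0][1])
    weights = defaultdict(lambda: [0.0] * (dim + 1))  # +1 for a bias term
    for _ in range(epochs):
        for words, feats, positive in interactions:
            x = feats + [1.0]
            y = 1.0 if positive else 0.0
            for w in words:
                z = sum(wi * xi for wi, xi in zip(weights[w], x))
                p = 1.0 / (1.0 + math.exp(-z))
                # Every interaction that uses a word nudges that word's grounding,
                # so objects described by several words tie their meanings together.
                for i in range(dim + 1):
                    weights[w][i] += lr * (y - p) * x[i]
    return weights

def score(weights, words, feats):
    """Probability-like score that all the words plausibly describe the object."""
    x = feats + [1.0]
    probs = [1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(weights[w], x))))
             for w in words]
    return min(probs) if probs else 0.0

if __name__ == "__main__":
    # Toy perceptual features: [redness, roundness, size].
    data = [
        (["red", "ball"], [0.9, 0.95, 0.4], True),
        (["red", "block"], [0.85, 0.1, 0.5], True),
        (["blue", "ball"], [0.05, 0.9, 0.4], True),
        (["red"], [0.1, 0.5, 0.5], False),  # "red" rejected for a non-red object
    ]
    w = train_word_groundings(data)
    print(round(score(w, ["red", "ball"], [0.95, 0.9, 0.45]), 2))  # high score
    print(round(score(w, ["red", "ball"], [0.05, 0.9, 0.45]), 2))  # low score
```

The point of the sketch is the coupling: because one physical interaction supplies evidence for several words at once, each channel narrows the hypothesis space for the others, which is the intuition behind the joint models described above.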

Combining natural language processing and robotic understanding of environments improves the efficiency and efficacy of both approaches. Intuitively, learning language is easier in the physical context of the world it describes. And robots are more useful and helpful if people can talk naturally to them and teach them about the world. We’ve used this insight to demonstrate that robots can learn unanticipated language that describes completely novel objects. They can also learn to follow instructions for performing tasks and interpret unscripted human gestures, all from interactions with nonspecialist users.

Bringing together these disparate research areas enables the creation of learning methods that let robots use language to learn, adapt, and follow instructions. Understanding humans’ needs and communications is a long-standing AI problem, which fits within the larger context of understanding how to interact gracefully in primarily human environments. Incorporating these capabilities will let us develop flexible, inexpensive robots that can integrate into real-world settings such as the workplace and home.

You can access a PDF version of the full IEEE AI’s 10 to Watch article here.

