Interpretable Machine Teaching via Feature Feedback

Shihan Su, Yuxin Chen, Oisin Mac Aodha, Pietro Perona, Yisong Yue

Research output: Contribution to conference › Paper › peer-review


A student’s ability to learn a new concept can be greatly improved by providing them with clear and easy-to-understand explanations from a knowledgeable teacher. However, many existing approaches for machine teaching give only a limited amount of feedback to the student. For example, in the case of learning visual categories, this feedback could be the class label of the object present in the image. Instead, we propose a teaching framework that includes both instance-level labels and explanations in the form of feature-level feedback to the human learners. For image categorization, our feature-level feedback consists of a highlighted part or region in an image that explains the class label. We perform experiments on real human participants and show that learners who are taught with feature-level feedback perform better at test time compared to existing methods.
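The framework described above pairs each labeled example with a highlight over the features that explain the label. As a rough intuition for why this helps, the following sketch simulates a learner that updates both its decision weights (from instance labels) and an attention mask (from feature-level feedback). All names and the update rules are illustrative assumptions, not the paper's actual model of human learners.

```python
import numpy as np

class FeatureFeedbackLearner:
    """Toy simulated learner (illustrative only, not the paper's model).

    Instance-level feedback drives a perceptron-style weight update;
    feature-level feedback (a highlight mask) shifts the learner's
    attention toward the features the teacher marked as relevant.
    """

    def __init__(self, n_features, lr=0.5):
        self.w = np.zeros(n_features)          # decision weights
        self.attention = np.ones(n_features)   # per-feature attention
        self.lr = lr

    def predict(self, x):
        return 1 if (self.attention * self.w) @ x >= 0 else -1

    def teach(self, x, label, highlight=None):
        # Instance-level feedback: update weights on a mistake.
        if self.predict(x) != label:
            self.w += self.lr * label * x
        # Feature-level feedback: blend attention toward the highlight.
        if highlight is not None:
            self.attention = 0.5 * self.attention + 0.5 * highlight

learner = FeatureFeedbackLearner(n_features=4)
# Here feature 0 is the discriminative one, so the teacher highlights it.
examples = [(np.array([1.0, 0.3, -0.2, 0.5]), 1),
            (np.array([-1.0, 0.4, 0.1, 0.6]), -1)]
mask = np.array([1.0, 0.0, 0.0, 0.0])
for x, y in examples:
    learner.teach(x, y, highlight=mask)
```

After teaching, the learner's attention is concentrated on the highlighted feature, so the distracting features contribute less to its predictions.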
Original language: English
Number of pages: 9
Publication status: Published - 9 Dec 2017
Event: NIPS 2017 Workshop on Teaching Machines, Robots, and Humans - Long Beach, United States
Duration: 9 Dec 2017 → 9 Dec 2017


Workshop: NIPS 2017 Workshop on Teaching Machines, Robots, and Humans
Country/Territory: United States
City: Long Beach


