On the interpretability of part-prototype based classifiers: a human centric analysis

The interpretability of a prototype-based machine learning method as a whole is too broad a notion to be evaluated effectively. As a result, our experiments were designed to measure human opinion of the individual properties required for an interpretable method. This focus on single aspects allowed us to gain fine-grained insight into how well human users understand, and how much they agree with, the explanations produced by different methods.

Human annotators were recruited via Amazon Mechanical Turk. To ensure that workers had a good understanding of the task, they were required to first complete a qualification…



News Source: www.nature.com

