Neural Prototype Trees for Interpretable Fine-grained Image Recognition – #17

Interpretable machine learning addresses the black-box nature of deep neural networks. Visual prototypes have been suggested for intrinsically interpretable image recognition, as an alternative to post-hoc explanations that only approximate a trained model. Aiming for better interpretability and fewer prototypes, so as not to overwhelm the user, we propose the Neural Prototype Tree (ProtoTree), a deep learning method that includes prototypes in a hierarchical decision tree to faithfully visualize the entire model. In addition to global interpretability, a path in the tree explains a single prediction. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this learned prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it’s a hummingbird! We tune the accuracy-interpretability trade-off using ensembling and pruning. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 learned prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars datasets. Code is available at https://github.com/M-Nauta/ProtoTree. Full paper published at CVPR 2021.
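To make the routing mechanism concrete, here is a minimal PyTorch sketch of a prototype-based decision tree, not the authors' implementation (see the linked repository for that). It assumes soft routing in which each internal node scores the presence of its prototype as the exponential of the negative squared distance to the best-matching patch of a CNN feature map; the `Node` and `Leaf` classes, tensor shapes, and tree layout below are illustrative choices.

```python
import torch
import torch.nn as nn


class Leaf(nn.Module):
    """Leaf holding a trainable distribution over the classes."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_classes))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        dist = self.logits.softmax(dim=-1)        # (C,)
        return dist.expand(features.size(0), -1)  # (B, C)


class Node(nn.Module):
    """Internal node: routes right according to its prototype's presence score."""
    def __init__(self, channels: int, left: nn.Module, right: nn.Module):
        super().__init__()
        # Trainable prototypical part, matched against every 1x1 spatial patch.
        self.prototype = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.left, self.right = left, right

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Squared L2 distance of the prototype to each patch: (B, H, W).
        dists = ((features - self.prototype) ** 2).sum(dim=1)
        # Presence score in (0, 1]: high if some patch closely matches.
        p_right = torch.exp(-dists.amin(dim=(1, 2))).unsqueeze(-1)  # (B, 1)
        # Soft routing: mixture of the two subtrees' predictions.
        return p_right * self.right(features) + (1 - p_right) * self.left(features)


# Toy usage: a small tree over feature maps from a CNN backbone (stand-in here).
channels, num_classes = 256, 200
tree = Node(channels,
            left=Node(channels, Leaf(num_classes), Leaf(num_classes)),
            right=Leaf(num_classes))
features = torch.randn(4, channels, 7, 7)  # would come from a conv backbone
print(tree(features).shape)                # torch.Size([4, 200])
```

Because every node's decision is tied to one visualizable prototype, the full tree can be rendered as a sequence of "is this part present?" questions, and a single prediction is explained by the path an image takes from root to leaf.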

Full article: Paper 17