Abstract / Description of output
Semmelhack et al. (2014) achieved high accuracy in classifying swim bouts of zebrafish using a Support Vector Machine (SVM). Convolutional Neural Networks (CNNs) have surpassed SVMs in various image recognition tasks, but these powerful networks remain a black box. Improving their transparency helps build trust in their classifications and makes learned features interpretable to experts. Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps that highlight the input regions most relevant to each prediction. We find that our CNN makes predictions by analyzing the steadiness of the tail's trunk, which markedly differs from the manually extracted features used by Semmelhack et al. (2014). We further uncovered that the network paid attention to experimental artifacts; removing these artifacts ensured the validity of its predictions. After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%. Our work thus demonstrates the utility of AI explainability for CNNs.
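Deep Taylor Decomposition attributes a network's output score back to its input by propagating relevance scores layer by layer; for ReLU networks this reduces to the so-called z+ rule. The following is a minimal NumPy sketch on a hypothetical two-layer toy network — the weights, layer sizes, and the `ztaylor_backprop` helper are illustrative assumptions, not the authors' model:

```python
import numpy as np

# Hypothetical toy network: 4 inputs -> 3 hidden (ReLU) -> 2 logits
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))

x = np.abs(rng.normal(size=4))   # non-negative input (e.g. pixel intensities)
h = np.maximum(0.0, x @ W1)      # hidden ReLU activations
y = h @ W2                       # output logits

def ztaylor_backprop(a, W, R, eps=1e-9):
    """z+ rule: redistribute relevance R from one layer to the layer below,
    in proportion to each lower neuron's positive contribution."""
    Wp = np.maximum(0.0, W)      # keep positive weights only
    z = a @ Wp + eps             # positive pre-activation per upper neuron
    s = R / z                    # relevance per unit of contribution
    return a * (Wp @ s)          # relevance of each lower neuron

# Start with unit relevance on the winning logit, propagate down to the input
R_out = np.zeros_like(y)
R_out[np.argmax(y)] = 1.0
R_hidden = ztaylor_backprop(h, W2, R_out)
R_input = ztaylor_backprop(x, W1, R_hidden)
```

Each entry of `R_input` indicates how strongly that input dimension supported the winning logit; arranged over the pixels of a video frame, such scores form the relevance heatmaps described in the abstract.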
Original language | English |
---|---|
Title of host publication | Proceedings of the International Conference on Learning Representations, 2020 |
Place of Publication | Addis Ababa, Ethiopia |
Pages | 1-18 |
Number of pages | 18 |
Publication status | Published - 30 Apr 2020 |
Event | Eighth International Conference on Learning Representations, Millennium Hall, virtual conference (formerly Addis Ababa, Ethiopia), 26 Apr 2020 → 30 Apr 2020, https://iclr.cc/Conferences/2020 |
Conference
Conference | Eighth International Conference on Learning Representations |
---|---|
Abbreviated title | ICLR 2020 |
Country/Territory | Ethiopia |
City | Virtual conference (formerly Addis Ababa) |
Period | 26/04/20 → 30/04/20 |
Internet address | https://iclr.cc/Conferences/2020 |
Keywords / Materials (for Non-textual outputs)
- cs.CV
- cs.LG
- eess.IV