Abstract
The use of machine learning instead of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks (ANNs) used as predictive tools in neuroscience. Following two case studies on the use of ANNs to model motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding of the brain trades off against the predictive accuracy of the models. This trade-off between prediction and understanding is better explained by a non-factivist account of scientific understanding.
| Original language | English |
|---|---|
| Journal | Synthese |
| Early online date | 28 May 2020 |
| Publication status | E-pub ahead of print - 28 May 2020 |
Profiles
- Mazviita Chirimuuta - School of Philosophy, Psychology and Language Sciences - Senior Lecturer in Philosophy
- Person: Academic: Research Active