Prediction versus understanding in computationally enhanced neuroscience

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

The use of machine learning in place of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks (ANNs) used as a predictive tool in neuroscience. Following two case studies on the use of ANNs to model the motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding of the brain trades off against the predictive accuracy of the models. This trade-off between prediction and understanding is better explained by a non-factivist account of scientific understanding.
Original language: English
Early online date: 28 May 2020
Publication status: E-pub ahead of print - 28 May 2020


