TY - GEN
T1 - Spatio-temporal memory for navigation in a mushroom body model
AU - Zhu, Le
AU - Mangan, Michael
AU - Webb, Barbara
N1 - Funding Information:
Supported by China Scholarships Council (Grant No. 201808060165).
Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020/12/23
Y1 - 2020/12/23
N2 - Insects, despite relatively small brains, can perform complex navigation tasks such as memorising a visual route. The exact format of visual memory encoded by neural systems during route learning and following is still unclear. Here we propose that interconnections between Kenyon cells in the Mushroom Body (MB) could encode spatio-temporal memory of visual motion experienced when moving along a route. In our implementation, visual motion is sensed using an event-based camera mounted on a robot, and learned by a biologically constrained spiking neural network model, based on a simplified MB architecture and using modified leaky integrate-and-fire neurons. In contrast to previous image-matching models where all memories are stored in parallel, the continuous visual flow is inherently sequential. Our results show that the model can distinguish learned from unlearned route segments, with some tolerance to internal and external noise, including small displacements. The neural response can also explain observed behaviour taken to support sequential memory in ant experiments. However, obtaining comparable robustness to insect navigation might require the addition of biomimetic pre-processing of the input stream, and determination of the appropriate motor strategy to exploit the memory output.
AB - Insects, despite relatively small brains, can perform complex navigation tasks such as memorising a visual route. The exact format of visual memory encoded by neural systems during route learning and following is still unclear. Here we propose that interconnections between Kenyon cells in the Mushroom Body (MB) could encode spatio-temporal memory of visual motion experienced when moving along a route. In our implementation, visual motion is sensed using an event-based camera mounted on a robot, and learned by a biologically constrained spiking neural network model, based on a simplified MB architecture and using modified leaky integrate-and-fire neurons. In contrast to previous image-matching models where all memories are stored in parallel, the continuous visual flow is inherently sequential. Our results show that the model can distinguish learned from unlearned route segments, with some tolerance to internal and external noise, including small displacements. The neural response can also explain observed behaviour taken to support sequential memory in ant experiments. However, obtaining comparable robustness to insect navigation might require the addition of biomimetic pre-processing of the input stream, and determination of the appropriate motor strategy to exploit the memory output.
KW - Event-based camera
KW - Insect navigation
KW - Insect visual motion
KW - Mushroom body learning
KW - Sequence learning
KW - Spatio-temporal memory
UR - http://www.scopus.com/inward/record.url?scp=85107287858&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-64313-3_39
DO - 10.1007/978-3-030-64313-3_39
M3 - Conference contribution
AN - SCOPUS:85107287858
SN - 978-3-030-64312-6
T3 - Lecture Notes in Computer Science
SP - 415
EP - 426
BT - Biomimetic and Biohybrid Systems - 9th International Conference, Living Machines 2020, Proceedings
A2 - Vouloutsi, Vasiliki
A2 - Mura, Anna
A2 - Verschure, Paul F. M. J.
A2 - Tauber, Falk
A2 - Speck, Thomas
A2 - Prescott, Tony J.
PB - Springer
T2 - 9th International Conference on Biomimetic and Biohybrid Systems, Living Machines 2020
Y2 - 28 July 2020 through 30 July 2020
ER -