Abstract
[Introduction/Motivation:]
Due to the high incidence of dementia across the world, its care and prevention are increasingly pressing public health priorities [1], with a focus on early detection and improved caregiving. As language
impairment is a common symptom of dementia and a good source of clinical information for its
assessment [2-4], our research aims to characterise potentially disrupted communication patterns
related to cognitive function and decline. Identifying such features will ultimately help us design assistive
technologies able to automatically monitor cognitive status (e.g. adaptive interfaces, social robotics), in
order to allow older people to live at home longer and as independently as possible [5, 6]. In the present
work, our hypothesis is that patients with Alzheimer's Disease (AD) will show identifiable
patterns during dialogue interactions (e.g. disrupted turn-taking patterns, differences in speech rate).
[Methods:]
We employ spontaneous conversational data from the Carolina Conversations Collection [7]
to train a machine learning model to differentiate AD from non-AD patients. We included 21 patients
and 17 controls, all over 65 years old. The data were pre-processed to generate vocalisation graphs (figure
1) and to extract speech rate information. These features, together with the diagnostic annotations (AD vs. non-AD), were
used to train the model in a supervised setting. The classifier was then evaluated on its ability
to predict these annotations (AD vs. non-AD) using 10-fold cross-validation.
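A vocalisation graph encodes who is vocalising at each moment (or whether there is silence or overlap) and how the conversation moves between those states. As a rough illustration of how such a graph can be turned into classifier input, the Python sketch below computes state-to-state transition probabilities from a labelled interval sequence; the state labels, the (state, duration) input format and the function name are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): reduce a vocalisation graph to
# features, i.e. transition probabilities between dialogue states such as
# patient speech, interviewer speech, silence and overlap.
from collections import Counter

STATES = ["patient", "interviewer", "silence", "overlap"]  # assumed labels

def vocalisation_features(intervals):
    """intervals: time-ordered list of (state, duration_in_seconds) tuples.
    Returns a flat vector of state-to-state transition probabilities."""
    transitions = Counter(
        (prev[0], curr[0]) for prev, curr in zip(intervals, intervals[1:])
    )
    features = []
    for src in STATES:
        total = sum(transitions[(src, dst)] for dst in STATES) or 1
        features.extend(transitions[(src, dst)] / total for dst in STATES)
    return features

# Example: a pause before the patient replies, then a brief overlap.
example = [("interviewer", 2.1), ("silence", 0.8),
           ("patient", 4.5), ("overlap", 0.3), ("interviewer", 1.9)]
print(vocalisation_features(example))
```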
[Results and Discussion:]
The classifier reached up to 83% accuracy based on turn-taking patterns and speech rate. Precision,
recall and F1 scores were also calculated (figure 2). These are preliminary results of research in
progress: we are currently pre-processing the rest of the dataset and will explore further methods
for dialogue analysis and natural language processing in the short term.
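For context only, the sketch below shows one standard way such an evaluation could be set up: 10-fold cross-validation reporting accuracy, precision, recall and F1 with scikit-learn. The logistic-regression classifier and the randomly generated feature matrix are placeholders (the abstract does not specify the model used); only the group sizes (21 AD, 17 non-AD) come from the study.

```python
# Illustrative only: 10-fold cross-validated evaluation of a dialogue-feature
# classifier, reporting the metrics named in the abstract. The classifier type
# and the synthetic features are assumptions, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(38, 20))        # placeholder turn-taking / speech-rate features
y = np.array([1] * 21 + [0] * 17)    # 21 AD patients, 17 non-AD controls

scores = cross_validate(
    LogisticRegression(max_iter=1000),
    X, y, cv=10,
    scoring=["accuracy", "precision", "recall", "f1"],
)
for metric in ["accuracy", "precision", "recall", "f1"]:
    print(metric, round(float(np.mean(scores[f"test_{metric}"])), 3))
```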
All in all, several linguistic parameters show promise for the assessment of
cognitive functioning [2-4]. Our approach does not rely on speech transcription content, but on speech-silence patterns and basic prosodic information extracted from spontaneous spoken dialogue. Still, it
achieves levels of accuracy comparable to state-of-the-art systems that rely on more complex features.
This opens the possibility of devising mental health monitoring methods which would be non-invasive
and low-cost in terms of time and resources.
[Acknowledgements:]
We acknowledge C. Pope and B. H. Davis, from the Medical University of South Carolina, host to the
Carolina Conversations Collection [7]. Our research is supported by the UK Medical Research Council.
| Original language | English |
|---|---|
| Publication status | Published - 14 Feb 2018 |
| Event | 2nd Human Brain Project Student Conference: Transdisciplinary Research Linking Neuroscience, Brain Medicine and Computer Science, Ljubljana, Slovenia. Duration: 14 Feb 2018 → 16 Feb 2018. https://education.humanbrainproject.eu/documents/275408/0/Proceedings_2nd+HBP+SC_190408_TR.PDF/ae09c785-1efa-421a-959f-be7df4e15b82 |
Conference
| Conference | 2nd Human Brain Project Student Conference |
|---|---|
| Country/Territory | Slovenia |
| City | Ljubljana |
| Period | 14/02/18 → 16/02/18 |
| Internet address | |