Research
My research is centered on the development of AI and speech technology for healthcare. I am particularly interested in how spontaneous, conversational speech can serve as a window into a person’s cognitive and neurological health.
Automated Clinical Assessment & CognoSpeak™
A primary focus of my current work is the CognoSpeak™ system. CognoSpeak is an automated, web-based tool designed to help clinicians detect early signs of cognitive impairment and dementia through natural conversation.
Recent advances (2025–2026) have expanded this work in two directions:
- Clinical Utility: Our latest study in the IEEE Journal of Biomedical and Health Informatics (2025) demonstrates CognoSpeak’s effectiveness in remote, real-world diagnostic settings. You can read more about the clinical applications on the CognoSpeak Research page.
- Diagnostic Breadth: We are currently adapting these speech-based biomarkers for the detection of stroke-related cognitive changes and Motor Neurone Disease (MND), supported by recent funding from the Rosetrees Trust.
The PROCESS Data Challenge & Open Science
In 2025, my team and I organized the PROCESS Challenge (Early dementia detection using multiple spontaneous speech prompts) as part of the IEEE ICASSP conference. The challenge attracted 50 participating teams from around the world and focused on advancing the state of the art in detecting early cognitive decline from real-world, conversational speech data.
I am a strong advocate for open science and believe that high-quality clinical datasets should be made available to the wider research community to accelerate innovation in healthcare AI. My team and I are committed to releasing data through platforms like Zenodo, ensuring that non-proprietary audio and metadata can benefit the global research community while maintaining the highest ethical and privacy standards.
Speech and Language Biomarkers
Beyond dementia, my lab investigates the intersection of natural language understanding (NLU) and clinical diagnostics. This includes:
- Primary Progressive Aphasia: Automated sub-typing of PPA using cross-attention systems (see our Interspeech 2025 paper).
- Inclusive AI: Ensuring that speech-based diagnostics are accent-agnostic and robust across diverse demographic groups.
- Multimodal Analysis: Integrating speech with visual cues, such as eye-blink rates and facial expressions, to increase diagnostic sensitivity.
Key Recent Publications
For a full list of my 120+ publications, please visit my Google Scholar profile. Selected recent highlights include:
- Pahar, M., Mirheidari, B., Blackburn, D., O’Malley, R., Walker, T., Reuber, M., & Christensen, H. (2025). "CognoSpeak: an automatic, remote assessment of early cognitive decline in real-world conversational speech." IEEE Journal of Biomedical and Health Informatics.
- Tao, F., Mirheidari, B., Pahar, M., Blackburn, D., & Christensen, H. (2025). "Early dementia detection using multiple spontaneous speech prompts: The PROCESS challenge." Proc. ICASSP 2025.
- Pan, Y., Mirheidari, B., Tu, Z., O’Malley, R., Walker, T., Blackburn, D., & Christensen, H. (2025). "A Two-Step Attention-Based Feature Combination Cross-Attention System for Speech-Based Dementia Detection." IEEE/ACM Transactions on Audio, Speech, and Language Processing.
- Mirheidari, B., Walker, T., Blackburn, D., & Christensen, H. (2025). "Automatic Detection and Sub-typing of Primary Progressive Aphasia from Speech." Proc. Interspeech 2025.
