News, research, resources, and personal stories about mania, manic episodes, hypomania, and Bipolar I Disorder.


AI Listens For Mood Swings In The Voices Of Those With Bipolar Disorder

AI model analyzes three-second audio clips to classify mood states

A new study from China suggests that artificial intelligence may one day help doctors spot mood swings in people with bipolar disorder just by listening to how they talk. The experimental system correctly identified manic, depressed, and stable states in Mandarin-speaking patients with around 86% accuracy.

Researchers at Tianjin Anding Hospital and collaborating universities built a deep-learning model that analyzes short clips of patients’ speech and classifies their mood. The work, published in BMC Psychiatry in 2025, aims to add a more objective tool to the current mix of checklists, rating scales, and self-report that can miss early warning signs or be skewed by memory and insight problems.

Bipolar disorder, which affects more than 1% of the global population, is marked by episodes of mania or hypomania and depression, along with long stretches of stability. Misjudging where someone is on that spectrum can delay treatment changes, increase relapse risk, and raise the chance of hospitalization or suicide. The authors note that traditional assessments are vulnerable to observer bias and subjective recall, especially during mania, when insight is often reduced.

Clinical observation has long shown that speech shifts with mood: faster and louder during mania, slower and flatter in depression. Earlier studies have used voice to track depression or distinguish bipolar disorder from other conditions, mostly in English speakers. Mandarin, however, is a tonal language where pitch also carries meaning, making emotion detection more complex and requiring language-specific models.

To build their system, the team recruited 53 adults with bipolar I or II disorder, ages 18 to 65, from an outpatient clinic in Tianjin between March 2023 and April 2024. All met DSM-5 diagnostic criteria and completed structured interviews and standardized mood rating scales. Based on those scores, 19 were in a manic or hypomanic episode, 15 in a depressive episode, and 19 in a stable, or euthymic, state.

Participants sat in a soundproof, electromagnetically shielded room and spoke with trained clinicians while being recorded on a high-quality digital device. The conversations—including the diagnostic interviews and mood scales—were then cut into three-second audio segments. In total, the dataset contained 2,990 clips: 1,041 from manic states, 1,028 from depressive states, and 921 from euthymic states.
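The segmentation step described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's code; the sampling rate and the choice of non-overlapping windows are assumptions.

```python
import numpy as np

def segment(audio, sr, seconds=3.0):
    # Split a recording into fixed-length, non-overlapping clips,
    # dropping any trailing remainder shorter than the window.
    n = int(sr * seconds)
    return [audio[i:i + n] for i in range(0, len(audio) - n + 1, n)]

sr = 16000                          # assumed sampling rate
recording = np.zeros(10 * sr)       # a hypothetical 10-second recording
clips = segment(recording, sr)
print(len(clips))                   # 3 full three-second clips
```

Whether the researchers used overlapping windows or discarded short remainders is not stated in the article, so this sketch shows only the simplest variant.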

Technically, the researchers compared three types of speech features. A traditional Mel spectrogram captured how sound energy changed across frequencies. Two newer methods—HuBERT and WavLM—generated richer, pre-trained representations learned from vast speech datasets. They fed those features into three deep-learning models to classify each clip as manic, depressive, or euthymic.
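To give a sense of what the Mel-spectrogram feature looks like, here is a minimal NumPy sketch of the computation: frame the waveform, take the power spectrum of each frame, and project it onto triangular filters spaced on the mel scale. All parameter values (16 kHz sampling, 512-point FFT, 160-sample hop, 40 mel bands) are assumptions for illustration, not the study's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):           # rising slope
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):          # falling slope
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=160, n_mels=40):
    # Frame the signal, apply a Hann window, FFT, then project onto mel filters.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    return mel_filterbank(n_mels, n_fft, sr) @ power.T   # (n_mels, n_frames)

# A synthetic three-second tone at 16 kHz, standing in for one speech clip.
sr = 16000
t = np.linspace(0, 3, 3 * sr, endpoint=False)
clip = np.sin(2 * np.pi * 220 * t)
spec = mel_spectrogram(clip, sr=sr)
print(spec.shape)   # (40, 297): 40 mel bands over 297 frames
```

In practice, pre-trained models such as HuBERT and WavLM replace this hand-crafted step with learned representations, which is the comparison the study reports.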

The authors say speech-based tools could eventually help patients track mood shifts in real time. “Speech features are promising for differentiating mood states in bipolar disorder because they are accessible, objective, and non-invasive,” the team wrote.

The study was small, limited to one hospital, and recorded under controlled conditions. Real-world audio—captured over phones or in noisy homes—may be far harder for AI to interpret. The model was trained only on Mandarin speech, meaning it cannot be assumed to work in English or other languages without retraining. And because the study captured only a single moment in each patient’s mood cycle, longer-term research is needed to see whether speech patterns can predict mood swings before they happen.

Source: Li J. et al., “Mood states recognition based on Mandarin speech and deep learning in patients with bipolar disorder,” BMC Psychiatry (2025).
Full paper: https://link.springer.com/article/10.1186/s12888-025-07630-5

Note from the editor: Even with these limits, the study provides a promising step toward using the sound of a person’s voice as a noninvasive marker of mood in bipolar disorder. If validated, such tools could offer earlier warnings, support treatment decisions, and reduce the burden of relapse. – Alex Rowan


1 response to “AI Listens For Mood Swings In The Voices Of Those With Bipolar Disorder”

  1. Pieter D

    It is a good diagnostic tool. It is known that an AI trained to analyze retinal images could distinguish male from female retinas. This wasn’t intentional; sex happened to be part of the training data by accident. It turns out that no human can determine sex by looking at the retina. So expect these AI systems to become better than humans at determining mood.


About


Mania Insights reports news, scientific research, helpful resources, and real-life experiences about mania and manic episodes. Mania Insights aims to break the silence and reduce the stigma, empowering individuals and families to better understand the bipolar I condition and thrive.

Share your experiences or comment: mania.insights@gmail.com
https://x.com/ManiaInsights