
AI Uses Vocal Cues to Detect Depression

July 25, 2019

Artificial intelligence (AI) may be able to use the sound of a person’s voice to detect depressed mood, indicates a study presented at the recent Canadian Conference on Artificial Intelligence.

Mashrura Tasnim, a PhD student at the University of Alberta, and Eleni Stroulia, PhD, a professor in the university’s department of computing science, based their study on previous findings suggesting a person’s voice contains features that provide information about mood.

Using standard benchmark datasets, the two developed a methodology that combines multiple machine-learning algorithms to detect depression from acoustic cues in a speaker's voice. They anticipate that the technology will eventually be used in a smartphone app that monitors a user's mood.
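For readers curious about what such a pipeline could look like in practice, the sketch below is a hypothetical illustration rather than the authors' implementation: it summarizes each recording with MFCC statistics (an assumed acoustic feature, computed with librosa) and combines several scikit-learn classifiers by soft voting. All feature choices, models, file paths, and labels are placeholders.

```python
# Hypothetical sketch: acoustic features plus an ensemble of classifiers.
# The MFCC statistics and the specific models below are illustrative
# assumptions, not the pipeline described in the study.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def acoustic_features(wav_path: str) -> np.ndarray:
    """Summarize a recording as a fixed-length vector of MFCC statistics."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    # Mean and standard deviation of each coefficient over time.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def build_model() -> VotingClassifier:
    """Combine several classifiers by soft voting on predicted probabilities."""
    return VotingClassifier(
        estimators=[
            ("logreg", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
            ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
            ("forest", RandomForestClassifier(n_estimators=200)),
        ],
        voting="soft",
    )


# Usage (paths and labels are placeholders):
# X = np.vstack([acoustic_features(p) for p in wav_paths])
# y = np.array(labels)  # 1 = speech sample labeled depressed, 0 = control
# model = build_model().fit(X, y)
# print(model.predict_proba(X[:1]))
```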


“A realistic scenario is to have people use an app that will collect voice samples as they speak naturally,” Dr. Stroulia said. “The app, running on the user’s phone, will recognize and track indicators of mood, such as depression, over time. Much like you have a step counter on your phone, you could have a depression indicator based on your voice as you use the phone.”

The data collected could ultimately help health care providers and patients track mood over time, the scientist noted.

“This work—developing more accurate detection in standard benchmark datasets—is the first step,” Dr. Stroulia said.

—Jolynn Tumolo

References

Tasnim M, Stroulia E. Detecting depression from voice. Paper presented at: Canadian Conference on Artificial Intelligence; April 24, 2019 [Epub ahead of print].

Willis K. Sound mind: detecting depression through voice [press release]. Edmonton, Alberta, Canada: University of Alberta; July 11, 2019.
