Oral in Workshop: Machine Learning for Multimodal Healthcare Data

Multi-Modal Biomarker Extraction Framework for Therapy Monitoring of Social Anxiety and Depression Using Audio and Video

Paula Andrea Pérez-Toro · Tobias Weise · Andrea Deitermann · Bettina Hoffmann · Kubilay Demir · Theresa Straetz · Elmar Noeth · Andreas Maier · Thomas Kallert · Seung Hee Yang

Keywords: [ Digital pathology ] [ Multimodal biomarkers ]


Abstract:

This paper introduces a framework for extracting features relevant to monitoring the speech therapy progress of individuals suffering from social anxiety or depression. It operates multi-modally (via decision fusion), incorporating audio and video recordings of a patient and the corresponding interviewer at two separate assessment sessions. The data is provided by an ongoing project at a medical institution in Germany, whose goal is to investigate whether an established speech therapy group program for adolescents, implemented in stationary and semi-stationary settings, can be successfully carried out through telemedicine. The features proposed in this multi-modal approach could form the basis for interpretation and analysis by medical experts and therapists, complementing data acquired through questionnaires. Extracted audio features focus on prosody (intonation, stress, rhythm, and timing), as well as predictions from a deep neural network model inspired by the Pleasure, Arousal, Dominance (PAD) emotional model space. Video features are based on a pipeline designed to visualize the interaction between the patient and the interviewer in terms of Facial Emotion Recognition (FER), using the mini-Xception network architecture.
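As a rough illustration of the kind of prosodic audio features the abstract mentions (intonation, stress, rhythm), the following NumPy-only sketch computes frame-level energy, zero-crossing rate, and a simple autocorrelation pitch (F0) estimate. The function name, frame sizes, and the 50–400 Hz pitch search band are illustrative assumptions, not the authors' implementation, which is not detailed in the abstract.

```python
import numpy as np

def prosody_features(signal, sr=16000, frame_len=400, hop=160):
    """Frame-level energy, zero-crossing rate, and an autocorrelation
    F0 estimate -- simplified stand-ins for prosodic cues.
    (Toy sketch; not the paper's actual feature extractor.)"""
    starts = range(0, len(signal) - frame_len + 1, hop)
    frames = np.array([signal[s:s + frame_len] for s in starts])

    # Loudness proxy (stress) and noisiness proxy per 25 ms frame
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)

    # Autocorrelation pitch estimate, searching 50-400 Hz lags
    f0 = []
    lo, hi = sr // 400, sr // 50
    for frame in frames:
        corr = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lo + int(np.argmax(corr[lo:hi]))
        f0.append(sr / lag)
    return energy, zcr, np.array(f0)

# Toy "voiced" signal: a 100 Hz sine, one second at 16 kHz
sr = 16000
t = np.arange(sr) / sr
energy, zcr, f0 = prosody_features(np.sin(2 * np.pi * 100 * t), sr)
```

In practice, per-frame tracks like these would be summarized over each interview session (e.g. F0 range and variability as intonation measures, pause statistics for timing) before being handed to therapists or a downstream model.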
