Detection of depression in speech
Zhenyu Liu, B. Hu, Lihua Yan, Tian-Zhong Wang, Fei Liu, Xiaoyu Li, Huanyu Kang
2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 743-747, published 2015-09-21
DOI: 10.1109/ACII.2015.7344652
Citations: 34
Abstract
Depression is a common mental disorder and one of the leading causes of disability worldwide. The lack of objective assessment methods for depressive disorder is a key reason why many depressed patients are not treated properly. Advances in affective sensing technology focused on acoustic features could change this, since the slow, hesitant, monotonous voice of depressed patients is a remarkable characteristic. Our motivation is to find a speech feature set that can detect, evaluate, and even predict depression. To this end, we investigate a large sample of 300 subjects (100 depressed patients, 100 healthy controls, and 100 high-risk individuals) through comparative analysis and a follow-up study. To examine the correlation between depression and speech, we extract as many features as possible according to previous research, creating a large voice feature set. We then apply feature selection methods to eliminate irrelevant, redundant, and noisy features, forming a compact subset. To measure the effectiveness of this subset, we test it on our 300-subject dataset using several common classifiers with 10-fold cross-validation. Since data collection is ongoing, we have no results to report yet.
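The pipeline the abstract outlines (extract a large feature set, select a compact subset, evaluate with 10-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper specifies neither its feature-selection method nor its classifiers, so this sketch assumes a simple correlation-based filter and a nearest-centroid classifier, and uses synthetic data in place of the real acoustic features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for acoustic features: 300 subjects x 50 features,
# matching the study's three groups of 100 (depressed / healthy / high-risk).
X = rng.normal(size=(300, 50))
y = np.repeat([0, 1, 2], 100)
X[:, :5] += y[:, None] * 0.8  # make the first 5 features informative

def select_features(X, y, k=10):
    """Rank features by absolute correlation with the label.

    A simple filter-style selection; the paper does not specify which
    selection methods it uses, so this is an illustrative choice.
    """
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                     for j in range(X.shape[1])])
    return np.argsort(corr)[::-1][:k]

def nearest_centroid_cv(X, y, n_folds=10):
    """10-fold cross-validation with a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        classes = np.unique(y)
        # Class centroids from the training split only.
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in classes])
        # Assign each held-out subject to the nearest centroid.
        d = np.linalg.norm(X[fold][:, None] - centroids[None], axis=2)
        preds = classes[d.argmin(axis=1)]
        accs.append((preds == y[fold]).mean())
    return float(np.mean(accs))

cols = select_features(X, y, k=10)
acc = nearest_centroid_cv(X[:, cols], y)
print(f"mean 10-fold accuracy: {acc:.2f}")
```

On this synthetic data the filter recovers the informative features and the cross-validated accuracy is well above chance; with real acoustic features the same skeleton would simply swap in the extracted feature matrix and the chosen classifiers.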