Title: An Experimental System to Analyze or Synthesize Baby's Emotional Voice using the Varidation of Pitch Frequencies
Authors: Chikahiro Araki, Shin-ichiro Hashimukai, Satoshi Motomani, Mikio Mori, S. Taniguchi, Shozo Kato, Yasuhiro Ogoshi
DOI: 10.1109/ISUC.2008.57
Published in: 2008 Second International Symposium on Universal Communication
Publication date: 2008-12-15
Citation count: 0
Abstract
The technologies of speech recognition and speech synthesis are expected to be utilized to improve the human interface. However, extracting and analyzing the emotional information contained in the human voice remains a difficult problem in speech recognition and speech synthesis. This paper describes an experimental system that analyzes and synthesizes a baby's emotional voice using perturbation parameters of the pitch frequency in order to improve the human interface. The system is composed of three parts: the first part records and analyzes the voice and indicates the perturbation of pitch frequencies in real time; the second part evaluates four perturbation parameters of the pitch frequency for the baby's voice; finally, the baby's voice is synthesized using a two-mass model of vocal cord vibration. Experiments on several babies' voices show that the proposed method is useful for analyzing and synthesizing a baby's emotional voice.
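The abstract does not name the four perturbation parameters it evaluates, so as a minimal sketch, the snippet below computes one standard pitch-perturbation measure, local jitter (the mean absolute difference between consecutive pitch periods, normalized by the mean period). The function name `jitter_local` and the example F0 contour are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def jitter_local(f0_hz):
    """Local jitter: mean absolute difference between consecutive
    pitch periods, relative to the mean pitch period."""
    periods = 1.0 / np.asarray(f0_hz, dtype=float)  # pitch periods in seconds
    diffs = np.abs(np.diff(periods))                # cycle-to-cycle changes
    return diffs.mean() / periods.mean()

# Example: a slightly perturbed contour around 400 Hz, a typical cry pitch
f0 = [400.0, 410.0, 395.0, 405.0, 398.0]
print(f"local jitter = {jitter_local(f0):.4f}")
```

A perfectly steady pitch track yields zero jitter, while irregular vocal-fold vibration (as in emotional or strained cries) raises it, which is why such perturbation measures are plausible inputs for the kind of analysis the paper describes.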