{"title":"Evaluating BSS Algorithms in a Mobile Context Realized by a Client-Server Architecture","authors":"M. Offiah, T. Gross, M. Borschbach","doi":"10.1145/2897073.2897098","DOIUrl":null,"url":null,"abstract":"The human daily and the professional life demand a high amount of communication ability, but every fourth adult above 50 is hearing-impaired, a fraction that steadily in- creases in an aging society. For an autonomous, self-condent and long productive life, a good speech understanding in ev- eryday life situations is necessary to reduce the listening ef- fort. For this purpose, a mobile app-based assistance system based on Blind Source Separation is required that makes every-day acoustic scenarios more transparent by the op- portunity of an interactive focusing on the preferred sound source in as close to real-time as possible. In case of highly costly BSS algorithms, at least an oine separation is to be provided. Developing such an app in the context of a short-term research project with limited budget to realize this goal statement makes it impossible to meet the chal- lenge as a stand-alone solution with existing technologies and hardware. As an alternative, employing part of the re- quired soft- and/or hardware on a remote server at least maintains the mobile context, given sucient connectivity. For this purpose, a client-server architecture that combines Android, Java, MatlabControl and MATLAB, and that con- ducts separation of live recorded audio data remotely, is ex- plained and tested. Conclusions about what is possible for the oine case are drawn from that. Tests are evaluated using a set of objective and subjective criteria. This demon- strates the possibility of realizing the assistance system in a mobile context.","PeriodicalId":296509,"journal":{"name":"2016 IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2897073.2897098","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Daily and professional life demand a high degree of communication ability, yet every fourth adult above 50 is hearing-impaired, a fraction that steadily increases in an aging society. For an autonomous, self-confident and long productive life, good speech understanding in everyday situations is necessary to reduce the listening effort. For this purpose, a mobile app-based assistance system based on Blind Source Separation (BSS) is required that makes everyday acoustic scenarios more transparent by offering interactive focusing on the preferred sound source in as close to real time as possible. For highly costly BSS algorithms, at least an offline separation is to be provided. Developing such an app within a short-term research project with a limited budget makes it impossible to meet this challenge as a stand-alone solution with existing technologies and hardware. As an alternative, running part of the required software and/or hardware on a remote server at least maintains the mobile context, given sufficient connectivity. For this purpose, a client-server architecture that combines Android, Java, MatlabControl and MATLAB, and that conducts the separation of live recorded audio data remotely, is explained and tested. Conclusions about what is possible in the offline case are drawn from these results. The tests are evaluated using a set of objective and subjective criteria. This demonstrates the feasibility of realizing the assistance system in a mobile context.
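To make the described client-server idea concrete, the following is a minimal server-side sketch, not the authors' implementation: a Java class that hands a client-uploaded recording to a MATLAB session via the matlabcontrol library (MatlabProxyFactory/MatlabProxy) and returns the path of the separated output. The MATLAB function name bss_separate, the method names, and the file layout are illustrative assumptions; only the matlabcontrol classes and the overall Android-to-Java-to-MATLAB flow come from the abstract.

```java
// Sketch of a server-side separation service, assuming the matlabcontrol library is
// on the classpath and a MATLAB installation is available on the server.
import matlabcontrol.MatlabConnectionException;
import matlabcontrol.MatlabInvocationException;
import matlabcontrol.MatlabProxy;
import matlabcontrol.MatlabProxyFactory;

public class RemoteSeparationService {

    private final MatlabProxy proxy;

    public RemoteSeparationService() throws MatlabConnectionException {
        // Launch (or connect to) a MATLAB session that Java can control remotely.
        MatlabProxyFactory factory = new MatlabProxyFactory();
        this.proxy = factory.getProxy();
    }

    /**
     * Runs a (hypothetical) MATLAB BSS routine on a mixture recorded by the Android
     * client and uploaded to the server, e.g. "/srv/uploads/mixture.wav".
     * Returns the path of the separated source file written by MATLAB.
     */
    public String separate(String mixtureWavPath, String outputWavPath)
            throws MatlabInvocationException {
        // Equivalent to typing: bss_separate('mixture.wav', 'separated.wav') in MATLAB.
        proxy.feval("bss_separate", mixtureWavPath, outputWavPath);
        return outputWavPath;
    }

    public void shutdown() {
        // Release the connection; the MATLAB session itself can be kept alive for reuse.
        proxy.disconnect();
    }
}
```

In such a setup, the Android client only records and uploads the audio and later downloads the result, so the costly BSS computation runs entirely on the server; whether the round trip is close enough to real time then depends mainly on connectivity and the runtime of the chosen BSS algorithm, which is exactly the offline-versus-online distinction the abstract draws.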