{"title":"AI, IoT hardware and Algorithmic Considerations for Hearing aid and Extreme Edge Applications","authors":"R. Brennan","doi":"10.1109/MWSCAS.2019.8884886","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence (AI) has made many significant advances over recent years. Starting out as mainly university research endeavors, recent significant breakthroughs have pushed the practicality of AI to the forefront. It is making fast inroads to traditional industrial applications and it is clear that given sufficient computing resources, AI is applicable to almost anything.The breakthrough referenced is mostly the result of the diligent persistence of a number of AI researchers in combination with large increases in available computation power to reconsider much deeper neural networks than previously used which were consistently rejected because of their large complexity reasons. Deep Neural Nets have proven to be an adept framework and up to solving the difficult challenges proven previous machine learning approaches could not solve.Recently, driven by enhanced computational power and necessity, edge applications have arisen to the forefront. Generally, now that cloud computing is available, a choice may be made where to locate the recognition engine, local to the data source or on the cloud where considerable computing resources are available.Hearing aids, a product on the edge now connected via one or more wireless links and fully immersed in IoT are the subject and consideration of this paper. It is natural to consider whether, via these links, remote computation is possible and appropriate for hearing aid applications. Difficulties arise when remote computation is attempted simply because the local data to be inferenced must be transmitted. Summarizing, utilizing remote computing for local recognition creates, two immediate problems: 1) the transmission of possibly private data across an insecure channel, and 2) the channel may not exist in remote or adverse transmission environments. Two further difficulties emerge in an important subset of applications, including hearing aids and hearable products: 1) Delay from the transmission latency required to obtain Cloud computation – although fast and capable – once the information is obtained and 2) Transmission power.","PeriodicalId":287815,"journal":{"name":"2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MWSCAS.2019.8884886","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Artificial Intelligence (AI) has made many significant advances in recent years. What began largely as university research has, through recent breakthroughs, been pushed to practical prominence. AI is making fast inroads into traditional industrial applications, and it is clear that, given sufficient computing resources, it is applicable to almost anything.

These breakthroughs are largely the result of the diligent persistence of a number of AI researchers, combined with large increases in available computing power, which made it feasible to revisit much deeper neural networks than previously used, networks that had long been rejected because of their complexity.

Deep neural networks have proven to be an adept framework, capable of solving difficult problems that previous machine learning approaches could not.

More recently, driven by both enhanced computational power and necessity, edge applications have risen to the forefront. With cloud computing now widely available, a choice can be made about where to locate the recognition engine: local to the data source, or in the cloud where considerable computing resources are available.

Hearing aids, edge products now connected via one or more wireless links and fully immersed in the IoT, are the subject of this paper. It is natural to ask whether, over these links, remote computation is possible and appropriate for hearing aid applications. Difficulties arise as soon as remote computation is attempted, simply because the local data to be inferenced must first be transmitted.

In summary, using remote computing for local recognition creates two immediate problems: 1) possibly private data must be transmitted across an insecure channel, and 2) the channel may not exist at all in remote or adverse transmission environments. Two further difficulties emerge in an important subset of applications, including hearing aids and hearable products: 1) the delay introduced by transmission latency, even though the cloud computation itself is fast and capable once the information arrives, and 2) the power consumed by transmission.
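As a rough illustration of the latency and transmission-power concerns raised above, the following Python sketch compares a cloud-offload path with on-device inference for a single audio frame. It is not taken from the paper: every parameter (payload size, link rate, round-trip time, radio and accelerator power, network size) is a hypothetical placeholder chosen only to show the structure of the trade-off.

```python
# Illustrative sketch (not from the paper): a back-of-envelope comparison of
# local (on-device) inference versus cloud offload for a hearing-aid-style
# audio frame. All numbers below are hypothetical placeholders.

def offload_cost(payload_bits, uplink_bps, round_trip_s, tx_power_w, cloud_compute_s):
    """Estimated latency (s) and radio energy (J) to send one frame to the
    cloud and wait for the result. Downlink payload is ignored for simplicity."""
    tx_time = payload_bits / uplink_bps
    latency = tx_time + round_trip_s + cloud_compute_s
    energy = tx_power_w * tx_time
    return latency, energy

def local_cost(macs, macs_per_s, power_w):
    """Estimated latency (s) and energy (J) to run the network on-device."""
    latency = macs / macs_per_s
    energy = power_w * latency
    return latency, energy

if __name__ == "__main__":
    # Hypothetical scenario: one 10 ms frame of 16 kHz, 16-bit audio over a
    # low-power wireless link, versus a small DNN on an edge DSP/NPU.
    cloud_lat, cloud_e = offload_cost(
        payload_bits=160 * 16,    # one 10 ms frame of samples
        uplink_bps=500_000,       # assumed effective uplink throughput
        round_trip_s=0.050,       # assumed network round-trip time
        tx_power_w=0.010,         # assumed radio transmit power
        cloud_compute_s=0.002,    # assumed server-side inference time
    )
    local_lat, local_e = local_cost(
        macs=2_000_000,           # assumed multiply-accumulates per frame
        macs_per_s=1e9,           # assumed on-device accelerator throughput
        power_w=0.001,            # assumed active power of the edge processor
    )
    print(f"cloud : {cloud_lat * 1e3:.1f} ms, {cloud_e * 1e6:.1f} uJ per frame")
    print(f"local : {local_lat * 1e3:.1f} ms, {local_e * 1e6:.1f} uJ per frame")
```

Under these assumed numbers the network round trip alone exceeds the on-device inference time by an order of magnitude, and the radio energy per frame dominates the compute energy, which is the essence of the latency and transmission-power difficulties the abstract identifies for hearing aids and hearables.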