Kyungdeuk Ko;Bokyeung Lee;Jonghwan Hong;Hanseok Ko
{"title":"KFA:用于发现开放集关键词的关键词特征增强技术","authors":"Kyungdeuk Ko;Bokyeung Lee;Jonghwan Hong;Hanseok Ko","doi":"10.1109/LSP.2024.3484932","DOIUrl":null,"url":null,"abstract":"In recent years, with the advancement of deep learning technology and the emergence of smart devices, there has been a growing interest in keyword spotting (KWS), which is used to activate AI systems with automatic speech recognition and text-to-speech. However, smart devices with KWS often encounter false alarm errors when inputting unexpected words. To address this issue, existing KWS methods typically train non-target words as an \n<italic>unknown</i>\n class. Despite these efforts, there is still a possibility that unseen words not trained as part of the \n<italic>unknown</i>\n class could be misclassified as one of the target words. To overcome this limitation, we propose a new method named Keyword Feature Augmentation (KFA) for open-set KWS. KFA performs feature augmentation through adversarial learning to increase the loss. The augmented features are constrained within a limited space using label smoothing. Unlike other generative model-based open set recognition (OSR) methods, KFA does not require any additional training parameters or repeated operation for inference. As a result, KFA has achieved a 0.955 AUROC score and 97.34% target class accuracy for Google Speech Commands V1, and a 0.959 AUROC score and 98.17% target class accuracy for Google Speech Commands V2, which is the highest performance when compared to various OSR methods.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"31 ","pages":"2985-2989"},"PeriodicalIF":3.2000,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"KFA: Keyword Feature Augmentation for Open Set Keyword Spotting\",\"authors\":\"Kyungdeuk Ko;Bokyeung Lee;Jonghwan Hong;Hanseok Ko\",\"doi\":\"10.1109/LSP.2024.3484932\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, with the advancement of deep learning technology and the emergence of smart devices, there has been a growing interest in keyword spotting (KWS), which is used to activate AI systems with automatic speech recognition and text-to-speech. However, smart devices with KWS often encounter false alarm errors when inputting unexpected words. To address this issue, existing KWS methods typically train non-target words as an \\n<italic>unknown</i>\\n class. Despite these efforts, there is still a possibility that unseen words not trained as part of the \\n<italic>unknown</i>\\n class could be misclassified as one of the target words. To overcome this limitation, we propose a new method named Keyword Feature Augmentation (KFA) for open-set KWS. KFA performs feature augmentation through adversarial learning to increase the loss. The augmented features are constrained within a limited space using label smoothing. Unlike other generative model-based open set recognition (OSR) methods, KFA does not require any additional training parameters or repeated operation for inference. 
As a result, KFA has achieved a 0.955 AUROC score and 97.34% target class accuracy for Google Speech Commands V1, and a 0.959 AUROC score and 98.17% target class accuracy for Google Speech Commands V2, which is the highest performance when compared to various OSR methods.\",\"PeriodicalId\":13154,\"journal\":{\"name\":\"IEEE Signal Processing Letters\",\"volume\":\"31 \",\"pages\":\"2985-2989\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-10-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Signal Processing Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10726800/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10726800/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
KFA: Keyword Feature Augmentation for Open Set Keyword Spotting
In recent years, with the advancement of deep learning technology and the emergence of smart devices, there has been growing interest in keyword spotting (KWS), which is used to activate AI systems that provide automatic speech recognition and text-to-speech. However, smart devices with KWS often raise false alarms when unexpected words are spoken. To address this issue, existing KWS methods typically train non-target words as an "unknown" class. Despite these efforts, there is still a possibility that unseen words not trained as part of the "unknown" class could be misclassified as one of the target words. To overcome this limitation, we propose a new method named Keyword Feature Augmentation (KFA) for open-set KWS. KFA performs feature augmentation through adversarial learning to increase the loss, and the augmented features are constrained within a limited space using label smoothing. Unlike other generative-model-based open set recognition (OSR) methods, KFA does not require any additional training parameters or repeated operations for inference. As a result, KFA achieves a 0.955 AUROC score and 97.34% target-class accuracy on Google Speech Commands V1, and a 0.959 AUROC score and 98.17% target-class accuracy on Google Speech Commands V2, the highest performance among the compared OSR methods.
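To make the abstract's core idea concrete, below is a minimal PyTorch-style sketch of adversarial feature augmentation with a label-smoothing constraint: features are perturbed in the gradient direction that increases the loss, and the perturbed features are trained against label-smoothed targets. The encoder/classifier shapes, epsilon, and smoothing value are illustrative assumptions, not the authors' exact KFA implementation.

```python
import torch
import torch.nn as nn

# Toy acoustic encoder and keyword classifier (shapes are assumptions for illustration).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(40 * 98, 128), nn.ReLU())
classifier = nn.Linear(128, 12)  # e.g., 10 keywords + silence + unknown
ce_plain = nn.CrossEntropyLoss()
ce_smooth = nn.CrossEntropyLoss(label_smoothing=0.1)  # soft targets constrain the augmented features
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

def training_step(x, y, epsilon=0.5):
    """One training step with adversarial feature augmentation (illustrative only)."""
    feats = encoder(x)  # keyword features from the encoder

    # Gradient of the classification loss with respect to the features themselves.
    feats_adv = feats.detach().clone().requires_grad_(True)
    ce_plain(classifier(feats_adv), y).backward()

    # Augment the features in the direction that increases the loss (adversarial learning).
    aug_feats = feats + epsilon * feats_adv.grad.sign()

    optimizer.zero_grad()
    # Clean features use the standard loss; augmented features use label smoothing,
    # keeping them within a limited region around each target class.
    loss = ce_plain(classifier(feats), y) + ce_smooth(classifier(aug_feats), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors shaped like 98-frame, 40-dimensional log-mel features.
x = torch.randn(8, 1, 98, 40)
y = torch.randint(0, 12, (8,))
print(training_step(x, y))
```

Note that the augmentation happens only in feature space during training; at inference the model runs a single forward pass, which matches the abstract's claim of no extra parameters or repeated operations.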
Journal introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.