
Latest publications from the 2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)

Learning of Evaluation Functions via Self-Play Enhanced by Checkmate Search
T. Nakayashiki, Tomoyuki Kaneko
As shown by AlphaGo, AlphaGo Zero, and AlphaZero, reinforcement learning is effective for learning evaluation functions (or value networks) in Go, chess, and shogi. In their training, two procedures are repeated in parallel: self-play with the current evaluation function, and improvement of the evaluation function using game records yielded by recent self-play. Although AlphaGo, AlphaGo Zero, and AlphaZero have achieved superhuman performance, the method requires enormous computational resources. To alleviate this problem, this paper proposes incorporating a checkmate solver into self-play. We show that this small enhancement dramatically improves the efficiency of our experiments in Minishogi, via the quality of the game records produced in self-play. It should be noted that our method remains free of human knowledge about the target domain, although the implementation of checkmate solvers is domain dependent.
DOI: 10.1109/TAAI.2018.00036 (published 2018-11-01)
Citations: 3
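The abstract's core idea, consulting a mate solver before falling back to the learned policy during self-play, can be illustrated with a toy sketch. Everything below (the 1-ply search, the toy "king distance" domain, and all function names) is a hypothetical stand-in for the paper's actual solver and Minishogi setup:

```python
import random

def select_move(state, legal_moves, apply_move, gives_mate, policy):
    """Self-play move selection enhanced by a 1-ply checkmate search
    (a toy stand-in for the paper's mate solver): if any legal move
    delivers immediate mate, play it; otherwise fall back to the
    (possibly noisy) policy used for exploration."""
    for move in legal_moves(state):
        if gives_mate(apply_move(state, move)):
            return move
    return policy(state, legal_moves(state))

# Toy domain: state = opponent "king distance"; a move subtracts pips,
# and mate occurs when the distance reaches exactly zero.
def legal_moves(state):
    return [1, 2, 3]

def apply_move(state, move):
    return state - move

def gives_mate(state):
    return state == 0

def random_policy(state, moves):
    return random.choice(moves)

# From distance 2, the solver finds the mating move even though the
# fallback policy would have explored randomly.
best = select_move(2, legal_moves, apply_move, gives_mate, random_policy)
```

The point the paper makes is that game records ending in a found mate are higher-quality training targets; the solver only overrides the policy when a forced win is proven.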
Comparison of Loss Functions for Training of Deep Neural Networks in Shogi
Hanhua Zhu, Tomoyuki Kaneko
Evaluation functions are crucial for building strong computer players in two-player games such as chess, Go, and shogi. Although a linear combination of a large number of features has been a popular representation of an evaluation function in shogi, deep neural networks (DNNs) have recently come to be considered more promising, following the success of AlphaZero in multiple domains: chess, Go, and shogi. This paper shows that three loss functions, namely loss in comparison training, temporal difference (TD) errors, and cross-entropy loss in win prediction, are effective for training evaluation functions in shogi represented as deep neural networks. For the training of DNNs in AlphaZero, the main loss function consists only of win prediction, though it is augmented with move prediction for regularization. On the other hand, in training traditional shogi programs, various losses, including loss in comparison training, TD errors, and cross-entropy loss in win prediction, have contributed to accurate evaluation functions that are linear combinations of a large number of features. It is therefore promising to combine these loss functions and apply them to the training of modern DNNs. In our experiments, we show that training with combinations of loss functions improved the accuracy of evaluation functions represented by DNNs. The performance of the trained evaluation functions is tested through top-1 accuracy, 1-1 accuracy, and self-play.
DOI: 10.1109/TAAI.2018.00014 (published 2018-11-01)
Citations: 4
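The three losses named in the abstract can be sketched per example as scalar functions and summed with weights. This is a minimal illustration of the combination idea only; the weights, the margin, and the exact loss forms are assumptions, not the paper's settings:

```python
import math

def win_prediction_loss(p, won):
    """Binary cross-entropy between predicted win probability p and
    the game outcome won (1 = win, 0 = loss)."""
    eps = 1e-9
    return -(won * math.log(p + eps) + (1 - won) * math.log(1 - p + eps))

def td_error_loss(v_t, v_next):
    """Squared temporal-difference error between successive position
    values along a game."""
    return (v_t - v_next) ** 2

def comparison_loss(v_expert, v_other, margin=0.1):
    """Hinge-style comparison-training loss: the move an expert played
    should be valued above an alternative by at least `margin`
    (margin value is illustrative)."""
    return max(0.0, margin - (v_expert - v_other))

def combined_loss(p, won, v_t, v_next, v_expert, v_other,
                  w_win=1.0, w_td=1.0, w_cmp=1.0):
    """Weighted sum of the three losses; weights are hypothetical."""
    return (w_win * win_prediction_loss(p, won)
            + w_td * td_error_loss(v_t, v_next)
            + w_cmp * comparison_loss(v_expert, v_other))
```

In an actual DNN training loop these terms would be computed on network outputs and backpropagated jointly; the sketch only shows how the scalar objectives compose.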
Rating Matrix Pre-Padding for Video Recommendation
Yang Liu, Guijuan Zhang, Xiaoning Jin, Yaozong Jia
Personalized video recommendation systems provide users with great convenience while browsing video websites. Among the many algorithms adopted by recommendation systems, collaborative filtering is the most widely used and has achieved great success in practical applications; however, its performance suffers severely from data sparsity. We propose a model that adopts Doc2Vec to process videos' text information and integrates genre information into rating-matrix pre-padding to reduce the sparsity of ratings. The experimental results show that the pre-padded ratings are of high quality and that collaborative-filtering-based algorithms achieve better performance on the padded datasets.
DOI: 10.1109/TAAI.2018.00044 (published 2018-11-01)
Citations: 1
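The genre-aware pre-padding idea can be sketched with a tiny user-item matrix: fill each missing rating with the user's mean over items of the same genre, falling back to the user's overall mean. This is a minimal stand-in for the paper's method (the Doc2Vec text features and the actual padding rule are omitted and the fallback is an assumption):

```python
def pre_pad_ratings(ratings, genres):
    """Fill missing entries (None) of a user x item rating matrix.
    For a missing (user, item) cell, use the user's mean rating over
    items sharing that item's genre; if the user rated nothing in that
    genre, fall back to the user's overall mean. Hypothetical sketch."""
    n_items = len(ratings[0])
    padded = [row[:] for row in ratings]
    for u, row in enumerate(ratings):
        rated = [(i, r) for i, r in enumerate(row) if r is not None]
        if not rated:
            continue  # cold-start user: leave the row untouched
        user_mean = sum(r for _, r in rated) / len(rated)
        for i in range(n_items):
            if padded[u][i] is None:
                same_genre = [r for j, r in rated if genres[j] == genres[i]]
                padded[u][i] = (sum(same_genre) / len(same_genre)
                                if same_genre else user_mean)
    return padded

genres = ["action", "action", "drama"]
# User rated items 0 and 2; item 1 (action) is padded from item 0.
padded = pre_pad_ratings([[5, None, 2]], genres)
```

Any collaborative-filtering algorithm can then be run on the denser padded matrix, which is the effect the abstract reports.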
Development of an Intelligent Dialogue Agent with Smart Devices for Older Adults: A Preliminary Study
Satoshi Yamada, D. Kitakoshi, Akihiro Yamashita, Kentarou Suzuki, Masato Suzuki
This study aimed to examine the application of an Intelligent Dialogue Agent (IDA) in preventive-care frameworks for healthy older adults. Introducing the agent increases familiarity with the frameworks, encourages older adults to perform preventive-care exercises, and helps them make using the frameworks a habit. We used a questionnaire to collect data on older adults' impressions of information technology (IT) devices, in particular smart speakers (the main component of the IDA), and interviewed participants after they had actually used a smart speaker in order to identify required functions and expected roles for the IDA. Results from the questionnaire and interviews revealed promising characteristics of smart speakers, as well as problems with Japanese speech recognition.
DOI: 10.1109/TAAI.2018.00020 (published 2018-11-01)
Citations: 3
Named Entity Filters for Robust Machine Reading Comprehension
Yuxing Peng, Jane Yung-jen Hsu
The machine reading comprehension problem aims to extract crucial information from a given document to answer relevant questions. Although many methods have been proposed for this problem, the similarity distraction problem remains unsolved: errors caused by sentences that are very similar to the question but do not contain the answer. Named entities are unique, and this uniqueness can be exploited to distinguish similar sentences and prevent models from being distracted. In this paper, named entity filters (NE filters) are proposed. NE filters utilize the information of named entities to alleviate the similarity distraction problem. Experimental results show that the NE filter enhances the robustness of the underlying model. The baseline model gains 5% to 10% F1 on two adversarial SQuAD datasets without decreasing F1 on the original SQuAD dataset. Moreover, adding the NE filter to other existing models increases F1 by 5% on the adversarial datasets with less than 1% loss on the original one.
DOI: 10.1109/TAAI.2018.00048 (published 2018-11-01)
Citations: 0
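The intuition behind using entity uniqueness against distractor sentences can be shown with a toy filter: keep only context sentences that share at least one named entity with the question. The paper's NE filters operate inside the network on learned representations; the function below is only a hypothetical surface-level analogue of the idea described in the abstract:

```python
def ne_filter(question_entities, sentences, sentence_entities):
    """Keep only sentences sharing at least one named entity with the
    question. A similar-looking distractor sentence mentioning a
    different entity is dropped even if its wording closely matches
    the question. Toy analogue, not the paper's in-network filter."""
    kept = []
    for sent, ents in zip(sentences, sentence_entities):
        if set(ents) & set(question_entities):
            kept.append(sent)
    return kept

# Both sentences are lexically similar to "When was Tesla born?",
# but only one contains the question's entity.
answers = ne_filter(
    ["Tesla"],
    ["Tesla was born in 1856.", "Edison was born in 1847."],
    [["Tesla"], ["Edison"]],
)
```

In practice the entity lists would come from an NER system, and the filter would reweight rather than hard-drop sentences; the toy version just makes the distraction mechanism concrete.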
Replay Spoofing Detection System for Automatic Speaker Verification Using Multi-Task Learning of Noise Classes
Hye-jin Shim, Jee-weon Jung, Hee-Soo Heo, Sung-Hyun Yoon, Ha-jin Yu
In this paper, we propose a replay-attack spoofing detection system for automatic speaker verification using multi-task learning of noise classes. We define the noise caused by a replay attack as replay noise. We explore the effectiveness of simultaneously training a deep neural network for replay-attack spoofing detection and replay noise classification. The multi-task learning covers classifying the noise of playback devices, recording environments, and recording devices, as well as the spoofing detection itself. Each of the three noise-class types also includes a genuine class. Experimental results on version 1.0 of the ASVspoof2017 dataset demonstrate that the performance of our proposed system improves by 30% relative on the evaluation set.
DOI: 10.1109/TAAI.2018.00046 (published 2018-08-29)
Citations: 20
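The multi-task objective described in the abstract, one spoofing-detection loss plus auxiliary losses for the three noise-class tasks, can be sketched as a weighted sum of per-task cross-entropies. The weight and the loss composition below are assumptions for illustration, not the paper's configuration:

```python
import math

def cross_entropy(probs, label):
    """Categorical cross-entropy for a single example, given a
    probability vector and the true class index."""
    return -math.log(probs[label] + 1e-9)

def multi_task_loss(spoof_probs, spoof_label, noise_outputs, noise_labels,
                    noise_weight=0.5):
    """Joint objective: spoofing-detection loss plus weighted losses for
    the three auxiliary noise-class tasks (playback device, recording
    environment, recording device). Each auxiliary task also has a
    'genuine' class, per the abstract. Weight is hypothetical."""
    loss = cross_entropy(spoof_probs, spoof_label)
    for probs, label in zip(noise_outputs, noise_labels):
        loss += noise_weight * cross_entropy(probs, label)
    return loss
```

In a real system each probability vector would be the softmax output of a task-specific head sharing a common DNN trunk; the auxiliary heads are discarded at test time and only the spoofing head is used.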