AI Open — Latest Publications
Client: Cross-variable linear integrated enhanced transformer for multivariate long-term time series forecasting
Pub Date: 2025-01-01 · DOI: 10.1016/j.aiopen.2025.06.001 · AI Open, vol. 6, pp. 93–107
Jiaxin Gao , Wenbo Hu , Dongxiao Zhang , Yuntian Chen
Long-term time series forecasting (LTSF) is crucial in modern society, playing a pivotal role in facilitating long-term planning and developing early warning systems. While many Transformer-based models have recently been introduced for LTSF, a doubt has been raised regarding the effectiveness of attention modules in capturing cross-time dependencies. In this study, we design a mask-series experiment to validate this assumption and subsequently propose the “Cross-variable Linear Integrated ENhanced Transformer for Multivariate Long-Term Time Series Forecasting” (Client), an advanced model that outperforms both traditional Transformer-based models and linear models. Client employs the linear module to learn trend information and the enhanced Transformer module to capture cross-variable dependencies. Meanwhile, the cross-variable Transformer module in Client simplifies the embedding and position encoding layers and replaces the decoder module with a projection layer. Extensive experiments with nine real-world datasets have confirmed the SOTA performance of Client with the least computation time and memory consumption compared with the previous Transformer-based models. Our code is available at https://github.com/daxin007/Client.
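The abstract's three components — a per-variable linear module for trend, attention applied across variables rather than across time steps, and a projection layer in place of a decoder — can be illustrated with a minimal numpy sketch. This is not the authors' implementation (see their repository for that); all shapes, names, and the single-head attention are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
L, H, C = 96, 24, 7                    # input length, horizon, number of variables
X = rng.normal(size=(L, C))            # one multivariate input window

# Linear module: one weight matrix maps the input window to the
# forecast horizon independently for each variable (trend information).
W_lin = rng.normal(size=(H, L)) / np.sqrt(L)
trend = W_lin @ X                      # (H, C)

# Cross-variable attention (illustrative single head): tokens are
# variables, not time steps, so attention mixes the C series.
d = 16
Wq, Wk, Wv = (rng.normal(size=(L, d)) / np.sqrt(L) for _ in range(3))
tokens = X.T                           # (C, L): one token per variable
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
attn = softmax(Q @ K.T / np.sqrt(d)) @ V   # (C, d)

# Projection layer in place of a decoder: map each variable's mixed
# representation to the horizon, then add the linear trend.
W_proj = rng.normal(size=(d, H)) / np.sqrt(d)
forecast = trend + (attn @ W_proj).T   # (H, C)
print(forecast.shape)                  # (24, 7)
```

With random weights the forecast is of course meaningless; the point is the data flow: attention operates over the variable axis, and the only "decoding" is a single linear projection to the horizon.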
{"title":"Client: Cross-variable linear integrated enhanced transformer for multivariate long-term time series forecasting","authors":"Jiaxin Gao ,&nbsp;Wenbo Hu ,&nbsp;Dongxiao Zhang ,&nbsp;Yuntian Chen","doi":"10.1016/j.aiopen.2025.06.001","DOIUrl":"10.1016/j.aiopen.2025.06.001","url":null,"abstract":"<div><div>Long-term time series forecasting (LTSF) is crucial in modern society, playing a pivotal role in facilitating long-term planning and developing early warning systems. While many Transformer-based models have recently been introduced for LTSF, a doubt has been raised regarding the effectiveness of attention modules in capturing cross-time dependencies. In this study, we design a mask-series experiment to validate this assumption and subsequently propose the ”Cross-variable Linear Integrated ENhanced Transformer for Multivariate Long-Term Time Series Forecasting” (<em>Client</em>), an advanced model that outperforms both traditional Transformer-based models and linear models. <em>Client</em> employs the linear module to learn trend information and the enhanced Transformer module to capture cross-variable dependencies. Meanwhile, the cross-variable Transformer module in <em>Client</em> simplifies the embedding and position encoding layers and replaces the decoder module with a projection layer. Extensive experiments with nine real-world datasets have confirmed the SOTA performance of <em>Client</em> with the least computation time and memory consumption compared with the previous Transformer-based models. 
Our code is available at <span><span>https://github.com/daxin007/Client</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"6 ","pages":"Pages 93-107"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144656936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust emotion recognition using hybrid Bayesian LSTM based on Laban movement analysis
IF 14.8 · Pub Date: 2025-01-01 · DOI: 10.1016/j.aiopen.2025.09.002 · AI Open, vol. 6, pp. 183–203
Shuang Wu , Daniela M. Romano
Emotion recognition has become increasingly significant in artificial intelligence; however, the impact of body movements on emotion interpretation remains under-explored. This paper presents a novel Hybrid Bayesian Pre-trained Long Short-Term Memory (HBP-LSTM) framework that combines low-level pose data with high-level kinematic features, utilising Bayesian inference to enhance the accuracy and robustness of emotion recognition. The proposed model is trained on high-quality laboratory data to capture the fundamental patterns of emotional expression through body movements. We introduce noise and employ adversarial attack methods such as the Fast Gradient Sign Method (FGSM) to evaluate the model’s robustness during testing. This approach assesses the HBP-LSTM’s ability to maintain performance under data degradation and adversarial conditions, common challenges in real-world scenarios. We validated the HBP-LSTM on two public datasets, EGBM and KDAEE, demonstrating that the model exhibits high robustness against noise and adversarial perturbations, outperforming traditional models. The HBP-LSTM accurately identifies seven basic emotions (happiness, sadness, surprise, fear, anger, disgust, and neutrality) with accuracies of 98% and 88% on the EGBM and KDAEE datasets, respectively. HBP-LSTM is a noise-resistant model with a reliable emotion recognition framework, which lays the foundation for future applications of emotion recognition technology in more challenging real-world environments.
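The robustness evaluation named in the abstract, the Fast Gradient Sign Method (FGSM), perturbs an input in the direction of the sign of the loss gradient with respect to that input. A minimal numpy sketch on a toy logistic model (all weights and data here are hypothetical, and the paper applies this to pose sequences, not vectors):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=8)          # weights of a toy linear classifier
x = rng.normal(size=8)          # one hypothetical feature vector
y = 1.0                         # true label

# Cross-entropy loss gradient w.r.t. the *input* of a logistic model:
# dL/dx = (sigmoid(w.x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w

# FGSM: one step along the sign of the input gradient, bounded by
# epsilon, chosen to increase the loss.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

# The perturbation stays inside an L-infinity ball of radius eps.
print(np.abs(x_adv - x).max())  # <= eps
```

Evaluating a trained model on such `x_adv` inputs (and on noise-corrupted inputs) is what the abstract means by testing robustness under adversarial conditions and data degradation.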
{"title":"Robust emotion recognition using hybrid Bayesian LSTM based on Laban movement analysis","authors":"Shuang Wu ,&nbsp;Daniela M. Romano","doi":"10.1016/j.aiopen.2025.09.002","DOIUrl":"10.1016/j.aiopen.2025.09.002","url":null,"abstract":"<div><div>Emotion recognition has become increasingly significant in artificial intelligence; however, the impact of body movements on emotion interpretation remains under-explored. This paper presents a novel Hybrid Bayesian Pre-trained Long Short-Term Memory (HBP-LSTM) framework that combines low-level pose data with high-level kinematic features, utilising Bayesian inference to enhance the accuracy and robustness of emotion recognition. The proposed model is trained on high-quality laboratory data to capture the fundamental patterns of emotional expression through body movements. We introduce noise and employ adversarial attack methods such as the Fast Gradient Sign Method (FGSM) to evaluate the model’s robustness during testing. This approach assesses the HBP-LSTM’s ability to maintain performance under data degradation and adversarial conditions, common challenges in real-world scenarios. We validated the HBP-LSTM on two public datasets, EGBM and KDAEE, demonstrating that the model exhibits high robustness against noise and adversarial perturbations, outperforming traditional models. The HBP-LSTM accurately identifies seven basic emotions (happiness, sadness, surprise, fear, anger, disgust, and neutrality) with accuracies of 98% and 88% on the EGBM and KDAEE datasets, respectively. 
HBP-LSTM is a noise-resistant model with a reliable emotion recognition framework, which lays the foundation for future applications of emotion recognition technology in more challenging real-world environments.</div></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"6 ","pages":"Pages 183-203"},"PeriodicalIF":14.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145218905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0