
Latest Publications: IEEE Transactions on Artificial Intelligence

IEEE Transactions on Artificial Intelligence Publication Information
Pub Date: 2025-03-31 | DOI: 10.1109/TAI.2025.3551528
Vol. 6, No. 4, pp. C2-C2
Citations: 0
IEEE Transactions on Artificial Intelligence Publication Information
Pub Date: 2025-03-10 | DOI: 10.1109/TAI.2025.3546710
Vol. 6, No. 3, pp. C2-C2
Citations: 0
IEEE Transactions on Artificial Intelligence Publication Information
Pub Date: 2025-03-03 | DOI: 10.1109/TAI.2025.3544009
Vol. 6, No. 2, pp. C2-C2
Citations: 0
Guest Editorial: Operationalizing Responsible AI
Pub Date: 2025-03-03 | DOI: 10.1109/TAI.2025.3527806
Qinghua Lu;Apostol Vassilev;Jun Zhu;Foutse Khomh
{"title":"Guest Editorial: Operationalizing Responsible AI","authors":"Qinghua Lu;Apostol Vassilev;Jun Zhu;Foutse Khomh","doi":"10.1109/TAI.2025.3527806","DOIUrl":"https://doi.org/10.1109/TAI.2025.3527806","url":null,"abstract":"","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 2","pages":"252-253"},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10908600","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
2024 Index IEEE Transactions on Artificial Intelligence Vol. 5
Pub Date: 2025-01-20 | DOI: 10.1109/TAI.2025.3531741
Vol. 5, No. 12, pp. 1-93
Citations: 0
IEEE Transactions on Artificial Intelligence Publication Information
Pub Date: 2025-01-14 | DOI: 10.1109/TAI.2024.3525221
Vol. 6, No. 1, pp. C2-C2
Citations: 0
Editorial: Future Directions in Artificial Intelligence Research
Pub Date: 2024-12-11 | DOI: 10.1109/TAI.2024.3501912
Hussein Abbass
{"title":"Editorial: Future Directions in Artificial Intelligence Research","authors":"Hussein Abbass","doi":"10.1109/TAI.2024.3501912","DOIUrl":"https://doi.org/10.1109/TAI.2024.3501912","url":null,"abstract":"","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 12","pages":"5858-5862"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10794556","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
IEEE Transactions on Artificial Intelligence Publication Information
Pub Date: 2024-12-11 | DOI: 10.1109/TAI.2024.3509237
Vol. 5, No. 12, pp. C2-C2
Citations: 0
ClusVPR: Efficient Visual Place Recognition With Clustering-Based Weighted Transformer
Pub Date: 2024-12-02 | DOI: 10.1109/TAI.2024.3510479
Yifan Xu;Pourya Shamsolmoali;Masoume Zareapoor;Jie Yang
Visual place recognition (VPR) is a highly challenging task with a wide range of applications, including robot navigation and self-driving vehicles. VPR is difficult because complex scenes contain duplicated regions and small objects that receive insufficient attention, leading to recognition errors. In this article, we present ClusVPR, a novel approach that tackles the specific issues of redundant information in duplicated regions and of representing small objects. Unlike existing methods that rely on convolutional neural networks (CNNs) for feature-map generation, ClusVPR introduces a new paradigm, the clustering-based weighted transformer network (CWTNet). CWTNet exploits clustering-based weighted feature maps and integrates global dependencies to effectively address the visual deviations encountered in large-scale VPR. We also introduce the optimized-VLAD (OptLAD) layer, which significantly reduces the number of parameters and improves model efficiency; this layer is specifically designed to aggregate information obtained from scale-wise image patches. Additionally, our pyramid self-supervised strategy extracts representative and diverse features from scale-wise image patches rather than from entire images, which is essential for capturing the broader range of information required for robust VPR. Extensive experiments on four VPR datasets show that our model outperforms existing models while being less complex.
{"title":"ClusVPR: Efficient Visual Place Recognition With Clustering-Based Weighted Transformer","authors":"Yifan Xu;Pourya Shamsolmoali;Masoume Zareapoor;Jie Yang","doi":"10.1109/TAI.2024.3510479","DOIUrl":"https://doi.org/10.1109/TAI.2024.3510479","url":null,"abstract":"Visual place recognition (VPR) is a highly challenging task that has a wide range of applications, including robot navigation and self-driving vehicles. VPR is a difficult task due to duplicate regions and insufficient attention to small objects in complex scenes, resulting in recognition deviations. In this article, we present ClusVPR, a novel approach that tackles the specific issues of redundant information in duplicate regions and representations of small objects. Different from existing methods that rely on convolutional neural networks (CNNs) for feature map generation, ClusVPR introduces a unique paradigm called clustering-based weighted transformer network (CWTNet). CWTNet uses the power of clustering-based weighted feature maps and integrates global dependencies to effectively address visual deviations encountered in large-scale VPR problems. We also introduce the optimized-VLAD (OptLAD) layer, which significantly reduces the number of parameters and enhances model efficiency. This layer is specifically designed to aggregate the information obtained from scale-wise image patches. Additionally, our pyramid self-supervised strategy focuses on extracting representative and diverse features from scale-wise image patches rather than from entire images. This approach is essential for capturing a broader range of information required for robust VPR. Extensive experiments on four VPR datasets show our model's superior performance compared to existing models while being less complex.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 4","pages":"1038-1049"},"PeriodicalIF":0.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
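The abstract does not specify OptLAD's internals, only that it is an optimized, parameter-efficient variant of VLAD-style aggregation over scale-wise patches. A minimal sketch of the standard NetVLAD-style aggregation that this family builds on, assuming a PyTorch setting, is shown below; the class name `VLADAggregation` and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VLADAggregation(nn.Module):
    """Minimal NetVLAD-style aggregation (illustrative only).

    Local descriptors are soft-assigned to K learned cluster centers, and
    the assignment-weighted residuals are accumulated into one global
    descriptor. OptLAD, per the abstract, optimizes this idea to cut
    parameters and aggregate scale-wise patch information; its exact
    design is not given here.
    """
    def __init__(self, dim: int = 256, num_clusters: int = 8):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_clusters, dim))
        self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)  # 1x1 conv -> soft assignment

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the backbone
        B, C, H, W = x.shape
        soft = self.assign(x).flatten(2).softmax(dim=1)        # (B, K, N): weight per cluster
        feats = x.flatten(2)                                   # (B, C, N): N = H*W descriptors
        # residual of every descriptor w.r.t. every cluster center
        res = feats.unsqueeze(1) - self.centers.view(1, -1, C, 1)  # (B, K, C, N)
        vlad = (soft.unsqueeze(2) * res).sum(dim=-1)           # (B, K, C): weighted residual sum
        return F.normalize(vlad.flatten(1), dim=1)             # (B, K*C) global descriptor

# Usage: aggregate a (2, 256, 20, 20) feature map into two global descriptors.
# desc = VLADAggregation(dim=256, num_clusters=8)(torch.randn(2, 256, 20, 20))
```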
Unformer: A Transformer-Based Approach for Adaptive Multiscale Feature Aggregation in Underwater Image Enhancement
Pub Date: 2024-11-29 | DOI: 10.1109/TAI.2024.3508667
Yuhao Qing;Yueying Wang;Huaicheng Yan;Xiangpeng Xie;Zhengguang Wu
Underwater imaging is often compromised by light scattering and absorption, resulting in image degradation and distortion. This manifests as blurred details, color shifts, and diminished illumination and contrast, thereby hindering advancements in underwater research. To mitigate these issues, we propose Unformer, an innovative underwater image enhancement (UIE) technique that leverages a transformer-based architecture for multiscale adaptive feature aggregation. Our approach employs a multiscale feature-fusion strategy that adaptively restores illumination and detail features. We reevaluate the relationship between convolution and the transformer to develop a novel encoder structure that effectively integrates both long-range and short-range dependencies, dynamically combines local and global features, and constructs a comprehensive global context. Furthermore, we propose a unique multibranch decoder architecture that enhances and efficiently extracts spatial context information through the transformer module. Extensive experiments on three datasets demonstrate that our proposed method outperforms other techniques in both subjective and objective evaluations. Compared with the latest methods, Unformer improves the peak signal-to-noise ratio (PSNR) by 19.5% and 14.8% on the LSUI and EUVP datasets, respectively. The code is available at: https://github.com/yhflq/Unformer.
{"title":"Unformer: A Transformer-Based Approach for Adaptive Multiscale Feature Aggregation in Underwater Image Enhancement","authors":"Yuhao Qing;Yueying Wang;Huaicheng Yan;Xiangpeng Xie;Zhengguang Wu","doi":"10.1109/TAI.2024.3508667","DOIUrl":"https://doi.org/10.1109/TAI.2024.3508667","url":null,"abstract":"Underwater imaging is often compromised by light scattering and absorption, resulting in image degradation and distortion. This manifests as blurred details, color shifts, and diminished illumination and contrast, thereby hindering advancements in underwater research. To mitigate these issues, we propose Unformer, an innovative underwater image enhancement (UIE) technique that leverages a transformer-based architecture for multiscale adaptive feature aggregation. Our approach employs a multiscale feature fusion strategy that adaptively restores illumination and detail features. We reevaluate the relationship between convolution and transformer to develop a novel encoder structure. This structure effectively integrates both long-range and short-range dependencies, dynamically combines local and global features, and constructs a comprehensive global context. Furthermore, we propose a unique multibranch decoder architecture that enhances and efficiently extracts spatial context information through the transformer module. Extensive experiments on three datasets demonstrate that our proposed method outperforms other techniques in both subjective and objective evaluations. Compared with the latest methods, Unformer has improved the peak signal-to-noise ratio (PSNR) by 19.5% and 14.8% respectively on the LSUI and EUVP datasets. The code is available at: <uri>https://github.com/yhflq/Unformer</uri>.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 4","pages":"1024-1037"},"PeriodicalIF":0.0,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143740385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
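The authors' released code lives at the GitHub link above. Purely for illustration, a minimal sketch of one encoder block in the spirit the abstract describes, a convolutional branch for short-range detail fused with a self-attention branch for long-range dependencies, assuming a PyTorch setting, might look like the following; the class name `ConvAttnBlock` and all dimensions are hypothetical, not taken from the released code.

```python
import torch
import torch.nn as nn

class ConvAttnBlock(nn.Module):
    """Hypothetical encoder block (illustrative, not Unformer's code):
    a depthwise-convolution branch captures short-range detail while a
    self-attention branch models long-range dependencies; both are
    fused with a residual connection into one output feature map.
    """
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),  # depthwise: local context
            nn.Conv2d(dim, dim, kernel_size=1),                         # pointwise channel mixing
        )
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map
        B, C, H, W = x.shape
        local = self.local(x)                                  # short-range branch
        tokens = self.norm(x.flatten(2).transpose(1, 2))       # (B, H*W, C) token sequence
        glob, _ = self.attn(tokens, tokens, tokens)            # long-range branch
        glob = glob.transpose(1, 2).reshape(B, C, H, W)        # back to a spatial map
        return x + local + glob                                # residual fusion of both branches

# Usage: a (1, 64, 32, 32) feature map passes through one block unchanged in shape.
# y = ConvAttnBlock(dim=64, heads=4)(torch.randn(1, 64, 32, 32))
```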