Latest publications: IEEE Transactions on Artificial Intelligence

IEEE Transactions on Artificial Intelligence Publication Information
Pub Date : 2025-01-14 DOI: 10.1109/TAI.2024.3525221
Vol. 6, No. 1, pp. C2-C2. Open access.
Citations: 0
Editorial: Future Directions in Artificial Intelligence Research
Pub Date : 2024-12-11 DOI: 10.1109/TAI.2024.3501912
Hussein Abbass
Vol. 5, No. 12, pp. 5858-5862. Open access.
Citations: 0
IEEE Transactions on Artificial Intelligence Publication Information
Pub Date : 2024-12-11 DOI: 10.1109/TAI.2024.3509237
Vol. 5, No. 12, pp. C2-C2. Open access.
Citations: 0
IEEE Transactions on Artificial Intelligence Publication Information
Pub Date : 2024-11-12 DOI: 10.1109/TAI.2024.3489337
Vol. 5, No. 11, pp. C2-C2. Open access.
Citations: 0
Towards Better Accuracy-Efficiency Trade-Offs: Dynamic Activity Inference via Collaborative Learning From Various Width-Resolution Configurations
Pub Date : 2024-11-04 DOI: 10.1109/TAI.2024.3489532
Lutong Qin;Lei Zhang;Chengrun Li;Chaoda Song;Dongzhou Cheng;Shuoyuan Wang;Hao Wu;Aiguo Song
Recently, deep neural networks have triumphed over a large variety of human activity recognition (HAR) applications on resource-constrained mobile devices. However, most existing works are static and ignore the fact that the computational budget usually changes drastically across devices, which prevents real-world HAR deployment. A major challenge remains: how to adaptively and instantly trade off accuracy and latency at runtime for on-device activity inference using time-series sensor data? To address this issue, this article introduces a new collaborative learning scheme that trains a set of subnetworks executed at varying network widths and fed with different sensor input resolutions as data augmentation, so that the model can switch on the fly among width-resolution configurations for flexible and dynamic activity inference under varying resource budgets. In particular, it offers a promising performance-boosting solution by utilizing self-distillation to transfer knowledge among the multiple width-resolution configurations, which captures stronger feature representations for activity recognition. Extensive experiments and ablation studies on three public HAR benchmark datasets validate the effectiveness and efficiency of our approach, and a real implementation is evaluated on a mobile device. This opens up the possibility of directly accessing the accuracy-latency spectrum of deep learning models in versatile real-world HAR deployments. Code is available at https://github.com/Lutong-Qin/Collaborative_HAR.
Vol. 5, No. 12, pp. 6723-6738.
Citations: 0
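The self-distillation step described in the abstract — running the same batch through several (width, resolution) configurations and transferring the largest configuration's softened predictions to the smaller ones — can be sketched numerically. The NumPy sketch below is illustrative only: the function names, temperature, and KL-based transfer are assumptions, not the authors' implementation (their code is at the linked repository).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # mean KL(p || q) over the batch dimension
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float((p * np.log(p / q)).sum(axis=-1).mean())

def collaborative_step(logits_by_config):
    # the largest (width, resolution) configuration acts as the in-place
    # teacher; its softened predictions are distilled into the others
    configs = sorted(logits_by_config, reverse=True)
    teacher = softmax(logits_by_config[configs[0]], temperature=2.0)
    return {cfg: kl_divergence(teacher, softmax(logits_by_config[cfg], temperature=2.0))
            for cfg in configs[1:]}

# two hypothetical configurations: (width multiplier, input resolution)
rng = np.random.default_rng(0)
logits = {
    (1.0, 64): rng.normal(size=(4, 6)),  # full width, full resolution
    (0.5, 32): rng.normal(size=(4, 6)),  # half width, half resolution
}
losses = collaborative_step(logits)
print(losses)
```

In a real training loop these logits would come from one weight-shared network evaluated at each configuration, and the distillation term would be added to the per-configuration task losses.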
Learning Empirical Inherited Intelligent MPC for Switched Systems With Network Security Communication
Pub Date : 2024-10-25 DOI: 10.1109/TAI.2024.3486276
Yiwen Qi;Yiwen Tang;Wenke Yu
This article studies learning empirical inherited intelligent model predictive control (LEII-MPC) for switched systems. For complex environments and systems, an intelligent control method with learning ability is necessary and meaningful. First, a switching law that coordinates the iterative learning control action is devised according to the average dwell time approach. Second, an intelligent MPC mechanism that incorporates iterative learning experience is designed to optimize the control action. With the designed LEII-MPC, sufficient conditions for the stability of switched systems equipped with event-triggering schemes (ETSs) in both the time domain and the iterative domain are presented. The ETS in the iterative domain avoids unnecessary iterative updates. The ETS in the time domain deals with potential denial-of-service (DoS) attacks and includes two parts: 1) for detection, an attack-dependent event-triggering method is presented to determine the attack sequence and reduce lost packets; and 2) for compensation, a buffer is used to maintain system performance during the attack period. Last, a numerical example shows the effectiveness of the proposed method.
Vol. 5, No. 12, pp. 6342-6355.
Citations: 0
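The two event-triggering ideas in the abstract — transmit a new control value only when the state deviates enough from the last transmitted one, and hold a buffered value during DoS-attacked steps — can be illustrated with a toy scalar loop. This is a minimal sketch under assumed dynamics, gain, and threshold, not the LEII-MPC conditions from the paper.

```python
def event_triggered_control(states, gain=0.8, threshold=0.1, dos=()):
    """Toy scalar loop: transmit a new control value only when the state
    deviates enough from the last transmitted one; during DoS-attacked
    steps, fall back to the buffered control value."""
    dos = set(dos)
    buffered_u = 0.0      # buffer used to compensate for lost packets
    last_sent = None      # state at the last successful transmission
    log = []
    for k, x in enumerate(states):
        attacked = k in dos
        triggered = last_sent is None or abs(x - last_sent) > threshold
        transmitted = triggered and not attacked
        if transmitted:
            buffered_u = -gain * x   # fresh control computed and buffered
            last_sent = x
        log.append((k, attacked, transmitted, buffered_u))
    return log

log = event_triggered_control([1.0, 0.95, 0.4, 0.35, -0.2], dos={2})
for entry in log:
    print(entry)
```

Step 1 skips transmission (deviation below threshold), and step 2 is attacked, so the controller keeps applying the control value buffered at step 0 until step 3 recovers.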
Deep Learning-Based Dual Watermarking for Image Copyright Protection and Authentication
Pub Date : 2024-10-24 DOI: 10.1109/TAI.2024.3485519
Sudev Kumar Padhi;Archana Tiwari;Sk. Subidh Ali
Advancements in digital technologies make it easy to modify the content of digital images. Hence, ensuring digital images' integrity and authenticity is necessary to protect them against manipulation attacks. We present a deep learning (DL)-based dual invisible watermarking technique for performing source authentication, content authentication, and copyright protection of images sent over the internet. Beyond securing images, the proposed technique is robust to content-preserving image manipulation attacks. Watermarks are also impossible to imitate or overwrite, because the cryptographic hash of the image and its dominant features, in the form of a perceptual hash, are used as watermarks. We highlight the need for source authentication to safeguard image integrity and authenticity, along with identifying similar content for copyright protection. After exhaustive testing, our technique obtained a high peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), implying only a minute change to the original image after embedding our watermarks. Our trained model achieves high watermark extraction accuracy and satisfies the two objectives of verification and authentication on the same watermarked image.
Vol. 5, No. 12, pp. 6134-6145.
Citations: 0
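The two watermark payloads the abstract describes — a cryptographic hash of the image bytes for content/source authentication, and a perceptual hash of dominant features for copyright matching — can be sketched in a few lines. The average-hash below is a generic stand-in; the paper's actual perceptual-hash choice and embedding networks are not reproduced here.

```python
import hashlib

def average_hash(pixels):
    """Tiny perceptual hash: threshold each pixel of a grayscale image
    against the image mean and pack the bits into an integer."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def build_watermarks(image_bytes, pixels):
    crypto = hashlib.sha256(image_bytes).hexdigest()  # content/source authentication
    percept = average_hash(pixels)                    # copyright / similarity matching
    return crypto, percept

img = [[10, 200, 30, 220],
       [15, 210, 25, 215],
       [12, 205, 28, 218],
       [11, 202, 31, 221]]
raw = bytes(p for row in img for p in row)
crypto_wm, percept_wm = build_watermarks(raw, img)
print(crypto_wm[:16], format(percept_wm, '016b'))
```

This also shows why the pairing is useful: flipping a single pixel value changes the SHA-256 digest completely (detecting tampering), while the perceptual hash is unchanged (so content-preserving edits still match for copyright purposes).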
MTECC: A Multitask Learning Framework for Esophageal Cancer Analysis
Pub Date : 2024-10-24 DOI: 10.1109/TAI.2024.3485524
Jianpeng An;Wenqi Li;Yunhao Bai;Huazhen Chen;Gang Zhao;Qing Cai;Zhongke Gao
In the field of esophageal cancer diagnostics, the accurate identification and classification of tumors and adjacent tissues within whole slide images (WSIs) are critical. However, this task is complicated by the difficulty of annotating normal tissue on tumor-bearing slides, as infiltration produces a blend of tissue types that is hard for pathologists to delineate. To overcome this challenge, we introduce the multitask esophageal cancer classification (MTECC) framework, featuring an innovative dual-branch architecture that operates at both global and local levels. The framework initially employs a masked autoencoder (MAE) for self-supervised learning. A distinctive feature of MTECC is the integration of RandoMix, an innovative image augmentation technique that randomly exchanges patches between different images. This method significantly enhances the model's generalization ability, especially for recognizing tissues within cancerous slides. MTECC integrates two tasks: tumor detection using global tokens, and fine-grained tissue classification at the patch level using local tokens. The empirical evaluation of MTECC on our extensive esophageal cancer dataset substantiates its efficacy. The performance metrics indicate robust results, with an accuracy of 0.811, an F1 score of 0.735, and an AUC of 0.957. The MTECC method represents a significant advancement in applying deep learning to complex pathological image analysis, offering valuable tools for pathologists in diagnosing and treating esophageal cancer.
Vol. 5, No. 12, pp. 6739-6751.
Citations: 0
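RandoMix is described only as randomly exchanging patches between different images; a plausible minimal form of such an augmentation is sketched below with NumPy. The patch size, swap count, and sampling scheme are assumptions — the paper's exact recipe may differ.

```python
import numpy as np

def randomix(img_a, img_b, patch=2, n_swaps=1, rng=None):
    """Exchange randomly located square patches between two images -- a
    plausible minimal form of the RandoMix augmentation (the paper's
    exact recipe may differ)."""
    rng = rng or np.random.default_rng()
    a, b = img_a.copy(), img_b.copy()
    h, w = a.shape[:2]
    for _ in range(n_swaps):
        y = int(rng.integers(0, h - patch + 1))
        x = int(rng.integers(0, w - patch + 1))
        a[y:y+patch, x:x+patch], b[y:y+patch, x:x+patch] = (
            b[y:y+patch, x:x+patch].copy(), a[y:y+patch, x:x+patch].copy())
    return a, b

zeros = np.zeros((8, 8), dtype=int)   # stand-in "normal tissue" tile
ones = np.ones((8, 8), dtype=int)     # stand-in "tumor" tile
mixed_a, mixed_b = randomix(zeros, ones, rng=np.random.default_rng(42))
print(mixed_a.sum(), mixed_b.sum())   # one 2x2 patch exchanged: 4 60
```

The swap conserves total pixel content between the two augmented images, which is what forces the model to recognize tissue types from local appearance rather than slide-level context.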
Unsupervised Domain Adaptation on Point Clouds via High-Order Geometric Structure Modeling
Pub Date : 2024-10-18 DOI: 10.1109/TAI.2024.3483199
Jiang-Xing Cheng;Huibin Lin;Chun-Yang Zhang;C. L. Philip Chen
Point clouds capture precise geometric information about objects and scenes; they are an important source of 3-D data and one of the most popular 3-D geometric data structures in many real-world applications such as autonomous driving and remote sensing. However, due to the influence of sensors and the variety of objects, point clouds obtained by different devices may exhibit obvious geometric changes, resulting in domain gaps in which neural networks trained in one domain fail to preserve their performance in other domains. To alleviate this problem, this article proposes an unsupervised domain adaptation framework, named HO-GSM, as the first attempt to model high-order geometric structures of point clouds. First, we construct multiple self-supervised tasks to learn invariant semantic and geometric features of the source and target domains, especially to capture the feature invariance of high-order geometric structures of point clouds. Second, the discriminative feature space of the target domain is acquired by using contrastive learning to refine domain alignment to the specific class level. Experiments on the PointDA-10 and GraspNetPC-10 collections of datasets show that the proposed HO-GSM significantly outperforms state-of-the-art counterparts.
Vol. 5, No. 12, pp. 6121-6133.
Citations: 0
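Contrastive refinement of domain alignment, as mentioned in the abstract, typically relies on an InfoNCE-style objective: each anchor feature should be most similar to its own positive within the batch. Below is a generic NumPy version for intuition; the temperature and feature setup are illustrative assumptions, not the HO-GSM loss itself.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss on L2-normalized features: each
    anchor should be most similar to its own positive within the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = a @ p.T / temperature                    # pairwise cosine similarities
    sims = sims - sims.max(axis=1, keepdims=True)   # stabilize the softmax
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 16))
aligned = info_nce(feats, feats + 0.01 * rng.normal(size=(8, 16)))
mismatched = info_nce(feats, np.roll(feats, 1, axis=0))
print(aligned, mismatched)
```

Slightly perturbed copies of the same features yield a near-zero loss, while rolling the positives off by one row — a deliberate mismatch — drives the loss up, which is the signal used to pull same-class features together across domains.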
IEEE Transactions on Artificial Intelligence Publication Information
Pub Date : 2024-10-16 DOI: 10.1109/TAI.2024.3470571
Vol. 5, No. 10, pp. C2-C2. Open access.
Citations: 0