
Latest Publications in AI

A Model for Feature Selection with Binary Particle Swarm Optimisation and Synthetic Features
AI
Pub Date : 2024-07-25 DOI: 10.3390/ai5030060
S. Ojo, J. Adisa, P. Owolawi, Chunling Tu
Recognising patterns and inferring nonlinearities between data that are seemingly random and stochastic in nature is one of the strong suits of machine learning models. Given a set of features, the ability to distinguish useful features from seemingly useless ones, and thereafter to extract a subset of features that yields the best prediction on highly stochastic data, remains an open issue. This study presents a model for feature selection by generating synthetic features and applying Binary Particle Swarm Optimisation with a Long Short-Term Memory-based model. The study analyses the correlation between data and uses Apple stock market data as a use case. Synthetic features are created from features that have weak/low correlation to the label, and the study analyses how synthetic features that are descriptive of features can enhance the model’s predictive capability. The results show that by expanding the dataset to contain synthetic features before applying feature selection, the objective function was better optimised than when no synthetic features were added.
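The selection loop described above can be sketched with a standard binary PSO using a sigmoid transfer function. This is a minimal, hedged sketch: the fitness function substitutes a nearest-centroid classifier for the paper's LSTM-based model, the data are synthetic, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 20 features, only features 0 and 3 carry signal
# (an illustrative stand-in for the stock-market features in the paper).
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=200) > 0).astype(int)

def fitness(mask):
    """Objective for a candidate feature subset: accuracy of a
    nearest-centroid classifier on the selected columns."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    mu0, mu1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - mu1, axis=1) < np.linalg.norm(Xs - mu0, axis=1)
    return float((pred == y).mean())

n_particles, n_feat, iters = 10, X.shape[1], 30
pos = rng.integers(0, 2, size=(n_particles, n_feat)).astype(float)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                    # sigmoid transfer
    pos = (rng.random(pos.shape) < prob).astype(float)   # binary re-sampling
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected mask:", gbest.astype(int), "fitness:", pbest_fit.max())
```

In the paper's setting, the candidate mask would also range over the appended synthetic features, so the swarm can trade a weak raw feature for a stronger synthetic one.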
Citations: 0
Recent Advances in 3D Object Detection for Self-Driving Vehicles: A Survey
AI
Pub Date : 2024-07-25 DOI: 10.3390/ai5030061
Oluwajuwon A. Fawole, Danda B. Rawat
The development of self-driving or autonomous vehicles has led to significant advancements in 3D object detection technologies, which are critical for the safety and efficiency of autonomous driving. Despite recent advances, several challenges remain in sensor integration, handling sparse and noisy data, and ensuring reliable performance across diverse environmental conditions. This paper comprehensively surveys state-of-the-art 3D object detection techniques for autonomous vehicles, emphasizing the importance of multi-sensor fusion techniques and advanced deep learning models. Furthermore, we present key areas for future research, including enhancing sensor fusion algorithms, improving computational efficiency, and addressing ethical, security, and privacy concerns. The integration of these technologies into real-world applications for autonomous driving is presented by highlighting potential benefits and limitations. We also present a side-by-side comparison of different techniques in a tabular form. Through a comprehensive review, this paper aims to provide insights into the future directions of 3D object detection and its impact on the evolution of autonomous driving.
Citations: 0
Dynamic Programming-Based White Box Adversarial Attack for Deep Neural Networks
AI
Pub Date : 2024-07-24 DOI: 10.3390/ai5030059
Swati Aggarwal, Anshul Mittal, Sanchit Aggarwal, Anshul Kumar Singh
Recent studies have exposed the vulnerabilities of deep neural networks to carefully perturbed input data. We propose a novel untargeted white-box adversarial attack, the dynamic programming-based sub-pixel score method (SPSM) attack (DPSPSM), a variation of the traditional gradient-based white-box adversarial approach that constrains perturbations to a fixed Hamming distance using a dynamic programming-based structure. It is driven by a pixel score metric technique, the SPSM, which is introduced in this paper. In contrast to conventional gradient-based adversarial attacks, which alter entire images almost imperceptibly, the DPSPSM is swift and offers the robustness of manipulating only a small number of input pixels. The presented algorithm quantizes the gradient update with a score generated for each pixel, incorporating contributions from each channel. The results show that the DPSPSM deceives the model with a success rate of 30.45% on the CIFAR-10 test set and 29.30% on the CIFAR-100 test set.
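A hedged sketch of the per-pixel scoring idea the abstract describes: aggregate the gradient magnitude over channels into one score per pixel, then perturb only the k highest-scoring pixels, which keeps the modification within a fixed Hamming-distance budget. The function name and the parameters `k` and `eps` are illustrative, and the gradient here is synthetic rather than computed from a real model.

```python
import numpy as np

rng = np.random.default_rng(1)

def spsm_perturb(image, grad, k=10, eps=0.3):
    """Rank pixels by an aggregate score (sum of |gradient| over channels),
    then perturb only the top-k pixels in the gradient's sign direction.
    Illustrative sketch, not the authors' implementation."""
    score = np.abs(grad).sum(axis=-1)             # (H, W) per-pixel scores
    flat = score.ravel()
    topk = np.argpartition(flat, -k)[-k:]         # k highest-scoring pixels
    mask = np.zeros_like(flat, dtype=bool)
    mask[topk] = True
    mask = mask.reshape(score.shape)[..., None]   # broadcast over channels
    adv = image + eps * np.sign(grad) * mask
    return np.clip(adv, 0.0, 1.0), mask

image = rng.random((32, 32, 3))
grad = rng.normal(size=(32, 32, 3))   # stand-in for a model's input gradient
adv, mask = spsm_perturb(image, grad, k=10)
print(int(mask[..., 0].sum()))        # number of pixels in the budget
```

In the actual attack, `grad` would be the gradient of the model's loss with respect to the input, and the dynamic-programming structure would govern which pixel combination spends the Hamming budget.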
Citations: 0
Computer Vision for Safety Management in the Steel Industry
AI
Pub Date : 2024-07-19 DOI: 10.3390/ai5030058
Roy Lan, I. Awolusi, Jiannan Cai
The complex nature of the steel manufacturing environment, characterized by different types of hazards from materials and large machinery, makes objective, automated monitoring critical as a replacement for traditional methods, which are manual and subjective. This study explores the feasibility of implementing computer vision for safety management in steel manufacturing, with a case study implementation for automated hard hat detection. The research combines hazard characterization, technology assessment, and a pilot case study. First, a comprehensive review of steel manufacturing hazards was conducted, followed by the application of TOPSIS, a multi-criteria decision analysis method, to select a candidate computer vision system from eight commercially available systems. The pilot study evaluated YOLOv5m, YOLOv8m, and YOLOv9c models on 703 grayscale images from a steel mini-mill, assessing performance through precision, recall, F1-score, mAP, specificity, and AUC metrics. Results showed high overall accuracy in hard hat detection, with YOLOv9c slightly outperforming the others, particularly in detecting safety violations. Challenges emerged in handling class imbalance and accurately identifying absent hard hats, especially given the limitations of grayscale imagery. Despite these challenges, this study affirms the feasibility of computer vision-based safety management in steel manufacturing, providing a foundation for future automated safety monitoring systems. Findings underscore the need for larger, diverse datasets and advanced techniques to address industry-specific complexities, paving the way for enhanced workplace safety in challenging industrial environments.
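The TOPSIS selection step can be sketched as follows. This is a minimal, generic implementation of the standard method; the three candidate systems, criteria values, and weights below are illustrative, not the eight commercial systems evaluated in the study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) on criteria (columns).
    benefit[j] is True when a higher value of criterion j is better."""
    M = np.asarray(matrix, dtype=float)
    # 1. Vector-normalise each criterion column.
    norm = M / np.linalg.norm(M, axis=0)
    # 2. Apply criterion weights.
    V = norm * np.asarray(weights, dtype=float)
    # 3. Ideal best/worst per criterion, depending on benefit vs cost.
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 4. Euclidean distance of each alternative to ideal best and worst.
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    # 5. Closeness coefficient in [0, 1]; higher is better.
    return d_worst / (d_best + d_worst)

# Three hypothetical vision systems scored on accuracy (benefit criterion),
# cost, and latency in ms (both cost criteria).
scores = topsis([[0.92, 100, 40],
                 [0.88,  60, 25],
                 [0.95, 140, 60]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, False, False])
print(scores.argmax())  # → 1 (the cheap, fast system wins here)
```

With the study's real criteria and weights, the same closeness ranking would pick the candidate system for the pilot.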
Citations: 0
Optimization Strategies for Atari Game Environments: Integrating Snake Optimization Algorithm and Energy Valley Optimization in Reinforcement Learning Models
AI
Pub Date : 2024-07-17 DOI: 10.3390/ai5030057
Sadeq Mohammed Kadhm Sarkhi, Hakan Koyuncu
One of the biggest problems in gaming AI is how to optimize and adapt a deep reinforcement learning (DRL) model, especially when it runs inside complex, dynamic environments like “PacMan”. Existing research has concentrated largely on basic DRL approaches without utilizing advanced optimization methods. This paper tries to fill this gap by proposing an innovative methodology that combines DRL with high-level metaheuristic optimization methods. The work presented in this paper specifically refactors DRL models on the “PacMan” domain with the Energy Serpent Optimizer (ESO) for hyperparameter search. These adaptations give a major performance boost to the AI agent, whose gains in adaptability, response time, and efficiency become apparent in the more complex game space. This work innovatively incorporates a metaheuristic optimization algorithm into another field, DRL, for Atari gaming AI. This integration is essential for improving DRL models in general and allows for more efficient, real-time game play. The work delivers a comprehensive empirical study of these algorithms that not only verifies their capabilities in practice but also sets a state of the art through the prism of AI-driven game development. Beyond improving gaming AI, the developments could eventually apply to more sophisticated gaming environments, ongoing improvement of algorithms during execution, real-time adaptation in learning, and likely even robotics/autonomous systems. This study further illustrates the necessity for even-handed and conscientious application of AI in gaming, specifically regarding questions of fairness and addiction.
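The hyperparameter-search idea can be sketched with a generic population-based loop. This is heavily hedged: a plain elitist evolutionary strategy stands in for the Energy Serpent Optimizer, and `surrogate_reward` stands in for actually training and evaluating the DRL agent (its peak location is invented for illustration; nothing here comes from the paper).

```python
import random

random.seed(0)

# Search space for three common DRL hyperparameters (illustrative bounds).
SPACE = {"lr": (1e-5, 1e-2), "gamma": (0.90, 0.999), "epsilon": (0.01, 0.3)}

def surrogate_reward(cfg):
    """Stand-in for 'train the agent, return mean episode score'.
    Peaks near lr=1e-3, gamma=0.99, epsilon=0.1 (made-up optimum)."""
    return (-((cfg["lr"] - 1e-3) * 1e3) ** 2
            - ((cfg["gamma"] - 0.99) * 50) ** 2
            - ((cfg["epsilon"] - 0.1) * 10) ** 2)

def sample():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def mutate(cfg, rate=0.3):
    child = dict(cfg)
    for k, (lo, hi) in SPACE.items():
        if random.random() < rate:
            # Gaussian jitter, clamped back into the search bounds.
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
    return child

pop = [sample() for _ in range(12)]
init_best = max(surrogate_reward(c) for c in pop)
for _ in range(40):
    pop.sort(key=surrogate_reward, reverse=True)
    elite = pop[:4]                                   # keep the best quarter
    pop = elite + [mutate(random.choice(elite)) for _ in range(8)]

best = max(pop, key=surrogate_reward)
best_reward = surrogate_reward(best)
```

Because each candidate evaluation is a full DRL training run in the real setting, the optimizer's sample efficiency, not the inner loop shown here, is what matters.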
Citations: 0
ConVision Benchmark: A Contemporary Framework to Benchmark CNN and ViT Models
AI
Pub Date : 2024-07-11 DOI: 10.3390/ai5030056
Shreyas Bangalore Vijayakumar, Krishna Teja Chitty-Venkata, Kanishk Arya, Arun Somani
Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have shown remarkable performance in computer vision tasks, including object detection and image recognition. These models have evolved significantly in architecture, efficiency, and versatility. Concurrently, deep-learning frameworks have diversified, with versions that often complicate reproducibility and unified benchmarking. We propose ConVision Benchmark, a comprehensive framework in PyTorch, to standardize the implementation and evaluation of state-of-the-art CNN and ViT models. This framework addresses common challenges such as version mismatches and inconsistent validation metrics. As a proof of concept, we performed an extensive benchmark analysis on a COVID-19 dataset, encompassing nearly 200 CNN and ViT models in which DenseNet-161 and MaxViT-Tiny achieved exceptional accuracy with a peak performance of around 95%. Although we primarily used the COVID-19 dataset for image classification, the framework is adaptable to a variety of datasets, enhancing its applicability across different domains. Our methodology includes rigorous performance evaluations, highlighting metrics such as accuracy, precision, recall, F1 score, and computational efficiency (FLOPs, MACs, CPU, and GPU latency). The ConVision Benchmark facilitates a comprehensive understanding of model efficacy, aiding researchers in deploying high-performance models for diverse applications.
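The classification metrics the benchmark reports follow the standard confusion-matrix definitions; a minimal sketch (the counts below are made up, not results from the paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}

m = classification_metrics(tp=90, fp=10, fn=5, tn=95)
print(round(m["f1"], 3))  # → 0.923
```

For the multi-class COVID-19 task these would be computed per class and averaged; FLOPs, MACs, and latency come from separate profiling, not from the confusion matrix.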
Citations: 0
Predicting Number of Vehicles Involved in Rural Crashes Using Learning Vector Quantization Algorithm
AI
Pub Date : 2024-07-08 DOI: 10.3390/ai5030054
Sina Shaffiee Haghshenas, G. Guido, Sami Shaffiee Haghshenas, V. Astarita
Roads represent very important infrastructure and play a significant role in economic, cultural, and social growth. Therefore, there is a critical need to model crash injury severity in order to study how safe roads are. When measuring the cost of crashes, the severity of the crash is a critical criterion, and it is classified into various categories. The number of vehicles involved in the crash (NVIC) is a crucial factor in all of these categories. For this purpose, this research examines road safety and provides a prediction model for the number of vehicles involved in a crash. Specifically, learning vector quantization (LVQ 2.1), one of the sub-branches of artificial neural networks (ANNs), is used to build a classification model. The novelty of this study lies in demonstrating LVQ 2.1’s efficacy in categorizing accident data and its ability to improve road safety strategies. The LVQ 2.1 algorithm is particularly suitable for classification tasks and works by adjusting prototype vectors to improve classification performance. The research emphasizes how urgently better prediction algorithms are needed to handle issues related to road safety. In this study, a dataset of 564 crash records from rural roads in Calabria, a region in southern Italy, between 2017 and 2048 was utilized. The study analyzed several key parameters, including daylight, crash type, day of the week, location, speed limit, average speed, and annual average daily traffic, as input variables to predict the number of vehicles involved in rural crashes. The findings revealed that the “crash type” parameter had the most significant impact, whereas “location” had the least significant impact on the occurrence of rural crashes in the investigated areas.
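The prototype-adjustment step that LVQ 2.1 is built on can be sketched as a single update: find the two nearest prototypes; if they carry different labels, one of them matches the sample's label, and the sample lies inside the relative-distance window, move the correct prototype toward the sample and the incorrect one away. The learning rate, window width `w`, and the toy prototypes are illustrative, not settings from the paper.

```python
import numpy as np

def lvq21_step(prototypes, labels, x, y, lr=0.05, w=0.3):
    """One LVQ 2.1 update on sample x with class y (sketch)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    i, j = np.argsort(d)[:2]                      # two nearest prototypes
    di, dj = d[i], d[j]
    # Window rule: the sample must lie near the midplane of the two.
    in_window = min(di / dj, dj / di) > (1 - w) / (1 + w)
    if in_window and labels[i] != labels[j] and y in (labels[i], labels[j]):
        correct, wrong = (i, j) if labels[i] == y else (j, i)
        prototypes[correct] += lr * (x - prototypes[correct])  # attract
        prototypes[wrong] -= lr * (x - prototypes[wrong])      # repel
    return prototypes

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
protos = lvq21_step(protos, [0, 1], x=np.array([0.4, 0.4]), y=0)
print(protos)  # class-0 prototype pulled toward x, class-1 pushed away
```

In the study's setting, `x` would be a vector of the crash parameters (daylight, crash type, speed limit, and so on) and `y` the NVIC class.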
Citations: 0
ZTCloudGuard: Zero Trust Context-Aware Access Management Framework to Avoid Medical Errors in the Era of Generative AI and Cloud-Based Health Information Ecosystems
AI
Pub Date : 2024-07-08 DOI: 10.3390/ai5030055
Khalid Al-hammuri, F. Gebali, Awos Kanan
Managing access between large numbers of distributed medical devices has become a crucial aspect of modern healthcare systems, enabling the establishment of smart hospitals and telehealth infrastructure. However, as telehealth technology continues to evolve and Internet of Things (IoT) devices become more widely used, they are also increasingly exposed to various types of vulnerabilities and medical errors. In healthcare information systems, about 90% of vulnerabilities emerge from medical error and human error. As a result, there is a need for additional research and development of security tools to prevent such attacks. This article proposes a zero-trust-based context-aware framework for managing access to the main components of the cloud ecosystem, including users, devices, and output data. The main goal and benefit of the proposed framework is to build a scoring system to prevent or alleviate medical errors while using distributed medical devices in cloud-based healthcare information systems. The framework has two main scoring criteria to maintain the chain of trust. First, it proposes a critical trust score based on cloud-native microservices for authentication, encryption, logging, and authorizations. Second, a bond trust scoring system is created to assess the real-time semantic and syntactic analysis of attributes stored in a healthcare information system. The analysis is based on a pre-trained machine learning model that generates the semantic and syntactic scores. The framework also takes into account regulatory compliance and user consent in the creation of the scoring system. The advantage of this method is that it applies to any language and adapts to all attributes, as it relies on a language model, not just a set of predefined and limited attributes. The results show a high F1 score of 93.5%, which proves that it is valid for detecting medical errors.
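The two-tier scoring idea might be sketched as a weighted combination of a critical trust score (pass/fail cloud-native checks) and the bond trust scores from semantic and syntactic attribute analysis. The weights, threshold, and function name below are assumptions for illustration, not values from the paper.

```python
def chain_of_trust(critical_checks, semantic_score, syntactic_score,
                   weights=(0.5, 0.25, 0.25), threshold=0.8):
    """Combine a critical trust score (fraction of passing checks) with
    bond trust scores into one access decision. Illustrative sketch."""
    critical = sum(critical_checks.values()) / len(critical_checks)
    w_crit, w_sem, w_syn = weights
    total = w_crit * critical + w_sem * semantic_score + w_syn * syntactic_score
    return total, total >= threshold

score, allowed = chain_of_trust(
    {"authentication": True, "encryption": True,
     "logging": True, "authorization": False},
    semantic_score=0.9,    # would come from the pre-trained language model
    syntactic_score=0.85,
)
print(round(score, 3), allowed)
```

In the framework described above, the semantic and syntactic inputs would be produced in real time by the pre-trained model over the stored attributes, with regulatory compliance and user consent folded into the check set.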
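The chain-of-trust decision the abstract describes, a critical trust score over cloud-native microservice checks combined with a model-derived bond trust score, can be sketched as below. The check names, the equal weighting of semantic and syntactic scores, and the 0.8 threshold are illustrative assumptions, not the authors' published parameters.

```python
# Hedged sketch of the two-tier trust scoring idea. Check names, weights,
# and the threshold are assumptions for illustration only.

def critical_trust_score(checks: dict) -> float:
    """Fraction of cloud-native microservice checks that pass
    (authentication, encryption, logging, authorization)."""
    required = ("authentication", "encryption", "logging", "authorization")
    return sum(1.0 for c in required if checks.get(c)) / len(required)

def bond_trust_score(semantic: float, syntactic: float) -> float:
    """Combine the semantic and syntactic scores (both in [0, 1]) that a
    pre-trained language model would assign to a stored attribute."""
    return 0.5 * semantic + 0.5 * syntactic  # equal weights: an assumption

def grant_access(checks, semantic, syntactic, threshold=0.8):
    """Maintain the chain of trust: both tiers must clear the threshold."""
    return (critical_trust_score(checks) >= threshold
            and bond_trust_score(semantic, syntactic) >= threshold)

ok = grant_access({"authentication": True, "encryption": True,
                   "logging": True, "authorization": True},
                  semantic=0.93, syntactic=0.91)
```

A single failed microservice check drops the critical score below the threshold and denies access, which is the conjunctive ("zero trust") behavior the framework's chain of trust implies.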
{"title":"ZTCloudGuard: Zero Trust Context-Aware Access Management Framework to Avoid Medical Errors in the Era of Generative AI and Cloud-Based Health Information Ecosystems","authors":"Khalid Al-hammuri, F. Gebali, Awos Kanan","doi":"10.3390/ai5030055","DOIUrl":"https://doi.org/10.3390/ai5030055","url":null,"abstract":"Managing access between large numbers of distributed medical devices has become a crucial aspect of modern healthcare systems, enabling the establishment of smart hospitals and telehealth infrastructure. However, as telehealth technology continues to evolve and Internet of Things (IoT) devices become more widely used, they are also increasingly exposed to various types of vulnerabilities and medical errors. In healthcare information systems, about 90% of vulnerabilities emerge from medical error and human error. As a result, there is a need for additional research and development of security tools to prevent such attacks. This article proposes a zero-trust-based context-aware framework for managing access to the main components of the cloud ecosystem, including users, devices, and output data. The main goal and benefit of the proposed framework is to build a scoring system to prevent or alleviate medical errors while using distributed medical devices in cloud-based healthcare information systems. The framework has two main scoring criteria to maintain the chain of trust. First, it proposes a critical trust score based on cloud-native microservices for authentication, encryption, logging, and authorizations. Second, a bond trust scoring system is created to assess the real-time semantic and syntactic analysis of attributes stored in a healthcare information system. The analysis is based on a pre-trained machine learning model that generates the semantic and syntactic scores. The framework also takes into account regulatory compliance and user consent in the creation of the scoring system. 
The advantage of this method is that it applies to any language and adapts to all attributes, as it relies on a language model, not just a set of predefined and limited attributes. The results show a high F1 score of 93.5%, which proves that it is valid for detecting medical errors.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141668168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Arabic Spam Tweets Classification: A Comprehensive Machine Learning Approach
AI
Pub Date : 2024-07-02 DOI: 10.3390/ai5030052
W. Hantom, Atta Rahman
Nowadays, one of the most common problems faced by Twitter (also known as X) users, individuals and organizations alike, is dealing with spam tweets. The problem continues to proliferate with the growing popularity and user base of social media platforms. Exploiting this reach, spammers can post texts, images, and videos containing suspicious links that spread viruses, rumors, negative marketing, and sarcasm, and can potentially compromise users' information. Spam detection is among the hottest research areas in natural language processing (NLP) and cybersecurity. Several studies have been conducted in this regard, but they mainly focus on the English language. Arabic tweet spam detection, however, still has a long way to go, especially for the diverse dialects beyond Modern Standard Arabic (MSA), since the standard dialect is seldom used in tweets. The situation demands an automated, robust, and efficient Arabic spam tweet detection approach. To address the issue, this research investigates various machine learning and deep learning models for detecting spam tweets in Arabic, including Random Forest (RF), Support Vector Machine (SVM), Naive Bayes (NB), and Long Short-Term Memory (LSTM). In this regard, we have focused on both the words and the meaning of the tweet text. Across several experiments, the proposed models produced promising results compared with previous approaches on the same and other datasets. The results showed that the RF classifier achieved 96.78% accuracy and the LSTM classifier achieved 94.56%, followed by the SVM classifier at 82%. Further, in terms of F1-score, improvements of 21.38%, 19.16%, and 5.2% were obtained with the RF, LSTM, and SVM classifiers, respectively, compared to previous schemes on the same dataset.
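Of the four classifiers compared, Naive Bayes is the simplest to sketch from scratch. The snippet below is a minimal multinomial NB with Laplace smoothing; the toy English tokens are placeholders standing in for preprocessed Arabic tweets, and nothing here reproduces the paper's dataset or feature pipeline.

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (tokens, label). Returns log-priors, per-class
    word counts, and the vocabulary."""
    class_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in samples:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {c: math.log(n / len(samples)) for c, n in class_counts.items()}
    return priors, word_counts, vocab

def predict_nb(model, tokens):
    """Pick the class with the highest log-posterior under Laplace smoothing."""
    priors, word_counts, vocab = model
    best, best_lp = None, float("-inf")
    for c, prior in priors.items():
        total = sum(word_counts[c].values())
        lp = prior + sum(
            math.log((word_counts[c][t] + 1) / (total + len(vocab)))
            for t in tokens)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Placeholder tokens; real input would be tokenized Arabic tweet text.
data = [("free prize click link".split(), "spam"),
        ("win money now click".split(), "spam"),
        ("meeting moved to monday".split(), "ham"),
        ("see you at lunch".split(), "ham")]
model = train_nb(data)
```

Working in log space avoids underflow on longer tweets, and the +1 smoothing keeps unseen dialect words from zeroing out a class.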
Citations: 0
Utilizing Genetic Algorithms in Conjunction with ANN-Based Stock Valuation Models to Enhance the Optimization of Stock Investment Decisions
AI
Pub Date : 2024-07-01 DOI: 10.3390/ai5030050
Ying-Hua Chang, Chen-Wei Huang
Navigating the stock market's unpredictability and reducing vulnerability to its volatility require well-informed decisions on stock selection, capital allocation, and transaction timing. While stock selection can be accomplished through fundamental analysis, the extensive data involved often make it hard to discern the pertinent information. Timing, typically managed through technical analysis, may lag, leading to missed stock-transaction opportunities. Capital allocation, a quintessential resource-optimization problem, requires meticulous planning to resolve. Consequently, this study leverages the optimization properties of genetic algorithms, in conjunction with fundamental analysis and the concept of combination-with-repetition optimization, to identify appropriate stock selection and capital allocation strategies. For timing, it employs deep learning coupled with the Ohlson model for stock valuation to ascertain the intrinsic worth of stocks, laying the groundwork for transactions that yield favorable returns. In the experiments, this study's integrated analytical approach is compared against an equal capital allocation strategy, the TAIEX, and the Taiwan 50 index. The findings affirm that irrespective of the Taiwan stock market's bullish or bearish tendencies, the proposed method indeed helps investors make astute investment decisions and attain substantial profits.
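The capital-allocation step can be illustrated with a toy genetic algorithm: distribute a fixed number of equal capital lots among candidate stocks (a combination-with-repetition encoding) and evolve allocations toward higher fitness. The per-stock scores, population size, and operators below are assumptions for illustration; the paper derives its fitness from the LSTM/Ohlson valuation, which this sketch does not reproduce.

```python
import random

# Toy GA for capital allocation: `units` equal lots spread over N stocks.
# Fitness is a simple expected return from hypothetical per-stock scores.

def fitness(alloc, scores):
    return sum(a * s for a, s in zip(alloc, scores))

def random_alloc(n_stocks, units, rng):
    """Random combination-with-repetition: drop each unit on a random stock."""
    alloc = [0] * n_stocks
    for _ in range(units):
        alloc[rng.randrange(n_stocks)] += 1
    return alloc

def mutate(alloc, rng):
    """Move one capital unit between two stocks, preserving the total."""
    alloc = alloc[:]
    src = rng.choice([i for i, a in enumerate(alloc) if a > 0])
    dst = rng.randrange(len(alloc))
    alloc[src] -= 1
    alloc[dst] += 1
    return alloc

def optimise(scores, units=10, pop_size=30, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(scores)
    pop = [random_alloc(n, units, rng) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: fitness(a, scores), reverse=True)
        elite = pop[: pop_size // 2]           # elitist selection
        pop = elite + [mutate(rng.choice(elite), rng) for _ in elite]
    return max(pop, key=lambda a: fitness(a, scores))

best = optimise([0.02, 0.15, -0.03, 0.08])  # hypothetical expected returns
```

Because the mutation operator preserves the total, every candidate always allocates exactly the available capital, so no repair step is needed; elitism guarantees the best allocation found is never lost.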
{"title":"Utilizing Genetic Algorithms in Conjunction with ANN-Based Stock Valuation Models to Enhance the Optimization of Stock Investment Decisions","authors":"Ying-Hua Chang, Chen-Wei Huang","doi":"10.3390/ai5030050","DOIUrl":"https://doi.org/10.3390/ai5030050","url":null,"abstract":"Navigating the stock market’s unpredictability and reducing vulnerability to its volatility requires well-informed decisions on stock selection, capital allocation, and transaction timing. While stock selection can be accomplished through fundamental analysis, the extensive data involved often pose challenges in discerning pertinent information. Timing, typically managed through technical analysis, may experience delays, leading to missed opportunities for stock transactions. Capital allocation, a quintessential resource optimization dilemma, necessitates meticulous planning for resolution. Consequently, this thesis leverages the optimization attributes of genetic algorithms, in conjunction with fundamental analysis and the concept of combination with repetition optimization, to identify appropriate stock selection and capital allocation strategies. Regarding timing, it employs deep learning coupled with the Ohlson model for stock valuation to ascertain the intrinsic worth of stocks. This lays the groundwork for transactions to yield favorable returns. In terms of experimentation, this study juxtaposes the integrated analytical approach of this thesis with the equal capital allocation strategy, TAIEX, and the Taiwan 50 index. 
The findings affirm that irrespective of the Taiwan stock market’s bullish or bearish tendencies, the method proposed in this study indeed facilitates investors in making astute investment decisions and attaining substantial profits.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141704250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0