
International Journal of Advanced Computer Science and Applications: Latest Publications

Quantum Steganography: Hiding Secret Messages in Images using Quantum Circuits and SIFT
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.01410107
Hassan Jameel Azooz, Khawla Ben Salah, Monji Kherallah, Mohamed Saber Naceur
In today’s era of escalating digital threats and the growing need for safeguarding sensitive information, this research strives to advance the field of information concealment by introducing a pioneering steganography methodology. Our approach goes beyond the conventional boundaries of image security by seamlessly integrating classical image processing techniques with the cutting-edge realm of quantum encoding. The foundation of our technique lies in the meticulous identification of distinctive features within the cover image, a crucial step achieved through the utilization of SIFT (Scale-Invariant Feature Transform). These identified key points are further organized into coherent clusters employing the K-means clustering algorithm, forming a structured basis for our covert communication process. The core innovation of this research resides in the transformation of the concealed message into a NEQR (Novel Enhanced Quantum Representation) code, a quantum encoding framework that leverages the power of quantum circuits. This transformative step ensures not only the secrecy but also the integrity of the hidden information, making it highly resistant to even the most sophisticated decryption attempts. The strategic placement of the quantum circuit representing the concealed message at the centroids of the clusters generated by the K-means algorithm conceals it within the cover image seamlessly. This fusion of classical image processing and quantum encoding results in an unprecedented level of security for the embedded information, rendering it virtually impervious to unauthorized access. Empirical findings from extensive experimentation affirm the robustness and efficacy of our proposed strategy.
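The classical half of this pipeline (SIFT keypoint detection followed by K-means clustering of the keypoint coordinates to obtain embedding centroids) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image path and cluster count are assumptions, and the NEQR/quantum-circuit encoding of the secret message is not reproduced.

```python
# Sketch of the SIFT + K-means stage described above; the quantum NEQR
# encoding step is NOT reproduced here.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def embedding_centroids(image_path: str, n_clusters: int = 8) -> np.ndarray:
    """Return candidate embedding locations: centroids of clustered SIFT keypoints."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()                        # scale-invariant feature transform
    keypoints = sift.detect(gray, None)
    coords = np.array([kp.pt for kp in keypoints], dtype=np.float32)
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(coords)
    return kmeans.cluster_centers_                  # (n_clusters, 2) array of (x, y)

centroids = embedding_centroids("cover.png")        # "cover.png" is a placeholder path
print(centroids)
```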
Citations: 0
Recognition of Human Interactions in Still Images using AdaptiveDRNet with Multi-level Attention
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.01410103
Arnab Dey, Samit Biswas, Dac-Nhoung Le
Human-Human Interaction Recognition (H2HIR) is a multidisciplinary field that combines computer vision, deep learning, and psychology. Its primary objective is to decode and understand the intricacies of human-human interactions. H2HIR holds significant importance across various domains as it enables machines to perceive, comprehend, and respond to human social behaviors, gestures, and communication patterns. This study aims to identify human-human interactions from just one frame, i.e., from an image. Diverging from the realm of video-based interaction recognition, a well-established research domain that relies on the utilization of spatio-temporal information, the complexity of the task escalates significantly when dealing with still images due to the absence of these intrinsic spatio-temporal features. This research introduces a novel deep learning model called AdaptiveDRNet with Multi-level Attention to recognize Human-Human (H2H) interactions. Our proposed method demonstrates outstanding performance on the Human-Human Interaction Image dataset (H2HID), encompassing 4049 meticulously curated images representing fifteen distinct human interactions, and on the publicly accessible HII and HIIv2 related benchmark datasets. Notably, our proposed model excels with a validation accuracy of 97.20% in the classification of human-human interaction images, surpassing the performance of EfficientNet, InceptionResNetV2, NASNet Mobile, ConvXNet, ResNet50, and VGG-16 models. H2H interaction recognition’s significance lies in its capacity to enhance communication, improve decision-making, and ultimately contribute to the well-being and efficiency of individuals and society as a whole.
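The paper does not spell out its attention mechanism, so the following is only a generic CBAM-style spatial-attention block of the kind that "multi-level attention" architectures commonly stack; it is not the authors' AdaptiveDRNet, and the tensor sizes are illustrative.

```python
# Generic spatial-attention block (PyTorch) for reference only.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)           # channel-wise average
        max_pool, _ = x.max(dim=1, keepdim=True)         # channel-wise maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                  # re-weight each spatial location

features = torch.randn(2, 64, 56, 56)
print(SpatialAttention()(features).shape)                # torch.Size([2, 64, 56, 56])
```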
Citations: 0
Identification of the False Data Injection Cyberattacks on the Internet of Things by using Deep Learning
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.01410118
Henghe Zheng, Xiaojing Chen, Xin Liu
With the expanding use of cyber-physical systems and communication networks, cyberattacks have become a serious threat to many networks, including Internet of Things (IoT) sensor networks. State estimation algorithms play an important role in characterizing the current operational state of IoT sensors. False data injection (FDI) attacks, which inject malicious data into the acquired measurements, pose a severe threat to the estimation strategies adopted by IoT sensor operators. Recognizing this class of attacks in real time increases network resilience and helps ensure secure network operation. This paper presents a new method for real-time FDI attack detection that combines a deep-learning-based state prediction method with a new intrusion identification approach built on the error covariance matrix. The architecture of the presented method, together with its optimal set of meta-parameters, yields a real-time, scalable, and effective state predictor with a minimal error bound. The results show that the proposed method outperforms recent work on remaining useful life (RUL) prediction on the C-MAPSS dataset. Two types of false data injection attacks are then modeled, and their impact is evaluated using the proposed method. The results show that FDI attacks, even on a small number of IoT sensors, can severely disrupt RUL prediction in all instances. In addition, the proposed model detects FDI attacks with high accuracy and flexibility.
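The abstract does not give the covariance-based detection rule in closed form; a common residual test of this kind compares the Mahalanobis distance of the prediction residual against a chi-square threshold, as in the hedged sketch below. The `predicted` vector is assumed to come from the deep-learning state predictor, which is not reproduced here, and the covariance values are illustrative.

```python
# Residual-based FDI check: flag a measurement when the squared Mahalanobis
# distance of (measured - predicted) exceeds a chi-square threshold.
import numpy as np
from scipy.stats import chi2

def is_fdi_attack(measured: np.ndarray, predicted: np.ndarray,
                  error_cov: np.ndarray, alpha: float = 0.01) -> bool:
    residual = measured - predicted
    d2 = residual @ np.linalg.inv(error_cov) @ residual   # squared Mahalanobis distance
    threshold = chi2.ppf(1.0 - alpha, df=residual.size)   # detection threshold
    return bool(d2 > threshold)

# Toy usage with a 3-sensor state and an illustrative covariance matrix.
cov = np.diag([0.10, 0.20, 0.15])
print(is_fdi_attack(np.array([1.0, 4.0, 0.5]), np.array([1.0, 2.0, 0.5]), cov))  # True
```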
Citations: 0
An Automatic Nuclei Segmentation on Microscopic Images using Deep Residual U-Net
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.0141061
Ramya Shree H P, Minavathi -, Dinesh M S
Nuclei segmentation is the preliminary step in medical image analysis. Several deep learning techniques based on Convolutional Neural Networks (CNNs) now exist for the nuclei segmentation task. In this study, we present a neural network for semantic segmentation. This network harnesses the strengths of both residual learning and the U-Net methodology, thereby amplifying cell segmentation performance. This hybrid approach also allows a network with a reduced parameter count. The residual units incorporated in the network contribute to a smoother training process and mitigate the vanishing-gradient problem. Our model is tested on a publicly available microscopy image dataset from the 2018 Data Science Bowl grand challenge and assessed against U-Net and several other state-of-the-art deep learning approaches designed for nuclei segmentation. Our proposed approach showcases a notable improvement in average Intersection over Union (IoU) compared to prevailing state-of-the-art techniques, with gains of 1.1% and 5.8% over the original U-Net. Our model also excels across key indicators, including accuracy, precision, recall, and Dice coefficient. These outcomes underscore the potential of the proposed approach as a promising nuclei segmentation method for microscopy image analysis.
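As a point of reference, a residual unit of the kind such a network stacks in its encoder and decoder might look like the PyTorch sketch below; the channel sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Pre-activation residual block with a 1x1 projection on the skip path.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        # Identity skip, projected with a 1x1 convolution when channel counts differ.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1) if in_ch != out_ch else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x) + self.skip(x)   # residual connection eases gradient flow

x = torch.randn(1, 3, 128, 128)
print(ResidualBlock(3, 64)(x).shape)         # torch.Size([1, 64, 128, 128])
```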
Citations: 0
AI based Dynamic Prediction Model for Mobile Health Application System
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.0140138
Adari Ramesh, C K Subbaraya, G K Ravi Kumar
In recent decades, mobile health (m-health) applications have gained significant attention in the healthcare sector due to their increased support during critical cases like cardiac disease, spinal cord problems, and brain injuries. M-health services are also considered especially valuable where facilities are deficient. In addition, m-health supports wired and advanced wireless technologies for data transmission and communication. In this work, an Artificial Intelligence (AI)-based deep learning model is implemented to predict healthcare data, with data handling performed to improve dynamic prediction performance. It includes working modules for data collection, normalization, AI-based classification, and decision-making. The m-health data are obtained from smart devices through the service providers and comprise health information such as blood pressure, heart rate, and glucose level. The main contribution of this paper is to accurately predict Cardiovascular Disease (CVD) from a patient dataset stored in the cloud using the AI-based m-health system. After the data are obtained, preprocessing is performed for noise reduction and normalization, because prediction performance depends heavily on data quality. The Gorilla Troop Optimization Algorithm (GTOA) is then used to select the most relevant features for classifier training and testing, and the patient's CVD type is classified from the selected feature set using a bidirectional long short-term memory (Bi-LSTM) network. Moreover, the proposed AI-based prediction model's performance is validated and compared using different measures.
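A minimal Bi-LSTM classifier of the kind described for the CVD classification step is sketched below; the feature count, sequence length, and class count are assumptions, and the GTOA feature-selection stage is not reproduced.

```python
# Bi-LSTM classifier sketch for sequential health readings (PyTorch).
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features: int = 12, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)              # (batch, seq_len, 2 * hidden)
        return self.head(out[:, -1])       # classify from the final time step

readings = torch.randn(8, 30, 12)          # 8 patients, 30 time steps, 12 selected features
print(BiLSTMClassifier()(readings).shape)  # torch.Size([8, 2])
```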
近几十年来,移动医疗(m-health)应用程序在医疗保健领域获得了极大的关注,因为它们在心脏病、脊髓问题和脑损伤等危重病例中得到了越来越多的支持。此外,移动医疗服务被认为更有价值,特别是在设施不足的地方。此外,它还支持有线和先进的无线技术,用于数据传输和通信。在这项工作中,实现了基于人工智能(AI)的深度学习模型来预测医疗保健数据,其中执行数据处理以提高动态预测性能。它包括数据收集、规范化、基于人工智能的分类和决策等工作模块。在这里,移动健康数据是通过服务提供商从智能设备获得的,其中包括与血压、心率、血糖水平等相关的健康信息。本文的主要贡献是使用基于人工智能的移动健康系统,从存储在云中的患者数据集中准确预测心血管疾病(CVD)。在获得数据后,由于预测性能高度依赖于数据质量,因此可以进行预处理以进行降噪和归一化。因此,我们使用大猩猩群体优化算法(GTOA)来选择最相关的函数进行分类器训练和测试。使用双向长期记忆(Bi-LSTM)根据一组选定的特征对他的CVD类型进行分类。此外,使用不同的度量对所提出的基于人工智能的预测模型的性能进行了验证和比较。
{"title":"AI based Dynamic Prediction Model for Mobile Health Application System","authors":"Adari Ramesh, C K Subbaraya, G K Ravi Kumar","doi":"10.14569/ijacsa.2023.0140138","DOIUrl":"https://doi.org/10.14569/ijacsa.2023.0140138","url":null,"abstract":"In recent decades, mobile health (m-health) applications have gained significant attention in the healthcare sector due to their increased support during critical cases like cardiac disease, spinal cord problems, and brain injuries. Also, m-health services are considered more valuable, mainly where facilities are deficient. In addition, it supports wired and advanced wireless technologies for data transmission and communication. In this work, an Artificial Intelligence (AI)-based deep learning model is implemented to predict healthcare data, where the data handling is performed to improve the dynamic prediction performance. It includes the working modules of data collection, normalization, AI-based classification, and decision-making. Here, the m-health data are obtained from the smart devices through the service providers, which comprises the health information related to blood pressure, heart rate, glucose level, etc. The main contribution of this paper is to accurately predict Cardio Vascular Disease (CVD) from the patient dataset stored in cloud using the AI-based m-health system. After obtaining the data, preprocessing can be performed for noise reduction and normalization because prediction performance highly depends on data quality. Consequently, we use the Gorilla Troop Optimization Algorithm (GTOA) to select the most relevant functions for classifier training and testing. Classify his CVD type according to a selected set of features using bidirectional long-term memory (Bi-LSTM). Moreover, the proposed AI-based prediction model's performance is validated and compared using different measures.","PeriodicalId":13824,"journal":{"name":"International Journal of Advanced Computer Science and Applications","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135470667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Novel Artifact Removal Strategy and Spatial Attention-based Multiscale CNN for MI Recognition
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.0140931
Duan Li, Peisen Liu, Yongquan Xia
The brain-computer interface (BCI) based on motor imagery (MI) is a promising technology aimed at assisting individuals with motor impairments in regaining their motor abilities by capturing brain signals during specific tasks. However, non-invasive electroencephalogram (EEG) signals collected using EEG caps often contain large numbers of artifacts. Automatically and effectively removing these artifacts while preserving task-related brain components is a key issue for MI decoding. Additionally, multi-channel EEG signals encompass temporal, frequency, and spatial domain features. Although deep learning has achieved good results in extracting features from and decoding motor imagery EEG (MI-EEG) signals, obtaining a high-performance MI network that optimally matches feature extraction with the classification algorithm remains a challenging issue. In this study, we propose a scheme that combines a novel automatic artifact removal strategy with a spatial attention-based multiscale CNN (SA-MSCNN). This work obtained independent component analysis (ICA) weights from the first subject in the dataset and used K-means clustering to determine the best feature combination, which was then applied to the other subjects for artifact removal. Additionally, this work designed an SA-MSCNN that includes multiscale convolution modules capable of extracting information from multiple frequency bands, spatial attention modules that weight spatial information, and separable convolution modules that reduce feature information. The performance of the proposed model was validated using a real-world public dataset, the BCI Competition IV dataset 2a, on which the method achieved an average accuracy of 79.83%. Ablation experiments demonstrate the effectiveness of the proposed artifact removal method and the SA-MSCNN network, and the results are compared with outstanding models and state-of-the-art (SOTA) studies. The results confirm the effectiveness of the proposed method and provide a theoretical and experimental foundation for the development of new MI-BCI systems, which can help people with disabilities regain their independence and improve their quality of life.
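The artifact-removal idea (ICA decomposition, selection of artifact components, reconstruction of the cleaned EEG) is sketched below. In the paper the components to drop come from K-means clustering of ICA weights learned on the first subject; here they are simply passed in as an assumption, and the 22-channel random data are placeholders for real EEG.

```python
# Decompose multi-channel EEG with ICA, zero out artifact components,
# and reconstruct the cleaned signal.
import numpy as np
from sklearn.decomposition import FastICA

def remove_components(eeg: np.ndarray, drop: list) -> np.ndarray:
    """eeg: (n_samples, n_channels) array; returns the cleaned signal."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)       # (n_samples, n_components)
    sources[:, drop] = 0.0                 # drop components judged to be artifacts
    return ica.inverse_transform(sources)  # back to channel space

eeg = np.random.randn(1000, 22)            # 22-channel toy EEG segment
clean = remove_components(eeg, drop=[0, 3])
print(clean.shape)                         # (1000, 22)
```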
Citations: 0
Corpus Generation to Develop Amharic Morphological Segmenter
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.01409116
Terefe Feyisa, Seble Hailu
A morphological segmenter is an important component of Amharic natural language processing systems. Despite this fact, Amharic lacks a large morphologically segmented corpus, and a large corpus is often a requirement for developing neural network-based language technologies. This paper presents an alternative method for generating a large morph-segmented corpus for the Amharic language. First, a relatively small (138,400 words) morphologically annotated Amharic seed corpus is manually prepared; the annotation makes it possible to identify the prefixes, stem, and suffixes of a given word. Second, a supervised approach is used to create a conditional random field-based seed model on the seed corpus. Applying the seed model to a large set of unsegmented raw Amharic words for prediction, a large corpus of 3,777,283 segmented words is automatically generated. Third, the newly generated corpus is used to train an Amharic morphological segmenter based on a supervised neural sequence-to-sequence (seq2seq) approach with character embeddings. Using the seq2seq method, an F-score of 98.65% was measured. The results agree with previous efforts for the Arabic language. The work presented here has profound implications for future studies of Ethiopian language technologies and may one day help solve the problem of the digital divide between resource-rich and under-resourced languages.
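A toy sketch of the character-tagging view behind such a CRF seed model is given below, using sklearn-crfsuite: each character of a word is labelled as prefix (P), stem (S), or suffix (F), and a CRF is fit on simple character features. The English stand-in words and their tags are placeholders, not real Amharic annotations, and the feature set is an assumption.

```python
# Character-level segmentation as sequence labelling with a CRF.
import sklearn_crfsuite

def char_features(word: str, i: int) -> dict:
    return {
        "char": word[i],
        "prev": word[i - 1] if i > 0 else "<bos>",
        "next": word[i + 1] if i < len(word) - 1 else "<eos>",
        "pos_from_start": i,
        "pos_from_end": len(word) - 1 - i,
    }

train_words = ["unhappiness", "replayed"]             # stand-ins for Amharic words
train_tags = [list("PPSSSSSFFFF"), list("PPSSSSFF")]  # P=prefix, S=stem, F=suffix
X = [[char_features(w, i) for i in range(len(w))] for w in train_words]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict([[char_features("unplayed", i) for i in range(len("unplayed"))]]))
```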
Citations: 0
Improved Model for Smoke Detection Based on Concentration Features using YOLOv7tiny
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.01409114
Yuanpan ZHENG, Liwei Niu, Xinxin GAN, Hui WANG, Boyang XU, Zhenyu WANG
Smoke is often present in the early stages of a fire, but detecting low-concentration smoke and small targets at these early stages can be challenging. This paper proposes an improved smoke detection algorithm that leverages smoke concentration characteristics using YOLOv7tiny. The improved algorithm consists of the following components: 1) utilizing dark channel prior theory to extract smoke concentration characteristics and using the synthesized αRGB image as an input feature to enhance the features of sparse smoke; 2) designing a light-BiFPN multi-scale feature fusion structure to improve the detection performance for small-target smoke; 3) using depthwise separable convolution to replace the original standard convolution and reduce the number of model parameters. Experimental results on a self-made dataset show that the improved algorithm performs better in detecting sparse smoke and small-target smoke, with mAP@0.5 and recall reaching 94.03% and 95.62% respectively, and the detection speed increasing to 118.78 frames/s. Moreover, the number of model parameters decreases to 4.97M. The improved algorithm demonstrates superior performance in detecting sparse and small smoke in the early stages of a fire.
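A minimal sketch of the dark channel prior computation referenced in step 1 is shown below; the patch size and file name are illustrative assumptions, and the αRGB synthesis and YOLOv7tiny detector are not reproduced.

```python
# Dark channel prior: per-pixel minimum over color channels and a local patch.
import cv2
import numpy as np

def dark_channel(image_bgr: np.ndarray, patch: int = 15) -> np.ndarray:
    min_rgb = image_bgr.min(axis=2)                               # min over B, G, R
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)                             # local minimum filter

img = cv2.imread("frame.jpg")                                     # placeholder path
concentration_map = dark_channel(img)                             # brighter ~ denser smoke/haze
print(concentration_map.shape)
```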
Citations: 0
Machine Learning for Smart Cities: A Comprehensive Review of Applications and Opportunities
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.01409104
Xiaoning Dou, Weijing Chen, Lei Zhu, Yingmei Bai, Yan Li, Xiaoxiao Wu
The smart city concept originated a few years ago as a combination of ideas about how information and communication technologies can improve urban life. With the advent of the digital revolution, many cities globally are investing heavily in designing and implementing smart city solutions and projects. Machine Learning (ML) has evolved into a powerful tool within the smart city sector, enabling efficient resource management, improved infrastructure, and enhanced urban services. This paper discusses the diverse ML algorithms and their potential applications in smart cities, including Artificial Intelligence (AI) and Intelligent Transportation Systems (ITS). The key challenges, opportunities, and directions for adopting ML to make cities smarter and more sustainable are outlined.
Citations: 0
AIRA-ML: Auto Insurance Risk Assessment-Machine Learning Model using Resampling Methods
Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2023-01-01 DOI: 10.14569/ijacsa.2023.0140966
Ahmed Shawky Elbhrawy, Mohamed A. Belal, Mohamed Sameh Hassanein
Predicting underwriting risk has become a major challenge due to the imbalanced datasets in the field. A real-world imbalanced dataset with 12 variables and 30,144 cases is used in this work, where most cases were classified as "accepting the insurance request" and only a small percentage as "refusing insurance". This work developed 55 machine learning (ML) models to predict whether or not to renew policies. The models were developed using the original dataset and four data-level resampling approaches: random oversampling, SMOTE, random undersampling, and hybrid methods, combined with 11 ML algorithms to address the issue of imbalanced data (11 ML algorithms × (4 resampling techniques + the unbalanced dataset) = 55 ML models). Seven classifier efficiency measures were used to evaluate the 55 models developed with the 11 ML algorithms: logistic regression (LR), random forest (RF), artificial neural network (ANN), multilayer perceptron (MLP), support vector machine (SVM), naive Bayes (NB), decision tree (DT), XGBoost, k-nearest neighbors (KNN), stochastic gradient boosting (SGB), and AdaBoost. The seven classifier efficiency measures are accuracy, sensitivity, specificity, AUC, precision, F1-measure, and kappa. The CRISP-DM methodology is utilized to ensure that the study is conducted in a rigorous and systematic manner. Additionally, RapidMiner software was used to apply the algorithms and analyze the data, which highlighted the potential of ML to improve the accuracy of risk assessment in insurance underwriting. The results showed that all ML classifiers became more effective when using resampling strategies, with hybrid resampling methods improving the performance of machine learning models on imbalanced data, reaching an accuracy of 0.9967 and a kappa statistic of 0.992 for the RF classifier.
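The data-level resampling step can be illustrated with the imbalanced-learn package, as in the hedged sketch below; the synthetic data merely stand in for the underwriting dataset, and a SMOTE+Tomek combination is used as one example of the "hybrid methods".

```python
# Compare class counts before and after several resampling strategies.
from collections import Counter
from imblearn.over_sampling import SMOTE, RandomOverSampler
from imblearn.combine import SMOTETomek
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=12,
                           weights=[0.95, 0.05], random_state=0)
print("original:", Counter(y))

for sampler in (RandomOverSampler(random_state=0), SMOTE(random_state=0),
                SMOTETomek(random_state=0)):
    X_res, y_res = sampler.fit_resample(X, y)        # rebalance the training data
    print(type(sampler).__name__, Counter(y_res))
```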
Citations: 0