
Latest Publications in Applied Computer Systems

Empirical Analysis of Supervised and Unsupervised Machine Learning Algorithms with Aspect-Based Sentiment Analysis
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-06-01 DOI: 10.2478/acss-2023-0012
Satwinder Singh, Harpreet Kaur, Rubal Kanozia, Gurpreet Kaur
Abstract Machine-learning-based sentiment analysis is an interdisciplinary approach to opinion mining, particularly in media and communication research. Despite their different backgrounds, researchers have collaborated to test, train and retest machine learning approaches to collect, analyse and draw meaningful insights from large datasets. This research classifies micro-blog texts (tweets) into positive and negative responses about a particular phenomenon. The study also demonstrates the process of compiling a corpus for sentiment review, cleaning the body of text to make it meaningful, detecting people's emotions about it, and interpreting the findings. To date, public sentiment after the abrogation of Article 370 has not been studied, which adds novelty to this scientific study. The dataset collected from Twitter comprises 66.7 % positive tweets and 34.3 % negative tweets about the abrogation of Article 370. Experimental testing reveals that the proposed methodology is considerably more effective than previously proposed ones. This study focuses on comparing unsupervised lexicon-based models (TextBlob, AFINN, VADER Sentiment) and supervised machine learning models (KNN, SVM, Random Forest and Naïve Bayes) for sentiment analysis. This is the first study of cyber public opinion on the abrogation of Article 370. The authors collected Twitter data of more than 200 000 (2 lakh) tweets; after cleaning, 29 732 tweets were selected for analysis. Among the supervised models, Random Forest performs best, and among the unsupervised models, TextBlob achieves the highest accuracy, at 99 % and 88 %, respectively. The performance of the proposed supervised machine learning models also surpasses the results of a recent 2023 study on sentiment analysis.
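The unsupervised lexicon-based scoring that tools like AFINN, TextBlob and VADER perform can be illustrated with a minimal sketch. The tiny word-score lexicon below is a hypothetical stand-in for the full AFINN word list (which scores several thousand words), not the lists those libraries actually ship:

```python
# Minimal AFINN-style lexicon scorer (toy lexicon for illustration).
# A tweet is labelled positive if the summed word scores are > 0,
# negative if < 0, and neutral otherwise.
TOY_LEXICON = {
    "good": 3, "great": 3, "support": 2, "welcome": 2,
    "bad": -3, "unfair": -2, "protest": -1, "wrong": -2,
}

def score_tweet(text):
    # Sum the scores of known words; unknown words score 0.
    return sum(TOY_LEXICON.get(w, 0) for w in text.lower().split())

def label(text):
    s = score_tweet(text)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"

print(label("we welcome this great decision"))   # positive
print(label("this is unfair and wrong"))         # negative
```

Supervised models such as Random Forest instead learn these weights from labelled tweets rather than reading them from a fixed lexicon.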
Citations: 0
Predicting COVID-19 Cases on a Large Chest X-Ray Dataset Using Modified Pre-trained CNN Architectures
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-06-01 DOI: 10.2478/acss-2023-0005
Abdulkadir Karac
Abstract The coronavirus is a virus that spreads very quickly and has therefore had highly destructive effects in many areas worldwide. Because X-ray imaging is an easily accessible, fast, and inexpensive method, it is widely used worldwide to diagnose COVID-19. This study detects COVID-19 from X-ray images using pre-trained VGG16, VGG19, InceptionV3, and ResNet50 CNN architectures and modified versions of these architectures, in which the fully connected layers of the pre-trained networks have been reorganized. These architectures were trained on binary and three-class datasets to reveal their classification performance. The dataset was collected from four different sources and consists of 594 COVID-19, 1345 viral pneumonia, and 1341 normal X-ray images. Models were built with the TensorFlow and Keras libraries in Python. Preprocessing was performed on the dataset by applying resizing, normalization, and one-hot encoding. Model performance was evaluated with 5-fold cross-validation against many metrics, such as recall, specificity, accuracy, precision, F1-score, the confusion matrix, and ROC analysis. The highest classification performance was obtained by the modified VGG19 model, with 99.84 % accuracy for binary classification (COVID-19 vs. normal), and by the modified VGG16 model, with 98.26 % accuracy for three-class classification (COVID-19 vs. pneumonia vs. normal). These models achieve a higher accuracy rate than other studies in the literature. In addition, the number of COVID-19 X-ray images in the dataset used in this study is approximately twice that of other studies. Since the data were obtained from different sources, they are irregular and non-standardised; despite this, it is noteworthy that higher classification performance was achieved than in previous studies. The modified VGG16 and VGG19 models (available at github.com/akaraci/LargeDatasetCovid19) can be used as an auxiliary tool for detecting COVID-19 in healthcare organizations facing a shortage of specialists.
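The preprocessing steps named above (normalization and one-hot encoding of the class labels) can be sketched without the full TensorFlow pipeline; the class names below are assumptions for illustration, not the authors' exact label strings:

```python
# Sketch of the preprocessing described above: one-hot encode class
# labels and min-max normalize 8-bit grey levels into [0, 1].
CLASSES = ["covid", "pneumonia", "normal"]   # assumed label names

def one_hot(label):
    # Three-class labels become length-3 indicator vectors.
    vec = [0] * len(CLASSES)
    vec[CLASSES.index(label)] = 1
    return vec

def normalize(pixels):
    # Map 0..255 intensities into [0, 1] for network input.
    return [p / 255.0 for p in pixels]

print(one_hot("pneumonia"))     # [0, 1, 0]
print(normalize([0, 255]))      # [0.0, 1.0]
```

In a Keras workflow these correspond to `to_categorical` on the labels and a rescaling step on the image tensors.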
Citations: 0
UW Deep SLAM-CNN Assisted Underwater SLAM
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-06-01 DOI: 10.2478/acss-2023-0010
Chinthaka Amarasinghe, A. Ratnaweera, Sanjeeva Maitripala
Abstract Underwater simultaneous localization and mapping (SLAM) poses significant challenges for modern visual SLAM systems. The integration of deep learning networks within computer vision offers promising potential for addressing these difficulties. Our research draws inspiration from deep learning approaches applied to interest point detection and matching, single-image depth prediction, and underwater image enhancement. In response, we propose 3D-Net, a deep-learning-assisted network designed to tackle these three tasks simultaneously. The network consists of three branches, each serving a distinct purpose: interest point detection, descriptor generation, and depth prediction. The interest point detector and descriptor generator can effectively serve as a front end for a classical SLAM system. The predicted depth information is akin to a virtual depth camera, opening up possibilities for various applications. We provide quantitative and qualitative evaluations to illustrate some of these potential uses. The network was trained in several steps, first on in-air datasets and then on generated underwater datasets. Further, the network is integrated into the feature-based SLAM systems ORB-SLAM2 and ORB-SLAM3, providing a comprehensive assessment of its effectiveness for underwater navigation.
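How a learned detector/descriptor branch feeds a classical SLAM front end can be illustrated with mutual nearest-neighbour matching of descriptors between two frames, a common front-end step. This is an illustrative sketch with tiny 2-D toy descriptors, not the 3D-Net code:

```python
import math

def dist(a, b):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mutual_nn_matches(desc_a, desc_b):
    """Keep pairs (i, j) where j is i's nearest neighbour in frame B
    AND i is j's nearest neighbour in frame A (cross-check matching)."""
    nn_ab = [min(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
             for da in desc_a]
    nn_ba = [min(range(len(desc_a)), key=lambda i: dist(db, desc_a[i]))
             for db in desc_b]
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

frame1 = [(0.0, 1.0), (5.0, 5.0), (9.0, 0.0)]
frame2 = [(5.1, 5.0), (0.1, 1.1), (9.2, 0.1)]
print(mutual_nn_matches(frame1, frame2))  # [(0, 1), (1, 0), (2, 2)]
```

The resulting correspondences are what a pose estimator in a system like ORB-SLAM2 consumes; real descriptors are simply much higher-dimensional.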
Citations: 0
Recognition and 3D Visualization of Human Body Parts and Bone Areas Using CT Images
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-06-01 DOI: 10.2478/acss-2023-0007
H. T. Nguyen, My N. Nguyen, Bang Anh Nguyen, Linh Chi Nguyen, Linh Duong Phung
Abstract The advent of medical imaging has significantly assisted disease diagnosis and treatment. This study introduces a framework for detecting several human body parts in computed tomography (CT) images formatted as DICOM files. In addition, the method can highlight the bone areas inside CT images and transform 2D slices into a visual 3D model to illustrate the structure of human body parts. Firstly, we leveraged shallow convolutional neural networks to classify body parts and detect bone areas in each part. Then, Grad-CAM was applied to highlight the bone areas. Finally, the Insight and Visualization libraries were utilized to visualize the slices of a body part in 3D. As a result, the classifiers achieved 98 % in F1-score in the classification of human body parts on a CT image dataset comprising 1234 slices capturing body parts from a woman for the training phase and 1245 images from a man for testing. In addition, distinguishing between bone and non-bone images reaches 97 % in F1-score on the dataset generated by setting a threshold value to reveal bone areas in CT images. Moreover, the Grad-CAM-based approach can provide clear, accurate visualizations of the segmented bones in the image. Also, we successfully converted 2D slice images of a body part into a lively 3D model that provides a more intuitive view from any angle. The proposed approach is expected to provide an interesting visual tool for supporting doctors in medical-image-based disease diagnosis.
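The thresholding step used to reveal bone areas in CT slices can be sketched as below. The threshold value here (a Hounsfield-style intensity cut-off, exploiting the fact that bone is much brighter than soft tissue) is an assumption for illustration, not the value used by the authors:

```python
def bone_mask(slice_2d, threshold=300):
    """Binary mask of 'bone' pixels: intensities at or above the
    (assumed) threshold. Input is a 2D list of CT intensities."""
    return [[1 if v >= threshold else 0 for v in row] for row in slice_2d]

ct = [
    [10,  40, 900],
    [20, 450, 800],
    [ 0,  30,  50],
]
mask = bone_mask(ct)
print(mask)  # [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

Masks like this provide the bone/non-bone labels against which the Grad-CAM highlights can then be compared.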
Citations: 0
Inquisitive Genetic-Based Wolf Optimization for Load Balancing in Cloud Computing
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-06-01 DOI: 10.2478/acss-2023-0017
Suman Sansanwal, Nitin Jain
Abstract The cloud remains an active and dominant player in the field of information technology. To meet the rapidly growing demand for computational processing and storage resources, cloud providers deploy efficient data centres globally, comprising thousands of IT servers. Because of their tremendous energy and resource utilization, a reliable cloud platform must necessarily be optimized. Effective load balancing is a great option for overcoming these issues. However, load balancing difficulties, such as increased computational complexity, the chance of losing client data during task rescheduling, and heavy memory consumption on the host and the new VM (virtual machine), need appropriate optimization. Hence, the study aims to create a newly developed IG-WA (Inquisitive Genetic–Wolf Optimization) framework that meritoriously detects the optimal virtual machine in an environment. For this purpose, the system utilises the GWO (Grey Wolf Optimization) method with an evolutionary mechanism to achieve a proper compromise between exploitation and exploration, thereby accelerating convergence and improving accuracy. Furthermore, a fitness function evaluated with an inquisitive genetic algorithm adds value to the overall efficacy. Performance evaluation shows that the proposed IG-WA system outperforms existing approaches in terms of energy consumption, execution time and cost, makespan, CPU utilization, and memory utilization. Further, the system attains more comprehensive and better results compared to state-of-the-art methods.
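The core GWO position update that IG-WA builds on can be sketched on a toy cost function. This is the generic textbook GWO (wolves move toward the three best solutions found so far: alpha, beta, delta), minimizing a stand-in "load imbalance" cost; it is not the authors' implementation and omits the inquisitive-genetic fitness component:

```python
import random

def gwo_minimize(cost, dim=2, wolves=12, iters=60, lo=-10.0, hi=10.0, seed=1):
    """Plain Grey Wolf Optimization over a box-bounded search space."""
    rng = random.Random(seed)
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=cost)
        # Copy the three leaders so in-place updates don't corrupt them.
        alpha, beta, delta = (p[:] for p in pack[:3])
        a = 2.0 - 2.0 * t / iters          # exploration factor decays to 0
        for w in range(wolves):
            for d in range(dim):
                new = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    new += leader[d] - A * abs(C * leader[d] - pack[w][d])
                pack[w][d] = min(hi, max(lo, new / 3.0))   # average of 3 pulls
    return min(pack, key=cost)

# Toy cost: squared distance from an ideal balanced allocation at the origin.
best = gwo_minimize(lambda x: sum(v * v for v in x))
print(best)
```

In a load-balancing setting the cost would instead score a candidate task-to-VM assignment by makespan, energy, or utilization.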
Citations: 0
Who are Smart Home Users and What do they Want? – Insights from an International Survey
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-06-01 DOI: 10.2478/acss-2023-0011
Ashkan Yaldaie, J. Porras, O. Drögehorn
Abstract Any set of devices for controlling home appliances that link to a common network and may be controlled independently or remotely is typically referred to as smart home technology. Smart homes and home automation are no longer completely unknown to people; smart devices and sensors are part of daily life in the 21st century. Among other benefits, home automation devices make it possible to manage home appliances and monitor resource usage and security. It is essential to find practical information about smart home users and possible use cases. The current survey covers the benefits and challenges of smart home usage for users. The study presents the results of information collected from different countries, with participants from a variety of age groups and occupations. The questionnaire, containing both qualitative and quantitative questions, was distributed through internet channels such as blog posts and social network groups. Furthermore, to generate the survey questions, we conducted a literature review to gain a better understanding of the subject and related work. The research provides a better foundation for future smart home development. As a result of this survey-based study, in addition to identifying the desirable home automation features, we discovered how much money users are ready to spend to automate their homes. Connecting the favourite smart home features to their users, and the amount of money they are ready to spend on them, can provide a bigger picture for the smart home industry as a whole and be particularly beneficial for developers and start-ups.
Citations: 0
Detection and Classification of Banana Leaf Disease Using Novel Segmentation and Ensemble Machine Learning Approach
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-06-01 DOI: 10.2478/acss-2023-0009
Vandana Chaudhari, M. Patil
Abstract Plant diseases are a primary hazard to crop productivity, which impacts food security and decreases the profitability of farmers. Consequently, identification of plant diseases becomes a crucial task. Taking the right nurturing measures to remediate these diseases in the early stages can drastically help fend off the reduction in productivity and profit. Providing an intelligent and automated solution therefore becomes a necessity, and this can be achieved with the help of machine learning techniques. The process involves a number of steps, such as image acquisition and image pre-processing using filtering and contrast enhancement techniques. Image segmentation, a crucial part of a disease detection system, is done by applying a genetic algorithm, with colour and texture features extracted using a local binary pattern. The novelty of this approach lies in applying the genetic algorithm for image segmentation and combining the propositions of all the learning classifiers with an ensemble method to compute the results, thereby exploiting the strongest features of each classifier. System accuracy is evaluated using precision, recall, and accuracy measures. Analysis of the results clearly shows that the ensemble model delivers very good accuracy, over 92 %, compared to the individual SVM, Naïve Bayes, and KNN classifiers.
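The ensemble step, combining the propositions of the individual classifiers, can be sketched as a generic hard-voting scheme. The disease labels and per-classifier outputs below are hypothetical; the abstract does not specify the exact combination rule, so plain majority vote is assumed here:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists into one ensemble prediction.
    predictions: list of equal-length label lists, one per classifier."""
    combined = []
    for labels in zip(*predictions):
        # Most common label among the classifiers wins for this sample.
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

svm_preds = ["sigatoka", "healthy", "bunchy_top"]   # hypothetical outputs
nb_preds  = ["sigatoka", "healthy", "healthy"]
knn_preds = ["healthy",  "healthy", "bunchy_top"]
print(majority_vote([svm_preds, nb_preds, knn_preds]))
# ['sigatoka', 'healthy', 'bunchy_top']
```

Voting lets the ensemble recover from a single classifier's mistake, which is one way the combined model can beat each of SVM, Naïve Bayes and KNN alone.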
{"title":"Detection and Classification of Banana Leaf Disease Using Novel Segmentation and Ensemble Machine Learning Approach","authors":"Vandana Chaudhari, M. Patil","doi":"10.2478/acss-2023-0009","DOIUrl":"https://doi.org/10.2478/acss-2023-0009","url":null,"abstract":"Abstract Plant diseases are a primary hazard to the productiveness of crops, which impacts food protection and decreases the profitability of farmers. Consequently, identification of plant diseases becomes a crucial task. By taking the right nurturing measures to remediate these diseases in the early stages can drastically help in fending off the reduction in productivity/profit. Providing an intelligent and automated solution becomes a necessity. This can be achieved with the help of machine learning techniques. It involves a number of steps like image acquisition, image pre-processing using filtering and contrast enhancement techniques. Image segmentation, which is a crucial part in disease detection system, is done by applying genetic algorithm and the colour, texture features extracted using a local binary pattern. The novelty of this approach is applying the genetic algorithm for image segmentation and combining a set of propositions from all the learning classifiers with an ensemble method and calculating the results. This obeys the optimistic features of all the learning classifiers. System accuracy is evaluated using precision, recall, and accuracy measures. 
After analysing the results, it clearly shows that the ensemble models deliver very good accuracy of over 92 % as compared to an individual SVM, Naïve Bayes, and KNN classifiers.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"19 1","pages":"92 - 99"},"PeriodicalIF":1.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81880880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient Content-Based Image Retrieval System with Two-Tier Hybrid Frameworks
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-12-01 DOI: 10.2478/acss-2022-0018
Fatima Shaheen, R. Raibagkar
Abstract A Content-Based Image Retrieval (CBIR) system is a framework for finding, in huge datasets, images that are similar to a given query image. The main component of a CBIR system is its image retrieval strategy. Many strategies are available, and most rely on extracting a single feature; such single-feature strategies may not be efficient for all types of images. Likewise, retrieval may become inefficient on larger datasets. Hence, this article proposes a system comprising two-stage retrieval with different features at each stage: the first stage performs coarse retrieval and the second performs fine retrieval. The proposed framework is validated on standard benchmark images and compared with existing frameworks. The results are recorded in graphical and numerical form, supporting the efficiency of the proposed system.
Applied Computer Systems 77(1), pp. 166–182, 2022-12-01.
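The coarse-then-fine idea can be sketched in a few lines of NumPy. This is a hypothetical illustration under assumed descriptors (a global colour histogram for the coarse tier and a gradient-magnitude histogram for the fine tier); the abstract does not specify which features the authors actually used at each stage.

```python
import numpy as np

def color_hist(img, bins=8):
    """Coarse descriptor: per-channel intensity histogram, concatenated."""
    h = np.concatenate([np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))[0]
                        for c in range(img.shape[-1])]).astype(float)
    return h / h.sum()

def gradient_hist(img, bins=16):
    """Finer descriptor: histogram of gradient magnitudes (a texture proxy)."""
    gray = img.mean(axis=-1)
    gy, gx = np.gradient(gray)
    h = np.histogram(np.hypot(gx, gy), bins=bins, range=(0.0, 1.0))[0].astype(float)
    return h / h.sum()

def two_tier_search(query, database, k_coarse=5, k_final=2):
    # Stage 1: cheap colour-histogram distance prunes the database
    qc = color_hist(query)
    coarse = sorted(range(len(database)),
                    key=lambda i: np.linalg.norm(color_hist(database[i]) - qc))[:k_coarse]
    # Stage 2: finer texture descriptor re-ranks only the survivors
    qf = gradient_hist(query)
    return sorted(coarse,
                  key=lambda i: np.linalg.norm(gradient_hist(database[i]) - qf))[:k_final]

rng = np.random.default_rng(1)
database = [rng.random((16, 16, 3)) for _ in range(20)]
query = database[7].copy()
# index 7, the exact duplicate of the query, ranks first
print(two_tier_search(query, database))
```

Because the expensive descriptor is computed only for the coarse survivors, the second tier's cost is bounded by `k_coarse` rather than by the database size, which is the efficiency argument behind the two-tier design.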
Citations: 0
Cross-Project Defect Prediction with Metrics Selection and Balancing Approach
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-12-01 DOI: 10.2478/acss-2022-0015
Meetesh Nevendra, Pradeep Singh
Abstract In software development, defects adversely affect quality and cost. Software defect prediction (SDP) is one of the techniques that improve software quality and testing efficiency through early identification of defects (bugs, faults, errors). Consequently, many defect prediction (DP) techniques have been proposed. A DP method mainly utilises historical project data to construct prediction models. SDP performs well within a project as long as an adequate amount of data is accessible to train the models. However, when data for a project are inadequate or limited, researchers mainly use Cross-Project Defect Prediction (CPDP), an alternative that anticipates defects using prediction models built on historical data from other projects. CPDP is challenging because of data-distribution and domain-difference problems. The proposed framework is an effective two-stage approach for CPDP consisting of a model generation phase and a prediction phase. In the model generation phase, a combination of pre-processing steps, including feature selection and class reweighting, is used to improve the initial data quality. Then a fine-tuned, efficient hybrid ensemble based on bagging and boosting is developed, which avoids model over-fitting/under-fitting and helps enhance prediction performance. In the prediction phase, the generated model labels historical data from other projects as defective or clean. The framework is evaluated using 25 software projects obtained from public repositories. The result analysis shows that the proposed model achieves an F1-score of 0.71 ± 0.03, which significantly improves on state-of-the-art approaches by 23 % to 60 %.
Applied Computer Systems 58(1), pp. 137–148, 2022-12-01.
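A hedged sketch of the two-stage recipe, feature selection plus class reweighting in pre-processing followed by a bagging/boosting hybrid combined by soft voting, using scikit-learn on synthetic data. The concrete feature-selection method, reweighting scheme, hyperparameters, and the 25 project datasets are not given in the abstract, so every choice below is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils.class_weight import compute_sample_weight

# Imbalanced synthetic "project": roughly 15 % defective modules
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           weights=[0.85, 0.15], random_state=0)

# Bagging + boosting hybrid, combined by soft (probability-averaging) voting
hybrid = VotingClassifier(
    estimators=[("bag", BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                                          n_estimators=25, random_state=0)),
                ("boost", GradientBoostingClassifier(n_estimators=50,
                                                     random_state=0))],
    voting="soft",
)

# Stage 1: metrics (feature) selection, then the hybrid ensemble
model = Pipeline([("select", SelectKBest(f_classif, k=10)), ("clf", hybrid)])

# Class reweighting: up-weight the rare defective class during fitting
w = compute_sample_weight("balanced", y)
model.fit(X, y, clf__sample_weight=w)
print("training F1:", round(f1_score(y, model.predict(X)), 3))
```

The `clf__sample_weight` fit parameter is scikit-learn's standard way to route per-sample weights through a `Pipeline` to the final estimator; here it plays the role of the balancing step, while `SelectKBest` stands in for the metrics-selection step.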
Citations: 0
Aspect-based Sentiment Analysis and Location Detection for Arabic Language Tweets
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-12-01 DOI: 10.2478/acss-2022-0013
N. Alshammari, Amal Almansour
Abstract This research examines the accuracy of current models for Arabic text sentiment classification, including traditional machine learning and deep learning algorithms. The main aim is to detect the opinions and emotions expressed in tweets by telecom companies' customers. Three supervised machine learning algorithms, Logistic Regression (LR), Support Vector Machine (SVM), and Random Forest (RF), and one deep learning algorithm, Convolutional Neural Network (CNN), were applied to classify the sentiment of 1098 unique Arabic textual tweets. The results show that the deep learning CNN using word embeddings achieved the highest performance in terms of accuracy, with an F1 score of 0.81. Furthermore, in the aspect classification task, the results reveal that combining Part-of-Speech (POS) features with the deep learning CNN algorithm was efficient, reaching 75 % accuracy on a dataset of 1277 tweets. Additionally, this study added the task of extracting geographical location information from the tweet content. The location detection model achieved precision values of 0.6 and 0.89 for Point of Interest (POI) and city (CIT), respectively.
Applied Computer Systems 105(1), pp. 119–127, 2022-12-01.
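For the three classical baselines (LR, SVM, RF), the workflow can be sketched with scikit-learn. The snippet below is a toy illustration on invented English stand-ins for the telecom tweets; the actual study used 1098 Arabic tweets, Arabic-specific pre-processing, and a CNN with word embeddings, none of which are reproduced here.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the telecom customer tweets; 1 = positive, 0 = negative
tweets = [
    "great service thank you",
    "network is very slow today",
    "love the new data offer",
    "worst customer support ever",
    "fast and reliable connection",
    "another billing error this month",
]
labels = [1, 0, 1, 0, 1, 0]

# Fit each baseline as a TF-IDF + classifier pipeline and report training accuracy
for name, clf in [("LR", LogisticRegression()),
                  ("SVM", LinearSVC()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(tweets, labels)
    print(name, "training accuracy:", model.score(tweets, labels))
```

The same pipeline shape (vectoriser followed by classifier) is what makes head-to-head comparisons of LR, SVM, and RF straightforward; only the final estimator changes between runs.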
Citations: 1