
Latest publications from PeerJ Computer Science

Unveiling the capabilities of vision transformers in sperm morphology analysis: a comparative evaluation.
IF 2.5 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-10 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.3173
Abdulsamet Aktas, Gorkem Serbes, Hamza Osman Ilhan

Traditional sperm morphology assessment relies on manual visual inspection or semi-automated computer-aided sperm analysis (CASA) systems, which often require labor-intensive pre-processing steps. While recent machine learning approaches, particularly convolutional neural networks (CNNs), have improved feature extraction from sperm images, achieving a fully automated and highly accurate system remains challenging due to the complexity of sperm morphology and the need for specialized image adjustments. This study presents a novel, end-to-end automated sperm morphology analysis framework based on vision transformers (ViTs), which processes raw sperm images from two benchmark datasets, Human Sperm Head Morphology (HuSHeM) and Sperm Morphology Image Data Set (SMIDS), without manual pre-processing. We conducted an extensive hyperparameter optimization study across eight ViT variants, evaluating learning rates, optimization algorithms, and data augmentation scales. Our experiments demonstrated that data augmentation significantly enhances ViT performance by improving generalization, particularly in limited-data scenarios. A comparative analysis of CNNs, hybrid models, and pure ViTs revealed that transformer-based architectures consistently outperform traditional methods. The BEiT_Base model achieved state-of-the-art accuracies of 92.5% (SMIDS) and 93.52% (HuSHeM), surpassing prior CNN-based approaches by 1.63% and 1.42%, respectively. Statistical significance (p < 0.05, t-test) confirmed these improvements. Visualization techniques (attention maps, Grad-CAM) further validated ViTs' superior ability to capture long-range spatial dependencies and discriminative morphological features, such as head shape and tail integrity. Our work bridges a critical gap in reproductive medicine by delivering a scalable, fully automated solution that eliminates manual intervention while improving diagnostic accuracy.
These findings underscore the potential of transformer-based models in clinical andrology, with implications for broader applications in biomedical image analysis.
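Every ViT variant compared above shares the same front end: the input image is cut into fixed-size patches that become the token sequence for self-attention. A minimal, dependency-free sketch of that patch-splitting step (the toy image and patch size are illustrative, not taken from the paper):

```python
def to_patches(image, patch):
    """Split an H x W image (list of rows) into flattened patch vectors,
    the token sequence a vision transformer attends over."""
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    tokens = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            # flatten one patch row-major into a single vector
            tokens.append([image[r + dr][c + dc]
                           for dr in range(patch) for dc in range(patch)])
    return tokens

# a toy 4x4 "image" split into 2x2 patches -> 4 tokens of length 4
img = [[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]]
patches = to_patches(img, 2)
```

With real 224×224 inputs and 16×16 patches this yields 196 tokens, the sequence length typical ViT/BEiT base models attend over.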

Citations: 0
A GAN-based approach to solar radiation prediction: data augmentation and model optimization for Saudi Arabia.
IF 2.5 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-10 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.3189
Abdalla Alameen, Sultan Mesfer Aldossary

Background: Accurate solar radiation prediction is essential for optimizing renewable energy systems but remains challenging due to data scarcity and variability. This study addresses these challenges by employing generative adversarial networks (GANs) to generate high-quality synthetic solar radiation data.

Methods: A novel framework was developed that integrates GAN-generated synthetic data with machine learning and deep learning models, including CNN-LSTM architectures. These models were trained and evaluated using augmented datasets to improve predictive accuracy and adaptability across diverse climatic zones.

Results: Models trained on augmented datasets exhibited significant improvements, with root mean square error (RMSE) reduced by 15.2% and mean absolute error (MAE) decreased by 19.9%. The framework effectively bridged data gaps and enhanced model generalization, enabling applicability across various climatic regions in Saudi Arabia.

Conclusions: The proposed framework facilitates practical applications such as photovoltaic system optimization, grid stability enhancement, and resource planning. By aligning with Saudi Arabia's Vision 2030 and global renewable energy objectives, this study presents a scalable and adaptable approach to advancing renewable energy systems. However, challenges such as computational complexity and hyperparameter sensitivity warrant further investigation to secure a robust pathway toward sustainable energy futures worldwide.
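The error reductions quoted in the Results (RMSE down 15.2%, MAE down 19.9%) follow directly from the standard metric definitions. A small sketch with invented toy values (not the study's data) showing how such reductions would be computed:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error over paired observations/predictions."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error over paired observations/predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def pct_reduction(before, after):
    """Relative improvement, as reported for the augmented models."""
    return 100.0 * (before - after) / before

obs  = [500.0, 650.0, 700.0, 300.0]   # toy irradiance values, W/m^2
base = [540.0, 600.0, 760.0, 330.0]   # baseline model predictions
aug  = [520.0, 630.0, 730.0, 315.0]   # model trained with GAN-augmented data
```

Comparing `rmse(obs, base)` against `rmse(obs, aug)` via `pct_reduction` is how a "15.2% lower RMSE" claim would be derived.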

Citations: 0
SignVLM: a pre-trained large video model for sign language recognition.
IF 2.5 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-09 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.3112
Hamzah Luqman

Sign language recognition (SLR) plays a vital role in including people with hearing impairments in the community. It facilitates the recognition of sign gestures and converts them into spoken languages. One of the main challenges for developing SLR systems is the lack of annotated datasets. This issue is more noticeable with low-resourced sign languages. To address this issue, we propose a pretrained large vision model, SignVLM, for SLR. This work explores the capability of the contrastive language-image pre-training (CLIP) model for SLR. This model is used to extract spatial features from the sign video frames, while a Transformer decoder is used for temporal learning. The proposed model has been evaluated on four different sign languages using the KArSL, WLASL, LSA64, and AUTSL datasets. Different evaluation settings have been followed in this work, including zero-shot and few-shot learning. The proposed model outperformed other models on the KArSL, WLASL, and LSA64 datasets and achieved comparable performance on the AUTSL dataset. The obtained results demonstrate the generalization of the proposed model to new datasets with few samples. The code and data are available at https://github.com/Hamzah-Luqman/signVLM.
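The abstract does not spell out how frames are drawn from each sign video before the CLIP-style image encoder sees them, but a common way to build the fixed-length frame sequence such a pipeline needs is uniform temporal sampling. A minimal sketch (the function name and parameters are illustrative assumptions, not the paper's code):

```python
def sample_frames(num_frames, num_samples):
    """Uniformly sample frame indices from a video so a fixed-length
    sequence can be fed to a per-frame image encoder."""
    if num_frames <= num_samples:
        # short clip: keep every frame
        return list(range(num_frames))
    step = num_frames / num_samples
    return [int(i * step) for i in range(num_samples)]
```

For a 100-frame clip sampled down to 4 frames this picks indices 0, 25, 50, and 75, spreading the samples evenly across the gesture.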

Citations: 0
Design of tennis auxiliary teaching system based on reinforcement learning and multi-feature fusion.
IF 2.5 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-09 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.3188
Shiquan Zhang, Chaohong Gan

To accurately identify and evaluate tennis movements, a tennis auxiliary teaching system based on reinforcement learning and multi-feature fusion was designed, combining deep learning methods with tennis-specific domain knowledge. The algorithm first extracts human skeletal joint points from a video sequence using a human pose-recognition algorithm. Reinforcement learning is then used to extract and optimize the keyframes. Next, genetic algorithms are used to fuse the different features. The results demonstrate that the proposed tennis action recognition method achieves a classification accuracy of 98.45% across four types of tennis subactions. Its generalization ability exceeds that of graph convolutional network-based techniques such as AGCN and ST-GCN. Finally, after action categorization, the proposed scoring method based on dynamic time warping can deliver accurate, real-time assessment ratings for the corresponding actions, reducing the workload of tennis instructors and raising the standard of tennis instruction.
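Dynamic time warping, the alignment technique behind the scoring method described above, matches a student's movement sequence against a reference performed at a different speed. A minimal sketch of the standard DP recurrence (toy 1-D sequences, not the paper's skeletal features):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    the minimum cumulative cost over all monotone alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]
```

Because DTW stretches time, a slower-but-correct swing aligns with the reference at near-zero cost, which is what makes it suitable for tempo-invariant action scoring.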

Citations: 0
A machine learning assistant for detecting fraudulent activities in synchronous online programming exams.
IF 2.5 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-09 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.3159
Francisco Ortin, Alonso Gago, Jose Quiroga, Miguel Garcia

The rapid expansion of online learning has made education more accessible but has also introduced significant challenges in maintaining academic integrity, particularly during online exams. For certain types of exams, students are prohibited from connecting to the Internet to prevent them from accessing unauthorized resources, utilizing generative artificial intelligence tools, or engaging in other forms of cheating. In online exams, however, students must remain connected to the Internet. Most existing online proctoring systems rely on various devices to monitor students' actions and environments during the exam, focusing on tracking physical behavior, such as facial expressions, eye movements, and the presence of unauthorized materials, rather than analyzing the students' work on their computers. This often requires human review to determine whether students are engaging in unauthorized actions. This article presents the development and evaluation of a machine-learning-based assistant that helps instructors detect fraudulent activities in real time during online programming exams. Our system leverages a convolutional neural network (CNN) followed by a recurrent neural network (RNN) and a dense layer to analyze sequences of screenshot frames captured from students' screens during exams. The system achieves an accuracy of 95.18% and an F2-score of 94.2%, prioritizing recall to emphasize detecting cheating instances while minimizing false positives. Notably, data augmentation and class-weight adjustments during training significantly enhanced the model's performance, while transfer learning and alternative loss functions did not provide additional improvements.
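The F2-score quoted above weights recall four times as heavily as precision, which is why it suits a detector that must not miss cheating instances. A minimal sketch of the general F-beta computation from confusion-matrix counts (the toy counts are invented, not the paper's):

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score; beta=2 weights recall higher than precision,
    matching a preference for catching positives over avoiding false alarms."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For a detector with perfect recall but some false positives (say tp=8, fp=2, fn=0), the F2-score exceeds the F1-score, reflecting the recall emphasis the authors chose.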

Citations: 0
Enhancing fruit freshness classification with adaptive knowledge distillation and global response normalization in convolutional networks.
IF 2.5 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-09 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.3198
Semih Demirel, Oktay Yıldız

The assessment of fruit freshness is crucial for ensuring food quality and reducing waste in agricultural production. In this study, we propose Global Response Normalization and Gaussian Error Linear Unit Enhanced Network (GGENet), a novel deep learning architecture that leverages adaptive knowledge distillation (AKD) and global response normalization (GRN) to classify fruits as fresh or rotten. Our model comprises two variants: GGENet-Teacher (GGENet-T), serving as the teacher model, and GGENet-Student (GGENet-S), functioning as the student model. By transferring attention maps from the teacher to the student model, we achieve efficient adaptive knowledge distillation, enhancing the performance of the lighter student model. Experimental results demonstrate that the GGENet with adaptive knowledge distillation (GGENet-AKD) achieves a competitive accuracy of 0.9818, an F1-score of 0.9818, and an area under the curve (AUC) score of 0.9891. The proposed method significantly contributes to reducing food waste and enhancing quality control in agriculture by facilitating early detection of rotting fruits.
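GGENet transfers attention maps from teacher to student; the closely related and more common logit-distillation mechanism, shown here purely for illustration (it is not the paper's AKD loss), softens both networks' outputs with a temperature and penalizes their divergence:

```python
import math

def soft_targets(logits, T):
    """Temperature-softened class probabilities; higher T flattens the
    distribution so the student sees the teacher's 'dark knowledge'."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    """KL(p || q): the divergence term driving the student toward
    the teacher's softened distribution."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = soft_targets([4.0, 1.0], T=4.0)   # softened teacher output (toy logits)
student = soft_targets([3.0, 1.5], T=4.0)   # softened student output (toy logits)
loss = kl_div(teacher, student)             # shrinks as the student matches the teacher
```

The same idea carries over to attention-map transfer: replace the class distributions with normalized attention maps and penalize their mismatch.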

Citations: 0
Mitigating inappropriate concepts in text-to-image generation with attention-guided image editing.
IF 2.5 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-09 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.3170
Jiyeon Oh, Jae-Yeop Jeong, Yeong-Gi Hong, Jin-Woo Jeong

Text-to-image generative models have recently attracted significant attention for their ability to produce diverse images from given text prompts. However, concerns have arisen regarding the occasional generation of inappropriate, offensive, or explicit content. To address this, we propose a simple yet effective method that leverages attention maps to selectively suppress inappropriate concepts during image generation. Unlike existing approaches that often sacrifice the original image context or demand substantial computational overhead, our method preserves image integrity without requiring additional model training or extensive engineering effort. To evaluate our method, we conducted comprehensive quantitative assessments of inappropriateness reduction, text fidelity, image consistency, and computational cost, alongside an online human perceptual study involving 20 participants.
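The core idea, selectively removing attention mass assigned to an unwanted concept, can be sketched in a few lines. This toy version masks flagged token positions in a single attention row and renormalizes; the real method operates on the cross-attention maps of a generative model, so the names and shapes here are illustrative assumptions:

```python
def suppress_tokens(attn_weights, blocked, renorm=True):
    """Zero out attention mass on blocked token positions and renormalize,
    so remaining tokens keep a valid probability distribution."""
    out = [0.0 if i in blocked else w for i, w in enumerate(attn_weights)]
    if renorm:
        s = sum(out)
        if s > 0:
            out = [w / s for w in out]
    return out

# one attention row over three prompt tokens; token 1 is flagged as inappropriate
masked = suppress_tokens([0.5, 0.3, 0.2], blocked={1})
```

Renormalizing redistributes the suppressed mass onto the remaining tokens, which is what lets generation continue without the flagged concept while the rest of the prompt still guides the image.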

Citations: 0
A novel deep learning based approach with hyperparameter selection using grey wolf optimization for leukemia classification and hematologic malignancy detection.
IF 2.5 CAS Tier 4, Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-09-08 eCollection Date: 2025-01-01 DOI: 10.7717/peerj-cs.3160
Shams Ur Rehman, Robertas Damaševicius, Hassan Al Sukhni, Abeer Aljohani, Ameer Hamza, Deema Mohammed Alsekait, Diaa Salama AbdElminaam

Traditional diagnosis of leukemia, a blood cancer, relies on visual assessment of white blood cells in microscopic peripheral blood smears and is therefore subjective, laborious, and error-prone. This study proposes a new automated deep learning-based framework for accurately classifying leukemia. A novel lightweight algorithm based on the hyperbolic sine function was designed for contrast enhancement. Next, we proposed a customized convolutional neural network (CNN) model based on a parallel inverted dual self-attention network (PIDSAN4), and a tiny16 Vision Transformer (ViT) was employed. The hyperparameters were tuned using grey wolf optimization and then used to train the models. Experiments were carried out on a publicly available leukemia microscopic image dataset, and the proposed model achieved 0.913 accuracy, 0.892 sensitivity, 0.925 specificity, 0.883 precision, 0.894 F-measure, and 0.901 G-mean. The results were compared with state-of-the-art pre-trained models, showing that the proposed model improves accuracy.
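The abstract does not give the exact enhancement formula, so the following is only a plausible sketch of sinh-based contrast enhancement: intensities in [0, 1] are centred at 0.5 and passed through a normalized hyperbolic sine, whose slope grows away from the centre, stretching shadows and highlights (the `gain` parameter is a made-up knob, not a value from the paper).

```python
import numpy as np

def sinh_contrast(img, gain=2.5):
    # img: intensities in [0, 1]. Centre at 0.5, stretch with sinh,
    # renormalize back to [0, 1]. Because sinh's slope (cosh) grows away
    # from the centre, shadows and highlights are stretched while
    # midtones are compressed, raising overall contrast.
    x = np.asarray(img, dtype=np.float64)
    out = np.sinh(gain * (x - 0.5)) / np.sinh(gain * 0.5)  # maps to [-1, 1]
    return (out + 1.0) / 2.0

levels = np.linspace(0.0, 1.0, 5)
enhanced = sinh_contrast(levels)
```

The mapping is monotonic and fixes the endpoints (0 stays 0, 1 stays 1, 0.5 stays 0.5), so it changes contrast without clipping intensities, which is a desirable property for a lightweight preprocessing step.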

{"title":"A novel deep learning based approach with hyperparameter selection using grey wolf optimization for leukemia classification and hematologic malignancy detection.","authors":"Shams Ur Rehman, Robertas Damaševicius, Hassan Al Sukhni, Abeer Aljohani, Ameer Hamza, Deema Mohammed Alsekait, Diaa Salama AbdElminaam","doi":"10.7717/peerj-cs.3160","DOIUrl":"10.7717/peerj-cs.3160","url":null,"abstract":"<p><p>Traditional diagnostic methods of leukemia, a blood cancer disease, are based on visual assessment of white cells in microscopic peripheral blood smears, and as a result, they are arbitrary, laborious, and susceptible to errors. This study proposes a new automated deep learning-based framework for accurately classifying leukemia cancer. A novel lightweight algorithm based on the hyperbolic sin function has been designed for contrast enhancement. In the next step, we proposed a customized convolutional neural network (CNN) model based on a parallel inverted dual self-attention network (PIDSAN4), and a tiny16 Vision Transformer (ViT) has been employed. The hyperparameters were tuned using the grey wolf optimization and then used to train the models. The experiment is carried out on a publicly available leukemia microscopic images dataset, and the proposed model achieved 0.913 accuracy, 0.892 sensitivity, 0.925 specificity, 0.883 precision, 0.894 F-measure, and 0.901 G-mean. 
The results were compared with state-of-the-art pre-trained models, showing that the proposed model improved accuracy.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"11 ","pages":"e3160"},"PeriodicalIF":2.5,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12453765/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145132618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
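Grey wolf optimization, which the study uses for hyperparameter tuning, can be sketched in a few lines: candidate solutions (wolves) move toward the three current best pack members (alpha, beta, delta) while an exploration coefficient decays from 2 to 0. This is a generic minimal GWO minimizing a toy sphere function, not the paper's tuning setup; in the paper the objective would be validation loss over hyperparameter vectors.

```python
import numpy as np

def gwo(objective, dim, lo, hi, n_wolves=12, n_iter=60, seed=0):
    # Minimal grey wolf optimizer (minimization). Wolves move toward the
    # three fittest pack members; 'a' decays 2 -> 0 to shift from
    # exploration to exploitation.
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 * (1 - t / n_iter)
        new = np.zeros_like(wolves)
        for leader in (alpha, beta, delta):
            r1 = rng.random(wolves.shape)
            r2 = rng.random(wolves.shape)
            A = 2 * a * r1 - a           # step size, shrinks over time
            C = 2 * r2                   # random emphasis on the leader
            D = np.abs(C * leader - wolves)
            new += leader - A * D        # candidate pulled toward this leader
        wolves = np.clip(new / 3.0, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)], float(fitness.min())

# Toy objective standing in for "validation loss of the tuned model".
best, best_val = gwo(lambda w: float(np.sum(w ** 2)), dim=3, lo=-5.0, hi=5.0)
```

Averaging the pulls toward the three leaders (rather than following only the single best wolf) is what gives GWO its balance between greedy convergence and population diversity.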
Citations: 0
HEMF: an adaptive hierarchical enhanced multi-attention feature fusion framework for cross-scale medical image classification.
IF 2.5 CAS Tier 4, Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-09-08 eCollection Date: 2025-01-01 DOI: 10.7717/peerj-cs.3181
Jingdong He, Qiang Shi, Jun Ma, Dacheng Shi, Tie Min

Medical image classification is essential for contemporary clinical diagnosis and decision support systems. However, medical images generally exhibit similar inter-class features and complex structural patterns, making classification a challenging task. While both local and global features are critical for noise reduction and discriminative pattern extraction in medical images, conventional approaches have limitations. Specifically, convolutional neural networks (CNNs) focus on local feature extraction but lack a comprehensive understanding of global semantics. Conversely, vision transformers (ViTs) can model long-range feature dependencies but may disrupt local features. To address these limitations, we propose Hierarchical Enhanced Multi-attention Feature (HEMF), an adaptive hierarchical enhanced multi-attention feature fusion framework that synergistically extracts and fuses multi-scale local and global features. It comprises two core components: (1) enhanced local and global feature extraction modules that extract multi-scale local and global features in parallel; and (2) a hierarchical enhanced feature fusion module integrating a novel attention mechanism named Mixed Attention (MA) and a novel inverted residual block named Squeezed Inverted Residual Multi-Layer Perceptron (SIRMLP) to effectively fuse multi-scale features. Experimental results demonstrate that, with nearly the fewest parameters among the compared advanced models, HEMF achieves accuracy and F1-scores of 87.34% and 78.89% on the ISIC2018 dataset, 87.03% and 87.02% on the Kvasir dataset, and 82.26% and 82.20% on the COVID-19 CT dataset, which is state-of-the-art performance. Our code is open source and available from https://github.com/Esgjgd/HEMF.
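The parallel local/global extraction idea can be illustrated with a toy NumPy sketch (this is not HEMF's actual MA or SIRMLP module; those details are in the paper and repository): a "local" branch averages each token with its neighbours, a "global" branch applies plain self-attention over all tokens, and a sigmoid gate mixes the two.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_branch(x, k=1):
    # "Local" features: each token averaged with its k neighbours,
    # a crude stand-in for a small convolution.
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[max(0, i - k):i + k + 1].mean(axis=0)
    return out

def global_branch(x):
    # "Global" features: plain self-attention over all tokens, so every
    # token can draw on long-range context.
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def fuse(x):
    l, g = local_branch(x), global_branch(x)
    gate = 1.0 / (1.0 + np.exp(-(l * g).sum(axis=1, keepdims=True)))  # toy gate
    return gate * l + (1.0 - gate) * g

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 4))   # 6 tokens with 4-dim features
fused = fuse(tokens)
```

Running both branches in parallel and gating per token, rather than stacking CNN and attention layers sequentially, is the design choice that lets neither branch overwrite the other's features.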

{"title":"HEMF: an adaptive hierarchical enhanced multi-attention feature fusion framework for cross-scale medical image classification.","authors":"Jingdong He, Qiang Shi, Jun Ma, Dacheng Shi, Tie Min","doi":"10.7717/peerj-cs.3181","DOIUrl":"10.7717/peerj-cs.3181","url":null,"abstract":"<p><p>Medical image classification is essential for contemporary clinical diagnosis and decision support systems. However, medical images generally have similar inter-class features and complex structure patterns, making it a challenging task. While both local and global features are critical for noise reduction and discriminative pattern extraction in medical images, conventional approaches exhibit limitations. Specifically, convolutional neural networks (CNNs) focus on local features extraction but lack a comprehensive understanding of global semantic. Conversely, vision transformers (ViTs) can model long-range feature dependencies but may cause disruption to local features. To address these limitations, we propose Hierarchical Enhanced Multi-attention Feature (HEMF), an adaptive hierarchical enhanced multi-attention feature fusion framework to synergistically extract and fuse multi-scale local and global features. It comprises two core components: (1) the enhanced local and global feature extraction modules to extract multi-scale local and global features in parallel; (2) the hierarchical enhanced feature fusion module integrating a novel attention mechanism named Mixed Attention (MA) and a novel inverted residual block named Squeezed Inverted Residual Multi-Layer Perceptron (SIRMLP) to effectively fuse multi-scale features. Experimental results demonstrate that with nearly minimal model parameters compared to other advanced models, HEMF achieves the accuracy and F1-score of 87.34% and 78.89% on the ISIC2018 dataset, 87.03% and 87.02% on the Kvasir dataset, and 82.26% and 82.20% on the COVID-19 CT dataset, which are the state-of-the-art performance. 
Our code is open source and available from https://github.com/Esgjgd/HEMF.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"11 ","pages":"e3181"},"PeriodicalIF":2.5,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12453837/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145132659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predicting academic performance for students' university: case study from Saint Cloud State University.
IF 2.5 CAS Tier 4, Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-09-08 eCollection Date: 2025-01-01 DOI: 10.7717/peerj-cs.3087
Bilal I Al-Ahmad, Abdullah Alzaqebah, Rami Alkhawaldeh, Ala' M Al-Zoubi, Hsuehi Lo, Adel Ali

Predicting students' performance is one of the essential educational data mining approaches aimed at observing learning outcomes. Predicting grade point average (GPA) helps monitor academic performance and assists advisors in identifying students at risk of failing, changing majors, or dropping out. To enhance prediction performance, this study employs a long short-term memory (LSTM) model using a rich set of academic and demographic features. The dataset, drawn from 29,455 students at Saint Cloud State University (SCSU) over eight years (2016-2024), was carefully preprocessed by eliminating irrelevant and missing data, encoding categorical variables, and normalizing numerical features. Feature importance was determined using a permutation-based method to identify the variables with the greatest impact on term GPA prediction. Furthermore, model hyperparameters, including the number of LSTM layers, units per layer, batch size, learning rate, and activation functions, were fine-tuned through experimental validation with the Adam optimizer and learning rate scheduling. Two experiments were conducted at both the college and department levels. The proposed model outperformed traditional machine learning models such as linear regression (LR), K-nearest neighbor (KNN), decision tree (DT), random forest (RF), and support vector regression (SVR), and surpassed two deep learning models, the recurrent neural network (RNN) and the convolutional neural network (CNN), achieving 9.54 mean absolute percentage error (MAPE), 0.0059 mean absolute error (MAE), 0.0001 root mean square error (RMSE), and an R² score of 99%.
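The permutation-based feature importance mentioned above is easy to sketch: a feature's importance is the increase in error (here MAPE, one of the study's reported metrics) after its column is shuffled, severing its relationship with the target. The three-feature dataset and the predictor that reads only feature 0 are fabricated purely for illustration, not drawn from the SCSU data.

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error, one of the study's reported metrics.
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def permutation_importance(predict, X, y, seed=0):
    # A feature's importance is how much the error grows after its column
    # is permuted, which breaks its link to the target.
    rng = np.random.default_rng(seed)
    base = mape(y, predict(X))
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        perm = rng.permutation(X.shape[0])
        Xp[:, j] = Xp[perm, j]
        scores.append(mape(y, predict(Xp)) - base)
    return np.array(scores)

# Fake data: 200 students, 3 features; the "GPA" target depends on
# feature 0 only, and the toy predictor reads only that feature.
rng = np.random.default_rng(1)
X = rng.uniform(2.0, 4.0, size=(200, 3))
y = X[:, 0].copy()
importances = permutation_importance(lambda M: M[:, 0], X, y)
```

Features the model never uses score zero, while the feature driving the prediction scores high, which is exactly the ranking behaviour the study exploits to select impactful GPA predictors.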
Citations: 0