
PeerJ Computer Science: Latest Publications

Design of a 3D emotion mapping model for visual feature analysis using improved Gaussian mixture models.
IF 3.5 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-20 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2596
Enshi Wang, Fakhri Alam Khan

Given the integration of color emotion space information from multiple feature sources in multimodal recognition systems, effectively fusing this information presents a significant challenge. This article proposes a three-dimensional (3D) color-emotion space visual feature extraction model for multimodal data integration, based on an improved Gaussian mixture model, to address these issues. Unlike traditional methods, which often struggle with redundant information and high model complexity, our approach optimizes feature fusion by employing entropy and visual feature sequences. By integrating machine vision with six activation functions and utilizing multiple aesthetic features, the proposed method achieves strong performance: an emotion mapping accuracy (EMA) of 92.4%, an emotion recognition precision (ERP) of 88.35%, and an emotion recognition F1 score (ERFS) of 96.22%. These improvements over traditional approaches highlight the model's effectiveness in reducing complexity while enhancing emotion recognition accuracy, positioning it as an efficient solution for visual emotion analysis in multimedia applications.
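The core idea of mapping visual features into a 3D emotion space via a Gaussian mixture can be sketched as follows. This is a minimal illustration using a standard scikit-learn GMM, not the paper's improved variant (the entropy-based fusion and aesthetic features are not specified in the abstract); the 3-component posterior responsibilities serve as the 3D coordinates:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def map_to_emotion_space(features, seed=0):
    """Fit a 3-component Gaussian mixture and use the posterior
    responsibilities as coordinates in a 3D emotion space."""
    gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=seed)
    gmm.fit(features)
    return gmm.predict_proba(features)  # shape (n_samples, 3); rows sum to 1

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 8))  # stand-in for 8-D color/aesthetic feature vectors
coords = map_to_emotion_space(feats)
print(coords.shape)  # (200, 3)
```

Each row is a soft assignment over the three mixture components, so the mapped coordinates are directly interpretable as membership degrees.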

Citations: 0
Generative AI and future education: a review, theoretical validation, and authors' perspective on challenges and solutions.
IF 3.5 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-03 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2105
Wali Khan Monib, Atika Qazi, Rosyzie Anna Apong, Mohammad Tazli Azizan, Liyanage De Silva, Hayati Yassin

Generative AI (Gen AI), exemplified by ChatGPT, has recently witnessed a remarkable surge in popularity. This cutting-edge technology demonstrates an exceptional ability to produce human-like responses and engage in natural language conversations guided by context-appropriate prompts. However, its integration into education has become a subject of ongoing debate. This review examines the challenges of using Gen AI tools like ChatGPT in education and offers effective strategies. To retrieve relevant literature, a search of reputable databases was conducted, resulting in the inclusion of twenty-two publications. Using Atlas.ti, the analysis revealed six primary challenges, with plagiarism as the most prevalent issue, closely followed by challenges of responsibility and accountability. Concerns were also raised about privacy, data protection, safety, and security risks, as well as discrimination and bias. Additional challenges concerned the loss of soft skills and the risks of the digital divide. To address these challenges, a number of strategies were identified and critically evaluated for practicality; most proved practical and aligned with ethical and pedagogical theories. Among the prevalent concepts, "ChatGPT" emerged as the most frequent, followed by "AI," "student," "research," and "education," highlighting a growing trend in educational discourse. Moreover, close collaboration was evident among the leading countries, which all form a single cluster led by the United States. This comprehensive review provides implications, recommendations, and future prospects concerning the use of generative AI in education.

Citations: 0
LOGIC: LLM-originated guidance for internal cognitive improvement of small language models in stance detection.
IF 3.5 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-03 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2585
Woojin Lee, Jaewook Lee, Harksoo Kim

Stance detection is a critical task in natural language processing that determines an author's viewpoint toward a specific target, playing a pivotal role in social science research and various applications. Traditional approaches that incorporate Wikipedia-sourced data into small language models (SLMs) to compensate for limited target knowledge often suffer from inconsistencies in article quality and length due to the diverse pool of Wikipedia contributors. To address these limitations, we utilize large language models (LLMs) pretrained on expansive datasets to generate accurate and contextually relevant target knowledge. By providing concise, real-world insights tailored to the stance detection task, this approach surpasses the limitations of Wikipedia-based information. Despite their superior reasoning capabilities, LLMs are computationally intensive and challenging to deploy on smaller devices. To mitigate these drawbacks, we introduce a reasoning distillation methodology that transfers the reasoning capabilities of LLMs to more compact SLMs, enhancing their efficiency while maintaining robust performance. Our stance detection model, LOGIC (LLM-Originated Guidance for Internal Cognitive improvement of small language models in stance detection), is built on the Bidirectional and Auto-Regressive Transformer (BART) and fine-tuned with auxiliary learning tasks, including reasoning distillation. By incorporating LLM-generated target knowledge into the inference process, LOGIC achieves state-of-the-art performance on the VAried Stance Topics (VAST) dataset, outperforming advanced models such as GPT-3.5 Turbo and GPT-4 Turbo in stance detection tasks.
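The distillation step — training a compact student to match a large teacher's softened output distribution — can be sketched with the standard knowledge-distillation loss. This is a generic formulation in NumPy, not the paper's exact auxiliary objective on BART; the example logits are hypothetical three-class stance scores:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional in knowledge distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean() * T * T)

teacher = np.array([[4.0, 1.0, -2.0]])   # teacher strongly favours class 0
aligned = np.array([[3.5, 0.8, -1.5]])   # student that agrees with the teacher
opposed = np.array([[-2.0, 1.0, 4.0]])   # student that disagrees
print(distillation_loss(aligned, teacher) < distillation_loss(opposed, teacher))  # True
```

In training, this loss would be added to the usual cross-entropy on gold stance labels, pulling the student's distribution toward the teacher's reasoning-informed one.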

Citations: 0
Enhancing task execution: a dual-layer approach with multi-queue adaptive priority scheduling.
IF 3.5 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-03 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2531
Mansoor Iqbal, Muhammad Umar Shafiq, Shouzab Khan, Obaidullah, Saad Alahmari, Zahid Ullah

Efficient task execution is critical to optimizing the use of computing resources in process scheduling, and various task scheduling algorithms aim to ensure it. This article introduces an innovative dual-layer scheduling algorithm, Multi-Queue Adaptive Priority Scheduling (MQAPS), for task execution. MQAPS features a dual-layer hierarchy with a ready queue (RQ) and a secondary queue (SQ). New tasks enter the RQ, where they are prioritized, while the SQ holds tasks that have already used computing resources at least once and whose priorities fall below a predefined threshold. The algorithm dynamically calculates the time slice from process priorities to ensure efficient CPU utilization. In the RQ, a task's priority level determines its position, ensuring that important jobs complete on time; conventional methods, in which priority is fixed or no priority parameter is defined, can instead starve low-priority jobs. Simulation results show that MQAPS utilizes CPU resources and time better than traditional round-robin (RR) and multi-level scheduling. MQAPS thus offers a promising scheduling technique with a balanced framework for dynamically adjusting time quantum and priority, demonstrating optimization, fairness, and efficiency in job scheduling.
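The dual-queue idea can be sketched as follows, under assumed details: the `Task` class, the base quantum, and the priority-to-slice formula below are illustrative choices, not the paper's exact parameters. Tasks start in the ready queue, run for a priority-dependent slice, and — once they have run at least once — are demoted to the secondary queue if their priority falls below the threshold:

```python
from collections import deque

class Task:
    def __init__(self, name, priority, burst):
        self.name, self.priority, self.remaining = name, priority, burst

def time_slice(priority, base=4):
    # Higher-priority tasks get a longer slice: one plausible reading of
    # "dynamically calculates the time slice based on process priorities".
    return base + priority

def mqaps(tasks, threshold=2):
    rq = sorted(tasks, key=lambda t: -t.priority)  # ready queue, priority order
    sq = deque()                                   # secondary queue, FIFO
    order = []
    while rq or sq:
        task = rq.pop(0) if rq else sq.popleft()
        task.remaining -= min(time_slice(task.priority), task.remaining)
        order.append(task.name)
        if task.remaining > 0:
            # Demote below-threshold tasks; keep the rest in priority order.
            (sq if task.priority < threshold else rq).append(task)
            if task.priority >= threshold:
                rq.sort(key=lambda t: -t.priority)
    return order

jobs = [Task("A", 3, 10), Task("B", 1, 6), Task("C", 2, 4)]
print(mqaps(jobs))  # ['A', 'A', 'C', 'B', 'B']
```

High-priority A runs first and is re-queued in the RQ, while low-priority B is demoted to the SQ after its first slice yet still finishes, illustrating how the second layer avoids outright starvation.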

Citations: 0
MSR-UNet: enhancing multi-scale and long-range dependencies in medical image segmentation.
IF 3.5 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-03 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2563
Shuai Wang, Lei Liu, Jun Wang, Xinyue Peng, Baosen Liu

Transformer-based technology has attracted widespread attention in medical image segmentation. Due to the diversity of organs, effectively modeling multi-scale information and establishing long-range dependencies between pixels are crucial for successful medical image segmentation. However, most studies rely on a fixed single-scale window for modeling, which ignores the potential impact of window size on performance. This limitation can hinder window-based models' ability to fully explore multi-scale and long-range relationships within medical images. To address this issue, we propose a multi-scale reconfiguration self-attention (MSR-SA) module that accurately models multi-scale information and long-range dependencies in medical images. The MSR-SA module first divides the attention heads into multiple groups, each assigned an ascending dilation rate. These groups are then uniformly split into several non-overlapping local windows. Using dilated sampling, we gather the same number of keys to obtain both long-range and multi-scale information. Finally, dynamic information fusion is achieved by integrating features from the sampling points at corresponding positions across different windows. Based on the MSR-SA module, we propose a multi-scale reconfiguration U-Net (MSR-UNet) framework for medical image segmentation. Experiments on the Synapse and automated cardiac diagnosis challenge (ACDC) datasets show that MSR-UNet can achieve satisfactory segmentation results. The code is available at https://github.com/davidsmithwj/MSR-UNet (DOI: 10.5281/zenodo.13969855).
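The dilated-sampling step — gathering a fixed number of keys per window while the dilation rate widens the receptive field — can be illustrated on a 1-D token sequence. This is a simplified sketch; the actual module operates on 2-D feature maps with grouped attention heads:

```python
import numpy as np

def dilated_window_keys(seq, window=4, dilation=2):
    """For each window start position, gather `window` keys spaced `dilation`
    apart: every head group sees the same number of keys, but the receptive
    field widens as the dilation rate grows."""
    n = len(seq)
    starts = range(0, n - (window - 1) * dilation)
    return np.array([[seq[s + i * dilation] for i in range(window)] for s in starts])

tokens = np.arange(16)
print(dilated_window_keys(tokens, window=4, dilation=1)[0])  # [0 1 2 3]
print(dilated_window_keys(tokens, window=4, dilation=3)[0])  # [0 3 6 9]
```

Both calls return four keys per window, so attention cost stays constant while the dilation-3 group spans a three-times-wider neighborhood — the property that lets different head groups capture multi-scale context simultaneously.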

Citations: 0
On the interpretability of fuzzy knowledge base systems.
IF 3.5 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-03 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2558
Francesco Camastra, Angelo Ciaramella, Giuseppe Salvi, Salvatore Sposato, Antonino Staiano

In recent years, fuzzy rule-based systems have been attracting great interest in interpretable and eXplainable Artificial Intelligence as ante-hoc methods. These systems represent knowledge that humans can easily understand, but since they are not interpretable per se, they must remain simple and understandable, and the rule base must have a compactness property. This article presents an algorithm for minimizing the fuzzy rule base, leveraging rough set theory and a greedy strategy. Reducing fuzzy rules simplifies the rule base, facilitating the construction of interpretable inference systems such as decision support and recommendation systems. Validation and comparison of the proposed methodology using both real and benchmark data yield encouraging results.
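The greedy half of rule-base minimization can be sketched as a set-cover heuristic: repeatedly keep the rule that covers the most still-uncovered examples. This is a simplified stand-in — the paper couples the greedy strategy with rough-set theory — and the rule names and `covers` sets below are hypothetical:

```python
def greedy_rule_reduction(rules, examples):
    """Keep a small subset of rules that together cover all examples.
    `covers` is a hypothetical predicate: the set of examples a rule fires on."""
    uncovered = set(examples)
    kept = []
    while uncovered:
        best = max(rules, key=lambda r: len(uncovered & r["covers"]))
        if not uncovered & best["covers"]:
            break  # remaining examples are covered by no rule
        kept.append(best["name"])
        uncovered -= best["covers"]
    return kept

rules = [
    {"name": "R1", "covers": {1, 2, 3}},
    {"name": "R2", "covers": {3, 4}},
    {"name": "R3", "covers": {4, 5, 6}},
    {"name": "R4", "covers": {2}},
]
print(greedy_rule_reduction(rules, examples=[1, 2, 3, 4, 5, 6]))  # ['R1', 'R3']
```

Here four rules shrink to two while preserving coverage, which is exactly the compactness property that keeps the rule base human-interpretable.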

Citations: 0
An enhanced lightweight T-Net architecture based on convolutional neural network (CNN) for tomato plant leaf disease classification.
IF 3.5 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-02 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2495
Amreen Batool, Jisoo Kim, Sang-Joon Lee, Ji-Hyeok Yang, Yung-Cheol Byun

Tomatoes are a widely cultivated crop globally; according to Food and Agriculture Organization (FAO) statistics, they rank third after potatoes and sweet potatoes. Tomatoes are commonly used in kitchens worldwide. Despite their popularity, tomato crops face challenges from several diseases, which reduce their quality and quantity; the development of tomato-related diseases is therefore a significant problem for global agricultural productivity. Fusarium wilt and bacterial blight are substantial challenges for tomato farming, affecting global economies and food security. Technological breakthroughs are necessary because existing disease detection methods are time-consuming and labor-intensive. We propose the T-Net model as a rapid, accurate approach to automated detection of tomato disease. This novel deep learning model classifies tomato leaf disease using a unique combination of a layered convolutional neural network (CNN) architecture and transfer learning models based on VGG-16, Inception V3, and AlexNet. The T-Net model outperforms earlier methods with a 98.97% accuracy rate, and we demonstrate the effectiveness of our technique through extensive experimentation and comparison with current approaches. This study offers a dependable and understandable method for diagnosing tomato illnesses, marking a substantial development in agricultural technology. The proposed T-Net-based framework helps protect crops by providing farmers with practical knowledge for managing disease. The source code can be accessed from the given link.
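One simple way to combine several transfer-learned backbones is to average their class distributions. The sketch below uses hypothetical logits for illustration, since the abstract does not detail how T-Net actually fuses the VGG-16, Inception V3, and AlexNet components:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_backbones(logits_per_backbone):
    """Average the per-class probabilities of several pretrained backbone
    heads -- one common ensemble strategy for transfer-learned models."""
    probs = np.stack([softmax(l) for l in logits_per_backbone])
    return probs.mean(axis=0)

# Hypothetical logits for 2 leaf images over 3 disease classes.
vgg = np.array([[2.0, 0.5, -1.0], [0.1, 1.8, 0.0]])
inc = np.array([[1.5, 0.2, -0.5], [-0.2, 2.2, 0.3]])
alex = np.array([[1.8, 0.0, -0.8], [0.0, 1.5, -0.1]])
fused = fuse_backbones([vgg, inc, alex])
print(fused.argmax(axis=1))  # predicted disease class per image: [0 1]
```

Averaging calibrated distributions rather than raw logits keeps each backbone's vote bounded, so no single model can dominate the fused prediction.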

Citations: 0
A hybrid model integrating recurrent neural networks and the semi-supervised support vector machine for identification of early student dropout risk.
IF 3.5 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-29 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2572
Huong Nguyen Thi Cam, Aliza Sarlan, Noreen Izza Arshad

Background: Student dropout rates are a major concern for educational institutions because they affect institutional success and efficacy. To help students continue their learning and achieve a better future, the risk of student dropout must be identified early. However, accurately identifying dropout risk in the preliminary stages is challenging given the complexities involved. This research develops an efficient prediction model using machine learning (ML) and deep learning (DL) techniques for identifying student dropouts in both small and large educational datasets.

Methods: A hybrid prediction model, DeepS3VM, is designed by integrating a semi-supervised support vector machine (S3VM) with a recurrent neural network (RNN) to capture sequential patterns in student dropout prediction. In addition, a personalized recommendation system (PRS) is developed to recommend personalized learning paths for students at risk of dropping out. The potential of DeepS3VM is evaluated with respect to various metrics, and the results are compared with existing models such as random forest (RF), decision tree (DT), XGBoost, artificial neural network (ANN), and convolutional neural network (CNN).

Results: The DeepS3VM model demonstrates outstanding accuracy at 92.54%, surpassing the other models compared. This confirms the model's effectiveness in precisely identifying the risk of student dropout. The dataset used for this analysis was obtained from the student management system of a private university in Vietnam and was expanded from an initial 243 records to a total of 100,000 records.
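The semi-supervised component of DeepS3VM rests on exploiting unlabeled records alongside labeled ones. The sketch below illustrates that self-labeling loop with a nearest-centroid stand-in for the SVM so it stays dependency-free; the data, the confidence margin, and all function names are invented for illustration and do not reflect the paper's implementation.

```python
def centroid(points):
    return sum(points) / len(points)

def self_train(labeled, unlabeled, confidence_margin=1.0, rounds=5):
    """labeled: list of (x, y) pairs with y in {0, 1}; unlabeled: list of floats.
    Each round, pseudo-label the unlabeled points the current model is
    confident about (large gap between class distances), then refit."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        c0 = centroid([x for x, y in labeled if y == 0])
        c1 = centroid([x for x, y in labeled if y == 1])
        newly_labeled, rest = [], []
        for x in pool:
            d0, d1 = abs(x - c0), abs(x - c1)
            if abs(d0 - d1) >= confidence_margin:  # confident pseudo-label
                newly_labeled.append((x, 0 if d0 < d1 else 1))
            else:
                rest.append(x)  # still too ambiguous; keep for a later round
        if not newly_labeled:
            break
        labeled += newly_labeled
        pool = rest
    return labeled

result = self_train([(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)],
                    [0.5, 2.0, 8.5])
print(sorted(result))
```

An S3VM follows the same principle but folds the unlabeled points directly into the margin-maximization objective rather than labeling them greedily.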

{"title":"A hybrid model integrating recurrent neural networks and the semi-supervised support vector machine for identification of early student dropout risk.","authors":"Huong Nguyen Thi Cam, Aliza Sarlan, Noreen Izza Arshad","doi":"10.7717/peerj-cs.2572","DOIUrl":"10.7717/peerj-cs.2572","url":null,"abstract":"<p><strong>Background: </strong>Student dropout rates are one of the major concerns of educational institutions because they affect the success and efficacy of them. In order to help students continue their learning and achieve a better future, there is a need to identify the risk of student dropout. However, it is challenging to accurately identify the student dropout risk in the preliminary stages considering the complexities associated with it. This research develops an efficient prediction model using machine learning (ML) and deep learning (DL) techniques for identifying student dropouts in both small and big educational datasets.</p><p><strong>Methods: </strong>A hybrid prediction model DeepS3VM is designed by integrating a Semi-supervised support vector machine (S3VM) model with a recurrent neural network (RNN) to capture sequential patterns in student dropout prediction. In addition, a personalized recommendation system (PRS) is developed to recommend personalized learning paths for students who are at risk of dropping out. The potential of the DeepS3VM is evaluated with respect to various evaluation metrics and the results are compared with various existing models such as Random Forest (RF), decision tree (DT), XGBoost, artificial neural network (ANN) and convolutional neural network (CNN).</p><p><strong>Results: </strong>The DeepS3VM model demonstrates outstanding accuracy at 92.54%, surpassing other current models. This confirms the model's effectiveness in precisely identifying the risk of student dropout. 
The dataset used for this analysis was obtained from the student management system of a private university in Vietnam and generated from an initial 243 records to a total of one hundred thousand records.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"10 ","pages":"e2572"},"PeriodicalIF":3.5,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11623006/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detecting and forecasting cryptojacking attack trends in Internet of Things and wireless sensor networks devices.
IF 3.5 4区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-29 eCollection Date: 2024-01-01 DOI: 10.7717/peerj-cs.2491
Kishor Kumar Reddy C, Vijaya Sindhoori Kaza, Madana Mohana R, Abdulrahman Alamer, Shadab Alam, Mohammed Shuaib, Sultan Basudan, Abdullah Sheneamer

This research addresses the critical issue of cryptojacking attacks, a significant cybersecurity threat where malicious actors covertly exploit computational resources for unauthorized cryptocurrency mining, particularly in wireless sensor networks (WSN) and Internet of Things (IoT) devices. The article proposes an innovative approach that integrates time series analysis with graph neural networks (GNNs) to forecast and detect cryptojacking attack trends within these vulnerable ecosystems. Utilizing the "Cryptojacking Attack Timeseries Dataset," the proposed method emphasizes early detection and predictive insights to anticipate emerging attack patterns. Through rigorous experiments, the model demonstrated high accuracy, with ARIMA achieving up to 99.98% on specific attributes and the GNN model yielding an accuracy of 99.99%. Despite these strengths, the ensemble approach showed a slightly lower overall accuracy of 90.97%. Although its accuracy falls short of the individual models', the ensemble method enhances predictive robustness and adaptability, making it more effective at identifying emerging cryptojacking trends amid varying network conditions. This research significantly contributes to enhancing cybersecurity measures against the evolving threat of cryptojacking in WSN and IoT environments by providing a robust, proactive defence mechanism.
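The time-series side of this pipeline can be reduced to its simplest case for illustration: an AR(1) model y_t = a·y_{t-1} + b fitted by least squares on a toy series of attack counts. A real ARIMA model adds differencing and moving-average terms; the series and function names below are assumptions for the sketch, not the authors' code.

```python
def fit_ar1(series):
    """Least-squares fit of y_t = a * y_{t-1} + b over consecutive pairs."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def forecast_next(series):
    """One-step-ahead forecast from the fitted AR(1) coefficients."""
    a, b = fit_ar1(series)
    return a * series[-1] + b

counts = [10, 12, 15, 18, 22, 27]  # hypothetical hourly cryptojacking alerts
print(round(forecast_next(counts), 2))
```

On this accelerating toy series the forecast continues the upward trend past the last observation, which is the behaviour an early-warning system would act on.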

{"title":"Detecting and forecasting cryptojacking attack trends in Internet of Things and wireless sensor networks devices.","authors":"Kishor Kumar Reddy C, Vijaya Sindhoori Kaza, Madana Mohana R, Abdulrahman Alamer, Shadab Alam, Mohammed Shuaib, Sultan Basudan, Abdullah Sheneamer","doi":"10.7717/peerj-cs.2491","DOIUrl":"10.7717/peerj-cs.2491","url":null,"abstract":"<p><p>This research addresses the critical issue of cryptojacking attacks, a significant cybersecurity threat where malicious actors covertly exploit computational resources for unauthorized cryptocurrency mining, particularly in wireless sensor networks (WSN) and Internet of Things (IoT) devices. The article proposes an innovative approach that integrates time series analysis with graph neural networks (GNNs) to forecast/detect cryptojacking attack trends within these vulnerable ecosystems. Utilizing the \"Cryptojacking Attack Timeseries Dataset,\" the proposed method emphasizes early detection and predictive insights to anticipate emerging attack patterns. Through rigorous experiments, the model demonstrated high accuracy with ARIMA achieving up to 99.98% on specific attributes and the GNN model yielding an accuracy of 99.99%. Despite these strengths, the ensemble approach showed a slightly lower overall accuracy of 90.97%. Despite the reduction in accuracy compared to individual models, the ensemble method enhances predictive robustness and adaptability, making it more effective in identifying emerging cryptojacking trends amidst varying network conditions. 
This research significantly contributes to enhancing cybersecurity measures against the evolving threat of cryptojacking in WSN and IoT environments by providing a robust, proactive defence mechanism.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"10 ","pages":"e2491"},"PeriodicalIF":3.5,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11623100/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EFR-FCOS: enhancing feature reuse for anchor-free object detector.
IF 3.5 4区 计算机科学 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-29 eCollection Date: 2024-01-01 DOI: 10.7717/peerj-cs.2470
Yongwei Liao, Zhenjun Li, Wenlong Feng, Yibin Zhang, Bing Zhou

In this paper, we propose enhancing feature reuse for fully convolutional one-stage object detection (EFR-FCOS), targeting the backbone, neck, and head, the three main components of an object detector. For the backbone, we build a global attention network (GANet) that uses blocks with global attention connections to extract prominent features and acquire global information from feature maps. For the neck, we design an aggregate feature fusion pyramid network (AFF-FPN) that fuses feature maps with different receptive fields, using an attention module to extract aggregated features and reduce information decay during fusion. For the head, we construct a feature reuse head (EnHead) that detects objects with cascaded detection based on refined bounding-box regression, improving the confidence of both classification and regression. Experiments on the COCO dataset show that the proposed approach is broadly applicable and achieves significant object detection performance.
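Refined bounding-box regression of the kind EnHead performs is ultimately scored by intersection over union (IoU) against the ground truth. The sketch below computes IoU for axis-aligned boxes in (x1, y1, x2, y2) pixel form; the example boxes are made up.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes don't overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

pred = (10, 10, 50, 50)  # hypothetical predicted box
gt = (30, 30, 70, 70)    # hypothetical ground-truth box
print(round(iou(pred, gt), 4))  # → 0.1429
```

Cascaded heads re-regress boxes stage by stage so that each stage trains on progressively higher-IoU candidates, which is what "refined bounding-box regression" refers to above.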

{"title":"EFR-FCOS: enhancing feature reuse for anchor-free object detector.","authors":"Yongwei Liao, Zhenjun Li, Wenlong Feng, Yibin Zhang, Bing Zhou","doi":"10.7717/peerj-cs.2470","DOIUrl":"10.7717/peerj-cs.2470","url":null,"abstract":"<p><p>In this paper, we propose enhancing feature reuse for fully convolutional one-stage object detection (EFR-FCOS) to aim at backbone, neck and head, which are three main components of object detection. For the backbone, we build a global attention network (GANet) using the block with global attention connections to extract prominent features and acquire global information from feature maps. For the neck, we design an aggregate feature fusion pyramid network (AFF-FPN) to fuse the information of feature maps with different receptive fields, which uses the attention module to extract aggregated features and reduce the decay of information in process of the feature fusion. For the head, we construct a feature reuse head (EnHead) to detect objects, which adopts the cascade detection by the refined bounding box regression to improve the confidence of the classification and regression. The experiments conducted on the COCO dataset show that the proposed approaches are extensive usability and achieve significant performance for object detection.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"10 ","pages":"e2470"},"PeriodicalIF":3.5,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11623005/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0