
Journal of Computer Science: Latest Publications

A Model of Cyber Extremists' Rhetorical Structure Towards Protecting Critical Infrastructure
Pub Date : 2024-06-01 DOI: 10.3844/jcssp.2024.610.627
Osman Khairunnisa, J. Zanariah, Ahmad Rabiah
Much research at present focuses on the ways in which organizations secure their networks and information in the supply chain, ignoring the ways in which organizations construct and understand the core of cybersecurity risks. Cybersecurity work concentrates on defense mechanisms such as anti-virus software, malware protection, and firewalls, and on securing networks and applications. More research aimed at understanding extremist activity should be conducted by exploring the extremist corpus. This is a promising strategy because the web is overloaded with conversations and information, as dependence on technology has skyrocketed among the public and among extremists themselves. The objectives of this study were to identify the types of rhetoric in cyber extremists' communication, analyze how cyber extremists use rhetoric to appeal to their audience, identify the stylistic devices they employ, and produce a model of cyber extremists' rhetorical structure. To that end, a new approach to studying the rhetoric of cyber extremists was designed: a combination of Norreklit's methodology, Neo-Aristotelian criticism, and ideological criticism, which together were deemed able to pry out the hidden literacy of the extremists. In this study, the type of rhetoric that dominated the cyber extremists' communication was pathos, and the pathos in the extremists' postings centered on negative feelings such as sadness, anger, and hatred. Using Neo-Aristotelian criticism, the stylistic devices used by the extremists were identified, such as metonymy, simile, and metaphor. Metonyms used included 'Jihad', 'Mujahidin', 'Ansar', and 'Muhajirin'; the metonym 'Penyembah', which referred to the opponents' obsessive materialistic behavior, appeared repeatedly in the extremists' postings. Similes such as 'Thagut' and 'terrorist' were used by the extremists as direct comparisons. Meanwhile, the metaphor of death was used consistently by the extremists, which can be read as a technique for frightening their opponents. Using ideological criticism, the persona projected by most of the extremists was that of a peaceful ideologist and kind rhetor, conveyed through cool colors such as blue and green and nature-themed designs as blog backgrounds. All the phrases were gathered, analyzed, and integrated to ascertain patterns based on the research methodologies and develop a model. A model of cyber extremists' rhetorical structure was developed and established for the protection of critical infrastructure, to make it easier for any party, including authorities, experts, and the public, to identify possible extremist styles as red flags during cyber communication such as social media communication.
Citations: 0
A Holistic Approach to Security, Availability and Reliability in Fog Computing
Pub Date : 2024-06-01 DOI: 10.3844/jcssp.2024.641.648
Abdulrahman Alshehri, H. Alshareef, Samah Alhazmi, M. Almasri, Maha Helal
Cloud computing has become popular in recent years due to the considerable flexibility it provides in terms of availability and affordability and the reliability of different software and services for remote users. Fog computing has also gained considerable attention in recent years from the research community. Fog computing adds a layer between cloud users and the cloud infrastructure that stores frequently used data in order to reduce the latency that can result from using cloud computing alone. It also provides easy access and management mechanisms for devices located at the edge of the cloud, which leads to better performance when compared with cloud computing. Fog computing does, however, pose certain challenges related to security, such as data breaches; availability, such as dealing with connectivity interruptions; and the reliability of fog resources and services. This study proposes a lightweight system that adopts the fog computing paradigm and addresses several of these challenges, for instance by enhancing the security of the whole system through validating nodes that join the fog layer before they serve end users. In addition, the proposed system provides better availability and reliability for fog computing and its associated services by capturing and tracking the progress of tasks and resuming them once an interruption is detected. Experimental results validate the feasibility of the proposed system in terms of its enhanced security capabilities and time cost. This is achieved by using several security techniques that allow only approved devices to join the fog layer. The results also demonstrate the capability to complete tasks even if an interruption is detected, by resuming the remainder of the task through another fog node. The proposed solution is distinctive in that it provides a simple mechanism for implementation in real-world applications, especially in crowded places or when user mobility is high, and it can be extended in several ways to address other challenges related to fog computing.
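As a rough illustration of the two mechanisms described above, the sketch below pairs pre-admission node validation with checkpointed, resumable task execution. It is a minimal sketch under assumed conventions: the shared secret, the HMAC token scheme, and names such as `validate_node` and `run_task` are illustrative inventions, not the authors' implementation.

```python
import hashlib
import hmac
import json
import os

SHARED_SECRET = b"fog-layer-secret"  # assumed pre-shared key for node admission

def node_token(node_id: str) -> str:
    """Token a device presents when it asks to join the fog layer."""
    return hmac.new(SHARED_SECRET, node_id.encode(), hashlib.sha256).hexdigest()

def validate_node(node_id: str, token: str) -> bool:
    """Only nodes holding a valid token are admitted (security aspect)."""
    return hmac.compare_digest(node_token(node_id), token)

def process(item):
    print("processed", item)  # placeholder for the real fog-node work

def run_task(items, checkpoint_file="progress.json"):
    """Process items while persisting progress, so another fog node can resume
    the remainder of the task after an interruption (availability/reliability)."""
    done = 0
    if os.path.exists(checkpoint_file):
        with open(checkpoint_file) as f:
            done = json.load(f)["done"]
    for i in range(done, len(items)):
        process(items[i])
        with open(checkpoint_file, "w") as f:
            json.dump({"done": i + 1}, f)

if __name__ == "__main__":
    node_id = "edge-sensor-07"
    token = node_token(node_id)
    if validate_node(node_id, token):
        run_task(["t1", "t2", "t3"])
```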
Citations: 0
Enhancing Semantic Web Retrieval Through Ontology-Driven Feature Extraction: A Novel Proposition
Pub Date : 2024-05-01 DOI: 10.3844/jcssp.2024.487.494
Meer Hazar Khan, Muhammad Imran Sharif, Mehwish Mehmood, Fernaz Narin Nur, Md Palash Uddin, Zahid Akhtar, Kamran Siddique, Sadia Waheed Awan
Web images represent unstructured data sets, which often leads to challenges when users try to locate specific images via text-based searches on the web. Such difficulties stem from different factors, e
Citations: 0
Machine Learning Approaches for the Prediction of Gas Turbine Transients
Pub Date : 2024-05-01 DOI: 10.3844/jcssp.2024.495.510
Arnaud Nguembang Fadja, Giuseppe Cota, Francesco Bertasi, Fabrizio Riguzzi, E. Losi, L. Manservigi, M. Venturini, G. Bechini
Gas Turbine (GT) emergency shutdowns can interrupt energy production and may also reduce the lifespan of a turbine. In order to remain competitive in the market, it is necessary to improve the reliability and availability of GTs by developing predictive maintenance systems that can predict the future condition of a GT within a certain time horizon. Predicting such situations not only helps operators take corrective measures to avoid service unavailability but also eases the maintenance process and considerably reduces maintenance costs. Huge amounts of sensor data are collected from GTs, making monitoring impossible for human operators even with the help of computers. Machine learning techniques can provide support for handling large amounts of sensor data and for building decision models that predict future GT conditions. This paper presents an application of machine learning based on decision trees and k-nearest neighbors for predicting the rotational speed of gas turbines, with the aim of distinguishing steady states (e.g., GT operation under normal conditions) from transients (e.g., a GT trip or shutdown). The different steps of a machine learning pipeline, from data extraction to model testing, are implemented and analyzed. Experiments apply decision trees, extremely randomized trees, and k-nearest neighbors to sensor data collected from GTs located in different countries. The trained models were able to predict steady states and transients with more than 93% accuracy. This research advances predictive maintenance methods and suggests exploring advanced machine learning algorithms, real-time data integration, and explainable AI techniques to improve the understanding of gas turbine behavior and to develop more adaptable maintenance systems for industrial applications.
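A minimal sketch of the classification step follows, training the three model families named in the abstract on synthetic rotational-speed windows. The feature construction, the synthetic data, and the "trip-like" speed drop are assumptions made for the example; they do not reproduce the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def speed_window_features(is_transient):
    """Summarize a synthetic 60-sample rotational-speed window with simple statistics."""
    speed = 3000 + rng.normal(0, 5, 60)      # steady operation with sensor noise
    if is_transient:
        speed += np.linspace(0, -800, 60)    # ramp-down, e.g., a trip or shutdown
    return [speed.mean(), speed.std(), speed[-1] - speed[0]]

y = rng.integers(0, 2, 500)                  # 0 = steady state, 1 = transient
X = np.array([speed_window_features(label) for label in y])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "extremely randomized trees": ExtraTreesClassifier(n_estimators=100, random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```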
Citations: 0
Framework for the Adaptive Learning of Higher Education Students in Virtual Classes in Peru Using CRISP-DM and Machine Learning
Pub Date : 2024-05-01 DOI: 10.3844/jcssp.2024.522.534
Maryori Bautista, Sebastian Alfaro, Lenis Wong
During the COVID-19 pandemic, virtual education played a significant role around the world. In post-pandemic Peru, higher education institutions did not entirely abandon the online education modality. However, this virtual education system maintains a traditional teaching-learning model in which all students receive the same content and are expected to learn in the same way; as a result, it has not been effective in meeting students' individual needs, leading to poor performance in many cases. For this reason, a framework is proposed for the adaptive learning of higher education students in virtual classes using the Cross-Industry Standard Process for Data Mining (CRISP-DM) and Machine Learning (ML) in order to recommend individualized learning materials. This framework is made up of four stages: (i) Analysis of student aspects, (ii) analysis of the Learning Methodology (LM), (iii) ML development, and (iv) integration of the LM and ML models. Stage (i) evaluates the student-related factors to be considered when adapting learning content; stage (ii) evaluates which LM is more effective in a virtual environment; in stage (iii), four ML algorithms are implemented following the CRISP-DM methodology; and in stage (iv), the best ML model is integrated with the LM in a virtual class. Two experiments were carried out to compare the traditional teaching methodology (experiment 1) and the proposed framework (experiment 2) with a sample of 68 students. The results showed that the framework was more effective in promoting progress and academic performance, obtaining an Improvement Percentage (IP) of 39.72%, calculated from the difference between the average grades of the tests taken at the beginning and at the end of each experiment.
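The snippet below illustrates two of the steps described above under stated assumptions: stage (iii), where several candidate classifiers are compared and the best one is kept (the four algorithms shown are stand-ins, since the abstract does not name them), and the Improvement Percentage, computed here as the difference between hypothetical post-test and pre-test grade averages.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stage (iii): compare candidate classifiers on placeholder student data and keep the best.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best_model = max(scores, key=scores.get)
print(scores, "-> best:", best_model)

# Improvement Percentage: difference between end-of-experiment and start-of-experiment
# grade averages (the grades below are invented for the example).
pre_test = np.array([52.0, 58.0, 61.0, 55.0])
post_test = np.array([88.0, 95.0, 97.0, 92.0])
improvement_percentage = post_test.mean() - pre_test.mean()
print(f"Improvement Percentage: {improvement_percentage:.2f}")
```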
Citations: 0
Papers Mentioning Things Board: A Systematic Mapping Study
Pub Date : 2024-05-01 DOI: 10.3844/jcssp.2024.574.584
Paolino Di Felice, Gaetanino Paolone
Citations: 0
Enhanced Postoperative Brain MRI Segmentation with Automated Skull Removal and Resection Cavity Analysis
Pub Date : 2024-05-01 DOI: 10.3844/jcssp.2024.585.593
Sobha Xavier P., Sathish P. K., Raju G.
Brain tumors present a significant medical challenge, often necessitating surgical intervention. In postoperative brain MRI, the primary focus is the resection cavity, the void that remains in the brain after tumor removal surgery. Precise segmentation of this resection cavity is crucial for a comprehensive assessment of surgical efficacy, aiding healthcare professionals in evaluating the success of tumor removal. Automatically segmenting surgical cavities in postoperative brain MRI images is a complex task due to challenges such as image artifacts, tissue reorganization, and variations in appearance. Existing state-of-the-art techniques, mainly based on Convolutional Neural Networks (CNNs) and particularly U-Net models, encounter difficulties when handling these complexities. The intricate nature of these images, coupled with limited annotated data, highlights the need for advanced automated segmentation models that can accurately assess resection cavities and improve patient care. In this context, this study introduces a two-stage architecture for resection cavity segmentation featuring two models. The first is an automatic skull removal model that separates brain tissue from the skull before the image is passed to the cavity segmentation model. The second is an automated postoperative resection cavity segmentation model customized for resected brain areas. The proposed segmentation model is an enhanced U-Net with a pre-trained VGG16 backbone. Trained on publicly available postoperative datasets and preprocessed by the proposed skull removal model to improve precision and accuracy, the segmentation model achieves a Dice coefficient of 0.96, surpassing state-of-the-art techniques such as ResUNet, Attention U-Net, U-Net++, and U-Net.
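For reference, the Dice coefficient used to report the result above measures the overlap between a predicted cavity mask and the ground-truth mask. A minimal NumPy version, assuming binary masks, looks like this:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 2D masks: the prediction covers most of the true cavity.
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:5] = 1
print(round(dice_coefficient(pred, truth), 3))   # 2*12 / (12+16) ≈ 0.857
```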
Citations: 0
Non-Hodgkin Lymphoma Risk Grading Through the Pathological Data by Using the Optimized Convolutional Lymphnet Model
Pub Date : 2024-05-01 DOI: 10.3844/jcssp.2024.511.521
Sivaranjini Nagarajan, Gomathi Muthuswamy
Diagnosing Non-Hodgkin Lymphoma (NHL) is difficult and often requires specialised training and expertise, extensive morphological investigation and, in certain cases, costly immunohistological and genetic techniques. Computational approaches enabling morphology-based decision making are necessary for bridging the existing gaps. Histopathological images can be accurately classified using deep learning approaches; however, data on NHL subtyping is limited, and there is a particular lack of data on the categorization of lymph nodes affected by Non-Hodgkin Lymphoma. In this study, image preprocessing was first performed using a maximal Kalman filter, which helps remove noise; data augmentation was applied to enlarge the dataset; and the lymph nodal area was then segmented using the sequential fuzzy YOLACT algorithm. Finally, a Convolutional Lymphnet model was trained and tuned to classify and grade the tumor level against tumor-free reference lymph nodes, using a grey wolf optimizer to select the fitness parameters and optimize the model for identifying the patient risk score. The overall experimentation was carried out in a Python framework. The findings demonstrate that the recommended strategy works better than state-of-the-art techniques, with excellent detection and risk score prediction accuracy.
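Grey wolf optimization, used here to tune the fitness parameters, is a population-based metaheuristic in which candidate solutions move toward the three best solutions found so far (alpha, beta, and delta). The following is a generic, self-contained sketch minimizing a toy objective; the objective, bounds, and parameter values are placeholders, not the authors' configuration.

```python
import numpy as np

def grey_wolf_optimize(objective, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimize `objective` over the box `bounds` with the standard GWO update:
    each wolf moves toward the three best wolves (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))

    for t in range(n_iter):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # copies of the 3 best
        a = 2 - 2 * t / n_iter                                # decreases linearly 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)

    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)], fitness.min()

# Toy use: locate the minimum of a shifted sphere function in 2D.
best, value = grey_wolf_optimize(lambda x: np.sum((x - 1.5) ** 2),
                                 bounds=[(-5, 5), (-5, 5)])
print(best, value)
```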
Citations: 0
Application of Unimodular Hill Cipher and RSA Methods to Text Encryption Algorithms Using Python
Pub Date : 2024-05-01 DOI: 10.3844/jcssp.2024.548.563
Samsul Arifin, Dwi Wijonarko, Suwarno, Edwin K Sijabat
Text encryption is one of the techniques used to maintain the confidentiality of information in digital communications. In this study, we propose applying a combination of the unimodular Hill Cipher and RSA methods to a text encryption algorithm using the Python programming language. The unimodular Hill Cipher method uses a unimodular matrix to replace text characters with encrypted characters, while RSA (Rivest-Shamir-Adleman) is a public key encryption algorithm that relies on the properties of modular arithmetic. The purpose of this research is to combine the strengths of the two methods and produce a more secure text encryption system. The unimodular Hill Cipher provides the advantage of randomizing text characters through modular matrix operations, while RSA provides a high level of security through the use of public and private key pairs. In this study, we explain in detail the basic theory and algorithms of the unimodular Hill Cipher and RSA, and describe the implementation steps of both methods in the Python programming language. The text data used in this study went through a preprocessing stage before being encrypted. We also analyze the encryption results using several statistical methods to measure how closely the original text and the encrypted text are related. In a comparative analysis with previous work, the use of the unimodular Hill Cipher and RSA methods in Python provides additional insight into the performance and security level of both. The experimental results show that the combination of the unimodular Hill Cipher and RSA methods can produce a higher level of security in text encryption. It is hoped that this research can contribute to the development of more effective and secure text encryption algorithms.
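As a small illustration of the Hill cipher half of the scheme, the sketch below encrypts and decrypts text with a 2×2 unimodular key (determinant 1), so the inverse key is an integer matrix and decryption needs no modular inverse of the determinant. The specific key matrix and the A-Z/mod-26 alphabet are assumptions for the example; the paper's actual key handling and its RSA stage are not reproduced here.

```python
import numpy as np

# Hypothetical 2x2 unimodular key: det = 1*7 - 2*3 = 1, so its inverse is integer.
KEY = np.array([[1, 2],
                [3, 7]])
KEY_INV = np.array([[7, -2],
                    [-3, 1]])     # exact integer inverse of KEY

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
M = len(ALPHABET)                 # arithmetic is done modulo 26

def _to_blocks(text):
    """Keep A-Z characters, pad to the 2-character block size, return numeric blocks."""
    text = "".join(c for c in text.upper() if c in ALPHABET)
    if len(text) % 2:
        text += "X"
    return np.array([ALPHABET.index(c) for c in text]).reshape(-1, 2)

def hill_encrypt(plaintext):
    cipher = (_to_blocks(plaintext) @ KEY.T) % M
    return "".join(ALPHABET[n] for n in cipher.ravel())

def hill_decrypt(ciphertext):
    plain = (_to_blocks(ciphertext) @ KEY_INV.T) % M
    return "".join(ALPHABET[n] for n in plain.ravel())

if __name__ == "__main__":
    c = hill_encrypt("TEXT ENCRYPTION")
    print(c, "->", hill_decrypt(c))
```

In a combined scheme, the Hill-ciphered text would then typically be wrapped with RSA key handling via an established library rather than a hand-rolled implementation.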
Citations: 0
Sentence Classification Using Attention Model for E-Commerce Product Review
Pub Date : 2024-05-01 DOI: 10.3844/jcssp.2024.535.547
Nagendra N, Chandra J
This study addresses the importance of aspect extraction in text classification, particularly in the e-commerce sector. E-commerce platforms generate vast amounts of textual data, such as comments, product descriptions, and customer reviews, which contain valuable information about various aspects of products or services. Aspect extraction involves identifying and classifying the individual traits or aspects mentioned in textual reviews in order to understand customer opinions, improve products, and enhance the customer experience. The role of product reviews in e-commerce is discussed, emphasizing their value in aiding customers' purchase decisions and guiding businesses in product stocking and marketing strategies. Reviews are essential for boosting sales potential, maintaining a good reputation, and promoting brand recognition; customers extensively research product reviews from different sources before purchasing, making them vital user-generated content for e-commerce businesses. The current work provides an efficient and novel model for sentence classification, since the available automated text classification models cannot categorize the data into the sixteen distinct classes considered here. The technologies applied in this work include TF-IDF, n-grams, CNN, linear SVM, random forest, and Naïve Bayes, with significant results. The best performance in classifying a given sentence into one of the sixteen categories is achieved with the proposed attention-based Neural Attention Model (ABNAM), which reaches the highest accuracy at 97%. The study presents ABNAM as a novel classification model with the highest number of class categorizations.
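Of the baseline technologies listed (TF-IDF, n-grams, linear SVM), a minimal review-sentence classifier can be assembled as below. The sample sentences and aspect labels are placeholders (the paper's data uses sixteen classes), and this is a baseline sketch, not the ABNAM model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny placeholder training set: review sentences labelled with an aspect class.
sentences = [
    "Battery drains within two hours of use",
    "Delivery arrived three days late",
    "The screen is bright and sharp",
    "Customer support never answered my emails",
]
labels = ["battery", "shipping", "display", "support"]

# TF-IDF over word uni- and bi-grams feeding a linear SVM.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
classifier.fit(sentences, labels)
print(classifier.predict(["The package took forever to arrive"]))
```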
Citations: 0