
Latest publications in Big data analytics

A Comparative Analysis of VirLock and Bacteriophage ϕ6 through the Lens of Game Theory
Pub Date : 2023-11-06 DOI: 10.3390/analytics2040045
Dimitris Kostadimas, Kalliopi Kastampolidou, Theodore Andronikos
The novelty of this paper lies in its perspective, which underscores the fruitful correlation between biological and computer viruses. In the realm of computer science, the study of theoretical concepts often intersects with practical applications. Computer viruses share many traits with their biological counterparts, and studying their correlation may broaden our perspective and, ultimately, augment our ability to successfully protect our computer systems and data against viruses. Game theory may be an appropriate tool for establishing the link between biological and computer viruses. In this work, we establish correlations between a well-known computer virus, VirLock, and an equally well-studied biological virus, the bacteriophage ϕ6. VirLock is a formidable ransomware that encrypts user files and demands a ransom for data restoration. Drawing a parallel with the bacteriophage ϕ6, we uncover conceptual links, such as shared attributes and behaviors, as well as useful insights. Following this line of thought, we suggest efficient strategies based on a game-theoretic perspective that have the potential to address infections caused by VirLock and other viruses with analogous behavior. Moreover, we propose mathematical formulations that integrate real-world variables, providing a means to gauge virus severity and to design robust defensive strategies and analytics. This interdisciplinary inquiry, fusing game theory, biology, and computer science, advances our understanding of virus behavior and paves the way for the development of effective countermeasures while presenting an alternative viewpoint. Throughout this theoretical exploration, we contribute to the ongoing discourse on computer virus behavior and stimulate new avenues for addressing digital threats. In particular, the formulas and framework developed in this work can facilitate better risk analysis and assessment, and can become useful tools in penetration testing analysis, helping companies and organizations enhance their security.
Citations: 0
Can Oral Grades Predict Final Examination Scores? Case Study in a Higher Education Military Academy
Pub Date : 2023-11-02 DOI: 10.3390/analytics2040044
Antonios Andreatos, Apostolos Leros
This paper investigates the correlation between oral grades and final written examination grades in a higher education military academy. A quantitative, correlational methodology utilizing linear regression analysis is employed. The data consist of undergraduate telecommunications and electronics engineering students’ grades in two courses offered during the fourth year of studies, and span six academic years. Course One covers the period 2017–2022, while for Course Two, period 1 spans 2014–2018 and period 2 spans 2019–2022. In Course One, oral grades are obtained by means of a midterm exam. In Course Two, period 1, 30% of the oral grade comes from homework assignments and lab exercises, while the remaining 70% comes from a midterm exam. In Course Two, period 2, oral grades are the result of various alternative assessment activities. In all cases, the final grade results from a traditional written examination given at the end of the semester. Correlation and predictive models between oral and final grades were examined. The results of the analysis demonstrated that (a) under certain conditions, oral grades based more or less on midterm exams can be good predictors of final examination scores; and (b) oral grades obtained through alternative assessment activities cannot predict final examination scores.
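As a minimal sketch of the linear-regression methodology the abstract describes, the snippet below fits a least-squares line of final-exam grades on oral grades and reports the correlation. The grades are synthetic and invented for illustration; they are not the academy's data, and the 0–10 scale is an assumption.

```python
import numpy as np

# Synthetic illustration (not the paper's data): regress final written-exam
# grades on oral grades and measure how well oral grades predict them.
rng = np.random.default_rng(0)
oral = rng.uniform(4, 10, size=60)                 # oral grades, assumed 0-10 scale
final = 0.8 * oral + rng.normal(0, 0.7, size=60)   # noisy linear relation (assumed)

slope, intercept = np.polyfit(oral, final, 1)      # least-squares fit
r = np.corrcoef(oral, final)[0, 1]                 # Pearson correlation
predicted = slope * 8.0 + intercept                # predicted final grade for oral = 8.0
print(round(slope, 2), round(r, 2), round(predicted, 2))
```

With a strong linear signal, the fitted slope sits near the generating coefficient and the correlation is high, mirroring the paper's finding that midterm-based oral grades can predict final scores under certain conditions.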
Citations: 1
Relating the Ramsay Quotient Model to the Classical D-Scoring Rule
Pub Date : 2023-10-17 DOI: 10.3390/analytics2040043
Alexander Robitzsch
In a series of papers, Dimitrov suggested the classical D-scoring rule for scoring items, which gives difficult items a higher weight while easier items receive a lower weight. The latent D-scoring model has been proposed to serve as a latent mirror of the classical D-scoring model. However, the item weights implied by this latent D-scoring model are typically only weakly related to the weights in the classical D-scoring model. To this end, this article proposes an alternative item response model, the modified Ramsay quotient model, that is better suited as a latent mirror of the classical D-scoring model. The reasoning is based on analytical arguments and numerical illustrations.
Citations: 0
An Exploration of Clustering Algorithms for Customer Segmentation in the UK Retail Market
Pub Date : 2023-10-12 DOI: 10.3390/analytics2040042
Jeen Mary John, Olamilekan Shobayo, Bayode Ogunleye
Recently, people's awareness of online purchasing has risen significantly. This has given rise to online retail platforms and the need for a better understanding of customer purchasing behaviour. Retail companies must deal with a high volume of customer purchases, which requires sophisticated approaches to perform more accurate and efficient customer segmentation. Customer segmentation is a marketing analytics tool that aids customer-centric service and thus enhances profitability. In this paper, we aim to develop a customer segmentation model to improve decision-making processes in the retail market industry. To achieve this, we employed a UK-based online retail dataset obtained from the UCI machine learning repository. The retail dataset consists of 541,909 customer records and eight features. Our study adopted the RFM (recency, frequency, and monetary) framework to quantify customer value. Thereafter, we compared several state-of-the-art (SOTA) clustering algorithms, namely K-means clustering, the Gaussian mixture model (GMM), density-based spatial clustering of applications with noise (DBSCAN), agglomerative clustering, and balanced iterative reducing and clustering using hierarchies (BIRCH). The results showed that the GMM outperformed the other approaches, with a Silhouette Score of 0.80.
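A hedged sketch of the pipeline the abstract outlines: build an RFM table, standardise it, then compare K-means against a GMM by Silhouette Score. The transaction values below are synthetic stand-ins, not the UCI online retail dataset, and the choice of four clusters is an assumption for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Synthetic RFM table (recency, frequency, monetary) -- invented data,
# not the 541,909-record UCI dataset used in the paper.
rng = np.random.default_rng(42)
n = 300
rfm = np.column_stack([
    rng.exponential(30, n),   # recency: days since last purchase
    rng.poisson(5, n) + 1,    # frequency: number of purchases
    rng.gamma(2, 150, n),     # monetary: total spend
])
X = StandardScaler().fit_transform(rfm)

# Compare two of the clustering algorithms from the study on the same features.
km_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=4, random_state=0).fit(X).predict(X)

print("K-means silhouette:", round(silhouette_score(X, km_labels), 3))
print("GMM silhouette:", round(silhouette_score(X, gmm_labels), 3))
```

On the real dataset the paper reports the GMM winning with a Silhouette Score of 0.80; on arbitrary synthetic data the ranking can differ, which is the point of running the comparison per dataset.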
Citations: 2
A Novel Curve Clustering Method for Functional Data: Applications to COVID-19 and Financial Data
Pub Date : 2023-10-08 DOI: 10.3390/analytics2040041
Ting Wei, Bo Wang
Functional data analysis has significantly enriched the landscape of existing data analysis methodologies, providing a new framework for comprehending data structures and extracting valuable insights. This paper is dedicated to addressing functional data clustering—a pivotal challenge within functional data analysis. Our contribution to this field manifests through the introduction of innovative clustering methodologies tailored specifically to functional curves. Initially, we present a proximity measure algorithm designed for functional curve clustering. This innovative clustering approach offers the flexibility to redefine measurement points on continuous functions, adapting to either equidistant or nonuniform arrangements, as dictated by the demands of the proximity measure. Central to this method is the “proximity threshold”, a critical parameter that governs the cluster count, and its selection is thoroughly explored. Subsequently, we propose a time-shift clustering algorithm designed for time-series data. This approach identifies historical data segments that share patterns similar to those observed in the present. To evaluate the effectiveness of our methodologies, we conduct comparisons with the classic K-means clustering method and apply them to simulated data, yielding encouraging simulation results. Moving beyond simulation, we apply the proposed proximity measure algorithm to COVID-19 data, yielding notable clustering accuracy. Additionally, the time-shift clustering algorithm is employed to analyse NASDAQ Composite data, successfully revealing underlying economic cycles.
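To make the role of the "proximity threshold" concrete, here is a deliberately simplified stand-in for the paper's proximity-measure clustering: curves are sampled on a common grid, proximity is taken as the maximum pointwise distance, and a curve joins the first cluster whose representative lies within the threshold. This is an illustrative sketch, not the authors' algorithm.

```python
import numpy as np

# Simplified threshold-based curve clustering (illustrative stand-in, not the
# paper's method): the "proximity threshold" governs how many clusters emerge.
def cluster_curves(curves, threshold):
    reps, labels = [], []              # cluster representatives and labels
    for c in curves:
        for k, r in enumerate(reps):
            if np.max(np.abs(c - r)) <= threshold:
                labels.append(k)       # close enough to an existing cluster
                break
        else:                          # no representative within the threshold
            reps.append(c)
            labels.append(len(reps) - 1)
    return labels

t = np.linspace(0, 1, 50)
curves = [np.sin(2 * np.pi * t),
          np.sin(2 * np.pi * t) + 0.05,   # nearly identical to the first curve
          np.cos(2 * np.pi * t)]          # clearly different shape
print(cluster_curves(curves, threshold=0.2))   # → [0, 0, 1]
```

Raising the threshold merges clusters (at `threshold=5.0` all three curves fall into one), which is why the paper explores the threshold's selection so thoroughly.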
Citations: 0
Image Segmentation of the Sudd Wetlands in South Sudan for Environmental Analytics by GRASS GIS Scripts
Pub Date : 2023-09-21 DOI: 10.3390/analytics2030040
Polina Lemenkova
This paper presents the object detection algorithms of GRASS GIS applied to Landsat 8-9 OLI/TIRS data. The study area includes the Sudd wetlands located in South Sudan. This study describes a programming method for the automated processing of satellite images for environmental analytics, applying the scripting algorithms of GRASS GIS. This study documents how the land cover changed and developed over time in South Sudan with varying climate and environmental settings, indicating the variations in landscape patterns. A set of modules was used to process satellite images by scripting language, which streamlines geospatial processing tasks. The image-processing functionality of the GRASS GIS modules is called within scripts as subprocesses which automate operations. The cutting-edge tools of GRASS GIS present a cost-effective solution for remote sensing data modelling and analysis, based on the discrimination of the spectral reflectance of pixels in the raster scenes. Scripting algorithms for remote sensing data processing based on the GRASS GIS syntax are run from the terminal, enabling commands to be passed to the modules. This ensures the automation and high speed of image processing. The algorithmic challenge is that landscape patterns differ substantially, and there are nonlinear dynamics in land cover types due to environmental factors and climate effects. Time series analysis of several multispectral images demonstrated changes in land cover types over the study area of the Sudd, South Sudan, affected by environmental degradation of landscapes. A map is generated for each Landsat image from 2015 to 2023 using the maximum-likelihood discriminant analysis approach to classification. The methodology includes image segmentation by the ‘i.segment’ module, image clustering and classification by the ‘i.cluster’ and ‘i.maxlike’ modules, accuracy assessment by the ‘r.kappa’ module, and computation of the NDVI and cartographic mapping implemented using GRASS GIS. The benefits of object detection techniques for image analysis are demonstrated with the reported effects of various threshold levels of segmentation. The segmentation was performed 371 times with a threshold of 90% and minsize = 5; the process converged in 37 to 41 iterations. The following numbers of segments were defined for the images: 4515 for 2015, 4813 for 2016, 4114 for 2017, 5090 for 2018, 6021 for 2019, 3187 for 2020, 2445 for 2022, and 5181 for 2023. The percent convergence is 98% for the processed images. Detecting variations in land cover patterns is possible using spaceborne datasets and advanced applications of scripting algorithms. The implications of the cartographic approach for environmental landscape analysis are discussed. The algorithm for image processing is based on a set of GRASS GIS wrapper functions for automated image classification.
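The maximum-likelihood discriminant rule that underlies the ‘i.maxlike’ classification step can be illustrated in a few lines of NumPy: each land-cover class is modelled as a Gaussian in spectral space, and a pixel is assigned to the class with the highest log-likelihood. This is a conceptual sketch with synthetic class statistics, not GRASS GIS itself; the two-band "vegetation vs. water" setup is an assumption.

```python
import numpy as np

# Maximum-likelihood pixel classification: assign each pixel to the Gaussian
# class model with the highest log-likelihood (the principle behind i.maxlike).
def ml_classify(pixels, means, covs):
    scores = []
    for mu, cov in zip(means, covs):
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = pixels - mu
        # Gaussian log-likelihood up to an additive constant
        scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d)))
    return np.argmax(scores, axis=0)

# Two synthetic classes in a 2-band spectral space (assumed: vegetation, water)
means = [np.array([0.1, 0.4]), np.array([0.3, 0.1])]
covs = [0.002 * np.eye(2), 0.002 * np.eye(2)]
pixels = np.array([[0.11, 0.38], [0.29, 0.12]])
print(ml_classify(pixels, means, covs))   # → [0 1]
```

In the actual workflow, the class means and covariances come from the clusters produced by ‘i.cluster’ rather than being set by hand.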
Citations: 0
Application of Machine Learning and Deep Learning Models in Prostate Cancer Diagnosis Using Medical Images: A Systematic Review
Pub Date : 2023-09-19 DOI: 10.3390/analytics2030039
Olusola Olabanjo, Ashiribo Wusu, Mauton Asokere, Oseni Afisi, Basheerat Okugbesan, Olufemi Olabanjo, Olusegun Folorunso, Manuel Mazzara
Introduction: Prostate cancer (PCa) is one of the deadliest and most common causes of malignancy and death in men worldwide, with a higher prevalence and mortality in developing countries specifically. Factors such as age, family history, race and certain genetic mutations are some of the factors contributing to the occurrence of PCa in men. Recent advances in technology and algorithms gave rise to the computer-aided diagnosis (CAD) of PCa. With the availability of medical image datasets and emerging trends in state-of-the-art machine and deep learning techniques, there has been a growth in recent related publications. Materials and Methods: In this study, we present a systematic review of PCa diagnosis with medical images using machine learning and deep learning techniques. We conducted a thorough review of the relevant studies indexed in four databases (IEEE, PubMed, Springer and ScienceDirect) using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. With well-defined search terms, a total of 608 articles were identified, and 77 met the final inclusion criteria. The key elements in the included papers are presented and conclusions are drawn from them. Results: The findings show that the United States has the most research in PCa diagnosis with machine learning, Magnetic Resonance Images are the most used datasets and transfer learning is the most used method of diagnosing PCa in recent times. In addition, some available PCa datasets and some key considerations for the choice of loss function in the deep learning models are presented. The limitations and lessons learnt are discussed, and some key recommendations are made. Conclusion: The discoveries and the conclusions of this work are organized so as to enable researchers in the same domain to use this work and make crucial implementation decisions.
引用次数: 0
The Use of a Large Language Model for Cyberbullying Detection
Pub Date : 2023-09-06 DOI: 10.3390/analytics2030038
Bayode Ogunleye, Babitha Dharmaraj
The dominance of social media has added to the channels of bullying available to perpetrators. Unfortunately, cyberbullying (CB) is among the most prevalent phenomena in today’s cyber world, and is a severe threat to the mental and physical health of citizens. This creates a need for robust systems that screen bullying content from online forums, blogs, and social media platforms to manage its impact on society. Several machine learning (ML) algorithms have been proposed for this purpose. However, their performance is not consistent due to high class imbalance and generalisation issues. In recent years, large language models (LLMs) like BERT and RoBERTa have achieved state-of-the-art (SOTA) results in several natural language processing (NLP) tasks. However, LLMs have not yet been applied extensively to CB detection. In our paper, we explored the use of these models for cyberbullying (CB) detection. We prepared a new dataset (D2) from existing studies (Formspring and Twitter). Our experimental results for datasets D1 and D2 showed that RoBERTa outperformed the other models.
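The paper does not reproduce its training code here, but one standard remedy for the high class imbalance the authors mention is weighting the loss by inverse class frequency before fine-tuning. The sketch below computes such weights in plain Python; the label names and counts are invented for illustration, not taken from the D1/D2 datasets.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights, normalised so the majority
    class gets weight 1.0. Minority (e.g. bullying) examples then
    contribute proportionally more to a weighted training loss."""
    counts = Counter(labels)
    max_count = max(counts.values())
    return {cls: max_count / n for cls, n in counts.items()}

# Toy label distribution: 90 benign posts, 10 bullying posts.
labels = ["benign"] * 90 + ["bullying"] * 10
weights = class_weights(labels)
print(weights)  # {'benign': 1.0, 'bullying': 9.0}
```

In a fine-tuning setup these weights would typically be passed to the classifier's loss function (e.g. as per-class weights in a cross-entropy loss).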
Citations: 1
Heterogeneous Ensemble for Medical Data Classification
Pub Date : 2023-09-04 DOI: 10.3390/analytics2030037
L. Nanni, S. Brahnam, Andrea Loreggia, Leonardo Barcellona
For robust classification, selecting a proper classifier is of primary importance. However, the best choice depends on the problem, as some classifiers work better on some tasks than on others. Despite the many results collected in the literature, the support vector machine (SVM) remains the leading adopted solution in many domains, thanks to its ease of use. In this paper, we propose a new method based on convolutional neural networks (CNNs) as an alternative to SVM. CNNs are specialized in processing data in a grid-like topology that usually represents images. To enable CNNs to work on different data types, we investigated reshaping one-dimensional vector representations into two-dimensional matrices and compared different approaches for feeding standard CNNs with these two-dimensional feature representations. We evaluated the different techniques by proposing a heterogeneous ensemble based on three classifiers: an SVM, a model based on a random subspace of rotation boosting (RB), and a CNN. The robustness of our approach is tested across a set of benchmark datasets that represent a wide range of medical classification tasks. The proposed ensembles provide promising performance on all datasets.
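The abstract compares several schemes for reshaping one-dimensional feature vectors into two-dimensional matrices for a CNN. The sketch below shows one plausible variant of that idea (a row-major layout into the smallest square grid, with zero padding); it is an illustration of the general technique, not necessarily the exact mapping the authors evaluated.

```python
import math

def to_matrix(vec, pad_value=0.0):
    """Reshape a 1-D feature vector into an approximately square
    2-D matrix, padding the tail so a standard image-style CNN
    can consume it as a single-channel 'image'."""
    side = math.ceil(math.sqrt(len(vec)))
    padded = list(vec) + [pad_value] * (side * side - len(vec))
    return [padded[r * side:(r + 1) * side] for r in range(side)]

features = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]  # 7 features -> 3x3 grid
m = to_matrix(features)
print(m)  # [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.0, 0.0]]
```

Other layouts (e.g. column-major, or orderings that place correlated features near each other) change which features end up adjacent, which is exactly the kind of design choice such a comparison would probe.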
Citations: 0
Surgery Scheduling and Perioperative Care: Smoothing and Visualizing Elective Surgery and Recovery Patient Flow
Pub Date : 2023-08-21 DOI: 10.3390/analytics2030036
John S. F. Lyons, Mehmet A. Begen, Peter C. Bell
This paper addresses the practical problem of scheduling operating room (OR) elective surgeries to minimize the likelihood of surgical delays caused by the unavailability of capacity for patient recovery in a central post-anesthesia care unit (PACU). We segregate patients according to their patterns of flow through a multi-stage perioperative system and use characteristics of surgery type and surgeon booking times to predict time intervals for patient procedures and subsequent recoveries. Working with a hospital in which 50+ procedures are performed in 15+ ORs most weekdays, we develop a constraint programming (CP) model that takes the hospital’s elective surgery pre-schedule as input and produces a recommended alternate schedule designed to minimize the expected peak number of patients in the PACU over the course of the day. Our model was developed from the hospital’s data and evaluated through its application to daily schedules during a testing period. Schedules generated by our model indicated the potential to reduce the peak PACU load substantially (by 20-30% on most days in our study period), or alternatively to reduce average patient flow time by up to 15% given the same PACU peak load. We also developed tools for schedule visualization that can be used to aid management both before and after surgery day; plan PACU resources; propose critical schedule changes; identify the timing, location, and root causes of delay; and discern the differences in surgical specialty case mixes and their potential impacts on the system. This work is especially timely given the high surgical wait times in Ontario, which have grown even worse during the COVID-19 pandemic.
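The paper's objective is the expected peak number of patients in the PACU over the day, which the CP model minimizes. Independently of the CP solver, that peak can be evaluated for any candidate schedule with a simple event sweep over the predicted recovery intervals. The sketch below shows that evaluation step only (the interval data is invented; the actual model predicts these windows from surgery type and booking times).

```python
def peak_occupancy(intervals):
    """Peak number of simultaneously occupied PACU beds, given
    (arrival, departure) times for each recovering patient.
    Sweeps the sorted start/end events; a departure at time t
    frees a bed before an arrival at the same t claims one."""
    events = []
    for start, end in intervals:
        events.append((start, 1))    # patient arrives in PACU
        events.append((end, -1))     # patient leaves PACU
    # Sort by time; on ties, process departures (-1) before arrivals (+1).
    events.sort(key=lambda e: (e[0], e[1]))
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# Recovery windows in minutes after 08:00 (invented numbers).
recoveries = [(0, 90), (30, 120), (45, 150), (120, 200)]
print(peak_occupancy(recoveries))  # 3
```

A scheduler comparing two candidate OR schedules could score each by this peak (or by the full occupancy profile) and prefer the smoother one, which is the intuition behind the paper's smoothing objective.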
Citations: 0
Journal: Big data analytics