
MethodsX Latest Publications

BioMedStatX – Statistical workflows for reliable biomedical data analysis
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-12-17 DOI: 10.1016/j.mex.2025.103776
Philipp Krumm , Nicole Böttcher , Richard Ottermanns , Thomas Pufe , Athanassios Fragoulis
Robust statistical analysis is essential for scientific validity and to ensure good scientific practice. Yet many researchers, especially in biomedical fields, struggle with checking assumptions, selecting the correct tests, and interpreting results. These obstacles can lead to misleading conclusions and undermine scientific progress.
BioMedStatX explicitly addresses these issues by ensuring that the implemented workflows exclude the use of inadequate statistical tests. This Python-based desktop application features an intuitive graphical interface that automatically selects appropriate statistical tests based on the data and its characteristics, ensuring that even users with minimal statistical training follow a statistically valid workflow.
Users can import Excel or CSV files, select groups, and let BioMedStatX manage the rest: from outlier detection, assumption checks, and guided data transformations to test execution (parametric or non-parametric) and guided post-hoc analyses. Results are exported to a structured Excel workbook that includes a decision tree visualizing each analytical step, and customizable plots are exported as SVG/PNG files.
By embedding statistical expertise directly into the software, BioMedStatX prevents invalid analysis paths, increases transparency, and enables reproducibility.
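The assumption-driven test selection described above can be sketched as follows. This is a minimal illustration of the general idea, not BioMedStatX's actual API; the function name, thresholds, and decision rule are our assumptions.

```python
# Hypothetical sketch of automatic two-group test selection: check normality
# (Shapiro-Wilk) and variance homogeneity (Levene), then pick the test.
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Select and run a two-sample test based on assumption checks."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        equal_var = stats.levene(a, b).pvalue > alpha
        stat, p = stats.ttest_ind(a, b, equal_var=equal_var)
        test = "Student t-test" if equal_var else "Welch t-test"
    else:
        # non-parametric fallback when normality is rejected
        stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        test = "Mann-Whitney U"
    return test, p
```

The point of embedding such logic in software is that the fallback branch is taken automatically, so an invalid parametric analysis of non-normal data cannot be selected by accident.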
Citations: 0
On the implementation of maximum entropy sampling with unequal probabilities and without replacement
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-12-30 DOI: 10.1016/j.mex.2025.103780
Philippe Aubry
Sampling with maximum entropy offers robustness to statistical inference based on randomization theory. However, there has been no comprehensive, practical guide explaining how to implement maximum entropy sampling for finite populations with unequal probabilities and without replacement. This article serves as both a toolkit and a reference guide for researchers and engineers, filling a gap in the literature. It links key formal results with ready-to-use algorithms that can be implemented in any procedural programming language. Maximum entropy sampling is straightforward when the sample size is allowed to vary. This is achieved via the Poisson sampling design, in which the sample size is a random variable distributed according to a Poisson binomial distribution. In contrast, the conditional Poisson sampling design, which is obtained by conditioning Poisson sampling on a fixed sample size, has long posed a significant challenge to statisticians.
  • A compendium of formal results for Poisson sampling, the Poisson binomial distribution, and conditional Poisson sampling is presented.
  • The computation of inclusion probabilities up to the second order is detailed for the conditional Poisson sampling, and the corresponding algorithms are provided.
  • Ready-to-use algorithms are provided for implementing Poisson sampling and the Poisson binomial distribution. For conditional Poisson sampling, the rejective, draw-by-draw, sequential, and exchange sampling algorithms are detailed.
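Two of the designs named above can be sketched in a few lines: Poisson sampling (independent inclusions, random sample size) and the rejective algorithm for conditional Poisson sampling (repeat Poisson draws until the target size is hit). This is our illustration of the textbook definitions, not the article's own code.

```python
# Illustrative Poisson sampling and rejective conditional Poisson sampling.
import random

def poisson_sample(p, rng=random):
    """Include unit i independently with probability p[i]; the realized
    sample size follows a Poisson binomial distribution."""
    return [i for i, pi in enumerate(p) if rng.random() < pi]

def conditional_poisson_sample(p, n, rng=random, max_tries=100000):
    """Rejective algorithm: redraw Poisson samples until the size equals n."""
    for _ in range(max_tries):
        s = poisson_sample(p, rng)
        if len(s) == n:
            return s
    raise RuntimeError("no fixed-size sample obtained; check p and n")
```

The rejective scheme is the simplest of the four algorithms listed above; the draw-by-draw, sequential, and exchange algorithms avoid its potentially large number of rejected draws.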
Citations: 0
Combination of partial least square structural equation modeling scheme of principal component analysis with importance performance analysis
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-12-30 DOI: 10.1016/j.mex.2025.103783
Bambang Widjanarko Otok, Zulfani Alfasanah, Diaz Fitra Aksioma
Structural Equation Modeling (SEM) is widely used to assess causal relationships among latent variables, yet its strict assumptions often limit empirical applications. Partial Least Squares SEM (PLS-SEM) offers greater flexibility, but the choice of weighting scheme remains a methodological challenge. This study introduces a PCA-based weighting scheme to improve the stability and accuracy of PLS estimation. Importance-Performance Analysis (IPA) is further integrated to identify high-impact but underperforming indicators. Applied to child malnutrition in East Java, the approach reveals that socio-economic conditions most strongly influence food security, parenting, and health–environment services. IPA highlights exclusive breastfeeding as a priority for intervention. The proposed methodological approach strengthens PLS estimation and yields actionable insights for prioritizing policy measures.
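The core of a PCA-based outer-weighting scheme can be sketched as below: each latent variable's indicators are weighted by the first principal component loadings of their block. This is our minimal numpy illustration of the idea, not the authors' exact estimator.

```python
# Sketch of PCA-based outer weights for one indicator block (n_samples x n_indicators).
import numpy as np

def pca_block_weights(X):
    """First-PC loadings of a centered indicator block, sign-fixed, unit norm."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    w = vt[0]
    if w.sum() < 0:  # fix the arbitrary SVD sign so weights are mostly positive
        w = -w
    return w / np.linalg.norm(w)

def latent_scores(X, w):
    """Latent variable scores as the weighted composite of the indicators."""
    return (X - X.mean(axis=0)) @ w
```

Using the dominant principal component as the weight vector captures the direction of maximal shared variance in the block, which is what motivates its use as a more stable alternative to iterative PLS weighting schemes.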
Citations: 0
Managing missing MUKEYs in the QSWAT+ SSURGO database
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-12-13 DOI: 10.1016/j.mex.2025.103764
Mahesh R. Tapas , Asmita Murumkar , Chris George , Brian Brandt , Jay Martin
The Soil and Water Assessment Tool Plus (SWAT+) is a widely used hydrological model for simulating water flow, sediment transport, and the impacts of land management on watersheds. It relies on soil data to represent how soil properties influence water movement, nutrient cycling, and crop growth within a watershed. The Soil Survey Geographic (SSURGO) database provides detailed, high-resolution soil information essential for such modeling. However, a common challenge arises when missing MUKEYs (Map Unit Keys) in the SWAT+ SSURGO database prevent the creation of Hydrological Response Units (HRUs), effectively halting model development. This study presents a method using QGIS to resolve missing MUKEY issues. Specifically, we demonstrate the use of QGIS's Eliminate Selected Polygons tool with the "Largest Common Boundary" option to merge missing MUKEY polygons, maintaining spatial coherence and enabling HRU generation. This approach simplifies the workflow, reduces manual effort, and improves model readiness and accuracy.
  • The study presents a method for resolving missing MUKEYs in the QSWAT+ SSURGO database, facilitating further development of SWAT+ and minimizing manual errors.
  • QSWAT+ version 3.0.2 improves error reporting by listing missing MUKEYs in the QGIS log, simplifying troubleshooting.
  • Using QGIS’s Eliminate Selected Polygons tool with the "Largest Common Boundary" option preserves spatial integrity while addressing missing data gaps.
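The "Largest Common Boundary" rule can be illustrated in pure Python: a polygon with no soil data is merged into whichever neighbor shares the longest boundary with it. The edge-list representation below is a deliberate simplification for illustration; the actual QGIS tool operates on full vector geometries.

```python
# Toy model of the "Largest Common Boundary" elimination rule: polygons are
# ordered vertex lists, and shared boundary is the total length of identical edges.
from math import dist

def edges(poly):
    """Undirected edge set of a polygon given as an ordered vertex list."""
    return {frozenset((poly[i], poly[(i + 1) % len(poly)])) for i in range(len(poly))}

def shared_boundary_length(a, b):
    """Total length of edges that polygons a and b have in common."""
    return sum(dist(*e) for e in edges(a) & edges(b))

def merge_target(target, neighbors):
    """Return the neighbor sharing the longest boundary with the target."""
    return max(neighbors, key=lambda nb: shared_boundary_length(target, nb))
```

Merging into the longest-boundary neighbor is what preserves spatial coherence: the gap polygon inherits the MUKEY of the map unit it is most contiguous with.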
Citations: 0
EMI reduction method of CISPR 36 pre-compliance testing using affordable rubber-based materials
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-12-10 DOI: 10.1016/j.mex.2025.103760
Arief Rufiyanto , Gamantyo Hendrantoro , Reza Septiawan , Eko Setijadi , Budi Sulistya , Sardjono Trihatmo
Electric motor performance is greatly affected by emissions from the automotive drive system (drivetrain), necessitating research to mitigate electromagnetic interference (EMI). This study proposes a set of methods that employs simple and inexpensive rubber-based materials as shielding to reduce EMI in electric vehicle modules and further explores suitable materials to reduce emissions. The effectiveness of three different rubber compositions as EMI shielding, focusing on the frequency ranges regulated in the CISPR 36 standard, is investigated as pre-compliance testing in radial and transversal orientations of the measurement antenna. The study shows that, using these methods together, the rubber-based materials under test can reduce EMI emissions with a shielding effectiveness (SE) ranging from 37.362 dB to 37.742 dB for a single layer and from 74.874 dB to 75.479 dB for a combination of two layers, with up to 50 % probability across several frequency ranges, especially the frequencies regulated in the CISPR 36 standard.
  • A realistic method to provide a reasonably cost-effective solution to reduce EMI, particularly for electric cars in the pre-compliance stage, using simple and inexpensive materials, mainly rubber-based materials.
  • An EMI mitigation method using organic material as an absorber for pre-compliance testing in the frequency range of the CISPR 36 standard.
  • A method to determine the best combination of materials to reduce the emissions that arise from the electrical module of the DUT.
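The shielding effectiveness figures quoted above relate incident and transmitted field strength on a logarithmic scale. A back-of-envelope helper makes the relationship concrete; this is the standard field-ratio definition of SE, not the paper's measurement procedure, which follows the CISPR test setup.

```python
# SE (dB) from field attenuation: SE = 20 * log10(E_incident / E_transmitted).
from math import log10

def shielding_effectiveness_db(e_incident, e_transmitted):
    """Shielding effectiveness in dB from incident vs transmitted field strength."""
    return 20 * log10(e_incident / e_transmitted)

def transmitted_fraction(se_db):
    """Fraction of the field that passes a shield of the given SE."""
    return 10 ** (-se_db / 20)
```

For instance, an SE near 37.5 dB (the single-layer range above) corresponds to roughly 1/75 of the incident field strength passing the shield, while ~75 dB (two layers) attenuates the field by a factor of several thousand.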
Citations: 0
A comparative scoping review approach: identifying the intersection of carbon, biodiversity, and water offsetting
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2026-01-13 DOI: 10.1016/j.mex.2026.103799
Felice Diekel, Rosalie Arendt, Markus Berger
Environmental and climate policies, as well as the knowledge underpinning them, are often developed in isolation. This is evident in offsetting research and policy, which tend to address carbon, biodiversity, and water as separate issues. This paper presents the development of an adapted scoping review methodology to compare these three distinct bodies of literature within a unified framework, which also allows for the introduction of the emerging water offsetting literature. The approach ensures comparability across datasets of relevant literature while addressing the challenge of managing large volumes of literature within time and resource constraints. It provides a practical solution for managing diverse bodies of literature in scoping reviews, enabling a holistic understanding of the interrelationships among carbon, biodiversity, and water offsetting.
Key elements of the method include:
  • Applying a consistent approach across all three datasets, while accommodating the specificities of each.
  • Utilizing the machine learning tool ASReviewer to streamline the screening process, alongside a pilot screening phase to establish consistent inclusion criteria.
  • Combining quantitative bibliometric analysis with qualitative thematic analysis.
Citations: 0
BRAIN-META: A reproducible CNN–vision transformer meta-ensemble pipeline for explainable brain tumor classification
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-12-17 DOI: 10.1016/j.mex.2025.103769
Komal Kumar Napa , Sangeetha Murugan , J.Senthil Murugan , A. Jayanthi
This study presents BRAIN-META, a reproducible deep learning methodology designed for multi-class brain tumor classification using structural MRI. The proposed approach combines ten hybrid CNN–Vision Transformer (ViT) models with a meta-learning ensemble framework. The dataset includes 2D MRI images representing four tumor categories: glioma, meningioma, pituitary, and notumor. A standardized preprocessing pipeline involving image resizing, normalization, and CLAHE (Contrast Limited Adaptive Histogram Equalization) is applied to improve image quality and feature visibility. Ten pre-trained CNN architectures—DenseNet121, DenseNet169, DenseNet201, MobileNet, MobileNetV2, EfficientNetB0, EfficientNetB1, EfficientNetB4, InceptionV3, and Xception—are fused with Vision Transformer blocks to extract both local and global features. Each CNN-ViT model is trained independently, and the softmax outputs from validation data are used to generate stacked feature vectors. These vectors are input to two meta-learners, Logistic Regression and XGBoost, which are trained to produce final predictions. Evaluation metrics include accuracy, precision, recall, F1-score, and confusion matrix. XGBoost meta-learner achieved the highest accuracy of 97.10%, followed by Logistic Regression meta-learner at 97.03%, outperforming all individual base models. To enhance interpretability, Grad-CAM was employed, visually highlighting regions influencing classification. The proposed method is accurate, explainable, and modular, making it a strong candidate for clinical decision support in neuro-oncology.
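The stacking step described above can be sketched as follows: the softmax outputs of the base CNN–ViT models on the validation set are concatenated into one feature vector per image, which the meta-learner then classifies. The shapes (10 base models, 4 classes) mirror the paper's setup, but the helper names and the soft-vote baseline are our illustration, not the pipeline's code.

```python
# Building stacked meta-features from base-model softmax outputs.
import numpy as np

def stack_softmax_features(base_probs):
    """base_probs: list of (n_samples, n_classes) softmax arrays, one per model.
    Returns the (n_samples, n_models * n_classes) stacked feature matrix that
    the meta-learner (e.g. Logistic Regression or XGBoost) is trained on."""
    return np.concatenate(base_probs, axis=1)

def soft_vote(base_probs):
    """Simple baseline meta-rule: average the softmax outputs, take the argmax."""
    return np.mean(base_probs, axis=0).argmax(axis=1)
```

Training the meta-learner on validation-set (rather than training-set) softmax outputs is what keeps the stacked features honest: base models do not get to memorize the data the meta-learner sees.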
Citations: 0
SatTCR: a pipeline for performing saturation analysis of the T cell receptor repertoire and a case study of a healthy canine
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-11-27 DOI: 10.1016/j.mex.2025.103733
Rene Welch Schwartz , Cindy L. Zuleger , Michael A. Newton , David M. Vail , Mark R. Albertini , Irene M. Ong

Motivation

Profiling the T cell receptor (TCR) repertoire using next-generation sequencing (NGS) to quantify adaptive immune responses has become common in human and animal research. Companion dogs with spontaneous tumors have similarities with humans who have cancer. T cells undergo clonal expansion when they recognize specific antigens via surface TCRs. TCR counts from NGS data provide a way to quantify T cell response to vaccines, cancer, or infectious diseases for preclinical and clinical health studies. One complication is that the power and accuracy of TCR experiments depend substantially on the TCR sequencing depth; therefore, it is important to determine the optimal read depth of an experiment to verify whether a subject’s repertoire is correctly represented.

Results

The optimal TCR sequencing depth for future experiments can be determined by randomly sampling lower TCR sequencing depths from a sequencing experiment, assembling the TCR clonotypes, and determining where the saturation of power and accuracy occurs. Moreover, one can determine whether an existing experiment has sufficient sequencing depth to justify its conclusions. We provide guidelines for determining whether the sequencing depth is adequate, together with a computational pipeline that:
• Samples pairs of sequences and assembles clonotypes
• Summarizes the results in a parametrized report
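The subsample-and-count idea behind the saturation analysis can be illustrated with a toy repertoire. The clonotype labels, skewed abundance profile, and depths below are invented for illustration and stand in for actual read subsampling and clonotype assembly:

```python
import random
from collections import Counter

random.seed(0)

# Toy repertoire: 200 clonotypes with a skewed (clonally expanded)
# abundance profile, from which 50,000 "reads" are drawn.
clonotypes = [f"clone_{i}" for i in range(200)]
weights = [1.0 / (i + 1) for i in range(200)]
reads = random.choices(clonotypes, weights=weights, k=50_000)

def unique_at_depth(reads, depth):
    """Unique clonotypes recovered in a random subsample of `depth` reads."""
    return len(set(random.sample(reads, depth)))

# Subsample at increasing depths; the number of unique clonotypes
# recovered should rise and then flatten as the repertoire saturates.
depths = [500, 2_000, 8_000, 32_000]
recovered = [unique_at_depth(reads, d) for d in depths]
print(dict(zip(depths, recovered)))
```

The depth at which the recovery curve plateaus suggests a sequencing depth sufficient to represent the repertoire; in practice the pipeline repeats each subsample and assembles clonotypes from raw reads rather than reusing labels.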
Citations: 0
Innovative parallel grasshopper optimization algorithm for reliability optimization
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-12-14 DOI: 10.1016/j.mex.2025.103759
Dipti Singh, Neha Chand
This study introduces a novel Parallel Grasshopper Optimization Algorithm (p-GOA), specifically designed to address reliability optimization problems. Although several hybrid algorithms exist in this field, the proposed p-GOA differs distinctly through its parallel cooperative strategy. Unlike sequential methods that apply techniques one after another, p-GOA divides the population into two groups operating in parallel: one group employs a migration strategy (SOMA) for broad global exploration of the search space, while the other utilizes a mutation operator (NUMO) for focused local refinement of solutions. This dual-strategy parallel operation achieves a stronger balance between global exploration and local refinement, while a smart penalty-free method naturally steers the search toward workable solutions. When tested on four well-known reliability problems, our method consistently finds more reliable systems and converges faster than existing approaches, demonstrating its effectiveness in handling real-world engineering constraints.
● This study introduces a Parallel Grasshopper Optimization Algorithm (p-GOA) that integrates GOA, SOMA, and a Non-Uniform Mutation Operator (NUMO). It employs mutation, migration, and a parallel approach to efficiently explore both feasible and near-feasible regions without relying on penalty functions.
● The p-GOA divides the population into two parallel groups: one updated using SOMA-based migration and the other using NUMO-based mutation. This dual-strategy, simultaneous processing not only accelerates convergence but also strengthens the balance between global search and local optimization.
● Specifically targets reliability optimization problems, particularly redundancy allocation issues where components must meet specific reliability and resource consumption (cost, weight, volume) constraints.
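A minimal sketch of the dual-group update on a toy objective follows. The 0.3 migration step, the quadratic decay schedule, and the sphere function are our assumptions for illustration, not the paper's settings or its reliability models:

```python
import random

random.seed(1)

def sphere(x):
    # Toy objective standing in for a reliability model (minimize).
    return sum(v * v for v in x)

dim, pop_size, iters = 5, 20, 200
lo, hi = -5.0, 5.0
pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
init_best = sphere(min(pop, key=sphere))

for t in range(iters):
    leader = min(pop, key=sphere)
    new_pop = []
    for i, x in enumerate(pop):
        if i < pop_size // 2:
            # Migration group (SOMA-style): step toward the current leader.
            cand = [xi + 0.3 * (li - xi) for xi, li in zip(x, leader)]
        else:
            # Mutation group (NUMO-style): non-uniform perturbation whose
            # scale shrinks as iterations progress.
            scale = (1 - t / iters) ** 2
            cand = [min(hi, max(lo, xi + random.gauss(0, scale))) for xi in x]
        # Greedy selection keeps whichever of parent/candidate is better,
        # so the population best never worsens.
        new_pop.append(min((x, cand), key=sphere))
    pop = new_pop

final_best = sphere(min(pop, key=sphere))
print(round(init_best, 3), round(final_best, 6))
```

In the actual p-GOA both groups would run concurrently and the objective would encode reliability and resource constraints rather than this unconstrained toy function.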
Citations: 0
A multivariate correlated poisson generalized inverse gaussian regression model for dependent count data: Estimation and testing procedures
IF 1.9 Q2 MULTIDISCIPLINARY SCIENCES Pub Date : 2026-06-01 Epub Date: 2025-12-17 DOI: 10.1016/j.mex.2025.103772
Yusrianti Hanike , Purhadi , Achmad Choiruddin
Regression modeling for multivariate count data often struggles with overdispersion and correlation among response variables. To address these issues, this study proposes a new model, Multivariate Correlated Poisson Generalized Inverse Gaussian Regression (MCPGIGR), which integrates random effects through common shock variables and allows for flexible mean structures via a log-link function. This research develops Maximum Likelihood Estimation (MLE) and Maximum Likelihood Ratio Tests (MLRT) to evaluate both the simultaneous and the partial significance of predictors. We conduct simulation studies to assess the consistency and performance of the proposed estimators. Furthermore, in an application to maternal and neonatal mortality across 38 districts/cities in East Java (Indonesia), MCPGIGR substantially improves model fit relative to a Multivariate Poisson Regression (MPR) baseline (AICc decreases from 2378.63 to 1924.60 for γ = −1/2). The proposed framework provides a practical and flexible tool for analyzing correlated, overdispersed multivariate counts in public health and related domains. The highlights of this research are:
• The MCPGIGR model introduces a correlated multivariate count regression framework with exposure adjustment.
• It provides robust parameter estimation and hypothesis testing via MLE and MLRT.
• MCPGIGR demonstrates improved model fit and practical interpretability in public health applications.
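The common-shock device that induces correlation between counts can be sketched as follows. This is the standard construction only; it omits the generalized inverse Gaussian random effect and the log-link regression structure of the full MCPGIGR model, and the rates are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
lam1, lam2, lam_shock = 3.0, 5.0, 2.0

# Shared Poisson shock: adding the same z to both counts makes them dependent.
z = rng.poisson(lam_shock, n)
y1 = rng.poisson(lam1, n) + z   # marginal mean lam1 + lam_shock = 5
y2 = rng.poisson(lam2, n) + z   # marginal mean lam2 + lam_shock = 7

# For this construction Cov(y1, y2) = Var(z) = lam_shock, and each marginal
# variance exceeds what an independent Poisson fit would assume.
cov = np.cov(y1, y2)[0, 1]
print(y1.mean(), y2.mean(), cov)
```

Mixing the rates with a generalized inverse Gaussian random effect, as MCPGIGR does, additionally inflates the marginal variance relative to the mean, which is what accommodates overdispersion.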
Citations: 0