Pub Date: 2026-06-01. Epub Date: 2026-01-06. DOI: 10.1016/j.mex.2026.103790
Val Snow, Dean Holzworth, Rogerio Cichota, Olle Hartvigson
Pasture production and nutrient cycling through grazed pastures are inherently difficult to model with process-based simulation models. This arises because the urine depositions from grazing livestock create extreme heterogeneity in soil nutrient concentrations and dynamics: they produce relatively small patches of soil with very high mineral nitrogen (N) concentrations, while the remainder of the soil has low N concentrations. These variations are such that simply averaging over them will somewhat overestimate pasture production and vastly underestimate environmental losses such as N leaching and greenhouse gas emissions. Explicit representation of the heterogeneity allows correct simulation of environmental losses, but this comes at the expense of long simulation runtimes – runtimes that can make the model intractable to use. Here we outline an update to an existing method that preserves the most important part of the heterogeneity while still allowing tractable runtimes. While we applied this method to grazed pasture systems, it could be extended to other sources of heterogeneity such as spatially variable fertiliser management.
A method to model non-uniform applications of nutrients to soils in simulation models
Captures the major implications of the non-uniformity on soil and plant processes
The method is computationally efficient resulting in tractable simulations
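The averaging problem described above can be illustrated with a toy sketch: because environmental losses respond convexly to soil mineral N, the area-weighted loss from explicitly represented urine patches exceeds the loss computed from the paddock-average N. All numbers and the loss function below are illustrative assumptions, not values from the article.

```python
import numpy as np

# Hypothetical convex loss response: losses rise steeply once N exceeds
# roughly plant demand (the 100 kg/ha breakpoint and 1.5 exponent are invented).
def leaching(n_kg_ha):
    return np.maximum(0.0, n_kg_ha - 100.0) ** 1.5

urine_patch_n = 600.0   # kg N/ha within a urine patch (illustrative)
background_n = 20.0     # kg N/ha elsewhere (illustrative)
patch_fraction = 0.05   # fraction of paddock area under urine patches

# Explicit heterogeneity: area-weighted sum of per-zone losses.
explicit_loss = (patch_fraction * leaching(urine_patch_n)
                 + (1.0 - patch_fraction) * leaching(background_n))

# Averaged soil: a single uniform N concentration for the whole paddock.
mean_n = patch_fraction * urine_patch_n + (1.0 - patch_fraction) * background_n
averaged_loss = leaching(mean_n)  # far smaller than explicit_loss
```

In this sketch the averaged paddock never crosses the loss threshold at all, while the patch-explicit calculation produces substantial leaching, which is the qualitative behaviour the abstract describes.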
Title: Updated method to incorporate soil nutrient heterogeneity caused by urine excretion into a process-based simulation model. MethodsX, vol. 16, Article 103790.
Pub Date: 2026-06-01. Epub Date: 2026-01-08. DOI: 10.1016/j.mex.2026.103793
Tania Elizabeth Sandoval-Valencia, Gerardo Hurtado-Hurtado, Luis Morales-Velázquez, Dante Ruiz-Robles, Juan Carlos Jáuregui-Correa
Railway wheel wear poses major safety and maintenance challenges, yet accurate predictive models are limited by a lack of synchronized dynamic and wear data from scaled systems. This article presents an integrated methodology to generate a correlative dataset of dynamic parameters and wear progression on the wheels of a 1:20 scale railway system. The experimental approach combines synchronized multisensor data acquisition with sequential microscopic imaging under controlled operating conditions, specifically during braking maneuvers at track transitions. The resulting publicly available dataset enables direct analysis of how operational factors influence physical degradation.
Integration of synchronized sensor data with sequential microscopic imaging to correlate dynamics and wear progression.
Controlled factorial experimental design varying speed and braking zones to ensure reproducible testing conditions.
Publicly available dataset supporting model calibration, predictive algorithm development, and defect quantification for railway maintenance applications.
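The controlled factorial design mentioned above can be sketched as a simple run matrix crossing speed and braking zone; the factor levels and replicate count below are hypothetical placeholders, not the article's actual settings.

```python
from itertools import product

# Hypothetical factorial run matrix for the scaled-railway wear experiments.
speeds_m_s = [0.5, 1.0, 1.5]                                # scaled running speeds (assumed)
braking_zones = ["straight-to-curve", "curve-to-straight"]  # track transitions (assumed)
replicates = 3                                              # repeats per cell (assumed)

runs = [
    {"speed_m_s": s, "zone": z, "replicate": r}
    for s, z, r in product(speeds_m_s, braking_zones, range(1, replicates + 1))
]
# 3 speeds x 2 zones x 3 replicates = 18 synchronized acquisition runs
```

Enumerating the full cross in advance is what makes the testing conditions reproducible: each run of synchronized sensing plus microscopic imaging maps to exactly one cell of the design.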
Title: Integrated methodology for correlating dynamic parameters with wheel wear progression in a scaled railway system. MethodsX, vol. 16, Article 103793.
This study introduces the Geographically Weighted Weibull Regression (GWWR) model as an extension of the Weibull regression (WR) within the geographically weighted regression framework and applies it to spatial environmental data on dissolved oxygen (DO) levels in East Kalimantan in 2024, rather than to time-to-event data. This study maps the river water quality (RWQ) and its influencing factors using the GWWR model. The results indicate that the RWQ in East Kalimantan in 2024 generally tends to degrade, with the main influencing factors being dissolved iron, total phosphate, water temperature, and biochemical oxygen demand. The main highlights of the proposed method are as follows:
•
This study presents the GWWR model as an extension of the WR model and demonstrates its applicability to spatially heterogeneous data rather than to time-to-event data.
•
The GWWR model is employed to locally analyze RWQ and its influencing factors.
•
The GWWR approach represents RWQ characteristics using several statistical measures, including the probability of water quality improvement, the probability of water quality degradation, the water quality degradation rate, and the mean DO level. These statistical measures are analyzed respectively through spatial Weibull survival, cumulative distribution, hazard, and mean regression models.
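The four statistical measures listed above follow directly from the Weibull distribution; a minimal sketch, assuming illustrative shape/scale values and a DO threshold (not the fitted GWWR estimates), is:

```python
from math import gamma, exp

# Sketch of the four measures for a Weibull model with location-specific
# shape k and scale lam. Parameter values and threshold are illustrative.
def weibull_measures(k, lam, do_threshold):
    cdf = 1.0 - exp(-((do_threshold / lam) ** k))         # P(DO <= threshold): degradation probability
    survival = 1.0 - cdf                                  # P(DO > threshold): improvement probability
    hazard = (k / lam) * (do_threshold / lam) ** (k - 1)  # degradation rate at the threshold
    mean_do = lam * gamma(1.0 + 1.0 / k)                  # mean DO level
    return survival, cdf, hazard, mean_do

def gaussian_kernel_weight(dist, bandwidth):
    # Geographic weight used when fitting k and lam locally (illustrative kernel).
    return exp(-0.5 * (dist / bandwidth) ** 2)

s, c, h, m = weibull_measures(k=2.0, lam=6.0, do_threshold=4.0)
```

In a geographically weighted fit, each monitoring location gets its own (k, lam) estimated from observations down-weighted by distance, so all four measures vary over space.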
Pub Date: 2026-06-01. DOI: 10.1016/j.mex.2025.103745
Suyitno Suyitno, Darnah, Memi Nor Hayati, Andrea Tri Rian Dani, Ika Purnamasari, Rito Goejantoro, Meiliyani Siringoringo, Pratama Yuly Nugraha, Meirinda Fauziyah, Zabrina Nathania Fauziyah, Mislan
Title: Geographically weighted Weibull regression modeling on dissolved oxygen data to analyze river water quality in East Kalimantan. MethodsX, vol. 16, Article 103745.
Pub Date: 2025-12-01. Epub Date: 2025-06-27. DOI: 10.1016/j.mex.2025.103474
Yihan Zhang, Shanshan Li
This study proposes a logistic regression-integrated cellular automata (CA) model for oil spill simulation, addressing challenges in parameter determination of traditional CA models. The method involves data preprocessing (geospatial alignment, resampling, normalization), Monte Carlo sampling for training data, logistic regression-based weight assignment to impact factors, neighborhood function and stochastic term computation, and iterative oil spill simulation. The model can be calibrated through sensitivity analyses of sampling ratios, spatial scales, and neighborhood structures. Finally, it was validated using DeepSpill experimental data. Results show optimal accuracy (97.40 %) under 22 % sampling ratio, 12.61 % oil area proportion, 6 m spatial scale, and 7 × 7 Moore neighborhood.
•
Innovative Model Integration & Calibration: Merged logistic regression with CA to objectively quantify environmental drivers (currents, wind, salinity) and optimize parameters (sampling, scale and neighborhood) in oil simulation.
•
Dynamic Optimization & Scale Sensitivity: Peak accuracy (96.41 %) is obtained at a 22 % sampling rate and 12.61 % oil area proportion; 97.32 % accuracy at a 6 m spatial scale balances resolution against boundary roughness.
•
Neighborhood-Driven Diffusion Enhancement: A 7 × 7 Moore neighborhood boosts accuracy to 97.40 % (vs. 3 × 3), proving that neighborhood size critically shapes diffusion dynamics.
Title: A method to modelling oil spill using combination of logistic regression and cellular automata. MethodsX, vol. 15, Article 103474.
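One CA update step of the kind described above can be sketched as follows: a logistic regression over environmental drivers gives each cell an oil-arrival probability, which is then modulated by a Moore-neighborhood function and a stochastic term. The regression coefficients, driver fields, and threshold are invented placeholders, not the calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

grid = np.zeros((20, 20), dtype=int)
grid[10, 10] = 1  # initial spill cell

current, wind = rng.random((20, 20)), rng.random((20, 20))  # placeholder driver fields
beta0, beta_current, beta_wind = -2.0, 1.5, 1.0             # placeholder regression weights

def moore_occupancy(g, r=1):
    # Count occupied cells in each cell's (2r+1) x (2r+1) Moore neighborhood.
    padded = np.pad(g, r)
    out = np.zeros_like(g)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            out += padded[r + di:r + di + g.shape[0], r + dj:r + dj + g.shape[1]]
    return out - g  # exclude the cell itself

z = beta0 + beta_current * current + beta_wind * wind
p_logistic = 1.0 / (1.0 + np.exp(-z))                   # logistic transition probability
neighborhood = moore_occupancy(grid) / 8.0              # normalized neighbor occupancy
p = p_logistic * neighborhood * rng.random(grid.shape)  # stochastic modulation

new_grid = np.where((p > 0.05) | (grid == 1), 1, grid)  # cells keep oil once contaminated
```

Iterating this step spreads oil outward from occupied cells, with the logistic term encoding how the environmental drivers favor or resist spread at each location.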
Pub Date: 2025-12-01. Epub Date: 2025-09-30. DOI: 10.1016/j.mex.2025.103652
N. Shobha Rani, Bhavya K R, I. Jeena Jacob, Pushpa B. R, Bipin Nair BJ, Akshatha Prabhu
The reliable classification of medicinal plant species plays a vital role in ensuring their quality, authenticity, and safe use in healthcare. However, existing methods often face difficulties when species exhibit strong visual similarities or when datasets are imbalanced, which limits their effectiveness in practice. Although deep learning models such as ResNet18 and VGG16 have proven effective in image recognition tasks, our experiments showed that they tended to overfit, with validation losses reaching 42.99 % and test accuracy falling to 73.99 % in certain groups. To overcome these challenges, we introduce a multi-level fusion feature model that combines 3D normalized color histograms, extended uniform Local Binary Patterns (LBP with P = 24, R = 3), multi-orientation Gabor filters, and Histogram of Oriented Gradients (HOG). This approach captures a richer set of visual cues by bringing together global color statistics, detailed textures, frequency-domain patterns, and shape descriptors. We incorporate SMOTE-based synthetic augmentation to further address class imbalance, which helps balance feature distributions across categories. We employ a soft-voting ensemble of machine learning classifiers for classification and use cosine similarity metrics to better capture inter-class relationships. Tests on Indian medicinal plant datasets show that our model consistently outperforms deep learning baselines, reaching 100 % accuracy in Group 1, 95.82 % in Group 3, and over 90 % in other groups. These results suggest that the proposed model offers a more robust and computationally efficient solution for plant species classification, particularly under conditions of high inter-class similarity and dataset imbalance.
•
The proposed domain-specific model can be applied explicitly to Indian plant species groups exhibiting high inter-class visual similarities through a novel feature fusion strategy.
•
The proposed multi-level feature fusion method integrates 3D normalized color histograms, extended uniform LBP (P = 24, R = 3), multi-orientation Gabor filters, and HOG features to capture color, texture, and shape characteristics.
•
The proposed work offers a scalable ensemble framework for inter-class similarity analysis, combining SMOTE-based class balancing, feature normalization, and a soft-voting ensemble of diverse classifiers, in support of biodiversity and ecological studies.
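The fusion step can be sketched as feature concatenation. In the sketch below only the 3D color histogram is computed for real; the LBP, Gabor, and HOG vectors are random stand-ins of plausible lengths (26-bin uniform LBP for P = 24, and 1764-dim HOG is typical for a 64×64 image with standard settings), not the article's extractors.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64, 3))  # synthetic RGB image

# 3D normalized color histogram: 8 bins per channel -> 512 values.
hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(8, 8, 8), range=((0, 256),) * 3)
color_feat = (hist / hist.sum()).ravel()

lbp_feat = rng.random(26)    # uniform LBP with P = 24 has P + 2 = 26 bins
gabor_feat = rng.random(32)  # e.g. mean/variance over multi-orientation responses
hog_feat = rng.random(1764)  # 9 orientations, 8x8 cells, 2x2 blocks on 64x64

fused = np.concatenate([color_feat, lbp_feat, gabor_feat, hog_feat])  # length 2334
```

Normalizing each descriptor before concatenation (as the highlights mention) keeps any one cue from dominating the fused vector fed to the soft-voting ensemble.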
Title: CISCS: Classification of inter-class similarity based medicinal plant species groups with machine learning. MethodsX, vol. 15, Article 103652.
Pub Date: 2025-12-01. Epub Date: 2025-09-18. DOI: 10.1016/j.mex.2025.103632
Sonam Singh, Amol Dhumane
Deepfakes, driven by advances in generative AI, seriously jeopardize public trust, cybersecurity, and the veracity of information. This study offers a comprehensive analysis of the most recent methods for creating and detecting deepfakes across image, video, and audio modalities. With a focus on their advantages and disadvantages in cross-dataset and real-world scenarios, we compile the latest developments in transformer-based detection models, multimodal biometric defenses, and Generative Adversarial Networks (GANs). We provide implementation-level information such as pseudocode workflows, hyperparameter settings, and preprocessing pipelines for popular detection frameworks to improve reproducibility. We also examine cybersecurity implications, including identity theft and biometric spoofing, as well as policy-oriented solutions that incorporate federated learning, explainable AI, and ethical protections. By enriching technical insights with interdisciplinary perspectives, this review charts a roadmap for building robust, scalable, and trustworthy deepfake detection systems.
Title: Unmasking digital deceptions: An integrative review of deepfake detection, multimedia forensics, and cybersecurity challenges. MethodsX, vol. 15, Article 103632.
Pub Date: 2025-12-01. Epub Date: 2025-08-16. DOI: 10.1016/j.mex.2025.103564
Anusha R, Srinivas Prasad
Cervical cancer is a serious health concern that poses high risks to individuals worldwide due to delayed detection and treatment. Formal screening for the condition is challenging in developing countries due to several factors, including medical costs, access to healthcare facilities, and delayed symptom manifestation. A blockchain-enabled healthcare system for cervical cancer risk prediction ensures data security, privacy, and accurate risk assessment. This system uses blockchain to provide decentralised, tamper-proof storage and access control over sensitive patient data, ensuring that only authorized entities can interact with the information. An improved spotted hyena optimization algorithm is employed for cervical cancer risk prediction, fine-tuning a Graph Convolutional Network (GCN) integrated with an Attention Mechanism and a Gated Recurrent Unit (GRU). The GCN captures complex relationships between medical attributes and patients, while the attention mechanism dynamically assigns weights to features based on relevance, improving predictive accuracy. The GRU processes sequential data, such as medical history, to model temporal dependencies in the risk factors. The metaheuristic optimization further enhances the model by finding the optimal parameters, boosting performance.
Introduces a blockchain-enabled system for secure and decentralized medical data management
Applies an intelligent model for predicting cervical cancer risk using patient health records
Demonstrates improved accuracy, privacy, and reliability over traditional diagnostic methods
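The attention-plus-GRU mechanics described above can be illustrated with a minimal numpy sketch: softmax attention assigns relevance weights to per-visit feature vectors, and a single-layer GRU step carries temporal state across visits. All weights, dimensions, and inputs are random placeholders, so this shows only the mechanics, not the article's trained model.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 8  # feature dimension per visit (illustrative)

def attention_weights(x, score_w):
    scores = x @ score_w               # one relevance score per visit
    w = np.exp(scores - scores.max())  # numerically stable softmax
    return w / w.sum()                 # weights sum to 1

def gru_step(x, h, Wz, Wr, Wh):
    xh = np.concatenate([x, h])
    z = 1.0 / (1.0 + np.exp(-(Wz @ xh)))                # update gate
    r = 1.0 / (1.0 + np.exp(-(Wr @ xh)))                # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate state
    return (1.0 - z) * h + z * h_tilde

visits = rng.random((5, d))                     # 5 visits in a patient's history
att = attention_weights(visits, rng.random(d))  # attention over visits
Wz, Wr, Wh = (rng.random((d, 2 * d)) for _ in range(3))
h = np.zeros(d)
for t in range(visits.shape[0]):
    h = gru_step(att[t] * visits[t], h, Wz, Wr, Wh)  # attention-scaled input
```

In the full system these parameters would be tuned by the spotted hyena optimizer rather than drawn at random, and the GCN would supply the per-visit feature vectors.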
Title: A blockchain-enabled healthcare system for cervical cancer risk prediction using enhanced metaheuristic optimised graph convolutional attention based GRU. MethodsX, vol. 15, Article 103564.
Pub Date: 2025-12-01. Epub Date: 2025-10-17. DOI: 10.1016/j.mex.2025.103680
Md Tomal Ahmed Sajib, Nazmul Huda Badhon, Imrus Salehin, Md Sakibul Hassan Rifat, Faysal Ahmmed, Pritom Saha, Nazmun Nessa Moon
Deep learning has become a leading approach for agricultural image analysis, and leveraging it for pest recognition offers tangible value for crop protection. This work presents a comparative methodology for plant-insect image classification on the BAU-Insectv2 dataset, emphasizing how augmentation choices and optimizers shape model behavior on small, field-collected data. We evaluated four convolutional architectures (ResNet101V2, EfficientNet-B1, InceptionV3, InceptionResNetV1) under transfer learning, six single-factor augmentations, and three optimizers (Adam, SGD, RMSprop). Performance was assessed with accuracy, precision, recall, and F1-score. Across settings, Adam generally produced the most stable high accuracy on limited data; model–augmentation pairings also mattered: e.g., EfficientNet-B1 with cropping achieved near-perfect accuracy, while ResNet101V2 with rotation and InceptionV3 with brightness remained competitive. The study delivers a reproducible pipeline and augmentation-aware guidance that practitioners can adopt when data are scarce, enabling robust insect recognition for downstream agronomic decision support.
• We curated BAU-Insectv2 and designed six single-factor augmentations.
• We benchmarked four transfer-learned CNNs with three optimizers.
• We validated with standard metrics and optimizer–augmentation ablations.
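The comparison protocol above amounts to a full cross of models, augmentations, and optimizers, with every configuration scored on the same metrics. Only rotation, cropping, and brightness are named in the text; the remaining three augmentation names and the evaluate() stub below are assumptions.

```python
from itertools import product

models = ["ResNet101V2", "EfficientNet-B1", "InceptionV3", "InceptionResNetV1"]
augmentations = ["rotation", "cropping", "brightness", "flip", "zoom", "shear"]
optimizers = ["Adam", "SGD", "RMSprop"]

def evaluate(model, aug, opt):
    # Placeholder: a real run would fine-tune the CNN under this augmentation
    # and optimizer, then return accuracy/precision/recall/F1 on the held-out split.
    return {"model": model, "augmentation": aug, "optimizer": opt, "f1": None}

results = [evaluate(m, a, o) for m, a, o in product(models, augmentations, optimizers)]
# 4 models x 6 augmentations x 3 optimizers = 72 configurations
```

Running the full grid, rather than tuning factors one at a time, is what allows the optimizer–augmentation ablations the highlights describe.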
Title: A comparative deep learning methodology for plant insect image classification: Assessment of CNN architectures and augmentation techniques. MethodsX, vol. 15, Article 103680.
Pub Date : 2025-12-01Epub Date: 2025-10-15DOI: 10.1016/j.mex.2025.103678
Anita Gunjal , T. Judgi
Global health is increasingly concerned with Cardiovascular Diseases (CVD), which necessitates new and innovative ways of identifying them earlier and treating them more effectively. AI techniques such as machine learning (ML) and deep learning (DL) provide a promising pathway to address these challenges. This study investigated the interplay between advancing technology and medical science, focusing on AI's application to improving CVD diagnosis. Traditionally, CVD diagnosis has relied on clinical assessments, laboratory tests, and imaging modalities such as echocardiography and angiography. Many researchers use online datasets, as well as data collected with inexpensive sensors in healthcare settings, to develop ML and DL algorithms that detect diseases automatically. Feature-based ML algorithms, CNNs, RNNs, and hybrid models are the techniques most commonly used in this area. This study highlights the importance of ML and DL in cardiac health and emphasizes precise and enhanced prediction of cardiovascular disease. Given the developments in state-of-the-art technologies and the increasing influence of cardiovascular disease on public health, this study presents an in-depth analysis of current AI-based methods for CVD management based on Electronic Health Records (EHR) and Electrocardiogram (ECG) data. It identifies areas that require improvement and proposes avenues for future investigation. This study aims to direct future advancements in diagnostic tools by highlighting the critical role of AI in rethinking approaches to CVD diagnosis and treatment to enhance patient outcomes.
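The feature-based ML approach surveyed above can be illustrated with a minimal sketch: a logistic regression risk model trained on tabular, EHR-style features. Everything here is a hedged toy example; the feature names, the synthetic data, and the true coefficients are assumptions, not results from the survey or any clinical dataset.

```python
# Minimal sketch of feature-based CVD risk prediction: logistic regression
# trained by gradient descent on synthetic, standardised tabular features.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit weights w and bias b by minimising the mean log-loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / n   # gradient of mean log-loss w.r.t. w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(42)
# Hypothetical standardised features, e.g. age, systolic BP, cholesterol.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, 1.0, 0.5])
y = (X @ true_w > 0.0).astype(float)   # labels from a known synthetic rule

w, b = train_logistic(X, y)
pred = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(pred == y)
```

Real EHR pipelines add substantial work this sketch omits (missing-data handling, feature selection, calibration, and external validation), and DL models such as CNNs or RNNs replace the hand-crafted features with learned representations of ECG waveforms.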
{"title":"A comprehensive survey of artificial intelligence methods for cardiovascular disease detection: Recent advances and future challenges","authors":"Anita Gunjal , T. Judgi","doi":"10.1016/j.mex.2025.103678","DOIUrl":"10.1016/j.mex.2025.103678","url":null,"abstract":"<div><div>Global health is increasingly concerned with and interested in Cardiovascular Diseases (CVD), which necessitates new and innovative ways of identifying them earlier and treating them more effectively. AI techniques, such as ML and DL provide a promising pathway to address these challenges. This study investigated the interplay between advancing technology and medical science, focusing on AI’s application in improving CVD diagnosis. Traditionally, CVD diagnosis has relied on clinical assessments, laboratory tests, and imaging modalities such as echocardiography and angiography. Several researchers are using online datasets as well as utilizing inexpensive sensors to collect data in the healthcare field to carry out their research to develop different ML and DL algorithms that can detect diseases automatically. Feature-based ML algorithms, CNNs, RNNs, and hybrid models are commonly used techniques in this area. This study highlights the importance of ML and DL in cardiac health and emphasizes precise and enhanced prediction of cardiovascular disease. The developments in state-of-art technologies and the increasing influence of cardiovascular disease on public health, this study attempts to present an in-depth analysis of the topic based on current AI-based methods used for CVD management based on reports from Electronic Health Records (EHR) and Electrocardiogram (ECG). It shows areas which requires improvement, and proposes avenues for future investigation. 
This study aims to direct future advancements in diagnostic tools by highlighting the critical role of AI in rethinking methods to CVD diagnosis and treatment approaches to enhance the patient outcomes.</div></div>","PeriodicalId":18446,"journal":{"name":"MethodsX","volume":"15 ","pages":"Article 103678"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145415833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-01Epub Date: 2025-07-05DOI: 10.1016/j.mex.2025.103493
Olivia De-Jongh González , Jianxia Fan , Isabelle Marc , Hong Jiang , Andraea Van Hulst , Claire N. Tugault-Lafleur , Yanting Wu , Yanhui Hao , Liping Wang , Xiaoyu Hu , Caifeng Wang , Wenguang Sun , Sonia Semenic , Yamei Yu , Lei Chen , Weibin Wu , Yulai Zhou , Ting Li , Wenli Fang , Yinan Liu , Louise C. Mâsse
This paper describes the methods for the development and implementation of the Sino-Canadian Healthy Life Trajectories Initiative (SCHeLTI) intervention, part of a World Health Organization-supported effort to prevent childhood obesity through four international randomized controlled trials. SCHeLTI is a multi-center, cluster-randomized trial in Shanghai, supporting 4500 families from preconception through the child’s fifth year. This Community-Family-Mother-Child intervention includes coordinated components such as Healthy Conversation sessions, nutrition consultations, breastfeeding support, an obesity clinic, and educational courses tailored to key reproductive and developmental stages and risk profiles. Guided by implementation science principles, SCHeLTI’s development followed four main phases: 1) establishing the conceptual foundation (theoretical framework, outcomes, logic model); 2) building delivery infrastructure and engaging stakeholders in formative research; 3) finalizing the intervention design tailored to families’ needs; and 4) implementing the intervention, including capacity building, adaptation, and process evaluation strategies.
• A four-phase development process grounded in implementation science principles guided intervention design and delivery
• Tailored components align with reproductive and developmental stages and risk profiles to support family and child needs across the life course
• Stakeholder engagement and iterative adaptation ensured contextual relevance and feasibility
{"title":"A stepwise approach to designing and delivering the SCHeLTI trial community-family-mother-child obesity prevention intervention","authors":"Olivia De-Jongh González , Jianxia Fan , Isabelle Marc , Hong Jiang , Andraea Van Hulst , Claire N. Tugault-Lafleur , Yanting Wu , Yanhui Hao , Liping Wang , Xiaoyu Hu , Caifeng Wang , Wenguang Sun , Sonia Semenic , Yamei Yu , Lei Chen , Weibin Wu , Yulai Zhou , Ting Li , Wenli Fang , Yinan Liu , Louise C. Mâsse","doi":"10.1016/j.mex.2025.103493","DOIUrl":"10.1016/j.mex.2025.103493","url":null,"abstract":"<div><div>This paper describes the methods for the development and implementation of the Sino-Canadian Healthy Life Trajectories Initiative (SCHeLTI) intervention, part of a World Health Organization-supported effort to prevent childhood obesity through four international randomized controlled trials. SCHeLTI is a multi-center, cluster-randomized trial in Shanghai, supporting 4500 families from preconception through the child’s fifth year. This Community-Family-Mother-Child intervention includes coordinated components such as Healthy Conversation sessions, nutrition consultations, breastfeeding support, an obesity clinic, and educational courses tailored to key reproductive and developmental stages and risk profiles. 
Guided by implementation science principles, SCHeLTI’s development followed four main phases: 1) establishing the conceptual foundation (theoretical framework, outcomes, logic model); 2) building delivery infrastructure and engaging stakeholders in formative research; 3) finalizing the intervention design tailored to families’ needs; and 4) implementing the intervention, including capacity building, adaptation, and process evaluation strategies.<ul><li><span>•</span><span><div>A four-phase development process grounded in implementation science principles guided intervention design and delivery</div></span></li><li><span>•</span><span><div>Tailored components align with reproductive and developmental stages and risk profiles to support family and child needs across the life course</div></span></li><li><span>•</span><span><div>Stakeholder engagement and iterative adaptation ensured contextual relevance and feasibility</div></span></li></ul></div></div>","PeriodicalId":18446,"journal":{"name":"MethodsX","volume":"15 ","pages":"Article 103493"},"PeriodicalIF":1.6,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144605932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}