
Latest Publications: Archives of Computational Methods in Engineering

A Comprehensive Review of Various Machine Learning and Deep Learning Models for Anti-Cancer Drug Response Prediction: Comparative Analysis With Existing State of the Art Methods
IF 12.1 · CAS Zone 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-03-10 · DOI: 10.1007/s11831-025-10255-2
Davinder Paul Singh, Pawandeep Kour, Tathagat Banerjee, Debabrata Swain

Selecting the optimal treatment for cancer patients is a broad problem, and pharmacogenetic prediction draws on genetic cohort, chemical structure, and target information. Although previous studies sought to characterise pharmacological responses, their categorisation was limited. Existing feature selection techniques, such as statistical combinations, suffer from drawbacks including entrapment in local optima and a lack of heuristics, which lowers the convergence rate and in turn the classification rate. To address this, the current study describes a hybrid approach based on machine learning and deep learning, together with a comparison of the localization heuristic-based Harris Hawks intelligence method and Gravitational Optimization methods against Machine Learning (ML) and Deep Learning (DL) algorithms. The study suggests using a Conditional Generative Adversarial Network (CGAN) to obtain better feature selection with less volatility, improving data quality and minimising intrinsic variation. Possible associations between cell lines and drugs are deduced using the Cancer Cell Line Encyclopaedia (CCLE) and Genomics of Drug Sensitivity in Cancer (GDSC) datasets, and the study proposes a hybrid Bi-Residual Dense Attention Network (BRDAN) for cell line categorisation. The proposed method shows better prediction performance in terms of precision, accuracy, F1-score, area under the curve (AUC), area under the receiver operating characteristic curve (AUROC), specificity, and recall. For the GDSC dataset, the BRDAN-HH framework achieved an accuracy of 0.9675, recall of 0.9795, specificity of 0.975, precision of 0.9785, F1-score of 0.9799, AUC of 0.97, and AUROC of 0.9705. Similarly, for the CCLE dataset, it demonstrated robust performance with an accuracy of 0.9655, recall of 0.986094, specificity of 0.975, precision of 0.975, F1-score of 0.986, AUC of 0.966, and AUROC of 0.9758. The results highlight the efficacy of the BRDAN-HH framework in delivering superior classification metrics, making it a valuable tool for analysing large-scale biomedical datasets.
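The Harris Hawks step of such a pipeline can be illustrated with a minimal sketch. This is not the study's BRDAN-HH implementation: the binary encoding, population size, simplified exploration/exploitation rules, and the toy fitness function (which stands in for a real classification-error objective) are all assumptions for illustration.

```python
import numpy as np

def fitness(mask, relevant=(0, 1, 2), alpha=0.9, beta=0.1):
    """Toy objective (lower is better): reward covering the known-informative
    features, penalise selecting many features overall."""
    if mask.sum() == 0:
        return 1.0
    hit = np.intersect1d(np.flatnonzero(mask), relevant).size / len(relevant)
    return alpha * (1.0 - hit) + beta * mask.sum() / mask.size

def binarise(x):
    # sigmoid transfer function turns continuous hawk positions into a feature mask
    return (1.0 / (1.0 + np.exp(-x)) > 0.5).astype(int)

def hho_feature_select(n_features=10, n_hawks=15, n_iter=60, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, 1.0, (n_hawks, n_features))   # hawk positions
    best_x = X[0].copy()
    best_f = fitness(binarise(best_x))
    for t in range(n_iter):
        E0 = rng.uniform(-1, 1, n_hawks)
        E = 2 * E0 * (1 - t / n_iter)                 # escape energy decays over time
        for i in range(n_hawks):
            if abs(E[i]) >= 1:                        # exploration: perch on a random hawk
                j = rng.integers(n_hawks)
                X[i] = X[j] - rng.uniform() * np.abs(X[j] - 2 * rng.uniform() * X[i])
            else:                                     # exploitation: besiege the best solution
                X[i] = best_x - E[i] * np.abs(best_x - X[i])
            f = fitness(binarise(X[i]))
            if f < best_f:
                best_f, best_x = f, X[i].copy()
    return binarise(best_x), best_f

mask, f = hho_feature_select()
```

In a real pipeline the toy `fitness` would be replaced by the cross-validated error of a classifier trained on the masked features.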

Archives of Computational Methods in Engineering, Vol. 32, No. 6, pp. 3733–3757.
Citations: 0
Recent Trends and Progress in Molecular Dynamics Simulations of 2D Materials for Tribological Applications: An Extensive Review
IF 12.1 · CAS Zone 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-03-10 · DOI: 10.1007/s11831-025-10257-0
Kamal Kumar, Jiaqin Xu, Gang Wu, Akarsh Verma, Abhishek Kumar Mishra, Lei Gao, Shigenobu Ogata

The influence of tribology has broadened across diverse fields, with substantial growth in related research activities over the last decade. This exciting domain drives innovation in lubricant materials, extending the lifetime of machinery and contributing to energy conservation. Molecular dynamics (MD) simulations play an important role in tribological studies and provide useful insights into atomic-level interactions between sliding surfaces. MD simulations allow researchers to design models and track the interactions and movements of individual molecules and atoms. This degree of accuracy offers a better understanding of the underlying mechanisms, including the response of materials to different loads and environmental conditions. Two-dimensional materials showcase remarkable tribological characteristics: their ultrathin nature and unique atomic arrangement offer various advantages in reducing wear and friction, making them ideal candidates for numerous coating and lubrication applications. This review explores recent MD simulations in tribology, focusing on both traditional two-dimensional materials (such as graphene, hexagonal boron nitride, and molybdenum disulfide) and emerging materials (such as MXenes and phosphorene). Our investigation covers the complexity of frictional forces at macroscopic and microscopic scales, wear mechanisms, and the role of lubrication in preventing wear and minimizing friction. The main aim is to offer engineers, researchers, and scientists a valuable resource for better understanding the complex ingredients of tribology and to direct them toward future developments in this critical domain.
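The core MD machinery such reviews survey, a pairwise interatomic potential integrated with a symplectic scheme, can be sketched in a few lines. The example below is a generic two-atom Lennard-Jones system in reduced units with velocity-Verlet integration, not a tribological simulation; `eps`, `sigma`, the initial separation, and the time step are illustrative assumptions.

```python
import numpy as np

def lj_force(r, eps=1.0, sigma=1.0):
    """Lennard-Jones force along the separation coordinate (reduced units)."""
    sr6 = (sigma / r) ** 6
    return 24 * eps * (2 * sr6**2 - sr6) / r

def lj_energy(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6**2 - sr6)

def velocity_verlet(r0=1.5, v0=0.0, m=1.0, dt=1e-3, steps=5000):
    """Integrate the relative coordinate of a two-atom LJ 'dimer'."""
    r, v = r0, v0
    a = lj_force(r) / m
    traj = []
    for _ in range(steps):
        r += v * dt + 0.5 * a * dt**2          # position update
        a_new = lj_force(r) / m                # force at the new position
        v += 0.5 * (a + a_new) * dt            # velocity half-step average
        a = a_new
        traj.append((r, 0.5 * m * v**2 + lj_energy(r)))
    return traj

traj = velocity_verlet()
```

Near-constant total energy along `traj` is the usual sanity check that the integrator and time step are sound, the same check production MD codes apply at vastly larger scale.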

Archives of Computational Methods in Engineering, Vol. 32, No. 6, pp. 3909–3931.
Citations: 0
A Review of Parallel Computing for Large-scale Reservoir Numerical Simulation
IF 12.1 · CAS Zone 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-03-10 · DOI: 10.1007/s11831-025-10263-2
Xiangling Meng, Xiao He, Changjun Hu, Xu Lu, Huayu Li

Reservoir numerical simulation is crucial for advancing research and development in petroleum engineering. Obtaining high-precision spatial and temporal simulation results requires a great deal of time and computational resources. Parallel computing addresses this problem by distributing computational workloads and memory requirements across multiple processors, enabling large-scale, high-fidelity simulations while reducing time costs. In this paper, we review existing parallel computing approaches for large-scale reservoir numerical simulation, based on a systematic review of literature published between 1990 and 2024. Using the PRISMA guideline, 134 supporting studies were selected for detailed extraction. The key contributions of this paper are threefold: (1) classification and analysis of numerical methods (including discretization methods, nonlinear methods, and linear iterative solvers and preconditioners); (2) an in-depth discussion of parallel techniques in high-performance computing (HPC), such as parallel programming models, load balancing, communication optimization, and GPU acceleration; and (3) an outline of software implementations, particularly solvers and reservoir simulators. In conclusion, developing efficient, robust, and scalable linear solving tools is key to reservoir simulation. We compare available preconditioner options and summarise the current state of the art in linear solving tools. Meanwhile, CPU and GPU parallel acceleration techniques have developed rapidly. These emphases provide a theoretical foundation and practical guidance for optimizing linear solution processes in the future.
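At the heart of such simulators sits a sparse linear solve at every time step. A minimal sketch of one preconditioned Krylov method from the family the review discusses, conjugate gradients with a Jacobi (diagonal) preconditioner, is shown below, applied to a 1D Laplacian as a stand-in for a reservoir pressure system; the grid size, tolerance, and test matrix are illustrative assumptions.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    """Conjugate gradients with a Jacobi preconditioner M = diag(A)."""
    M_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate search direction
        rz = rz_new
    return x, max_iter

# 1D discrete Laplacian as a toy stand-in for a reservoir pressure system
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = jacobi_pcg(A, b)
```

Production codes swap the diagonal preconditioner for stronger options (ILU, AMG, CPR) and distribute the matrix-vector products across processes or GPUs, but the iteration structure is the same.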

Archives of Computational Methods in Engineering, Vol. 32, No. 7, pp. 4125–4162.
Citations: 0
A Comparative Study of Existing and New Sphere Clump Generation Algorithms for Modeling Arbitrary Shaped Particles
IF 12.1 · CAS Zone 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-03-10 · DOI: 10.1007/s11831-025-10256-1
Hadi Fathipour-Azar, Jérôme Duriez

This paper presents a comparative analysis of multiple algorithms for generating sphere clumps as approximations of irregularly shaped particles in granular systems. The investigated algorithms include four previously used techniques and two new ones developed in this study. They are built on common concepts such as distance transforms, filling and packing techniques, and the particle medial surface. The two new algorithms proposed herein arrange individual spheres into a clump shape using either greedy volume coverage or a clustering approach based on the k-means machine learning technique. The performance of the various algorithms is evaluated in terms of both the number of spheres generated per clump and the volume error. The evaluation is conducted on diverse superquadric shapes serving as ground-truth references, as well as on real rock pieces. Among the considered clump generators, results show that existing algorithms may output dispersed results depending on user parameters that are difficult to calibrate, while both proposed algorithms generate realistic sphere clumps, with the volume coverage approach being more convenient than the k-means-based one. Indeed, the volume coverage technique proves the most effective of the studied algorithms in terms of sphere generation and volume precision.
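The k-means-based idea can be sketched compactly: sample interior points of the target shape, cluster them, and wrap each cluster in a sphere. This is a simplified illustration, not the paper's implementation; the ellipsoid target, sample count, and radius rule (cover the farthest cluster member) are assumptions.

```python
import numpy as np

def kmeans(points, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm; returns final centres and point labels."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def clump_from_shape(points, k=8):
    """Approximate a particle (given as interior sample points) by k spheres:
    k-means centres become sphere centres, radii cover each cluster."""
    centers, labels = kmeans(points, k)
    radii = np.array([
        np.linalg.norm(points[labels == j] - centers[j], axis=1).max()
        if np.any(labels == j) else 0.0
        for j in range(k)])
    return centers, radii

# interior samples of an ellipsoid with semi-axes (2, 1, 0.5) as the target shape
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (4000, 3)) * [2, 1, 0.5]
pts = pts[(pts[:, 0] / 2) ** 2 + pts[:, 1] ** 2 + (pts[:, 2] / 0.5) ** 2 <= 1.0]
centers, radii = clump_from_shape(pts, k=8)
```

By construction every sampled point lies inside at least one sphere; the trade-off the paper evaluates is how much extra volume the spheres add outside the true shape versus how many spheres are spent.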

Archives of Computational Methods in Engineering, Vol. 32, No. 7, pp. 4033–4048.
Citations: 0
Clinical Application of Finite Element Analysis in Meniscus Diseases: A Comprehensive Review
IF 12.1 · CAS Zone 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-03-10 · DOI: 10.1007/s11831-025-10265-0
Jiangbo Zhang, Bingpeng Chen, Bo Chen, Hao Wang, Qing Han, Xiongfeng Tang, Yanguo Qin

In recent years, finite element analysis has advanced significantly in the clinical study of meniscus diseases. As a numerical simulation technique, finite element analysis provides accurate biomechanical information for diagnosing and treating orthopedic conditions. Compared to traditional methods, it is more efficient, convenient, and economical, generating precise data to validate models, guide designs, and optimize clinical protocols. However, there is currently a lack of reviews investigating the application of finite element analysis in meniscal studies. This review addresses that gap by examining current research and practice. It begins by discussing the biomechanical value of finite element analysis in meniscal anatomy and disease. To thoroughly evaluate its application in meniscus tear injuries, congenital meniscus abnormalities, and the development of artificial meniscus implants, we explore various research directions from a medical perspective: bionic design, treatment strategy comparison, modeling optimization, prognostic prediction, damage process simulation, damage state analysis, and investigation of specific movements. The findings indicate that while finite element analysis shows substantial promise in meniscal research and treatment, challenges remain in establishing standardized experimental protocols and achieving clinical translation. Finally, the paper explores potential directions that may advance the application of finite element analysis in the medical field.

Archives of Computational Methods in Engineering, Vol. 32, No. 7, pp. 4163–4195. Open access PDF: https://link.springer.com/content/pdf/10.1007/s11831-025-10265-0.pdf
Citations: 0
Generalized Matrix Learning Vector Quantization Computational Method for Intelligent Decision Making: A Systematic Literature Review
IF 12.1 · CAS Zone 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-03-10 · DOI: 10.1007/s11831-025-10267-y
Fredrick Mumali, Joanna Kałkowska

The increasing complexity and uncertainty of data across domains continue to drive demand for more robust, efficient, and accurate computational methods, including machine learning algorithms for pattern recognition and classification problems. Kohonen's Learning Vector Quantization algorithms have been integral to classification for decades. Variants such as Generalized Matrix Learning Vector Quantization have recently emerged as highly promising computational models for analyzing complex patterns in high-dimensional and noisy datasets with improved performance. This systematic literature review therefore comprehensively examines recent studies on Generalized Matrix Learning Vector Quantization algorithms, focusing on algorithmic enhancements and variations, inherent features such as feature relevance and metric learning, application domains, and performance. Using the Denyer and Tranfield five-stage systematic literature review method, 61 studies published between 2015 and 2024 were selected for analysis from Scopus, Web of Science, IEEE, and Springer. The findings reveal significant advancements and applications of Generalized Matrix Learning Vector Quantization across healthcare, bioinformatics, and agriculture. The analyzed empirical studies highlight the algorithm's adaptability to various classification problems and its enhanced performance. While the cross-disciplinary potential of Generalized Matrix Learning Vector Quantization is well documented, the review identifies gaps in the literature, particularly in the manufacturing domain. Given the rapid advances in manufacturing and the voluminous amounts of data generated, Generalized Matrix Learning Vector Quantization holds great potential for advancing intelligent decision-making across that domain, such as in the selection and management of manufacturing processes.
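The adaptive-metric idea at the core of GMLVQ is a squared distance d(x, w) = (x − w)ᵀ ΩᵀΩ (x − w) whose matrix Ω is learned alongside the prototypes by descending the standard GLVQ cost (d⁺ − d⁻)/(d⁺ + d⁻). The sketch below follows that textbook formulation, not any specific reviewed variant; the learning rates, normalisation of Ω, and the toy two-feature dataset are assumptions.

```python
import numpy as np

def gmlvq_train(X, y, n_epochs=30, lr_w=0.05, lr_o=0.01, seed=0):
    """One prototype per class; learn prototypes W and metric matrix Omega by
    stochastic gradient descent on the GLVQ cost (d+ - d-)/(d+ + d-)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    W = np.array([X[y == c].mean(axis=0) for c in classes], dtype=float)
    Wc = classes.copy()
    Omega = np.eye(X.shape[1])
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            x, c = X[i], y[i]
            u = x - W
            d = ((u @ Omega.T) ** 2).sum(axis=1)            # adaptive squared distances
            same = Wc == c
            jp = np.flatnonzero(same)[d[same].argmin()]      # closest correct prototype
            jm = np.flatnonzero(~same)[d[~same].argmin()]    # closest wrong prototype
            denom = (d[jp] + d[jm]) ** 2 + 1e-12
            gp, gm = 2 * d[jm] / denom, -2 * d[jp] / denom   # cost derivatives w.r.t. d+, d-
            Lam = Omega.T @ Omega
            W[jp] += lr_w * gp * 2 * Lam @ (x - W[jp])       # pull correct prototype toward x
            W[jm] += lr_w * gm * 2 * Lam @ (x - W[jm])       # gm < 0: push wrong one away
            up, um = x - W[jp], x - W[jm]
            Omega -= lr_o * 2 * (gp * Omega @ np.outer(up, up)
                                 + gm * Omega @ np.outer(um, um))
            Omega /= np.sqrt((Omega ** 2).sum())             # keep trace(Lambda) = 1
    return W, Wc, Omega

def gmlvq_predict(X, W, Wc, Omega):
    d = (((X[:, None] - W[None]) @ Omega.T) ** 2).sum(axis=2)
    return Wc[d.argmin(axis=1)]

# toy data: feature 0 separates the classes, feature 1 is pure noise
rng = np.random.default_rng(42)
n = 200
X0 = np.column_stack([rng.normal(-1, 0.3, n), rng.normal(0, 1.0, n)])
X1 = np.column_stack([rng.normal(+1, 0.3, n), rng.normal(0, 1.0, n)])
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)
W, Wc, Omega = gmlvq_train(X, y)
acc = (gmlvq_predict(X, W, Wc, Omega) == y).mean()
```

Because Λ = ΩᵀΩ is learned, the relevance the model assigns to each feature (and pairs of features) can be read off its entries, which is the interpretability property the review highlights.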

Archives of Computational Methods in Engineering, Vol. 32, No. 6, pp. 3885–3907.
Citations: 0
A Review of the Application of Machine Learning for Pipeline Integrity Predictive Analysis in Water Distribution Networks
IF 12.1 · CAS Zone 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-03-08 · DOI: 10.1007/s11831-025-10251-6
Runfei Chen, Qiuping Wang, Ahad Javanmardi

Water Distribution Networks (WDNs), as critical urban infrastructures, face heightened vulnerability to damage and failure due to aging systems and external factors such as environmental changes, operational demands, and urban development pressures. Accurate predictive integrity assessment of pipeline systems is crucial for implementing proactive maintenance strategies that prevent catastrophic failures and ensure service reliability. In recent decades, Machine Learning (ML) has emerged as a promising technique for processing and extracting the complex interactions between influencing factors and failure trends within WDN systems. This article systematically reviews application scenarios, critical factors influencing WDN integrity, and the modeling and analysis of ML-based predictive models for WDNs. The review analyzes pertinent literature from the past two decades, up to 2024, using the PRISMA procedure and the snowballing method. The findings highlight the superior capabilities of specific ML models, such as tree-based algorithms, artificial neural networks, support vector machines, and other recent deep learning methods, in predicting network failures and enhancing system health diagnostics. Key challenges identified include: (i) insufficient standardization in variable selection, model selection, and evaluation; (ii) limited data availability due to inconsistent historical failure records; (iii) a lack of systematic feature engineering pipelines for data preprocessing; and (iv) constraints on real-world generalization across finer temporal scales and different geographical regions.
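The supervised failure-risk formulation underlying these models can be sketched with a deliberately simple learner. The logistic-regression scorer below is far simpler than the tree-based, neural, and SVM families the review covers, and the pipe features (age, diameter, operating pressure) and the synthetic failure rule are assumptions made only to keep the example self-contained.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, n_epochs=500):
    """Plain batch gradient-descent logistic regression (failure-risk scorer)."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient of the log-loss
    return w

def predict_risk(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# synthetic pipe records: [age_years, diameter_mm, operating_pressure_bar]
rng = np.random.default_rng(0)
n = 500
age = rng.uniform(0, 80, n)
diam = rng.uniform(80, 600, n)
press = rng.uniform(2, 10, n)
# assumed ground truth: older, smaller-diameter, higher-pressure pipes fail more
logit = 0.06 * age - 0.005 * diam + 0.3 * press - 2.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
Xs = np.column_stack([age, diam, press])
Xs = (Xs - Xs.mean(0)) / Xs.std(0)               # standardise features
w = train_logistic(Xs, y)
```

A real study would replace the synthetic records with historical break data and the linear scorer with the stronger model families the review compares, but the train/score split and the feature-engineering step are the same.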

A Review of the Application of Machine Learning for Pipeline Integrity Predictive Analysis in Water Distribution Networks. Archives of Computational Methods in Engineering, 32(6): 3821 - 3849. DOI: 10.1007/s11831-025-10251-6
Citations: 0
A State-of-the-Art Review on Model Reduction and Substructuring Techniques in Finite Element Model Updating for Structural Health Monitoring Applications
IF 12.1, CAS Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS. Pub Date: 2025-03-05. DOI: 10.1007/s11831-025-10231-w
Partha Sengupta, Subrata Chakraborty

The model reduction technique (MRT) is an integral part of the finite element model updating (FEMU) approach to address the issue of measurement incompleteness. It condenses a finite element (FE) model so that it matches the responses available at a limited number of degrees of freedom. Developments in MRTs and substructure coupling for structural health monitoring (SHM) applications have been extensive. MRTs are discussed in passing in review articles on FEMU, but no article is dedicated explicitly to MRTs in SHM applications. A dedicated review is therefore likely to consolidate the state of the art of MRTs in FEMU for SHM applications. This review article synthesises the growing literature on variants of MRTs in the time and frequency domains. The fundamentals of MRT are presented first, along with the salient modifications to the basic MRTs that ease computational effort, their implementation, and related developments. The developments of various substructure coupling techniques used to reduce the order of large FE models are then presented, and the authors' recently proposed improved MRTs are briefly described. Finally, the prospects and challenges in MRT and substructuring techniques are critically discussed. Overall, the review reveals that developments in MRTs are gaining importance due to their excellent capability of handling incomplete measurements, indicating the relevance of revisiting the subject from time to time to capture the latest developments.
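A concrete instance of the basic MRT described above is Guyan (static) condensation: the stiffness matrix is partitioned into retained "master" and condensed "slave" degrees of freedom, and the slaves are eliminated through a static transformation. The toy stiffness matrix and DOF partition below are illustrative only, not drawn from the article.

```python
import numpy as np

# Toy stiffness of a grounded 4-DOF spring chain (unit stiffnesses)
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
m = [0, 3]   # master DOFs (e.g. measured locations)
s = [1, 2]   # slave DOFs to be condensed out

Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]

# Static transformation: full vector (masters first) = T @ x_master
T = np.vstack([np.eye(len(m)), -np.linalg.solve(Kss, Ksm)])
K_red = T.T @ K[np.ix_(m + s, m + s)] @ T   # reduced (condensed) stiffness

print(K_red)
```

The same transformation reduces the mass matrix as `T.T @ M @ T`, which is the starting point for the improved dynamic reduction schemes the review covers.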

Archives of Computational Methods in Engineering, 32(5): 3031 - 3062.
Citations: 0
Application of Deep Learning for Single Cell Multi-Omics: A State-of-the-Art Review
IF 12.1, CAS Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS. Pub Date: 2025-03-01. DOI: 10.1007/s11831-025-10230-x
Shahid Ahmad Wani, Sumeer Ahmad Khan, SMK Quadri

From its inception in 2009 to being named Method of the Year in 2013, single-cell sequencing technology has shown tremendous potential for studying omics profiles at an unprecedented resolution. Advances in single-cell technology have led to multi-omics techniques that can profile more than one modality from a single cell simultaneously, providing a wealth of information that can be used to study cell states and functions, and ultimately disease and health. Multi-omics profiling has also driven a significant increase in the volume of single-cell data. These data are heterogeneous and therefore challenging to analyse, and several computational methods have been proposed to extract insights from single-cell multi-omics data. A comprehensive review of these methods is an important step towards the growth of the field of single-cell analysis. Here we provide an in-depth survey of deep learning methods for single-cell applications. We give a brief history of sequencing technologies, with a timeline depicting the evolution of the profiling techniques developed over time, and identify the deep learning techniques that have been employed for single-cell applications. The paper surveys deep-learning-based methods for downstream applications such as imputation, batch effect (BE) removal, single-cell integration, and more, and identifies the challenges and open issues associated with each application that are critical to address. This review will serve as a source of knowledge for new researchers beginning to build computational methods that tackle the challenges faced by the field.
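One downstream task mentioned in the abstract, dropout imputation, can be sketched as diffusion over a cell-cell k-nearest-neighbour graph, in the spirit of diffusion-based imputation methods. The synthetic data, neighbourhood size, and number of diffusion steps below are assumptions for illustration, not the article's method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
true_expr = rng.gamma(2.0, 2.0, size=(200, 50))   # latent expression, 200 cells x 50 genes
dropout = rng.random(true_expr.shape) < 0.3        # zero-inflation (dropout) mask
X = np.where(dropout, 0.0, true_expr)              # observed, dropout-corrupted matrix

# Build a kNN graph over cells and row-normalise it into a Markov matrix
k = 15
_, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
A = np.zeros((X.shape[0], X.shape[0]))
A[np.repeat(np.arange(X.shape[0]), k), idx.ravel()] = 1.0
P = A / A.sum(axis=1, keepdims=True)

# Diffuse expression along the graph for a few steps to fill in zeros
X_imputed = np.linalg.matrix_power(P, 3) @ X

# Error on the dropped-out entries, before and after imputation
err_raw = np.abs(X - true_expr)[dropout].mean()
err_imp = np.abs(X_imputed - true_expr)[dropout].mean()
print(err_imp < err_raw)
```

Deep learning imputation models replace the fixed diffusion operator with a learned encoder-decoder, but the goal, recovering expression lost to dropout, is the same.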

Archives of Computational Methods in Engineering, 32(5): 2987 - 3029.
Citations: 0
Recent Progress of Digital Reconstruction in Polycrystalline Materials
IF 12.1, CAS Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS. Pub Date: 2025-03-01. DOI: 10.1007/s11831-025-10245-4
Bingbing Chen, Dongfeng Li, Peter Davies, Richard Johnston, Xiangyun Ge, Chenfeng Li

This study comprehensively reviews recent advances in the digital reconstruction of polycrystalline materials. Digital reconstruction serves as both a representative volume element for multiscale modelling and a source of quantitative data for microstructure characterisation. Three main types of digital reconstruction in polycrystalline materials exist: (i) experimental reconstruction, which links processing-structure-properties-performance by reconstructing actual polycrystalline microstructures using destructive or non-destructive methods; (ii) physics-based models, which replicate evolutionary processes to establish processing-structure linkages, including cellular automata, Monte Carlo, vertex/front tracking, level set, machine learning, and phase field methods; and (iii) geometry-based models, which create ensembles of statistically equivalent polycrystalline microstructures for structure-properties-performance linkages, using simplistic morphology, Voronoi tessellation, ellipsoid packing, texture synthesis, high-order, reduced-order, and machine learning methods. This work reviews the key features, procedures, advantages, and limitations of these methods, with a particular focus on their application in constructing processing-structure-properties-performance linkages. Finally, it summarises the conclusions, challenges, and future directions for digital reconstruction in polycrystalline materials within the framework of computational materials engineering.
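The Voronoi tessellation named among the geometry-based models can be sketched in a few lines: random seed points act as grain nuclei, and each pixel of a grid is assigned to its nearest seed, yielding a grain-ID map. The grid size and grain count below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
n_grains, size = 20, 128
seeds = rng.uniform(0, size, (n_grains, 2))    # grain nuclei in a 2-D domain

# Assign every pixel to its nearest seed (the Voronoi cell it falls in)
yy, xx = np.mgrid[0:size, 0:size]
pixels = np.column_stack([xx.ravel(), yy.ravel()])
_, grain_id = cKDTree(seeds).query(pixels)
micro = grain_id.reshape(size, size)           # synthetic grain-ID map

print(len(np.unique(micro)))                   # grains realised on the grid
```

Ellipsoid packing, texture synthesis, and the learning-based generators reviewed above refine this baseline by matching measured grain-size and orientation statistics rather than sampling nuclei uniformly.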

Archives of Computational Methods in Engineering, 32(6): 3447 - 3498. Open access PDF: https://link.springer.com/content/pdf/10.1007/s11831-025-10245-4.pdf
Citations: 0