Mohammed Chekroun, Youssef Mourchid, Igor Bessières, Alain Lalande
The advent of the 0.35 T MR-Linac (MRIdian, ViewRay) system in radiation therapy allows precise tumor targeting for moving lesions. However, the lack of an automatic volume segmentation function in the MR-Linac’s treatment planning system poses a challenge. In this paper, we propose a deep-learning-based multiorgan segmentation approach for the thoracic region, using EfficientNet as the backbone for the network architecture. The objectives of this approach include accurate segmentation of critical organs, such as the left and right lungs, the heart, the spinal cord, and the esophagus, essential for minimizing radiation toxicity during external radiation therapy. Our proposed approach, when evaluated on an internal dataset comprising 81 patients, demonstrated superior performance compared to other state-of-the-art methods. Specifically, the results for our approach with a 2.5D strategy were as follows: a Dice similarity coefficient (DSC) of 0.820 ± 0.041, an intersection over union (IoU) of 0.725 ± 0.052, and a 3D Hausdorff distance (HD) of 10.353 ± 4.974 mm. Notably, the 2.5D strategy surpassed the 2D strategy in all three metrics, exhibiting higher DSC and IoU values, as well as lower HD values. This improvement suggests that the 2.5D strategy holds promise for more precise and accurate segmentations than the conventional 2D strategy. Our work has practical implications for improving treatment planning precision, aligning with the evolution of medical imaging and innovative strategies for multiorgan segmentation tasks.
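For reference, the overlap metrics reported above can be computed from binary masks as follows. This is an illustrative numpy sketch, not the authors' implementation; the Hausdorff distance is omitted, and non-empty masks are assumed.

```python
import numpy as np

def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks
    (assumes at least one mask is non-empty)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union
```

In a 2.5D setting these metrics would typically be evaluated on the full reconstructed 3D masks, slice predictions having been stacked back into a volume.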
Deep Learning Based on EfficientNet for Multiorgan Segmentation of Thoracic Structures on a 0.35 T MR-Linac Radiation Therapy System. Algorithms, 2023-12-12. doi:10.3390/a16120564
Clara Freijo, Joaquin L. Herraiz, F. Arias-Valcayo, Paula Ibáñez, Gabriela Moreno, A. Villa-Abaunza, José Manuel Udías
Chest X-rays (CXRs) represent the first tool globally employed to detect cardiopulmonary pathologies. These acquisitions are highly affected by scattered photons due to the large field of view required. Scatter in CXRs introduces background in the images, which reduces their contrast. We developed three deep-learning-based models to estimate and correct scatter contribution to CXRs. We used a Monte Carlo (MC) ray-tracing model to simulate CXRs from human models obtained from CT scans using different configurations (depending on the availability of dual-energy acquisitions). The simulated CXRs contained the separated contribution of direct and scattered X-rays in the detector. These simulated datasets were then used as the reference for the supervised training of several NNs. Three NN models (single and dual energy) were trained with the MultiResUNet architecture. The performance of the NN models was evaluated on CXRs obtained, with an MC code, from chest CT scans of patients affected by COVID-19. The results show that the NN models were able to estimate and correct the scatter contribution to CXRs with an error of <5%, being robust to variations in the simulation setup and improving contrast in soft tissue. The single-energy model was tested on real CXRs, providing robust estimations of the scatter-corrected CXRs.
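The correction step itself is a subtraction of the estimated scatter map from the measured image; a minimal numpy sketch of that step and of a relative-error check in the spirit of the <5% figure (illustrative only, not the authors' pipeline):

```python
import numpy as np

def scatter_correct(total: np.ndarray, scatter_est: np.ndarray) -> np.ndarray:
    """Subtract the estimated scatter map from the measured image,
    clipping negative intensities to zero."""
    return np.clip(total - scatter_est, 0.0, None)

def mean_relative_error(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Relative error of a scatter estimate against a reference (e.g. MC) map."""
    return float(np.abs(estimate - reference).sum() / np.abs(reference).sum())
```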
Robustness of Single- and Dual-Energy Deep-Learning-Based Scatter Correction Models on Simulated and Real Chest X-rays. Algorithms, 2023-12-12. doi:10.3390/a16120565
This paper proposes a novel prediction model termed the social and spatial attentive generative adversarial network (SSA-GAN). The SSA-GAN framework utilizes a generative approach, where the generator employs social attention mechanisms to accurately model social interactions among pedestrians. Unlike previous methodologies, our model utilizes comprehensive motion features as query vectors, significantly enhancing predictive performance. Additionally, spatial attention is integrated to encapsulate the interactions between pedestrians and their spatial context through semantic spatial features. Moreover, we present a novel approach for generating simulated multi-trajectory datasets using the CARLA simulator. This method circumvents the limitations inherent in existing public datasets such as UCY and ETH, particularly when evaluating multi-trajectory metrics. Our experimental findings substantiate the efficacy of the proposed SSA-GAN model in capturing the nuances of pedestrian interactions and providing accurate multimodal trajectory predictions.
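The social attention mechanism is not specified in detail here; as a hedged sketch, a generic scaled dot-product attention, with motion features playing the role of queries and neighboring pedestrians providing keys and values, might look like this (all names and shapes are assumptions):

```python
import numpy as np

def social_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray):
    """Generic scaled dot-product attention: weights = softmax(Q K^T / sqrt(d)),
    output = weights @ V. Q: (n_queries, d), K: (n_agents, d), V: (n_agents, dv)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w
```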
Predicting Pedestrian Trajectories with Deep Adversarial Networks Considering Motion and Spatial Information, by Liming Lao, Dangkui Du, Pengzhan Chen. Algorithms, 2023-12-12. doi:10.3390/a16120566
Imbalanced data present a pervasive challenge in many real-world applications of statistical and machine learning, where the instances of one class significantly outnumber those of the other. This paper examines the impact of class imbalance on the performance of Gaussian mixture models in classification tasks and establishes the need for a strategy to reduce the adverse effects of imbalanced data on the accuracy and reliability of classification outcomes. We explore various strategies to address this problem, including cost-sensitive learning, threshold adjustments, and sampling-based techniques. Through extensive experiments on synthetic and real-world datasets, we evaluate the effectiveness of these methods. Our findings emphasize the need for effective mitigation strategies for class imbalance in supervised Gaussian mixtures, offering valuable insights for practitioners and researchers in improving classification outcomes.
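Threshold (equivalently, prior) adjustment is the simplest of the listed remedies. A minimal sketch with one-dimensional, single-Gaussian class-conditional densities (the paper works with full Gaussian mixtures; this is illustrative only):

```python
import numpy as np

def gaussian_logpdf(x, mu, var):
    """Log-density of a univariate Gaussian."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def classify_with_priors(x, class_params, priors):
    """Bayes classification with explicit class priors:
    argmax_k log p(x | k) + log pi_k. Raising a class's prior enlarges
    its decision region, which counteracts imbalance for a minority class."""
    logpost = np.stack([gaussian_logpdf(x, mu, var) + np.log(p)
                        for (mu, var), p in zip(class_params, priors)])
    return logpost.argmax(axis=0)
```

With equal priors, a point near the majority mode is assigned to the majority class; inflating the minority prior shifts the decision threshold so the same point can flip to the minority class.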
On the Influence of Data Imbalance on Supervised Gaussian Mixture Models, by Luca Scrucca. Algorithms, 2023-12-11. doi:10.3390/a16120563
Matěj Vrtal, R. Fujdiak, Jan Benedikt, P. Praks, R. Briš, Michal Ptacek, Petr Toman
This paper presents a time-dependent reliability analysis created for a critical energy infrastructure use case, which consists of an interconnected urban power grid and a communication network. By utilizing expert knowledge from the energy and communication sectors and integrating the renewal theory of multi-component systems, a representative reliability model of this interconnected energy infrastructure, based on a real network located in the Czech Republic, is established. This model assumes repairable and non-repairable components and captures the topology of the interconnected infrastructure and the reliability characteristics of both the power grid and the communication network. Moreover, a time-dependent reliability assessment of the interconnected system is provided. One of the significant outputs of this research is the identification of the critical components of the interconnected network and their interdependencies via a directed acyclic graph. Numerical results indicate that the original design has an unacceptably large unavailability. Thus, to improve the reliability of the interconnected system, a slightly modified design is proposed, in which only a limited number of components are changed to keep the additional costs of the improved design limited. Numerical results show reduced unavailability for the improved interconnected system compared with the initial reliability design. The proposed unavailability exploration strategy is general and can bring a valuable reliability improvement in the power and communication sectors.
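The renewal-theory building block behind such models is the steady-state unavailability of a repairable component, combined over the network topology. A minimal sketch of that building block (the paper's analysis is time-dependent and graph-based, which this does not reproduce):

```python
def steady_state_unavailability(lam: float, mu: float) -> float:
    """Steady-state unavailability of a repairable component modeled as an
    alternating renewal process: U = MTTR / (MTTF + MTTR) = lam / (lam + mu),
    with constant failure rate lam and repair rate mu."""
    return lam / (lam + mu)

def series_unavailability(unavailabilities) -> float:
    """A series system is available only when every component is available,
    so its unavailability is 1 - prod(1 - U_i) under independence."""
    avail = 1.0
    for u in unavailabilities:
        avail *= 1.0 - u
    return 1.0 - avail
```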
Time-Dependent Unavailability Exploration of Interconnected Urban Power Grid and Communication Network. Algorithms, 2023-12-10. doi:10.3390/a16120561
Mohamad Abou Ali, F. Dornaika, Ignacio Arganda-Carreras
Artificial intelligence (AI) has emerged as a cutting-edge tool, simultaneously accelerating, securing, and enhancing the diagnosis and treatment of patients. An exemplification of this capability is evident in the analysis of peripheral blood smears (PBS). In university medical centers, hematologists routinely examine hundreds of PBS slides daily to validate or correct outcomes produced by advanced hematology analyzers assessing samples from potentially problematic patients. This process may logically lead to erroneous PBC readings, posing risks to patient health. AI functions as a transformative tool, significantly improving the accuracy and precision of readings and diagnoses. This study reshapes the parameters of blood cell classification, harnessing the capabilities of AI and broadening the scope from 5 to 11 specific blood cell categories with the challenging 11-class PBC dataset. This transformation facilitates a more profound exploration of blood cell diversity, surpassing prior constraints in medical image analysis. Our approach combines state-of-the-art deep learning techniques, including pre-trained ConvNets, ViTb16 models, and custom CNN architectures. We employ transfer learning, fine-tuning, and ensemble strategies, such as CBAM and Averaging ensembles, to achieve unprecedented accuracy and interpretability. Our fully fine-tuned EfficientNetV2 B0 model sets a new standard, with a macro-average precision, recall, and F1-score of 91%, 90%, and 90%, respectively, and an average accuracy of 93%. This breakthrough underscores the transformative potential of 11-class blood cell classification for more precise medical diagnoses. Moreover, our groundbreaking “Naturalize” augmentation technique produces remarkable results. The 2K-PBC dataset generated with “Naturalize” boasts a macro-average precision, recall, and F1-score of 97%, along with an average accuracy of 96% when leveraging the fully fine-tuned EfficientNetV2 B0 model. 
This innovation not only elevates classification performance but also addresses data scarcity and bias in medical deep learning. Our research marks a paradigm shift in blood cell classification, enabling more nuanced and insightful medical analyses. The “Naturalize” technique’s impact extends beyond blood cell classification, emphasizing the vital role of diverse and comprehensive datasets in advancing healthcare applications through deep learning.
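The macro-averaged metrics quoted above are unweighted means of the per-class scores, so every one of the 11 classes counts equally regardless of its size. A short numpy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def macro_prf(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    """Macro-averaged precision, recall, and F1: compute each metric per
    class from TP/FP/FN counts, then take the unweighted mean over classes."""
    ps, rs, fs = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        ps.append(p)
        rs.append(r)
        fs.append(f)
    return np.mean(ps), np.mean(rs), np.mean(fs)
```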
Blood Cell Revolution: Unveiling 11 Distinct Types with ‘Naturalize’ Augmentation. Algorithms, 2023-12-10. doi:10.3390/a16120562
Stereo 3D object detection remains a crucial challenge within the realm of 3D vision. In the pursuit of enhancing stereo 3D object detection, feature fusion has emerged as a potent strategy. However, the design of the feature fusion module and the determination of pivotal features in this fusion process remain critical. This paper proposes a novel feature attention module tailored for stereo 3D object detection. Serving as a pivotal element for feature fusion, this module not only discerns feature importance but also facilitates informed enhancements based on its conclusions. This study delved into the various facets aided by the feature attention module. Firstly, an interpretability analysis was conducted concerning the function of the image segmentation methods. Secondly, we explored the augmentation of the feature fusion module through a category reweighting strategy. Lastly, we investigated global feature fusion methods and model compression strategies. The models devised through our proposed design underwent an effective analysis, yielding commendable performance, especially in small object detection within the pedestrian category.
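The paper's feature attention module is not specified here; as a hedged illustration of the general idea of weighting feature importance, a CBAM-style channel-attention gate can be sketched as follows (all names, shapes, and the choice of CBAM-style gating are assumptions, not the authors' design):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat: np.ndarray, W1: np.ndarray, W2: np.ndarray):
    """Simplified CBAM-style channel attention: average- and max-pool over the
    spatial axes, pass both through a shared bottleneck MLP, and gate each
    channel with a sigmoid weight in (0, 1).
    feat: (C, H, W); W1: (C//r, C); W2: (C, C//r) for reduction ratio r."""
    avg = feat.mean(axis=(1, 2))                       # (C,)
    mx = feat.max(axis=(1, 2))                         # (C,)
    hidden = np.maximum(W1 @ avg, 0.0) + np.maximum(W1 @ mx, 0.0)
    gate = sigmoid(W2 @ hidden)                        # per-channel weight
    return gate[:, None, None] * feat                  # reweighted features
```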
Stereo 3D Object Detection Using a Feature Attention Module, by Kexin Zhao, Rui Jiang, Jun He. Algorithms, 2023-12-07. doi:10.3390/a16120560
Two-Derivative Runge–Kutta methods were proposed by Chan and Tsai in 2010, who gave order conditions up to the fifth order. In this work, for the first time, we derive order conditions for order six. Simplifying assumptions that reduce the number of order conditions are also given. The procedure for constructing sixth-order methods is presented. A specific method is derived in order to illustrate the procedure; this method is of the sixth algebraic order with a reduced phase-lag and amplification error. For numerical comparison, five well-known test problems have been solved using a seventh-order Two-Derivative Runge–Kutta method developed by Chan and Tsai and several Runge–Kutta methods of orders 6 and 8. Diagrams of the maximum absolute error vs. computation time show the efficiency of the new method.
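For context, a two-derivative Runge–Kutta step uses both y' = f(y) and the second derivative y'' = g(y) = f'(y) f(y). Below is a sketch of one explicit two-stage step whose coefficients reproduce the fourth-order Taylor expansion on a linear test problem; the coefficients are an illustrative assumption in the Chan–Tsai framework, not the sixth-order method derived in the paper.

```python
import numpy as np

def tdrk4_step(f, g, y, h):
    """One explicit two-stage two-derivative Runge-Kutta step.
    The user supplies both f(y) = y' and g(y) = y'' = f'(y) f(y).
    Stage: Y2 = y + (h/2) f(y) + (h^2/8) g(y);
    update: y + h f(y) + (h^2/6) [g(y) + 2 g(Y2)]."""
    y2 = y + 0.5 * h * f(y) + 0.125 * h ** 2 * g(y)
    return y + h * f(y) + (h ** 2 / 6.0) * (g(y) + 2.0 * g(y2))
```

On y' = y (so g(y) = y as well), one step multiplies y by 1 + h + h²/2 + h³/6 + h⁴/24, the fourth-order Taylor polynomial of e^h, which is what makes the scheme fourth-order on this problem with only two g-evaluations per step.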
Construction of Two-Derivative Runge–Kutta Methods of Order Six, by Z. Kalogiratou, T. Monovasilis. Algorithms, 2023-12-06. doi:10.3390/a16120558
Quadratic unconstrained binary optimization (QUBO) is a classic NP-hard problem with an enormous number of applications. Local search strategy (LSS) is one of the most fundamental algorithmic concepts and has been successfully applied to a wide range of hard combinatorial optimization problems. One LSS that has gained the attention of researchers is the r-flip (also known as r-Opt) strategy. Given a binary solution with n variables, the r-flip strategy “flips” r binary variables to obtain a new solution if the changes improve the objective function. The main purpose of this paper is to develop several results for the implementation of r-flip moves in QUBO, including a necessary and sufficient condition that when a 1-flip search reaches local optimality, the number of candidates for implementation of the r-flip moves can be reduced significantly. The results of the substantial computational experiments are reported to compare an r-flip strategy-embedded algorithm and a multiple start tabu search algorithm on a set of benchmark instances and three very-large-scale QUBO instances. The r-flip strategy implemented within the algorithm makes the algorithm very efficient, leading to very high-quality solutions within a short CPU time.
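The closed-form move evaluation is easiest to see in the 1-flip case: for the objective x^T Q x, flipping x_i changes the objective by (1 − 2x_i)(Q_ii + Σ_{j≠i}(Q_ij + Q_ji) x_j), and these gains can be maintained incrementally in O(n) per accepted flip. A numpy sketch of a greedy 1-flip local search (illustrative; it omits the paper's r-flip generalization and the tabu search):

```python
import numpy as np

def one_flip_local_search(Q, x0, tol=1e-12):
    """Greedy 1-flip local search for max x^T Q x over binary x.
    gain[i] holds the objective change from flipping x[i]; after each
    accepted flip, all gains are updated incrementally in O(n) instead
    of being recomputed from scratch in O(n^2)."""
    Q = np.asarray(Q, dtype=float)
    S = Q + Q.T                                   # S[i, j] = Q[i, j] + Q[j, i]
    x = np.asarray(x0, dtype=float).copy()
    # gain[i] = (1 - 2 x_i) * (Q_ii + sum_{j != i} S_ij x_j)
    gain = (1 - 2 * x) * (np.diag(Q) + S @ x - np.diag(S) * x)
    while True:
        i = int(np.argmax(gain))
        if gain[i] <= tol:                        # 1-flip local optimum reached
            break
        delta = 1 - 2 * x[i]                      # +1 or -1
        x[i] += delta                             # accept the flip
        old_gain_i = gain[i]
        gain += (1 - 2 * x) * S[:, i] * delta     # O(n) incremental update
        gain[i] = -old_gain_i                     # flipping back undoes the move
    return x.astype(int), float(x @ Q @ x)
```

At termination no single flip improves the objective, which is exactly the 1-flip local optimality condition the paper's candidate-reduction result builds on.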
{"title":"An Efficient Closed-Form Formula for Evaluating r-Flip Moves in Quadratic Unconstrained Binary Optimization","authors":"B. Alidaee, Haibo Wang, L. Sua","doi":"10.3390/a16120557","DOIUrl":"https://doi.org/10.3390/a16120557","url":null,"abstract":"Quadratic unconstrained binary optimization (QUBO) is a classic NP-hard problem with an enormous number of applications. Local search strategy (LSS) is one of the most fundamental algorithmic concepts and has been successfully applied to a wide range of hard combinatorial optimization problems. One LSS that has gained the attention of researchers is the r-flip (also known as r-Opt) strategy. Given a binary solution with n variables, the r-flip strategy “flips” r binary variables to obtain a new solution if the changes improve the objective function. The main purpose of this paper is to develop several results for the implementation of r-flip moves in QUBO, including a necessary and sufficient condition that when a 1-flip search reaches local optimality, the number of candidates for implementation of the r-flip moves can be reduced significantly. The results of the substantial computational experiments are reported to compare an r-flip strategy-embedded algorithm and a multiple start tabu search algorithm on a set of benchmark instances and three very-large-scale QUBO instances. 
The r-flip strategy implemented within the algorithm makes the algorithm very efficient, leading to very high-quality solutions within a short CPU time.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"121 11","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138599511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
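The paper's closed-form formula for evaluating general r-flip moves is not reproduced in the abstract; as a minimal sketch of the underlying idea, the following shows the standard closed-form gain for a 1-flip move on f(x) = xᵀQx and a greedy 1-flip local search built on it (function names and the maximization convention are illustrative, not taken from the paper):

```python
import numpy as np

def one_flip_gain(Q, x, i):
    """Closed-form change in f(x) = x^T Q x when bit i is flipped.

    For binary x, flipping x_i changes f by
    (1 - 2*x_i) * (Q_ii + sum_{j != i} (Q_ij + Q_ji) * x_j),
    so the gain is evaluated in O(n) without recomputing f.
    """
    # Q[i,:] @ x + Q[:,i] @ x includes the j == i term twice; remove it.
    cross = Q[i, :] @ x + Q[:, i] @ x - 2 * Q[i, i] * x[i]
    return (1 - 2 * x[i]) * (Q[i, i] + cross)

def one_flip_local_search(Q, x):
    """Greedy 1-flip ascent: flip any improving bit until a local optimum."""
    x = x.copy()
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            if one_flip_gain(Q, x, i) > 0:
                x[i] ^= 1  # accept the improving flip
                improved = True
    return x
```

At a 1-flip local optimum every gain is non-positive, which is exactly the state in which the paper's condition prunes the candidate set for larger r-flip moves.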
A. K. Alzahrani, A. Alsheikhy, T. Shawly, Ahmed Azzahrani, Y. Said
Blood cancer arises from abnormal changes in white blood cells (WBCs); this disease is known as leukemia. Leukemia occurs mostly in children and affects their tissues or plasma, although it can also occur in adults. The disease can be fatal if it is discovered and diagnosed late. In addition, leukemia can arise from genetic mutations. Early detection is therefore essential to save a patient’s life. Recently, researchers have developed various methods to detect leukemia using different technologies. Deep learning approaches (DLAs) have been widely utilized because of their high accuracy, but some of these methods are time-consuming and costly. Thus, a practical solution with low cost and higher accuracy is needed. This article proposes a novel segmentation and classification framework to detect and categorize leukemia using a deep learning structure. The proposed system comprises two main parts: a deep learning network that performs segmentation and feature extraction, and a classifier that operates on the segmented regions. A new UNET architecture is developed to provide the segmentation and feature extraction processes. Experiments were performed on four datasets to evaluate the model using several performance metrics, including precision, recall, F-score, and Dice Similarity Coefficient (DSC). It achieved an average accuracy of 97.82% for segmentation and categorization, and an F-score of 98.64%. These results indicate that the presented method is a powerful technique for detecting leukemia and categorizing it into suitable groups, and that the model outperforms several existing methods. The proposed system can assist healthcare providers in their services.
{"title":"A Novel Deep Learning Segmentation and Classification Framework for Leukemia Diagnosis","authors":"A. K. Alzahrani, A. Alsheikhy, T. Shawly, Ahmed Azzahrani, Y. Said","doi":"10.3390/a16120556","DOIUrl":"https://doi.org/10.3390/a16120556","url":null,"abstract":"Blood cancer occurs due to changes in white blood cells (WBCs). These changes are known as leukemia. Leukemia occurs mostly in children and affects their tissues or plasma. However, it could occur in adults. This disease becomes fatal and causes death if it is discovered and diagnosed late. In addition, leukemia can occur from genetic mutations. Therefore, there is a need to detect it early to save a patient’s life. Recently, researchers have developed various methods to detect leukemia using different technologies. Deep learning approaches (DLAs) have been widely utilized because of their high accuracy. However, some of these methods are time-consuming and costly. Thus, a need for a practical solution with low cost and higher accuracy is required. This article proposes a novel segmentation and classification framework model to discover and categorize leukemia using a deep learning structure. The proposed system encompasses two main parts, which are a deep learning technology to perform segmentation and characteristic extraction and classification on the segmented section. A new UNET architecture is developed to provide the segmentation and feature extraction processes. Various experiments were performed on four datasets to evaluate the model using numerous performance factors, including precision, recall, F-score, and Dice Similarity Coefficient (DSC). It achieved an average 97.82% accuracy for segmentation and categorization. In addition, 98.64% was achieved for F-score. The obtained results indicate that the presented method is a powerful technique for discovering leukemia and categorizing it into suitable groups. Furthermore, the model outperforms some of the implemented methods. 
The proposed system can assist healthcare providers in their services.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":"125 40","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138599217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
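The evaluation metrics cited in this abstract (precision, recall, F-score, and the Dice Similarity Coefficient) have standard definitions over binary masks; a minimal NumPy sketch follows (the function names and epsilon smoothing are illustrative conventions, not taken from the paper's implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient: 2*|P ∩ T| / (|P| + |T|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def precision_recall_f1(pred, target, eps=1e-7):
    """Precision, recall, and F-score from pixel-wise TP/FP/FN counts."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f1
```

For binary masks, the Dice coefficient coincides with the F-score computed from the same TP/FP/FN counts, which is why both are commonly reported together in segmentation papers.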