
Latest Publications in Algorithms

Analysis of a Two-Step Gradient Method with Two Momentum Parameters for Strongly Convex Unconstrained Optimization
Pub Date : 2024-03-18 DOI: 10.3390/a17030126
G. Krivovichev, Valentina Yu. Sergeeva
The paper is devoted to the theoretical and numerical analysis of a two-step method, constructed as a modification of Polyak’s heavy ball method with the inclusion of an additional momentum parameter. For the quadratic case, convergence conditions are obtained with the use of the first Lyapunov method. For the non-quadratic case, conditions are obtained for sufficiently smooth, strongly convex functions that guarantee local convergence. An approach to finding optimal parameter values, based on the solution of a constrained optimization problem, is proposed. The effect of the additional parameter on the convergence rate is analyzed. With the use of an ordinary differential equation equivalent to the method, the damping effect of this parameter on the oscillations typical of the non-monotonic convergence of the heavy ball method is demonstrated. In numerical examples on non-quadratic convex and non-convex test functions and machine learning problems (regularized smoothed elastic net regression, logistic regression, and recurrent neural network training), the positive influence of the additional parameter on the convergence process is demonstrated.
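A minimal sketch of a heavy-ball-style iteration with two momentum parameters, run on a strongly convex quadratic. The update form, step size, and momentum values below are illustrative assumptions, not the paper's derived optimal coefficients:

```python
# Gradient descent with two momentum parameters (hedged sketch): beta is the
# usual heavy-ball momentum; gamma additionally weights the previous
# displacement. All parameter values here are illustrative assumptions.

def two_step_momentum(grad, x0, alpha, beta, gamma, iters=200):
    # x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1}) + gamma*(x_{k-1} - x_{k-2})
    x_prev2 = x_prev = x = x0
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev) + gamma * (x_prev - x_prev2)
        x_prev2, x_prev, x = x_prev, x, x_next
    return x

# strongly convex quadratic f(x) = 2*x^2, gradient 4*x, minimizer at 0
x_min = two_step_momentum(lambda x: 4.0 * x, x0=10.0, alpha=0.2, beta=0.3, gamma=0.05)
```

For these (assumed) parameter values the three-term recurrence is contractive, so the iterates approach the minimizer at 0.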
Citations: 0
GDUI: Guided Diffusion Model for Unlabeled Images
Pub Date : 2024-03-18 DOI: 10.3390/a17030125
Xuanyuan Xie, Jieyu Zhao
The diffusion model has made progress in the field of image synthesis, especially in the area of conditional image synthesis. However, this improvement is highly dependent on large annotated datasets. To tackle this challenge, we present the Guided Diffusion model for Unlabeled Images (GDUI) framework in this article. It utilizes the inherent feature similarity and semantic differences in the data, as well as the downstream transferability of Contrastive Language-Image Pretraining (CLIP), to guide the diffusion model in generating high-quality images. We design two semantic-aware algorithms, namely, the pseudo-label-matching algorithm and label-matching refinement algorithm, to match the clustering results with the true semantic information and provide more accurate guidance for the diffusion model. First, GDUI encodes the image into a semantically meaningful latent vector through clustering. Then, pseudo-label matching is used to complete the matching of the true semantic information of the image. Finally, the label-matching refinement algorithm is used to adjust the irrelevant semantic information in the data, thereby improving the quality of the guided diffusion model image generation. Our experiments on labeled datasets show that GDUI outperforms diffusion models without any guidance and significantly reduces the gap between it and models guided by ground-truth labels.
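A minimal sketch of the pseudo-label-matching idea: cluster assignments are matched to semantic labels using a handful of externally suggested labels. The majority-vote matching rule and the toy data are illustrative assumptions; GDUI itself derives its guidance from CLIP similarities and adds a refinement step.

```python
from collections import Counter, defaultdict

# Toy pseudo-label matching: each cluster adopts the label most often
# suggested for its members by an external guide (here, a hand-made dict;
# in GDUI the guidance would come from CLIP). Majority voting is an
# illustrative simplification of the paper's matching algorithms.

def match_pseudo_labels(cluster_ids, anchor_labels):
    # anchor_labels maps sample index -> suggested semantic label
    votes = defaultdict(Counter)
    for idx, label in anchor_labels.items():
        votes[cluster_ids[idx]][label] += 1
    # each cluster takes its majority label; clusters with no votes stay unmatched
    return {c: counts.most_common(1)[0][0] for c, counts in votes.items()}

clusters = [0, 0, 1, 1, 1, 2]          # clustering of six images
anchors = {0: "cat", 1: "cat", 2: "dog", 4: "dog"}
mapping = match_pseudo_labels(clusters, anchors)
```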
Citations: 0
Exploring Virtual Environments to Assess the Quality of Public Spaces
Pub Date : 2024-03-16 DOI: 10.3390/a17030124
R. Belaroussi, Elie Issa, Leonardo Cameli, C. Lantieri, Sonia Adelé
Human impression plays a crucial role in effectively designing infrastructures that support active mobility such as walking and cycling. By involving users early in the design process, valuable insights can be gathered before physical environments are constructed. This proactive approach enhances the attractiveness and safety of designed spaces for users. This study conducts an experiment comparing real street observations with immersive virtual reality (VR) visits to evaluate user perceptions and assess the quality of public spaces. For this experiment, a high-resolution 3D city model of a large-scale neighborhood was created, utilizing Building Information Modeling (BIM) and Geographic Information System (GIS) data. The model incorporated dynamic elements representing various urban environments: a public area with a tramway station, a commercial street with a road, and a residential playground with green spaces. Participants were presented with identical views of existing urban scenes, both in reality and through reconstructed 3D scenes using a Head-Mounted Display (HMD). They were asked questions related to the quality of the streetscape, its walkability, and cyclability. From the questionnaire, algorithms for assessing public spaces were computed, namely Sustainable Mobility Indicators (SUMI) and Pedestrian Level of Service (PLOS). The study quantifies the relevance of these indicators in a VR setup and correlates them with critical factors influencing the experience of using and spending time on a street. This research contributes to understanding the suitability of these algorithms in a VR environment for predicting the quality of future spaces before occupancy.
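As a rough illustration of how questionnaire answers can be turned into an indicator score, the sketch below rescales Likert responses and aggregates them with weights. The 1-5 answer scale and the weights are assumptions for illustration; they are not the published SUMI or PLOS formulas.

```python
# Hedged sketch of turning Likert-scale questionnaire answers into a 0-100
# indicator score, in the spirit of SUMI/PLOS aggregation. The 1-5 scale and
# the weights are illustrative assumptions, not either indicator's formula.

def indicator_score(answers, weights):
    """answers: Likert responses on 1-5; weights: same length, summing to 1."""
    assert len(answers) == len(weights)
    # rescale each answer from [1, 5] to [0, 100], then take a weighted mean
    return sum(w * (a - 1) / 4 * 100 for a, w in zip(answers, weights))

# e.g. three walkability questions with unequal importance
walkability = indicator_score([4, 5, 3], [0.5, 0.3, 0.2])
```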
Citations: 0
An Efficient Third-Order Scheme Based on Runge–Kutta and Taylor Series Expansion for Solving Initial Value Problems
Pub Date : 2024-03-16 DOI: 10.3390/a17030123
Noori Y. Abdul-Hassan, Zainab J. Kadum, Ali Hasan Ali
In this paper, we propose a new numerical scheme based on a variation of the standard formulation of the Runge–Kutta method using Taylor series expansion for solving initial value problems (IVPs) in ordinary differential equations. Analytically, the accuracy, consistency, and absolute stability of the new method are discussed. It is established that the new method is consistent and stable and has third-order convergence. Numerically, we present two models involving applications from physics and engineering to illustrate the efficiency and accuracy of our new method and compare it with further pertinent techniques carried out in the same order.
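For context, a classical third-order Runge–Kutta scheme (Kutta's method) applied to a simple IVP looks as follows; the paper's scheme is a different Taylor-series-based variant of the same order, so this is only a baseline sketch.

```python
import math

# Classical third-order Runge-Kutta scheme (Kutta's method) for y' = f(t, y).
# The paper derives a different Taylor-series-based scheme of the same order;
# this classical method is shown only as a third-order baseline.

def rk3_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

def solve_ivp_rk3(f, t0, y0, t_end, n):
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# y' = y, y(0) = 1 has exact solution e^t; the global error is O(h^3)
approx = solve_ivp_rk3(lambda t, y: y, 0.0, 1.0, 1.0, 100)
```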
Citations: 0
Highly Imbalanced Classification of Gout Using Data Resampling and Ensemble Method
Pub Date : 2024-03-15 DOI: 10.3390/a17030122
Xiaonan Si, Lei Wang, Wenchang Xu, Biao Wang, Wenbo Cheng
Gout is one of the most painful diseases in the world. Accurate classification of gout is crucial for diagnosis and treatment, which can potentially save lives. However, current methods for classifying gout periods have demonstrated poor performance and have received little attention. This is due to a significant data imbalance problem that affects the learning attention for the majority and minority classes. To overcome this problem, a resampling method called ENaNSMOTE-Tomek link is proposed. It uses extended natural neighbors to generate samples that fall within the minority class and then applies the Tomek link technique to eliminate instances that contribute to noise. The model combines the ensemble 'bagging' technique with the proposed resampling technique to improve the quality of generated samples. The performance of individual classifiers and hybrid models is evaluated on an imbalanced gout dataset taken from the electronic medical records of a hospital. The classification results demonstrate that the proposed strategy is more accurate than some existing imbalanced gout diagnosis techniques, with an accuracy of 80.87% and an AUC of 87.10%. This indicates that the proposed algorithm can alleviate the problems caused by imbalanced gout data and help experts better diagnose their patients.
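A minimal 1-D sketch of the two resampling ingredients: SMOTE-style oversampling (interpolating between minority neighbors) followed by Tomek-link cleaning (removing the majority member of mutual nearest-neighbor pairs with opposite labels). The paper's ENaNSMOTE uses extended natural neighbors; plain k-nearest neighbors, 1-D features, and the toy data below are simplifying assumptions.

```python
import random

# SMOTE oversampling + Tomek-link cleaning, reduced to one feature dimension.
# This is an illustrative simplification, not the paper's ENaNSMOTE-Tomek.

def smote(minority, n_new, k=2, rng=random.Random(0)):
    """Generate n_new synthetic minority samples by interpolating neighbors."""
    new = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest minority neighbors of a (excluding a itself)
        neigh = sorted((x for x in minority if x is not a),
                       key=lambda x: abs(x - a))[:k]
        b = rng.choice(neigh)
        new.append(a + rng.random() * (b - a))  # a point between a and b
    return new

def remove_tomek_links(samples):
    """samples: list of (value, label); drop the majority-class member of each
    Tomek link (mutual nearest neighbors with opposite labels)."""
    def nearest(i):
        return min((j for j in range(len(samples)) if j != i),
                   key=lambda j: abs(samples[j][0] - samples[i][0]))
    counts = {}
    for _, y in samples:
        counts[y] = counts.get(y, 0) + 1
    drop = set()
    for i, (_, yi) in enumerate(samples):
        j = nearest(i)
        yj = samples[j][1]
        if yi != yj and nearest(j) == i:          # Tomek link found
            drop.add(i if counts[yi] > counts[yj] else j)
    return [s for idx, s in enumerate(samples) if idx not in drop]

data = [(0.0, 0), (0.1, 0), (0.2, 0), (0.9, 0), (1.0, 1), (1.1, 1)]
cleaned = remove_tomek_links(data)        # the borderline (0.9, 0) is removed
synthetic = smote([1.0, 1.1, 1.3], n_new=4)
```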
Citations: 0
Modeling of Some Classes of Extended Oscillators: Simulations, Algorithms, Generating Chaos, and Open Problems
Pub Date : 2024-03-15 DOI: 10.3390/a17030121
N. Kyurkchiev, Tsvetelin S. Zaevski, A. Iliev, V. Kyurkchiev, A. Rahnev
In this article, we propose some extended oscillator models. Various experiments are performed. The models are studied using the Melnikov approach. We show some integral units for researching the behavior of these hypothetical oscillators. These will be implemented as add-on sections of a main web-based application for research computations. One of the main goals of the study is to share the difficulties that researchers (who are not necessarily professional mathematicians) encounter in using contemporary computer algebra systems (CASs) for scientific research to examine in detail the dynamics of modifications of classical and newer models emerging in the literature (for large values of the models' parameters). The present article is a natural continuation of the research direction indicated and discussed in our previous investigations. One possible application of the Melnikov function in modeling a radiating antenna diagram is also discussed. Some probability-based constructions are also presented. We hope that some of these notes will be reflected in upcoming registered rectifications of the CAS. The aim of studying the design realization (scheme, manufacture, output, etc.) of the explored differential models can be viewed as not yet being met.
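For readers unfamiliar with the Melnikov approach, the sketch below numerically evaluates a Melnikov integral for the classical damped, driven Duffing oscillator, not for the paper's extended models. Simple zeros of the Melnikov function signal the splitting of stable and unstable manifolds and hence the possible onset of chaos.

```python
import math

# Melnikov integral for the Duffing oscillator
#   x'' - x + x^3 = eps * (gamma*cos(omega*t) - delta*x'),
# evaluated along the homoclinic orbit of the unperturbed system, where
# x0'(t) = -sqrt(2)*sech(t)*tanh(t):
#   M(t0) = ∫ x0'(t) * [gamma*cos(omega*(t + t0)) - delta*x0'(t)] dt.
# This classical example is an illustration, not the paper's models.

def melnikov(t0, gamma, delta, omega, T=20.0, n=4000):
    sech = lambda t: 1.0 / math.cosh(t)
    dx0 = lambda t: -math.sqrt(2.0) * sech(t) * math.tanh(t)
    h = 2 * T / n
    total = 0.0
    for i in range(n + 1):              # trapezoidal rule on [-T, T]
        t = -T + i * h
        f = dx0(t) * (gamma * math.cos(omega * (t + t0)) - delta * dx0(t))
        total += f * (0.5 if i in (0, n) else 1.0)
    return total * h

# with no forcing (gamma = 0) the integral reduces to -delta * 4/3,
# since ∫ 2*sech(t)^2*tanh(t)^2 dt = 4/3 over the real line
m = melnikov(0.0, gamma=0.0, delta=1.0, omega=1.0)
```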
Citations: 0
A Preprocessing Method for Coronary Artery Stenosis Detection Based on Deep Learning
Pub Date : 2024-03-13 DOI: 10.3390/a17030119
Yanjun Li, Takaaki Yoshimura, Yuto Horima, Hiroyuki Sugimori
The detection of coronary artery stenosis is one of the most important indicators for the diagnosis of coronary artery disease. However, stenosis in branch vessels is often difficult for computer-aided systems, and even radiologists, to detect because of several factors, such as imaging angle and contrast agent inhomogeneity. Traditional coronary artery stenosis localization algorithms often detect only aortic stenosis and ignore branch vessels that may also pose major health threats. Therefore, improving the localization of branch vessel stenosis in coronary angiographic images is a promising direction for development. In this study, we propose a preprocessing approach that combines vessel enhancement and image fusion as a prerequisite for deep learning. The sensitivity of the neural network to stenosis features is improved by enhancing the blurry features in coronary angiographic images. By validating five neural networks, such as YOLOv4 and R-FCN-Inceptionresnetv2, we show that the proposed method can improve the performance of deep learning networks on images from six common imaging angles. The results show that the proposed method is suitable as a preprocessing step for deep-learning-based coronary angiographic image processing and can be used to improve the recognition ability of the deep model for fine vessel stenosis.
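A toy sketch of the "enhance, then fuse" preprocessing idea: sharpen a grayscale image with unsharp masking, then fuse the enhanced image with the original by weighted averaging. The 3x3 box blur, the unsharp-mask amount, and the fusion weight are illustrative assumptions, not the paper's actual vessel-enhancement pipeline.

```python
# Unsharp-mask enhancement followed by weighted-average fusion, on an image
# represented as nested lists. All specifics here are illustrative assumptions.

def box_blur(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [img[x][y]
                     for x in range(max(0, i - 1), min(h, i + 2))
                     for y in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(patch) / len(patch)   # mean over the 3x3 window
    return out

def enhance_and_fuse(img, amount=1.0, fuse_weight=0.6):
    blur = box_blur(img)
    h, w = len(img), len(img[0])
    # unsharp mask: original + amount * (original - blurred)
    sharp = [[img[i][j] + amount * (img[i][j] - blur[i][j]) for j in range(w)]
             for i in range(h)]
    # fusion: weighted average of the enhanced and original images
    return [[fuse_weight * sharp[i][j] + (1 - fuse_weight) * img[i][j]
             for j in range(w)] for i in range(h)]
```

On a flat image the pipeline is the identity; on an edge or bright spot the fused result has higher local contrast than the input.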
Citations: 0
Efficient Estimation of Generative Models Using Tukey Depth
Pub Date : 2024-03-13 DOI: 10.3390/a17030120
Minh-Quan Vo, Thu Nguyen, M. Riegler, Hugo L. Hammer
Generative models have recently received a lot of attention. However, a challenge with such models is that it is usually not possible to compute the likelihood function, which makes parameter estimation or training challenging. The most commonly used alternative strategy is likelihood-free estimation, based on finding values of the model parameters such that a set of selected statistics have similar values in the dataset and in samples generated from the model. A challenge, however, is how to select statistics that are efficient in estimating the unknown parameters. The most commonly used statistics are the mean vector, variances, and correlations between variables, but these may be less relevant for estimating the unknown parameters. We suggest utilizing Tukey depth contours (TDCs) as statistics in likelihood-free estimation. TDCs are highly flexible and can capture almost any property of multivariate data; in addition, they appear as yet unexplored for likelihood-free estimation. We demonstrate that TDC statistics are able to estimate the unknown parameters more efficiently than the mean, variance, and correlation in likelihood-free estimation. We further apply TDC statistics to estimate the properties of requests to a computer system, demonstrating their real-life applicability. The suggested method is able to efficiently find the unknown parameters of the request distribution and quantify the estimation uncertainty.
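For intuition, the one-dimensional Tukey (halfspace) depth of a point t relative to a sample is min(#{x <= t}, #{x >= t}) / n, and a depth contour collects all points whose depth reaches a threshold. The paper uses multivariate contours as summary statistics; the 1-D case below is only the simplest illustration.

```python
# One-dimensional Tukey (halfspace) depth: points near the "center" of the
# sample have high depth, points near the edges have low depth. The
# multivariate version used in the paper minimizes over all halfspaces.

def tukey_depth_1d(t, sample):
    n = len(sample)
    left = sum(1 for x in sample if x <= t)
    right = sum(1 for x in sample if x >= t)
    return min(left, right) / n

sample = [1.0, 2.0, 3.0, 4.0, 5.0]
d_median = tukey_depth_1d(3.0, sample)   # deepest point of the sample
d_edge = tukey_depth_1d(1.0, sample)     # boundary point, low depth
```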
Citations: 0
Active Data Selection and Information Seeking
Pub Date : 2024-03-12 DOI: 10.3390/a17030118
Thomas Parr, K. Friston, P. Zeidman
Bayesian inference typically focuses upon two issues. The first is estimating the parameters of some model from data, and the second is quantifying the evidence for alternative hypotheses—formulated as alternative models. This paper focuses upon a third issue. Our interest is in the selection of data—either through sampling subsets of data from a large dataset or through optimising experimental design—based upon the models we have of how those data are generated. Optimising data-selection ensures we can achieve good inference with fewer data, saving on computational and experimental costs. This paper aims to unpack the principles of active sampling of data by drawing from neurobiological research on animal exploration and from the theory of optimal experimental design. We offer an overview of the salient points from these fields and illustrate their application in simple toy examples, ranging from function approximation with basis sets to inference about processes that evolve over time. Finally, we consider how this approach to data selection could be applied to the design of (Bayes-adaptive) clinical trials.
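One classical instance of the active data selection this abstract surveys is Bayesian optimal experimental design: pick the next measurement point that maximizes the expected information gain about the model parameters. The sketch below does this for Bayesian linear regression with a known noise level; the basis `phi`, candidate grid, and parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bayesian linear regression y = w·phi(x) + noise with a Gaussian
# prior on w; all names and values here are illustrative.
sigma2 = 0.25  # known noise variance

def phi(x):
    return np.array([1.0, x, x ** 2])  # hypothetical basis set

post_mean = np.zeros(3)
post_cov = np.eye(3)  # prior covariance, updated in place
prior_trace = np.trace(post_cov)

def info_gain(x, cov):
    # Expected reduction in posterior entropy from observing y at x.
    f = phi(x)
    return 0.5 * np.log1p(f @ cov @ f / sigma2)

candidates = np.linspace(-2, 2, 41)
true_w = np.array([0.5, -1.0, 0.3])
chosen = []

for _ in range(5):
    # Actively select the most informative design point ...
    x_star = max(candidates, key=lambda x: info_gain(x, post_cov))
    chosen.append(x_star)
    y = true_w @ phi(x_star) + rng.normal(0.0, np.sqrt(sigma2))
    # ... then apply the conjugate (rank-one) Bayesian update.
    f = phi(x_star)
    gain = post_cov @ f / (sigma2 + f @ post_cov @ f)
    post_mean = post_mean + gain * (y - f @ post_mean)
    post_cov = post_cov - np.outer(gain, f @ post_cov)

print(chosen, post_mean)
```

Each iteration shrinks the posterior covariance the most where uncertainty about the fit is largest, so fewer observations are needed than with passive (e.g., uniform) sampling.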
Citations: 0
Field Programmable Gate Array-Based Acceleration Algorithm Design for Dynamic Star Map Parallel Computing
Pub Date : 2024-03-12 DOI: 10.3390/a17030117
Bo Cui, Lingyun Wang, Guangxi Li, Xian Ren
The dynamic star simulator is a commonly used ground-test calibration device for star sensors. To address the slow calculation speed, low integration, and high power consumption of the traditional star chart simulation method, this paper designs an FPGA-based star chart display algorithm for a dynamic star simulator. The design adopts the USB 2.0 protocol to obtain the attitude data, uses the SDRAM to cache the attitude data and video stream, extracts the effective navigation star points by searching the starry sky equidistant right ascension and declination partitions, and realizes the pipelined displaying of the star map by using the parallel computing capability of the FPGA. Test results show that under the conditions of chart field of view of Φ20° and simulated magnitude of 2.0∼6.0 Mv, the longest time for calculating a chart is 72 μs under the clock of 148.5 MHz, which effectively improves the chart display speed of the dynamic star simulator.
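The navigation-star extraction step the abstract mentions can be sketched in software (the pipelined display itself is FPGA hardware): index a star catalog by equidistant RA/Dec partitions once, then scan only the partitions that can overlap the current field of view. The mini-catalog, zone size, and function names below are hypothetical.

```python
import math
from collections import defaultdict

# Hypothetical mini-catalog of (ra_deg, dec_deg, magnitude) entries.
catalog = [(10.2, 5.1, 3.2), (10.8, 4.7, 5.9), (180.0, -30.0, 2.1),
           (200.5, 45.0, 4.4), (11.5, 6.0, 6.5), (9.8, 5.5, 1.8)]

ZONE = 5.0  # equidistant partition size in degrees

def zone_of(ra, dec):
    return (int(ra // ZONE), int((dec + 90.0) // ZONE))

# Index stars by RA/Dec partition once, at startup.
index = defaultdict(list)
for star in catalog:
    index[zone_of(star[0], star[1])].append(star)

def select_navigation_stars(ra0, dec0, half_fov, mag_limit):
    """Return catalog stars inside the field of view, scanning only
    the RA/Dec partitions that can overlap it (angular separation by
    the spherical law of cosines)."""
    out = []
    n = int(math.ceil(half_fov / ZONE))
    zr, zd = zone_of(ra0, dec0)
    for i in range(zr - n, zr + n + 1):
        for j in range(zd - n, zd + n + 1):
            # RA zones wrap around at 360 degrees; Dec zones do not.
            for ra, dec, mag in index.get((i % int(360 / ZONE), j), []):
                cosd = (math.sin(math.radians(dec0)) * math.sin(math.radians(dec))
                        + math.cos(math.radians(dec0)) * math.cos(math.radians(dec))
                        * math.cos(math.radians(ra - ra0)))
                sep = math.degrees(math.acos(max(-1.0, min(1.0, cosd))))
                if mag <= mag_limit and sep <= half_fov:
                    out.append((ra, dec, mag))
    return out

# Φ20° field of view (half-angle 10°), magnitude limit 6.0 Mv.
print(select_navigation_stars(10.0, 5.0, 10.0, 6.0))
```

The payoff of the partition index is that each frame touches only a handful of zones instead of the full catalog, which is what makes a fixed, pipelined lookup feasible in hardware.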
Citations: 0