"Analysis of a Two-Step Gradient Method with Two Momentum Parameters for Strongly Convex Unconstrained Optimization" by G. Krivovichev and Valentina Yu. Sergeeva. Algorithms, 2024-03-18. DOI: 10.3390/a17030126.
The paper is devoted to the theoretical and numerical analysis of a two-step method, constructed as a modification of Polyak’s heavy ball method with the inclusion of an additional momentum parameter. For the quadratic case, convergence conditions are obtained with the use of the first Lyapunov method. For the non-quadratic case, convergence conditions are obtained for sufficiently smooth strongly convex functions, and these conditions guarantee local convergence. An approach to finding optimal parameter values based on the solution of a constrained optimization problem is proposed. The effect of the additional parameter on the convergence rate is analyzed. With the use of an ordinary differential equation equivalent to the method, the damping effect of this parameter on the oscillations typical of the non-monotonic convergence of the heavy ball method is demonstrated. In numerical examples for non-quadratic convex and non-convex test functions and machine learning problems (regularized smoothed elastic net regression, logistic regression, and recurrent neural network training), the positive influence of the additional parameter on the convergence process is demonstrated.
"GDUI: Guided Diffusion Model for Unlabeled Images" by Xuanyuan Xie and Jieyu Zhao. Algorithms, 2024-03-18. DOI: 10.3390/a17030125.
The diffusion model has made progress in the field of image synthesis, especially conditional image synthesis. However, this improvement is highly dependent on large annotated datasets. To tackle this challenge, we present the Guided Diffusion model for Unlabeled Images (GDUI) framework. It utilizes the inherent feature similarity and semantic differences in the data, as well as the downstream transferability of Contrastive Language-Image Pretraining (CLIP), to guide the diffusion model in generating high-quality images. We design two semantic-aware algorithms, the pseudo-label-matching algorithm and the label-matching refinement algorithm, to match the clustering results with the true semantic information and provide more accurate guidance for the diffusion model. First, GDUI encodes the image into a semantically meaningful latent vector through clustering. Then, pseudo-label matching is used to complete the matching of the true semantic information of the image. Finally, the label-matching refinement algorithm is used to adjust the irrelevant semantic information in the data, thereby improving the quality of the guided diffusion model's image generation. Our experiments on labeled datasets show that GDUI outperforms diffusion models without any guidance and significantly narrows the gap to models guided by ground-truth labels.
"Exploring Virtual Environments to Assess the Quality of Public Spaces" by R. Belaroussi, Elie Issa, Leonardo Cameli, C. Lantieri, and Sonia Adelé. Algorithms, 2024-03-16. DOI: 10.3390/a17030124.
Human impression plays a crucial role in effectively designing infrastructures that support active mobility such as walking and cycling. By involving users early in the design process, valuable insights can be gathered before physical environments are constructed. This proactive approach enhances the attractiveness and safety of designed spaces for users. This study conducts an experiment comparing real street observations with immersive virtual reality (VR) visits to evaluate user perceptions and assess the quality of public spaces. For this experiment, a high-resolution 3D city model of a large-scale neighborhood was created, utilizing Building Information Modeling (BIM) and Geographic Information System (GIS) data. The model incorporated dynamic elements representing various urban environments: a public area with a tramway station, a commercial street with a road, and a residential playground with green spaces. Participants were presented with identical views of existing urban scenes, both in reality and through reconstructed 3D scenes using a Head-Mounted Display (HMD). They were asked questions related to the quality of the streetscape, its walkability, and cyclability. From the questionnaire, algorithms for assessing public spaces were computed, namely Sustainable Mobility Indicators (SUMI) and Pedestrian Level of Service (PLOS). The study quantifies the relevance of these indicators in a VR setup and correlates them with critical factors influencing the experience of using and spending time on a street. This research contributes to understanding the suitability of these algorithms in a VR environment for predicting the quality of future spaces before occupancy.
"An Efficient Third-Order Scheme Based on Runge–Kutta and Taylor Series Expansion for Solving Initial Value Problems" by Noori Y. Abdul-Hassan, Zainab J. Kadum, and Ali Hasan Ali. Algorithms, 2024-03-16. DOI: 10.3390/a17030123.
In this paper, we propose a new numerical scheme based on a variation of the standard formulation of the Runge–Kutta method using Taylor series expansion for solving initial value problems (IVPs) in ordinary differential equations. Analytically, the accuracy, consistency, and absolute stability of the new method are discussed. It is established that the new method is consistent and stable and has third-order convergence. Numerically, we present two models involving applications from physics and engineering to illustrate the efficiency and accuracy of our new method and compare it with further pertinent techniques carried out in the same order.
"Highly Imbalanced Classification of Gout Using Data Resampling and Ensemble Method" by Xiaonan Si, Lei Wang, Wenchang Xu, Biao Wang, and Wenbo Cheng. Algorithms, 2024-03-15. DOI: 10.3390/a17030122.
Gout is one of the most painful diseases in the world. Accurate classification of gout is crucial for diagnosis and treatment, which can potentially save lives. However, the current methods for classifying gout periods have demonstrated poor performance and have received little attention. This is due to a significant data imbalance problem that skews learning attention toward the majority class at the expense of the minority class. To overcome this problem, a resampling method called ENaNSMOTE-Tomek link is proposed. It uses extended natural neighbors to generate samples that fall within the minority class and then applies the Tomek link technique to eliminate instances that contribute to noise. The model combines the ensemble ‘bagging’ technique with the proposed resampling technique to improve the quality of the generated samples. The performance of individual classifiers and hybrid models is evaluated on an imbalanced gout dataset taken from the electronic medical records of a hospital. The classification results demonstrate that the proposed strategy is more accurate than some existing imbalanced gout diagnosis techniques, with an accuracy of 80.87% and an AUC of 87.10%. This indicates that the proposed algorithm can alleviate the problems caused by imbalanced gout data and help experts better diagnose their patients.
"Modeling of Some Classes of Extended Oscillators: Simulations, Algorithms, Generating Chaos, and Open Problems" by N. Kyurkchiev, Tsvetelin S. Zaevski, A. Iliev, V. Kyurkchiev, and A. Rahnev. Algorithms, 2024-03-15. DOI: 10.3390/a17030121.
In this article, we propose some extended oscillator models and perform various experiments on them. The models are studied using the Melnikov approach. We present some integral units for investigating the behavior of these hypothetical oscillators; these will be implemented as add-on sections of a main web-based application for scientific computations. One of the main goals of the study is to share the difficulties that researchers (who are not necessarily professional mathematicians) encounter when using contemporary computer algebra systems (CASs) to examine in detail the dynamics of modifications of classical and newer models emerging in the literature, particularly for large values of the model parameters. The present article is a natural continuation of the research direction indicated and discussed in our previous investigations. One possible application of the Melnikov function to modeling the radiation pattern of an antenna is also discussed, and some probability-based constructions are presented. We hope that some of these notes will be reflected in upcoming revisions of the CASs. The design realization (scheme, manufacture, output, etc.) of the explored differential models remains an open problem.
"A Preprocessing Method for Coronary Artery Stenosis Detection Based on Deep Learning" by Yanjun Li, Takaaki Yoshimura, Yuto Horima, and Hiroyuki Sugimori. Algorithms, 2024-03-13. DOI: 10.3390/a17030119.
The detection of coronary artery stenosis is one of the most important indicators for the diagnosis of coronary artery disease. However, stenosis in branch vessels is often difficult for computer-aided systems, and even radiologists, to detect because of several factors, such as imaging angle and contrast agent inhomogeneity. Traditional coronary artery stenosis localization algorithms often detect only aortic stenosis and ignore branch vessels that may also pose major health threats. Improving the localization of branch vessel stenosis in coronary angiographic images is therefore a promising direction for development. In this study, we propose a preprocessing approach that combines vessel enhancement and image fusion as a prerequisite for deep learning. The sensitivity of the neural network to stenosis features is improved by enhancing the blurry features in coronary angiographic images. By validating five neural networks, such as YOLOv4 and R-FCN-Inceptionresnetv2, we show that the proposed method can improve the performance of deep learning networks on images from six common imaging angles. The results show that the proposed method is suitable as a preprocessing step for deep-learning-based coronary angiographic image processing and can improve the recognition ability of deep models for fine vessel stenosis.
"Efficient Estimation of Generative Models Using Tukey Depth" by Minh-Quan Vo, Thu Nguyen, M. Riegler, and Hugo L. Hammer. Algorithms, 2024-03-13. DOI: 10.3390/a17030120.
Generative models have recently received a lot of attention. However, a challenge with such models is that it is usually not possible to compute the likelihood function, which makes parameter estimation or training of the models challenging. The most commonly used alternative strategy is called likelihood-free estimation and is based on finding values of the model parameters such that a set of selected statistics have similar values in the dataset and in samples generated from the model. However, a challenge is how to select statistics that are efficient in estimating unknown parameters. The most commonly used statistics are the mean vector, variances, and correlations between variables, but they may be less relevant in estimating the unknown parameters. We suggest utilizing Tukey depth contours (TDCs) as statistics in likelihood-free estimation. TDCs are highly flexible and can capture almost any property of multivariate data; in addition, they appear to be as yet unexplored for likelihood-free estimation. We demonstrate that TDC statistics are able to estimate the unknown parameters more efficiently than the mean, variance, and correlation in likelihood-free estimation. We further apply the TDC statistics to estimate the properties of requests to a computer system, demonstrating their real-life applicability. The suggested method is able to efficiently find the unknown parameters of the request distribution and quantify the estimation uncertainty.
"Active Data Selection and Information Seeking" by Thomas Parr, K. Friston, and P. Zeidman. Algorithms, 2024-03-12. DOI: 10.3390/a17030118.
Bayesian inference typically focuses upon two issues. The first is estimating the parameters of some model from data, and the second is quantifying the evidence for alternative hypotheses—formulated as alternative models. This paper focuses upon a third issue. Our interest is in the selection of data—either through sampling subsets of data from a large dataset or through optimising experimental design—based upon the models we have of how those data are generated. Optimising data selection ensures we can achieve good inference with fewer data, saving on computational and experimental costs. This paper aims to unpack the principles of active sampling of data by drawing from neurobiological research on animal exploration and from the theory of optimal experimental design. We offer an overview of the salient points from these fields and illustrate their application in simple toy examples, ranging from function approximation with basis sets to inference about processes that evolve over time. Finally, we consider how this approach to data selection could be applied to the design of (Bayes-adaptive) clinical trials.
"Field Programmable Gate Array-Based Acceleration Algorithm Design for Dynamic Star Map Parallel Computing" by Bo Cui, Lingyun Wang, Guangxi Li, and Xian Ren. Algorithms, 2024-03-12. DOI: 10.3390/a17030117.
The dynamic star simulator is a commonly used ground-test calibration device for star sensors. To address the slow calculation speed, low integration, and high power consumption of traditional star map simulation methods, this paper designs an FPGA-based star map display algorithm for a dynamic star simulator. The design adopts the USB 2.0 protocol to obtain the attitude data, uses SDRAM to cache the attitude data and video stream, extracts the effective navigation star points by searching equidistant right ascension and declination partitions of the sky, and realizes pipelined display of the star map by using the parallel computing capability of the FPGA. Test results show that, with a field of view of Φ20° and simulated magnitudes of 2.0∼6.0 Mv, the longest time to compute one star map is 72 μs at a 148.5 MHz clock, which effectively improves the display speed of the dynamic star simulator. The FPGA-based star map display algorithm removes the existing algorithm's dependence on a host computer, reduces the volume and power consumption of the dynamic star simulator, and meets the demand for miniaturized, portable dynamic star simulators.