Pub Date: 2023-12-18 | DOI: 10.3389/fams.2023.1261270
A. Appadu, A. Kelil
The time-fractional Korteweg-de Vries (KdV) equation can be viewed as a generalization of the classical KdV equation. KdV equations can be applied to model tsunami propagation, coastal wave dynamics, and oceanic wave interactions. In this study, we construct two standard finite difference methods, one with a conformable approximation and one with a Caputo approximation, to solve a time-fractional KdV equation. These two methods are named FDMCA and FDMCO. FDMCA utilizes Caputo's derivative and a forward-difference approach for discretization, while FDMCO employs conformable discretization. To study stability, we use von Neumann stability analysis for some fractional parameter values. We perform error analysis using the L1 and L∞ norms and relative errors, and we present results through graphs and tables. Our results demonstrate strong agreement between numerical and exact solutions for both methods when the fractional parameter is close to 1.0. Overall, this study enhances our understanding of the capabilities and limitations of FDMCA and FDMCO when used to solve such partial differential equations, laying some groundwork for further research.
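The Caputo discretization that FDMCA builds on can be illustrated with the standard L1 scheme. The sketch below is not the paper's full KdV solver; the function names and parameter values are illustrative, and the scheme is checked against a case whose Caputo derivative is known in closed form.

```python
import math

def l1_weights(alpha, n):
    # L1 weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
    return [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)]

def caputo_l1(f_vals, alpha, dt):
    """Approximate the Caputo derivative of order alpha in (0, 1) at t_n = n*dt
    with the standard L1 scheme, given samples f_vals = [f(0), f(dt), ..., f(n*dt)]."""
    n = len(f_vals) - 1
    c = dt ** (-alpha) / math.gamma(2 - alpha)
    b = l1_weights(alpha, n)
    # sum_{k=0}^{n-1} b_k * (f(t_{n-k}) - f(t_{n-k-1}))
    return c * sum(b[k] * (f_vals[n - k] - f_vals[n - k - 1]) for k in range(n))

# For f(t) = t, the Caputo derivative of order alpha is t^(1-alpha) / Gamma(2-alpha).
alpha, dt, n = 0.5, 1e-3, 1000
t = n * dt  # = 1.0
approx = caputo_l1([k * dt for k in range(n + 1)], alpha, dt)
exact = t ** (1 - alpha) / math.gamma(2 - alpha)
```

For f(t) = t the L1 quadrature is exact up to rounding, because the scheme interpolates f linearly on each step.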
{"title":"Some finite difference methods for solving linear fractional KdV equation","authors":"A. Appadu, A. Kelil","doi":"10.3389/fams.2023.1261270","DOIUrl":"https://doi.org/10.3389/fams.2023.1261270","url":null,"abstract":"The time-fractional Korteweg de Vries equation can be viewed as a generalization of the classical KdV equation. The KdV equations can be applied in modeling tsunami propagation, coastal wave dynamics, and oceanic wave interactions. In this study, we construct two standard finite difference methods using finite difference methods with conformable and Caputo approximations to solve a time-fractional Korteweg-de Vries (KdV) equation. These two methods are named as FDMCA and FDMCO. FDMCA utilizes Caputo's derivative and a finite-forward difference approach for discretization, while FDMCO employs conformable discretization. To study the stability, we use the Von Neumann Stability Analysis for some fractional parameter values. We perform error analysis using L1 & L∞ norms and relative errors, and we present results through graphical representations and tables. Our obtained results demonstrate strong agreement between numerical and exact solutions when the fractional operator is close to 1.0 for both methods. 
Generally, this study enhances our comprehension of the capabilities and constraints of FDMCO and FDMCA when used to solve such types of partial differential equations laying some ground for further research.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":" 20","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138995128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-12 | DOI: 10.3389/fams.2023.1260187
Euis Asriani, I. Muchtadi-Alamsyah, Ayu Purwarianti
In the encoding and decoding process of transformer neural networks, a weight matrix–vector multiplication occurs in each multihead attention and feed-forward sublayer. Choosing an appropriate weight matrix and algorithm can improve transformer performance, especially for machine translation tasks. In this study, we investigate the use of real block-circulant matrices and an alternative to the commonly used fast Fourier transform (FFT) algorithm, namely the discrete cosine transform–discrete sine transform (DCT-DST) algorithm, in a transformer. We explore three transformer models that combine real block-circulant matrices with different algorithms. We start by generating two orthogonal matrices, U and Q. The matrix U is spanned by the combination of the real and imaginary parts of the eigenvectors of the real block-circulant matrix, whereas Q is defined such that the matrix product QU can be represented in the shape of a DCT-DST matrix. The final step is defining the Schur form of the real block-circulant matrix. We find that the matrix-vector multiplication using the DCT-DST algorithm can be defined through the Kronecker product of the DCT-DST matrix and an orthogonal matrix of the same order as the dimension of the circulant matrix that spans the real block circulant. According to the experimental findings, the dense real block-circulant DCT-DST model with the largest matrix dimension reduced the number of model parameters by up to 41%. The same model with matrix dimension 128 achieved a BLEU score of 26.47, higher than the other two models at the same matrix dimension.
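The cyclic structure that FFT-based (and, in this paper, DCT-DST-based) algorithms exploit can be seen in a plain circulant matrix-vector product. This is a minimal baseline sketch only; the paper's DCT-DST diagonalization is not reproduced here.

```python
def circulant_matvec(c, x):
    """Multiply the circulant matrix C with first column c by vector x.
    Row i of C is c cyclically shifted down by i, so (Cx)_i = sum_j c[(i-j) mod n] x[j].
    Transform-based algorithms exploit exactly this cyclic structure to cut the
    cost of the product from O(n^2) to O(n log n)."""
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

def circulant_matrix(c):
    # Dense form, for comparison with the structured product.
    n = len(c)
    return [[c[(i - j) % n] for j in range(n)] for i in range(n)]

c = [1.0, 2.0, 0.0, -1.0]
x = [3.0, 1.0, 4.0, 1.0]
fast = circulant_matvec(c, x)
dense = [sum(row[j] * x[j] for j in range(4)) for row in circulant_matrix(c)]
```

Both paths compute the same product; the point of the structured version is that the matrix never needs to be formed.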
{"title":"Real block-circulant matrices and DCT-DST algorithm for transformer neural network","authors":"Euis Asriani, I. Muchtadi-Alamsyah, Ayu Purwarianti","doi":"10.3389/fams.2023.1260187","DOIUrl":"https://doi.org/10.3389/fams.2023.1260187","url":null,"abstract":"In the encoding and decoding process of transformer neural networks, a weight matrix-vector multiplication occurs in each multihead attention and feed forward sublayer. Assigning the appropriate weight matrix and algorithm can improve transformer performance, especially for machine translation tasks. In this study, we investigate the use of the real block-circulant matrices and an alternative to the commonly used fast Fourier transform (FFT) algorithm, namely, the discrete cosine transform–discrete sine transform (DCT-DST) algorithm, to be implemented in a transformer. We explore three transformer models that combine the use of real block-circulant matrices with different algorithms. We start from generating two orthogonal matrices, U and Q. The matrix U is spanned by the combination of the reals and imaginary parts of eigenvectors of the real block-circulant matrix, whereas Q is defined such that the matrix multiplication QU can be represented in the shape of a DCT-DST matrix. The final step is defining the Schur form of the real block-circulant matrix. We find that the matrix-vector multiplication using the DCT-DST algorithm can be defined by assigning the Kronecker product between the DCT-DST matrix and an orthogonal matrix in the same order as the dimension of the circulant matrix that spanned the real block circulant. According to the experiment's findings, the dense-real block circulant DCT-DST model with largest matrix dimension was able to reduce the number of model parameters up to 41%. 
The same model of 128 matrix dimension gained 26.47 of BLEU score, higher compared to the other two models on the same matrix dimensions.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"52 14","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139007024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.3389/fams.2023.1144142
Luca Mechelli, Jan Rohleff, Stefan Volkwein
In the present article, optimal control problems for linear parabolic partial differential equations (PDEs) with time-dependent coefficient functions are considered. A common approach in the literature is to derive the first-order sufficient optimality system and to apply a finite element (FE) discretization, which leads to a specific linear but high-dimensional time-variant (LTV) dynamical system. To reduce the size of the LTV system, we apply a tailored reduced-order modeling technique based on empirical Gramians and derived directly from the first-order optimality system. For testing purposes, we focus on two specific examples: a multiobjective optimization and a closed-loop optimal control problem. Our proposed methodology performs better than a standard proper orthogonal decomposition (POD) approach for the above examples.
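For a generic linear-quadratic problem of this type, the first-order optimality system that the FE discretization is applied to has the familiar coupled state-adjoint form. The following is a sketch under standard assumptions (distributed control, homogeneous Dirichlet data, tracking-type cost), not the article's exact setting:

```latex
\begin{aligned}
&\min_{y,u}\ \tfrac12\,\|y-y_d\|_{L^2(0,T;L^2(\Omega))}^2
  +\tfrac{\lambda}{2}\,\|u\|_{L^2(0,T;\mathbb{R}^m)}^2
  \quad\text{subject to}\\[2pt]
&y_t-\nabla\cdot\big(c(t)\nabla y\big)=Bu,
  \qquad y=0\ \text{on }\partial\Omega,\qquad y(0)=y_\circ,\\
&-p_t-\nabla\cdot\big(c(t)\nabla p\big)=y-y_d,
  \qquad p=0\ \text{on }\partial\Omega,\qquad p(T)=0,\\
&\lambda u+B^{*}p=0 .
\end{aligned}
```

Eliminating u via the gradient condition couples the forward state equation and the backward adjoint equation into the single large LTV system whose dimension the empirical-Gramian reduction targets.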
{"title":"Model order reduction for optimality systems through empirical gramians","authors":"Luca Mechelli, Jan Rohleff, Stefan Volkwein","doi":"10.3389/fams.2023.1144142","DOIUrl":"https://doi.org/10.3389/fams.2023.1144142","url":null,"abstract":"In the present article, optimal control problems for linear parabolic partial differential equations (PDEs) with time-dependent coefficient functions are considered. One of the common approach in literature is to derive the first-order sufficient optimality system and to apply a finite element (FE) discretization. This leads to a specific linear but high-dimensional time variant (LTV) dynamical system. To reduce the size of the LTV system, we apply a tailored reduced order modeling technique based on empirical gramians and derived directly from the first-order optimality system. For testing purpose, we focus on two specific examples: a multiobjective optimization and a closed-loop optimal control problem. Our proposed methodology results to be better performing than a standard proper orthogonal decomposition (POD) approach for the above mentioned examples.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"11 2","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138979390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-08 | DOI: 10.3389/fams.2023.1274846
Masahiro Okada, Hiroshi Ito
Speaker recognition has been performed by considering individual variations in the power spectrograms of speech, which reflect resonance phenomena in the speaker's vocal tract filter. In recent years, phase-based features have been used for speaker recognition. However, these phase-based features are not the raw phase but are crafted by humans, so the role of the raw phase is less interpretable. This study used phase spectrograms, calculated by subtracting the phase of the electroglottograph signal in the time-frequency domain from that of speech. The phase spectrograms represent the unmodified phase characteristics of the vocal tract filter. Phase spectrograms were obtained from five Japanese participants. Segments corresponding to vowels, called phase spectra, were then extracted and circular-averaged for each vowel, and speakers were identified based on the degree of similarity of the averaged spectra. The accuracy of discriminating speakers using the averaged phase spectra was high, even though speakers were discriminated using only phase information without power. In particular, the averaged phase spectra had different shapes for different speakers, so the similarity between spectrum pairs from different speakers was low.
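Circular averaging of phase values, as opposed to a naive arithmetic mean, is needed because phase wraps around at ±π. A minimal sketch (the function name is illustrative):

```python
import math

def circular_mean(phases):
    """Circular average of angles in radians: map each phase to a unit vector,
    sum the vectors, and take the angle of the resultant. A plain arithmetic
    mean would be wrong for angles that straddle the +/- pi wrap-around."""
    s = sum(math.sin(p) for p in phases)
    c = sum(math.cos(p) for p in phases)
    return math.atan2(s, c)

# Phases clustered around +/- pi: the arithmetic mean is ~0 (wrong),
# while the circular mean lands at +/- pi (correct).
wrapped = [math.pi - 0.1, -math.pi + 0.1]
m = circular_mean(wrapped)
```

The same unit-vector trick extends to averaging a full phase spectrum bin by bin.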
{"title":"Phase characteristics of vocal tract filter can distinguish speakers","authors":"Masahiro Okada, Hiroshi Ito","doi":"10.3389/fams.2023.1274846","DOIUrl":"https://doi.org/10.3389/fams.2023.1274846","url":null,"abstract":"Speaker recognition has been performed by considering individual variations in the power spectrograms of speech, which reflect the resonance phenomena in the speaker's vocal tract filter. In recent years, phase-based features have been used for speaker recognition. However, the phase-based features are not in a raw form of the phase but are crafted by humans, suggesting that the role of the raw phase is less interpretable. This study used phase spectrograms, which are calculated by subtracting the phase in the time-frequency domain of the electroglottograph signal from that of speech. The phase spectrograms represent the non-modified phase characteristics of the vocal tract filter.The phase spectrograms were obtained from five Japanese participants. Phase spectrograms corresponding to vowels, called phase spectra, were then extracted and circular-averaged for each vowel. The speakers were determined based on the degree of similarity of the averaged spectra.The accuracy of discriminating speakers using the averaged phase spectra was observed to be high although speakers were discriminated using only phase information without power. In particular, the averaged phase spectra showed different shapes for different speakers, resulting in the similarity between the different speaker spectrum pairs being lower. 
Therefore, the speakers were distinguished by using phase spectra.This predominance of phase spectra suggested that the phase characteristics of the vocal tract filter reflect the individuality of speakers.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"30 38","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138588856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-07 | DOI: 10.3389/fams.2023.1292443
Isaac Mwangi Wangari, Samson Olaniyi, R. Lebelo, K. Okosun
The unexpected emergence of the novel coronavirus identified as SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) disrupted the world order to the extent that human activities core to survival came almost to a halt. The COVID-19 pandemic created an insurmountable global health crisis that led to a united front among all nations to research effective pharmaceutical measures that could stop COVID-19 proliferation. Consequently, different types of vaccines were developed (single-dose and double-dose vaccines). However, the speed at which these vaccines were developed and approved created other challenges (vaccine skepticism and hesitancy).

This paper therefore tracks the transmission dynamics of COVID-19 using a non-linear deterministic system that accounts for the unwillingness of both susceptible and partially vaccinated individuals to receive either single-dose or double-dose vaccines (vaccine hesitancy). Further, the model is extended to incorporate three time-dependent non-pharmaceutical and pharmaceutical intervention controls: a preventive control, a control associated with screening and management of both truly asymptomatic and symptomatic infectious individuals, and a control associated with vaccination of susceptible individuals with a single-dose vaccine. Pontryagin's Maximum Principle is applied to establish the optimality conditions associated with the optimal controls.

If the COVID-19 vaccines administered are imperfect and transient, then there exists a parameter space where backward bifurcation occurs. Time-profile projections show that, in a setting with vaccine hesitancy, administering single-dose vaccines reduces COVID-19 prevalence more than administering double-dose vaccines. Comparing the impact of hesitancy against the two vaccine types reveals that hesitancy against a single-dose vaccine is more detrimental than hesitancy against a double-dose vaccine. Optimal control analysis reveals that the non-pharmaceutical time-dependent control flattens the COVID-19 epidemic curve significantly more than the pharmaceutical controls. Cost-effectiveness assessment suggests that the non-pharmaceutical control is the most cost-effective COVID-19 mitigation strategy to implement where resources are limited.

Policymakers and medical practitioners should assess the level of COVID-19 vaccine hesitancy in order to decide which type of vaccine (single-dose or double-dose) to administer to the population.
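The role hesitancy plays in such compartmental models can be illustrated with a toy Euler-stepped S-V-I-R system in which hesitancy scales down vaccine uptake. This is a deliberately simplified stand-in, not the paper's model (which has separate single-dose and double-dose compartments and time-dependent controls), and all parameter values are made up:

```python
def step(state, dt, beta, gamma, nu, hesitancy):
    """One Euler step of a toy S-V-I-R model in which vaccine hesitancy
    (a value in [0, 1]) scales down the vaccination rate nu."""
    S, V, I, R = state
    N = S + V + I + R
    uptake = nu * (1.0 - hesitancy)            # hesitancy suppresses uptake
    dS = -beta * S * I / N - uptake * S
    dV = uptake * S - 0.1 * beta * V * I / N   # assumed 90% vaccine efficacy
    dI = beta * (S + 0.1 * V) * I / N - gamma * I
    dR = gamma * I
    return (S + dt * dS, V + dt * dV, I + dt * dI, R + dt * dR)

def peak_infected(hesitancy, days=300, dt=0.1):
    # Integrate from 100 initial infections in a population of 10,000
    # and record the epidemic peak (illustrative parameter values).
    state = (9900.0, 0.0, 100.0, 0.0)
    peak = state[2]
    for _ in range(int(days / dt)):
        state = step(state, dt, beta=0.3, gamma=0.1, nu=0.01, hesitancy=hesitancy)
        peak = max(peak, state[2])
    return peak
```

Even in this crude sketch, raising hesitancy raises the epidemic peak, the qualitative effect the paper quantifies with its full model.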
{"title":"Transmission of COVID-19 in the presence of single-dose and double-dose vaccines with hesitancy: mathematical modeling and optimal control analysis","authors":"Isaac Mwangi Wangari, Samson Olaniyi, R. Lebelo, K. Okosun","doi":"10.3389/fams.2023.1292443","DOIUrl":"https://doi.org/10.3389/fams.2023.1292443","url":null,"abstract":"The unexpected emergence of novel coronavirus identified as SAR-CoV-2 virus (severe acute respiratory syndrome corona virus 2) disrupted the world order to an extent that the human activities that are core to survival came almost to a halt. The COVID-19 pandemic created an insurmountable global health crisis that led to a united front among all nations to research on effective pharmaceutical measures that could stop COVID-19 proliferation. Consequently, different types of vaccines were discovered (single-dose and double-dose vaccines). However, the speed at which these vaccines were developed and approved to be administered created other challenges (vaccine skepticism and hesitancy).This paper therefore tracks the transmission dynamics of COVID-19 using a non-linear deterministic system that accounts for the unwillingness of both susceptible and partially vaccinated individuals to receive either single-dose or double-dose vaccines (vaccine hesitancy). Further the model is extended to incorporate three time-dependent non-pharmaceutical and pharmaceutical intervention controls, namely preventive control, control associated with screening-management of both truly asymptomatic and symptomatic infectious individuals and control associated with vaccination of susceptible individuals with a single dose vaccine. The Pontryagin's Maximum Principle is applied to establish the optimality conditions associated with the optimal controls.If COVID-19 vaccines administered are imperfect and transient then there exist a parameter space where backward bifurcation occurs. 
Time profile projections depict that in a setting where vaccine hesitancy is present, administering single dose vaccines leads to a significant reduction of COVID-19 prevalence than when double dose vaccines are administered. Comparison of the impact of vaccine hesitancy against either single dose or double dose on COVID-19 prevalence reveals that vaccine hesitancy against single dose is more detrimental than vaccine hesitancy against a double dose vaccine. Optimal analysis results reveal that non-pharmaceutical time-dependent control significantly flattens the COVID-19 epidemic curve when compared with pharmaceutical controls. Cost-effectiveness assessment suggest that non-pharmaceutical control is the most cost-effective COVID-19 mitigation strategy that should be implemented in a setting where resources are limited.Policy makers and medical practitioners should assess the level of COVID-19 vaccine hesitancy inorder to decide on the type of vaccine (single-dose or double-dose) to administer to the population.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"16 7","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138591655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-07 | DOI: 10.3389/fams.2023.1274787
Yuhe Wang, Eugene Pinsky
Triangular distributions are widely used in many applications with limited sample data, business simulations, and project management. As with other distributions, a standard way to measure deviations is to compute the standard deviation. However, the standard deviation is sensitive to outliers. In this paper, we consider and compare other deviation metrics, namely the mean absolute deviation from the mean, the mean absolute deviation from the median, and the quantile-based deviation. We show simple geometric interpretations for these deviation measures and how to construct them using a compass and a straightedge. The explicit formula for the mean absolute deviation from the median of a triangular distribution is derived in this paper for the first time. It has a simple geometric interpretation. It is the least volatile of these measures and always performs better than the standard deviation or the mean absolute deviation from the mean. Although greater than the quantile deviation, it is easier to compute with limited sample data. We present a new procedure to estimate the parameters of this distribution in terms of this deviation.
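The quantity whose closed form the paper derives, the mean absolute deviation about the median, can be cross-checked numerically. A sketch (the paper's explicit formula is not reproduced here; function names are illustrative):

```python
import math

def tri_pdf(x, a, c, b):
    # Triangular density with support [a, b] and mode c.
    if x < a or x > b:
        return 0.0
    if x <= c:
        return 2.0 * (x - a) / ((b - a) * (c - a)) if c > a else 0.0
    return 2.0 * (b - x) / ((b - a) * (b - c))

def tri_median(a, c, b):
    # Standard closed form for the triangular median.
    if (c - a) >= (b - a) / 2.0:
        return a + math.sqrt((b - a) * (c - a) / 2.0)
    return b - math.sqrt((b - a) * (b - c) / 2.0)

def mad_about_median(a, c, b, n=20000):
    """E|X - m| for X ~ Tri(a, c, b), by the trapezoid rule, as a numeric
    cross-check of a closed-form expression."""
    m = tri_median(a, c, b)
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * abs(x - m) * tri_pdf(x, a, c, b)
    return total * h
```

For the symmetric case Tri(0, 1, 2) the median is 1 and E|X - m| works out to exactly 1/3.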
{"title":"Geometry of deviation measures for triangular distributions","authors":"Yuhe Wang, Eugene Pinsky","doi":"10.3389/fams.2023.1274787","DOIUrl":"https://doi.org/10.3389/fams.2023.1274787","url":null,"abstract":"Triangular distributions are widely used in many applications with limited sample data, business simulations, and project management. As with other distributions, a standard way to measure deviations is to compute the standard deviation. However, the standard deviation is sensitive to outliers. In this paper, we consider and compare other deviation metrics, namely the mean absolute deviation from the mean, the median, and the quantile-based deviation. We show the simple geometric interpretations for these deviation measures and how to construct them using a compass and a straightedge. The explicit formula of mean absolute deviation from the median for triangular distribution is derived in this paper for the first time. It has a simple geometric interpretation. It is the least volatile and is always better than the standard or mean absolute deviation from the mean. Although greater than the quantile deviation, it is easier to compute with limited sample data. We present a new procedure to estimate the parameters of this distribution in terms of this deviation. 
This procedure is computationally simple and may be superior to other methods when dealing with limited sample data, as is often the case with triangle distributions.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"23 5","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138594160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-06 | DOI: 10.3389/fams.2023.1324054
F. Khan, Yonis Gulzar, Shahnawaz Ayoub, Muneer Majid, Mohammad Shuaib Mir, Arjumand Bano Soomro
Radiologists face formidable challenges in classifying brain tumors from MRI images. This manuscript introduces an effective methodology that combines Least Squares Support Vector Machines (LS-SVM) with Multi-Scale Morphological Texture Features (MMTF) extracted from T1-weighted MR images. The methodology was evaluated on a dataset of 139 cases, consisting of 119 cases of aberrant tumors and 20 normal brain images. The LS-SVM-based approach outperforms competing classifiers with an accuracy of 98.97%, a 3.97% improvement over alternative methods, accompanied by a 2.48% gain in sensitivity and a 10% increase in specificity. These results surpass the classification accuracy of traditional classifiers such as Support Vector Machines (SVM), Radial Basis Function (RBF) networks, and Artificial Neural Networks (ANN). The model's performance in brain tumor diagnosis represents a substantial step forward, promising more precise and dependable tools for radiologists and healthcare professionals identifying and classifying brain tumors from MRI.
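The computational appeal of LS-SVM is that training reduces to a single linear KKT system rather than a quadratic program. A minimal sketch on toy 1-D data (RBF kernel; the parameter values are illustrative and the paper's MMTF features are not reproduced):

```python
import math

def solve(A, rhs):
    # Gaussian elimination with partial pivoting (small dense systems only).
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM training: solve the linear KKT system
    [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    where Omega_ij = y_i y_j K(x_i, x_j), instead of an SVM's QP."""
    def k(u, v):
        return math.exp(-sum((ui - vi) ** 2 for ui, vi in zip(u, v)) / (2 * sigma ** 2))
    n = len(X)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        A[0][i + 1] = A[i + 1][0] = float(y[i])
        for j in range(n):
            A[i + 1][j + 1] = y[i] * y[j] * k(X[i], X[j]) + (1.0 / gamma if i == j else 0.0)
    sol = solve(A, [0.0] + [1.0] * n)
    b0, alpha = sol[0], sol[1:]
    return lambda x: 1 if sum(a * yi * k(x, xi) for a, yi, xi in zip(alpha, y, X)) + b0 >= 0 else -1

# Two well-separated 1-D clusters; labels in {-1, +1}.
X = [(0.0,), (0.5,), (3.0,), (3.5,)]
y = [-1, -1, 1, 1]
predict = lssvm_train(X, y)
```

In the paper's pipeline the feature vectors would be the MMTF descriptors rather than raw coordinates; the KKT structure is unchanged.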
{"title":"Least square-support vector machine based brain tumor classification system with multi model texture features","authors":"F. Khan, Yonis Gulzar, Shahnawaz Ayoub, Muneer Majid, Mohammad Shuaib Mir, Arjumand Bano Soomro","doi":"10.3389/fams.2023.1324054","DOIUrl":"https://doi.org/10.3389/fams.2023.1324054","url":null,"abstract":"Radiologists confront formidable challenges when confronted with the intricate task of classifying brain tumors through the analysis of MRI images. Our forthcoming manuscript introduces an innovative and highly effective methodology that capitalizes on the capabilities of Least Squares Support Vector Machines (LS-SVM) in tandem with the rich insights drawn from Multi-Scale Morphological Texture Features (MMTF) extracted from T1-weighted MR images. Our methodology underwent meticulous evaluation on a substantial dataset encompassing 139 cases, consisting of 119 cases of aberrant tumors and 20 cases of normal brain images. The outcomes we achieved are nothing short of extraordinary. Our LS-SVM-based approach vastly outperforms competing classifiers, demonstrating its dominance with an exceptional accuracy rate of 98.97%. This represents a substantial 3.97% improvement over alternative methods, accompanied by a notable 2.48% enhancement in Sensitivity and a substantial 10% increase in Specificity. These results conclusively surpass the performance of traditional classifiers such as Support Vector Machines (SVM), Radial Basis Function (RBF), and Artificial Neural Networks (ANN) in terms of classification accuracy. 
The outstanding performance of our model in the realm of brain tumor diagnosis signifies a substantial leap forward in the field, holding the promise of delivering more precise and dependable tools for radiologists and healthcare professionals in their pivotal role of identifying and classifying brain tumors using MRI imaging techniques.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"91 12","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138596105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-05 | DOI: 10.3389/fams.2023.1275588
Zhitao Wang, Nana Li, Quan Zhang, Jin Wei, Lei Zhang, Yuanquan Wang
The active contour model, also known as the snake model, is an elegant approach for image segmentation and motion tracking. The gradient vector flow (GVF) is an effective external force for active contours. However, the GVF model is based on isotropic diffusion and does not take the image structure into account. The GVF snake cannot converge to very deep or blob-like concavities and fails to preserve weak edges neighboring strong ones. To address these limitations, we first propose the directionally weakened diffusion (DWD), which is anisotropic because it incorporates the image structure in a subtle way. Using the DWD, a novel external force called directionally weakened gradient vector flow (DWGVF) is proposed for active contours. In addition, two spatiotemporally varying weights are employed to make the DWGVF robust to noise. The DWGVF snake has been assessed on both synthetic and real images. Experimental results show that the DWGVF snake provides much better results in terms of noise robustness, weak-edge preservation, and convergence into various concavities when compared with the well-known GVF and generalized GVF (GGVF) snakes.
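The isotropic diffusion underlying the GVF external force, the behavior the proposed DWD makes anisotropic, can be sketched with the standard explicit iteration on a small grid (illustrative parameter values; the DWGVF anisotropy itself is not reproduced):

```python
def gvf(fx, fy, mu=0.2, iters=500, dt=1.0):
    """Classic gradient vector flow: diffuse the edge-map gradient (fx, fy)
    into homogeneous regions. Near strong edges the data term pins (u, v)
    to (fx, fy); elsewhere the Laplacian term applies isotropic diffusion."""
    h, w = len(fx), len(fx[0])
    u = [row[:] for row in fx]
    v = [row[:] for row in fy]
    for _ in range(iters):
        nu = [row[:] for row in u]
        nv = [row[:] for row in v]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                lap_u = u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1] - 4.0 * u[i][j]
                lap_v = v[i-1][j] + v[i+1][j] + v[i][j-1] + v[i][j+1] - 4.0 * v[i][j]
                b = fx[i][j] ** 2 + fy[i][j] ** 2
                nu[i][j] = u[i][j] + dt * (mu * lap_u - b * (u[i][j] - fx[i][j]))
                nv[i][j] = v[i][j] + dt * (mu * lap_v - b * (v[i][j] - fy[i][j]))
        u, v = nu, nv
    return u, v

# Toy edge map: a vertical ridge at column 4 of a 9x9 grid; gradients by
# central differences.
f = [[1.0 if j == 4 else 0.0 for j in range(9)] for _ in range(9)]
fx = [[(f[i][min(j + 1, 8)] - f[i][max(j - 1, 0)]) / 2.0 for j in range(9)] for i in range(9)]
fy = [[(f[min(i + 1, 8)][j] - f[max(i - 1, 0)][j]) / 2.0 for j in range(9)] for i in range(9)]
u, v = gvf(fx, fy)
```

After diffusion the x-component still points toward the ridge near it and has spread into the flat regions, which is what lets a snake feel distant edges.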
{"title":"Directionally weakened diffusion for image segmentation using active contours","authors":"Zhitao Wang, Nana Li, Quan Zhang, Jin Wei, Lei Zhang, Yuanquan Wang","doi":"10.3389/fams.2023.1275588","DOIUrl":"https://doi.org/10.3389/fams.2023.1275588","url":null,"abstract":"The active contour model, also known as the snake model, is an elegant approach for image segmentation and motion tracking. The gradient vector flow (GVF) is an effective external force for active contours. However, the GVF model is based on isotropic diffusion and does not take the image structure into account. The GVF snake cannot converge to very deep concavities and blob-like concavities and fails to preserve weak edges neighboring strong ones. To address these limitations, we first propose the directionally weakened diffusion (DWD), which is anisotropic by incorporating the image structure in a subtle way. Using the DWD, a novel external force called directionally weakened gradient vector flow (DWGVF) is proposed for active contours. In addition, two spatiotemporally varying weights are employed to make the DWGVF robust to noise. The DWGVF snake has been assessed on both synthetic and real images. 
Experimental results show that the DWGVF snake provides much better results in terms of noise robustness, weak edge preserving, and convergence of various concavities when compared with the well-known GVF, the generalized GVF (GGVF) snake.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"54 3","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138600477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main goal of this study is to examine the return-explanation strength of the Carhart four-factor, the Fama–French three-factor, and the single-factor models in the context of the Bangladeshi stock market. We therefore reveal the risk-adjusted returns, test the valuation capability of multi-factor models, and estimate optimal portfolio weights of stocks listed on the Dhaka Stock Exchange (DSE) under the DSE30 index. Our findings demonstrate that large-capitalization firms with low or medium book-to-market (B/M) ratios produce more concentrated returns than their counterparts, resulting in greater earnings per unit of total, systematic, and downside risk. Furthermore, we discover that each factor value has an impressive capacity to explain the market excess returns; however, the influence of factor values on the cross-section of stock returns is somewhat contradictory. In particular, the momentum factor is unable to describe the cross-sectional excess returns, whereas the risk premium, size, and value factors have a significant impact on them. Finally, we find that a large-cap firm with a low B/M ratio is suitable for risk-seeking investors, whereas a small-cap firm with a low B/M ratio is appropriate for investors with lower risk tolerance. Moreover, our empirical outcomes have noteworthy implications for private companies, investors, and policymakers.
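The single-factor leg of the comparison has a closed-form estimate. A minimal sketch with made-up return series (not the DSE data; the multi-factor versions simply add regressors):

```python
def ols_beta(excess_asset, excess_market):
    """Single-factor (CAPM-style) regression r_i - r_f = alpha + beta (r_m - r_f) + e,
    estimated in closed form: beta = cov(asset, market) / var(market)."""
    n = len(excess_asset)
    ma = sum(excess_asset) / n
    mm = sum(excess_market) / n
    cov = sum((a - ma) * (m - mm) for a, m in zip(excess_asset, excess_market)) / n
    var = sum((m - mm) ** 2 for m in excess_market) / n
    beta = cov / var
    alpha = ma - beta * mm
    return alpha, beta

market = [0.01, -0.02, 0.03, 0.00, 0.02]
asset = [2.0 * r + 0.001 for r in market]   # constructed so beta = 2, alpha = 0.001
alpha, beta = ols_beta(asset, market)
```

On data constructed this way the regression recovers the factor loading and the intercept exactly, up to rounding.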
{"title":"Portfolio optimization and valuation capability of multi-factor models: an observational evidence from Dhaka stock exchange","authors":"Md. Ahsan Kabir, Liping Yu, Sanjoy Kumar Sarker, Md. Nahiduzzaman, Tanmay Borman","doi":"10.3389/fams.2023.1271485","DOIUrl":"https://doi.org/10.3389/fams.2023.1271485","url":null,"abstract":"The main goal of this study is to examine the return explanation strengths of the Carhart four-factor, the Fama–French three-factor, and the single-factor models in the context of the Bangladeshi stock market. We, therefore, reveal the risk-adjusted returns, test the valuation capability of multi-factor models, and estimate optimal portfolio weights of stocks listed in DSE under the DSE30 index. Our findings demonstrate that large capitalization firms that have low or medium book-to-market (B/M) ratios produce more concentrated returns than their counterparts, resulting in greater earnings per unit of total, systematic, and downside risks. Furthermore, we discover that each factor has an impressive capacity to explain the market excess returns; however, the influence of factor values on the cross-section of stock returns is somewhat contradictory. In particular, the momentum factor is unable to describe the cross-section excess returns, whereas the risk premium, size, and value factors have a significant impact on the cross-section excess returns. Finally, we find that a large-cap firm with a low B/M ratio is suitable for risk-seeking investors; in contrast, a small-cap firm with a low B/M ratio is appropriate for investors with lower risk tolerance. 
Moreover, our empirical outcomes have noteworthy implications for private companies, investors, and policymakers.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"124 51","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138599197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
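The multi-factor models compared in the study above are estimated as time-series regressions of a portfolio's excess return on the factor returns. As a minimal sketch of that mechanic, the following example fits a Fama–French-style three-factor regression by ordinary least squares on synthetic data; all numbers and series here are illustrative, not data from the paper.

```python
import numpy as np

# Three-factor regression: R_excess = alpha + b_mkt*MKT + b_smb*SMB + b_hml*HML + eps
# Factor series are simulated; in the study they would come from market data.
rng = np.random.default_rng(0)
T = 2000
mkt = rng.normal(0.005, 0.04, T)   # market excess return
smb = rng.normal(0.001, 0.02, T)   # size factor (small minus big)
hml = rng.normal(0.002, 0.02, T)   # value factor (high minus low B/M)

true_coefs = np.array([0.001, 1.1, 0.4, -0.3])   # [alpha, b_mkt, b_smb, b_hml]
X = np.column_stack([np.ones(T), mkt, smb, hml])
r = X @ true_coefs + rng.normal(0, 0.01, T)      # simulated portfolio excess return

# OLS estimate of the intercept (alpha) and factor loadings
coefs, *_ = np.linalg.lstsq(X, r, rcond=None)
```

A near-zero estimated alpha would indicate that the factors explain the portfolio's excess return, which is the sense in which the study tests each model's "valuation capability".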
Pub Date : 2023-11-13DOI: 10.3389/fams.2023.1267034
Ori Becher, Mira Marcus-Kalish, David M. Steinberg
The age of big data has fueled expectations for accelerating learning. The availability of large data sets enables researchers to achieve more powerful statistical analyses and enhances the reliability of conclusions, which can be based on a broad collection of subjects. Often such data sets can be assembled only with access to diverse sources; for example, medical research that combines data from multiple centers in a federated analysis. However, these hopes must be balanced against data privacy concerns, which hinder sharing raw data among centers. Consequently, federated analyses typically resort to sharing data summaries from each center. The limitation to summaries carries the risk that it will impair the efficiency of statistical analysis procedures. In this work, we take a close look at the effects of federated analysis on two basic problems: non-parametric comparison of two groups, and quantile estimation to describe the corresponding distributions. We also propose a specific privacy-preserving data release policy for federated analysis with the K-anonymity criterion, which has been adopted by the Medical Informatics Platform of the European Human Brain Project. Our results show that, for our tasks, there is only a modest loss of statistical efficiency.
{"title":"Federated statistical analysis: non-parametric testing and quantile estimation","authors":"Ori Becher, Mira Marcus-Kalish, David M. Steinberg","doi":"10.3389/fams.2023.1267034","DOIUrl":"https://doi.org/10.3389/fams.2023.1267034","url":null,"abstract":"The age of big data has fueled expectations for accelerating learning. The availability of large data sets enables researchers to achieve more powerful statistical analyses and enhances the reliability of conclusions, which can be based on a broad collection of subjects. Often such data sets can be assembled only with access to diverse sources; for example, medical research that combines data from multiple centers in a federated analysis. However, these hopes must be balanced against data privacy concerns, which hinder sharing raw data among centers. Consequently, federated analyses typically resort to sharing data summaries from each center. The limitation to summaries carries the risk that it will impair the efficiency of statistical analysis procedures. In this work, we take a close look at the effects of federated analysis on two basic problems: non-parametric comparison of two groups, and quantile estimation to describe the corresponding distributions. We also propose a specific privacy-preserving data release policy for federated analysis with the K-anonymity criterion, which has been adopted by the Medical Informatics Platform of the European Human Brain Project. 
Our results show that, for our tasks, there is only a modest loss of statistical efficiency.","PeriodicalId":36662,"journal":{"name":"Frontiers in Applied Mathematics and Statistics","volume":"47 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136352045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
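To make the summary-sharing idea above concrete, here is a small sketch of federated quantile estimation: each center releases only binned counts on a shared grid, small cells are suppressed to respect a k-anonymity-style threshold, and the coordinator estimates quantiles from the pooled counts. This illustrates the general approach only; the grid, the threshold, and the suppression rule are hypothetical, not the paper's exact release policy.

```python
import numpy as np

rng = np.random.default_rng(1)
K_ANON = 5                                # suppress cells with fewer than K_ANON subjects
edges = np.linspace(-5.0, 5.0, 101)       # shared binning grid agreed on by all centers

def center_summary(x, edges, k=K_ANON):
    """Summary a center releases: histogram counts with small cells zeroed out."""
    counts, _ = np.histogram(x, bins=edges)
    counts[counts < k] = 0                # raw values never leave the center
    return counts

def pooled_quantile(counts, edges, q):
    """Estimate the q-quantile from pooled counts (bin-midpoint convention)."""
    cum = np.cumsum(counts)
    idx = np.searchsorted(cum, q * cum[-1])
    return 0.5 * (edges[idx] + edges[idx + 1])

# Three centers with standard-normal data of different sizes
centers = [rng.normal(0.0, 1.0, n) for n in (800, 1200, 1000)]
pooled = sum(center_summary(x, edges) for x in centers)
est_median = pooled_quantile(pooled, edges, 0.5)
```

The efficiency question the paper studies is visible here: the estimate's resolution is limited by the bin width and by the suppressed tail cells, which is the price paid for not pooling raw data.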