Computational technique for solving small delayed singularly perturbed reaction–diffusion problem
Pub Date: 2025-12-05 | DOI: 10.1016/j.jcmds.2025.100130
Akhila Mariya Regal, Dinesh Kumar S
This article presents a central difference numerical approximation for solving singularly perturbed delay differential equations of reaction–diffusion type. The proposed scheme achieves higher-order convergence on a uniform mesh. The resulting discrete system is solved using the Thomas algorithm in MATLAB R2022a. Both theoretical and numerical convergence results are established and found to be consistent with each other. The theoretical analysis is illustrated by a few examples presented in tables and plots. Our findings are compared with previously published work, and the method is found to give a good approximation with smaller errors for the problem.
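The abstract cites the Thomas algorithm for the tridiagonal system that a central-difference discretization produces. As a language-neutral illustration (Python rather than the authors' MATLAB), here is a minimal sketch of that solver applied to an assumed model problem -eps*u'' + u = 1 with zero boundary values; the mesh size, eps, and right-hand side are placeholders, not values from the paper.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (all length-n arrays;
    a[0] and c[-1] are unused)."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Assumed model problem: central differences for -eps*u'' + u = 1
# on (0, 1) with u(0) = u(1) = 0, interior unknowns only.
eps, N = 1e-2, 200
h = 1.0 / N
a = np.full(N - 1, -eps / h**2)
b = np.full(N - 1, 2 * eps / h**2 + 1.0)
c = np.full(N - 1, -eps / h**2)
d = np.ones(N - 1)
u_inner = thomas(a, b, c, d)
```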
{"title":"Computational technique for solving small delayed singularly perturbed reaction–diffusion problem","authors":"Akhila Mariya Regal, Dinesh Kumar S","doi":"10.1016/j.jcmds.2025.100130","DOIUrl":"10.1016/j.jcmds.2025.100130","url":null,"abstract":"<div><div>This article presents a central difference numerical approximation for solving singularly perturbed delay differential equations of reaction–diffusion type. The proposed scheme includes support for higher order convergence on the uniform mesh. The suggested numerical scheme is solved using Thomas Algorithm in MATLAB R2022a. Both theoretical and numerical results of convergence have been shown and found to be consistent with the proposed scheme. The results of theoretical analysis are computed and illustrated by few examples presented in tables and plots. Our findings are compared with already published works and our method found to give a good approximation with less errors for the problem.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"18 ","pages":"Article 100130"},"PeriodicalIF":0.0,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alzheimer’s diagnosis transformation: Evaluation of the effect of CLAHE on the effectiveness of EfficientNet architecture in MRI image classification
Pub Date: 2025-12-01 | DOI: 10.1016/j.jcmds.2025.100129
Navira Rahma Salsabila, Adela Regita Azzahra, Siti Zakiah, Anindya Zulva Larasati, Novanto Yudistira, Lailil Muflikhah
Alzheimer’s disease is a global health challenge with an increasing number of cases, particularly in developing countries such as Indonesia. Early diagnosis is crucial to slowing the progression of this disease. This study evaluates the impact of Contrast Limited Adaptive Histogram Equalization (CLAHE) on the quality of Magnetic Resonance Imaging (MRI) images to enhance the performance of deep learning models, namely EfficientNet-B3 and EfficientNetV2-B3, in classifying Alzheimer’s disease into four categories: Moderate Demented, Mild Demented, Very Mild Demented, and Non-Demented. CLAHE is applied to enhance the local contrast of MRI images, making important features more visible. The results show that the EfficientNetV2-B3 model with CLAHE achieves 99% precision, 99% F1-score, and 98% accuracy, while EfficientNet-B3 with CLAHE also shows significant improvements compared to models without preprocessing and those using Histogram Equalization (HE). CLAHE has proven not only to improve accuracy but also to stabilize classification, particularly for minority classes such as Moderate Demented, which are difficult to detect using conventional methods. This study highlights the importance of CLAHE as part of the development of AI-based diagnostic tools for Alzheimer’s, especially in clinical environments with limited resources. The main contribution of this research is demonstrating how CLAHE, when integrated with modern architectures such as EfficientNet-B3 and EfficientNetV2-B3, not only enhances the model’s sensitivity to critical features in MRI data but also establishes a new approach to improving classification outcomes in real-world scenarios with resource constraints.
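Since the pipeline hinges on CLAHE preprocessing before the EfficientNet models, a brief sketch using OpenCV's standard CLAHE implementation may help. The clip limit, 8x8 tile grid, and 300x300 EfficientNet-B3 input size are common defaults assumed here, not settings reported in the paper.

```python
import cv2

def preprocess_mri(path, size=(300, 300), clip_limit=2.0, tiles=(8, 8)):
    """Load a grayscale MRI slice, apply CLAHE, and prepare it for an
    EfficientNet-style CNN. Parameter values are illustrative defaults."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    img = clahe.apply(img)                        # local contrast enhancement
    img = cv2.resize(img, size)
    return cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)  # 3 channels for the CNN
```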
{"title":"Alzheimer’s diagnosis transformation: Evaluation of the effect of CLAHE on the effectiveness of EfficientNet architecture in MRI image classification","authors":"Navira Rahma Salsabila, Adela Regita Azzahra, Siti Zakiah, Anindya Zulva Larasati, Novanto Yudistira, Lailil Muflikhah","doi":"10.1016/j.jcmds.2025.100129","DOIUrl":"10.1016/j.jcmds.2025.100129","url":null,"abstract":"<div><div>Alzheimer’s disease is a global health challenge with an increasing number of cases, particularly in developing countries such as Indonesia. Early diagnosis is crucial to slowing the progression of this disease. This study evaluates the impact of Contrast Limited Adaptive Histogram Equalization (CLAHE) on the quality of Magnetic resonance imaging (MRI) images to enhance the performance of deep learning models, namely EfficientNet-B3 and EfficientNetV2-B3, in classifying Alzheimer’s disease into four categories: Moderate Demented, Mild Demented, Very Mild Demented, and Non-Demented. CLAHE is applied to enhance the local contrast of MRI images, making important features more visible. The results show that the EfficientNetV2-B3 model with CLAHE achieves 99% precision, 99% F1-score, and 98% accuracy, while EfficientNet-B3 with CLAHE also shows significant improvements compared to models without preprocessing and those using Histogram Equalization (HE). CLAHE has proven not only to improve accuracy but also to stabilize classification, particularly for minority classes such as Moderate Demented, which are difficult to detect using conventional methods. This study highlights the importance of CLAHE as part of the development of AI-based diagnostic tools for Alzheimer’s, especially in clinical environments with limited resources. The main contribution of this research is demonstrating how CLAHE, when integrated with modern architectures such as EfficientNet-B3 and EfficientNetV2-B3, not only enhances the model’s sensitivity to critical features in MRI data but also establishes a new approach to improving classification outcomes in real-world scenarios with resource constraints.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"17 ","pages":"Article 100129"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A proximal Gauss–Seidel algorithm for solving a generalized low-tubal-rank tensor approximation problem based on the t-product
Pub Date: 2025-11-03 | DOI: 10.1016/j.jcmds.2025.100128
Pablo Soto-Quiros
In this paper, we propose the generalized low-tubal-rank tensor approximation (GLTRTA) problem, which extends the generalized low-rank matrix approximation problem from matrices to third-order tensors using the t-product, a specific type of tensor multiplication. The GLTRTA problem is introduced as an extension of the existing low-tubal-rank tensor problem. We also develop a novel iterative algorithm, called the PGS method, to estimate a solution of the GLTRTA problem. The PGS method is based on a proximal point modification of the Gauss–Seidel algorithm. It is shown that the limit points of the sequence produced by the PGS method correspond to critical points of the objective function. Three numerical experiments are presented to illustrate the effectiveness of the PGS method, including its application to color image denoising.
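The t-product underlying GLTRTA has a standard FFT-based definition: transform along the tube (third) mode, multiply the frontal slices, and transform back. The sketch below implements that textbook definition; it is not code from the paper.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x n4 x n3):
    FFT along the tube mode, face-wise matrix products, inverse FFT."""
    n3 = A.shape[2]
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
    return np.real(np.fft.ifft(Ch, axis=2))  # real for real-valued inputs

A = np.random.rand(4, 3, 5)
B = np.random.rand(3, 2, 5)
C = t_product(A, B)   # shape (4, 2, 5)
```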
{"title":"A proximal Gauss–Seidel algorithm for solving a generalized low-tubal-rank tensor approximation problem based on the t-product","authors":"Pablo Soto-Quiros","doi":"10.1016/j.jcmds.2025.100128","DOIUrl":"10.1016/j.jcmds.2025.100128","url":null,"abstract":"<div><div>In this paper, we propose the generalized low-tubal-rank tensor approximation (GLTRTA) problem, which extends the generalized low-rank matrix approximation problem from matrices to third-order tensors using the <span><math><mi>t</mi></math></span>-product, where the <span><math><mi>t</mi></math></span>-product is a specific type of tensor multiplication. The GLTRTA problem is introduced as an extension of the existing low-tubal-rank tensor problem. We also develop a novel iterative algorithm, so-called the PGS method, to estimate a solution for the GLTRTA problem. The PGS method is based on a proximal point modification of the Gauss–Seidel algorithm. It is shown that the limit points of the sequence produced by the PGS method correspond to critical points of the objective function. Three numerical experiments are presented to illustrate the effectiveness of the PGS method, including its application to color image denoising.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"17 ","pages":"Article 100128"},"PeriodicalIF":0.0,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145466998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Broken adaptive ridge method for variable selection in generalized partly linear models with application to the coronary artery disease data
Pub Date: 2025-10-09 | DOI: 10.1016/j.jcmds.2025.100127
Christian Chan , Xiaotian Dai , Thierry Chekouo , Quan Long , Xuewen Lu
Motivated by the CATHGEN data, we develop a new statistical method for simultaneous variable selection and parameter estimation in the context of generalized partly linear models for data with high-dimensional covariates. The method is referred to as the broken adaptive ridge (BAR) estimator, which is an approximation of the L0-penalized regression obtained by iteratively performing reweighted squared L2-penalized regression. The generalized partly linear model extends the generalized linear model by incorporating a non-parametric component, allowing for the construction of a flexible model to capture various types of covariate effects. We employ the Bernstein polynomials as the sieve space to approximate the non-parametric functions so that our method can be implemented easily using the existing R packages. Extensive simulation studies suggest that the proposed method performs better than other commonly used penalty-based variable selection methods. We apply the method to the CATHGEN data with a binary response from a coronary artery disease study, which motivated our research, and obtained new findings in both high-dimensional genetic and low-dimensional non-genetic covariates.
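To make the reweighting idea concrete: in a plain linear model (a deliberate simplification of the authors' generalized partly linear setting with Bernstein-polynomial sieves), each BAR iteration solves a ridge problem whose per-coefficient penalty is inversely proportional to the squared previous estimate, so small coefficients are driven toward zero. A minimal sketch, with the initial penalty, iteration cap, and thresholds as assumptions:

```python
import numpy as np

def bar_linear(X, y, lam=1.0, xi=1.0, n_iter=50, tol=1e-8):
    """Broken adaptive ridge for a linear model: iteratively reweighted
    squared-L2 (ridge) penalties approximating the L0 penalty."""
    n, p = X.shape
    # initial ridge estimate with a fixed penalty xi (assumed starting point)
    beta = np.linalg.solve(X.T @ X + xi * np.eye(p), X.T @ y)
    for _ in range(n_iter):
        # penalty weight lam / beta_j^2 from the previous iterate
        D = np.diag(lam / np.maximum(beta**2, 1e-12))
        beta_new = np.linalg.solve(X.T @ X + D, X.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    beta[np.abs(beta) < 1e-6] = 0.0  # zero out numerically dead coefficients
    return beta
```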
{"title":"Broken adaptive ridge method for variable selection in generalized partly linear models with application to the coronary artery disease data","authors":"Christian Chan , Xiaotian Dai , Thierry Chekouo , Quan Long , Xuewen Lu","doi":"10.1016/j.jcmds.2025.100127","DOIUrl":"10.1016/j.jcmds.2025.100127","url":null,"abstract":"<div><div>Motivated by the CATHGEN data, we develop a new statistical method for simultaneous variable selection and parameter estimation in the context of generalized partly linear models for data with high-dimensional covariates. The method is referred to as the broken adaptive ridge (BAR) estimator, which is an approximation of the <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn></mrow></msub></math></span>-penalized regression by iteratively performing reweighted squared <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>2</mn></mrow></msub></math></span>-penalized regression. The generalized partly linear model extends the generalized linear model by incorporating a non-parametric component, allowing for the construction of a flexible model to capture various types of covariate effects. We employ the Bernstein polynomials as the sieve space to approximate the non-parametric functions so that our method can be implemented easily using the existing R packages. Extensive simulation studies suggest that the proposed method performs better than other commonly used penalty-based variable selection methods. We apply the method to the CATHGEN data with a binary response from a coronary artery disease study, which motivated our research, and obtained new findings in both high-dimensional genetic and low-dimensional non-genetic covariates.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"17 ","pages":"Article 100127"},"PeriodicalIF":0.0,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145242528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced access level-based Circular Replication for CDN performance using CRANSS
Pub Date: 2025-09-01 | DOI: 10.1016/j.jcmds.2025.100126
Meenakshi Gupta , Atul Garg
Users’ growing reliance on the web and the need for fast access for official or personal use are increasing the load on servers, which in turn increases the challenge for developers to provide fast access. Content Delivery Networks (CDNs) play a crucial role in overcoming these challenges by helping content providers deliver web content efficiently to end-users through geographically distributed surrogate servers (SS). This requires selecting effective web contents from the origin server (OS) for replication on the surrogate servers. In this work, an optimized content replication technique named Circular Replication among Neighbor Surrogate Servers (CRANSS) is proposed. CRANSS evaluates the access levels of surrogate servers based on their association with neighboring SS and uses these levels for strategic content replication decisions, while also taking into account the storage capacity of each SS and the request patterns of end-users. To evaluate the proposed technique, the network simulator ns-2 was used with a setup of 10 surrogate servers and one origin server. The results of CRANSS are compared with random, round-robin and popularity-based methods. The simulation results show that the proposed system achieves better average response time (1.65 to 1.99), completed response requests (95.06 to 95.43) and load imbalance index (14.27 to 17.25). The proposed technique enhances the overall web experience by providing faster, more reliable access to web content with optimal use of resources. The aim is to ensure efficient utilization of resources while maintaining end-users’ perceived Quality of Service (QoS) when accessing web content, in line with the needs of today’s digital landscape.
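The abstract names the inputs to the CRANSS replication decision (access levels derived from neighbor associations, SS storage capacity, end-user request patterns) without giving the scoring rule, so the sketch below is purely illustrative: it scores each content per surrogate server by local demand plus a weighted share of neighbor demand and replicates the top-scoring contents up to capacity. The neighbor weight and all data structures are assumptions, not the paper's algorithm.

```python
from collections import defaultdict

def replication_plan(requests, neighbors, capacity, nb_weight=0.5):
    """Illustrative access-level-based replication decision; the actual
    CRANSS scoring rule is not specified in the abstract."""
    plan = {}
    for ss in requests:
        scores = defaultdict(float)
        for content, count in requests[ss].items():
            scores[content] += count                      # local demand
        for nb in neighbors.get(ss, []):
            for content, count in requests.get(nb, {}).items():
                scores[content] += nb_weight * count      # neighbor demand
        ranked = sorted(scores, key=scores.get, reverse=True)
        plan[ss] = ranked[: capacity[ss]]                 # respect storage
    return plan

# toy example: two surrogate servers that are neighbors of each other
requests = {"ss1": {"a": 10, "b": 2}, "ss2": {"b": 8, "c": 5}}
neighbors = {"ss1": ["ss2"], "ss2": ["ss1"]}
capacity = {"ss1": 2, "ss2": 2}
print(replication_plan(requests, neighbors, capacity))
```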
{"title":"Enhanced access level-based Circular Replication for CDN performance using CRANNS","authors":"Meenakshi Gupta , Atul Garg","doi":"10.1016/j.jcmds.2025.100126","DOIUrl":"10.1016/j.jcmds.2025.100126","url":null,"abstract":"<div><div>The reliance of users on the Internet or web and need of fast access for official or personal use is increasing load on servers which also increases the challenges for developers to provide fast access. Content Delivery Networks (CDNs) playing a crucial role to overcome these challenges by helping content providers deliver web content efficiently to end-users through geographically distributed surrogate servers (SS). This requires selection of effective web contents from the origin server (OS) for replication on surrogate servers. In this work, optimizing content replication techniques named Circular Replication among Neighbor Surrogate Servers (CRANSS) is proposed. This technique considers access level of surrogate servers (SS) based on their association with neighbor SS for contents replication decision also CRANSS evaluates the access levels of surrogate servers based on their association with neighboring SS. It also allows for strategic content replication decisions and considers storage capacity of SS and requests pattern of end-users. For evaluation the proposed technique, Network simulator — ns-2 used, and 10 surrogate servers (SS) with one origin server (OS) were set. The results of CRANSS are compared with random, round-robin and popularity-based methods. The simulation results show that average response time (1.65 to 1.99), completed response requests (95.06 to 95.43) and load imbalance index (14.27 to 17.25) is better in proposed system. This proposed technique ensures enhancing the overall web experience by providing faster, optimal use of resources and more reliable access to web content. The aim is to ensure efficient utilization of resources keeping in view end-users perceived Quality of Service (QoS) of accessing web content as per the needs of today’s digital landscape.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"16 ","pages":"Article 100126"},"PeriodicalIF":0.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145026827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating interpolation techniques with deep learning for accurate brain tumor classification
Pub Date: 2025-07-26 | DOI: 10.1016/j.jcmds.2025.100124
Soumyarashmi Panigrahi, Dibya Ranjan Das Adhikary, Binod Kumar Pattanayak
Artificial Intelligence (AI)-powered computer vision techniques have revolutionized Medical Image Analysis (MIA), enabling accurate detection, diagnosis, and treatment of various disorders such as brain tumors. Brain tumors are a primary health concern worldwide, affecting thousands of people. Precisely identifying and diagnosing brain tumors is vital for effective management and life expectancy. Recent advances in AI, particularly in Deep Learning (DL) methods, have shown immense potential for analyzing medical images, including MRI. However, the quality of the MRI images significantly impacts the overall accuracy of the classification framework. To tackle this issue, we investigated the effect of various Interpolation Techniques (IT) on enhancing Magnetic Resonance Imaging (MRI) image quality, including Nearest Neighbour IT, Bilinear IT, Bicubic IT, and Lanczos IT. Furthermore, we employed Transfer Learning to leverage pre-trained Convolutional Neural Network (CNN) architectures, specifically DenseNet201. We proposed a modified DenseNet201 model, adding additional layers and extracting features from the interpolated brain MRI images. We used two publicly available brain tumor datasets. Our experimental results illustrate that the combination of Lanczos IT and a fine-tuned DenseNet201 attained the highest accuracies of 99.21% and 99.60% on Dataset-1 and Dataset-2, respectively, for brain tumor classification. Our analysis highlights the importance of image interpolation techniques in improving medical image quality and, ultimately, diagnostic accuracy. Our findings have significant implications for the development of AI-powered decision support systems in medical imaging, enabling healthcare professionals to make more accurate diagnoses and informed treatment decisions.
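As a concrete illustration of the two pipeline stages, the four interpolation kernels compared in the paper are all available in Pillow, and a DenseNet201 backbone with an added head can be assembled in Keras. The head layers, dropout rate, and image size below are illustrative assumptions, not the authors' exact modification.

```python
import tensorflow as tf
from PIL import Image

RESAMPLERS = {
    "nearest": Image.NEAREST,
    "bilinear": Image.BILINEAR,
    "bicubic": Image.BICUBIC,
    "lanczos": Image.LANCZOS,
}

def interpolate(path, method="lanczos", size=(224, 224)):
    """Resize one MRI slice with the chosen interpolation kernel."""
    return Image.open(path).convert("RGB").resize(size, RESAMPLERS[method])

def build_classifier(n_classes, input_size=224):
    """DenseNet201 backbone plus a small added head (assumed layers)."""
    base = tf.keras.applications.DenseNet201(
        include_top=False, weights="imagenet",
        input_shape=(input_size, input_size, 3), pooling="avg")
    x = tf.keras.layers.Dense(256, activation="relu")(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, out)
```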
{"title":"Integrating interpolation techniques with deep learning for accurate brain tumor classification","authors":"Soumyarashmi Panigrahi, Dibya Ranjan Das Adhikary, Binod Kumar Pattanayak","doi":"10.1016/j.jcmds.2025.100124","DOIUrl":"10.1016/j.jcmds.2025.100124","url":null,"abstract":"<div><div>Artificial Intelligence (AI)-powered Computer vision techniques have revolutionized Medical Image Analysis (MIA), enabling accurate detection, diagnosis, and treatment of various disorders such as brain tumors. Brain tumors are a worldwide primary health concern that affects thousands of people. Precisely identifying and diagnosing brain tumors is vital for effective management and life expectancy. Current advances in AI, particularly in Deep Learning (DL) methods have shown immense possibilities to analyze medical images, including MRI. However, the quality of the MRI images significantly impact the overall accuracy of the classification framework. To tackle this issue, we investigated the effect of various Interpolation Techniques (IT) on enhancing Magnetic Resonance Imaging (MRI) image quality, including Nearest Neighbour IT, Bilinear IT, Bicubic IT, and Lanczos IT. Furthermore, we employed Transfer Learning to leverage pre-trained Convolutional Neural Networks (CNNs) architectures, specifically DenseNet201. We proposed a modified DenseNet201 model by adding additional layers and extracting features from the interpolated brain MRI images. We used two publicly available brain tumor datasets. Our experimental results illustrated that the combination of Lanczos IT and fine-tuned DenseNet201 attained the highest accuracy of 99.21% and 99.60% in Dataset-1 and Dataset-2, respectively, for brain tumor classification. Our analysis highlights the importance of image interpolation techniques in improving medical image quality and ultimately improving diagnostic accuracy. Our findings have significant implications for the development of AI-powered decision support systems in medical imaging, enabling healthcare professionals to make more accurate diagnoses and informed treatment decisions.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"16 ","pages":"Article 100124"},"PeriodicalIF":0.0,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144724957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advanced rainfall nowcasting using 3D convolutional LSTM networks on satellite data
Pub Date: 2025-07-23 | DOI: 10.1016/j.jcmds.2025.100125
Abhay B. Upadhyay, Saurin R. Shah, Rajesh A. Thakker
This paper introduces an innovative method for rainfall nowcasting using a deep learning model that combines 3D Convolutional Neural Networks (3D-CNN) with a Long Short-Term Memory (LSTM) model. The primary objective is to improve the accuracy and timeliness of short-term rainfall predictions. The 3D-CNN component is responsible for extracting spatial features from complex weather data, while the LSTM component captures temporal dependencies across time steps. This hybrid architecture, referred to as the 3D-Conv-LSTM model, has demonstrated high effectiveness for nowcasting applications. The model processes weather data stored in Network Common Data Form (NetCDF) files and integrates satellite imagery to enhance forecast precision. This dual-data approach enables the model to learn intricate spatiotemporal patterns and relationships often missed by traditional techniques. Through extensive experimentation and validation, the proposed model exhibits superior performance in predicting precipitation events compared to conventional methods. The model achieved a Mean Squared Error (MSE) of 0.0003, a Peak Signal-to-Noise Ratio (PSNR) of 42.11, a Root Mean Square Error (RMSE) of 0.019, and a Structural Similarity Index Measure (SSIM) of 0.99, indicating excellent prediction quality. Furthermore, the computation time for training and inference was recorded as 18 min, demonstrating the model’s efficiency. These results confirm a significant improvement in forecast accuracy, which is critical for disaster preparedness and resource management in weather-sensitive regions.
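A hybrid of this kind can be sketched directly in Keras, which provides both Conv3D and ConvLSTM2D layers: 3D convolutions extract spatiotemporal features from a stack of frames, and a ConvLSTM collapses the time axis into a next-frame prediction. The layer counts, filter sizes, and 64x64 frame size below are assumptions for illustration, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_nowcaster(t=10, h=64, w=64, c=1):
    """Illustrative 3D-CNN + ConvLSTM hybrid for frame-sequence nowcasting."""
    inp = layers.Input(shape=(t, h, w, c))            # sequence of rain fields
    x = layers.Conv3D(32, (3, 3, 3), padding="same", activation="relu")(inp)
    x = layers.Conv3D(32, (3, 3, 3), padding="same", activation="relu")(x)
    x = layers.ConvLSTM2D(64, (3, 3), padding="same",
                          return_sequences=False)(x)  # collapse the time axis
    out = layers.Conv2D(c, (1, 1), activation="sigmoid")(x)  # next rain field
    return tf.keras.Model(inp, out)

model = build_nowcaster()
model.compile(optimizer="adam", loss="mse")  # MSE as in the reported metrics
```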
{"title":"Advanced rainfall nowcasting using 3D convolutional LSTM networks on satellite data","authors":"Abhay B. Upadhyay, Saurin R. Shah, Rajesh A. Thakker","doi":"10.1016/j.jcmds.2025.100125","DOIUrl":"10.1016/j.jcmds.2025.100125","url":null,"abstract":"<div><div>This paper introduces an innovative method for rainfall nowcasting using a deep learning model that combines 3D Convolutional Neural Networks (3D-CNN) with Long Short-Term Memory (LSTM) model. The primary objective is to improve the accuracy and timeliness of short-term rainfall predictions. The 3D-CNN component is responsible for extracting spatial features from complex weather data, while the LSTM component captures temporal dependencies across time steps. This hybrid architecture, referred to as the 3D-Conv-LSTM model, has demonstrated high effectiveness for nowcasting applications. The model processes weather data stored in Network Common Data Form (NetCDF) files and integrates satellite imagery to enhance forecast precision. This dual-data approach enables the model to learn intricate spatiotemporal patterns and relationships often missed by traditional techniques. Through extensive experimentation and validation, the proposed model exhibits superior performance in predicting precipitation events compared to conventional methods. The model achieved a Mean Squared Error (MSE) of 0.0003, Peak Signal-to-Noise Ratio (PSNR) of 42.11, Root Mean Square Error (RMSE) of 0.019, and a Structural Similarity Index Measure (SSIM) of 0.99, indicating excellent prediction quality. Furthermore, the computation time for training and inference was recorded 18 min, demonstrating the model’s efficiency. These results confirm a significant improvement in forecast accuracy, which is critical for disaster preparedness and resource management in weather-sensitive regions.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"16 ","pages":"Article 100125"},"PeriodicalIF":0.0,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144703263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Capturing patterns and radical changes in long-distance mobility by Flickr data
Pub Date: 2025-06-27 | DOI: 10.1016/j.jcmds.2025.100122
Anton Galich
In contrast to daily travel behaviour, long-distance mobility constitutes a poorly understood area in transport research. Only a few national household travel surveys include sections on long-distance travel, and these usually focus on the trip to the destination without gathering information about mobility behaviour at the destination. Other sources of data on mobility are either restricted to the national level, such as cell phone data, or to specific modes of transport, such as international flight statistics or floating car data. In addition, the outbreak of the COVID-19 pandemic in 2020 illustrated how difficult it is to grasp abrupt changes in mobility behaviour.
Against this background, this paper investigates the potential of Flickr data for capturing patterns and radical changes in long-distance mobility. Flickr is a social media platform that allows its users to upload photos and to comment on their own and other users’ photos; it is mainly used for sharing holiday and travel experiences. The results show that Flickr constitutes a viable source of data for capturing patterns and radical changes in long-distance mobility. The distribution of travel distances, the travel destinations, and the pandemic-related reduction in the mileage of all holiday trips in 2020 compared to 2019, as calculated from the Flickr data, are very similar to the same indicators determined from a national household travel survey, official passenger flight statistics, and other official transportation statistics.
{"title":"Capturing patterns and radical changes in long-distance mobility by Flickr data","authors":"Anton Galich","doi":"10.1016/j.jcmds.2025.100122","DOIUrl":"10.1016/j.jcmds.2025.100122","url":null,"abstract":"<div><div>In contrast to daily travel behaviour, long-distance mobility constitutes a poorly understood area in transport research. Only few national household travel surveys include sections on long-distance travel and these usually focus on the trip to the destination without gathering information about mobility behaviour at the destination. Other sources of data on mobility are either restricted to the national level such as cell phone data or to specific modes of transport such as international flight statistics or floating car data. In addition, the outbreak of the COVID-19 pandemic in 2020 has illustrated how difficult it is to grasp abrupt changes in mobility behaviour.</div><div>Against this background this paper investigates the potential of Flickr data for capturing patterns and radical changes in long-distance mobility. Flickr is a social media online platform allowing its users to upload photos and to comment on their own and other users’ photos. It is mainly used for sharing holiday and travel experiences. The results show that Flickr constitutes a viable source of data for capturing patterns and radical changes in long-distance mobility. The distribution of the travel distances, the travel destinations as well as reduction of the mileage of all holiday trips in 2020 in comparison to 2019 due to the pandemic calculated on the basis of the Flickr data is very similar to the same indicators determined on the basis of a national household travel survey, official passenger flight statistics, and other official transportation statistics.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"16 ","pages":"Article 100122"},"PeriodicalIF":0.0,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144513831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An in-depth analysis of the IRPSM-Padé algorithm for solving three-dimensional fluid flow problems
Pub Date: 2025-06-26 | DOI: 10.1016/j.jcmds.2025.100123
Abdullah Dawar , Hamid Khan , Muhammad Ullah
In this article, a comparative analysis of the IRPSM-Padé and DTM-Padé methods is conducted by solving the fluid flow problem over a bi-directional extending sheet. The fluid flow is expressed by partial differential equations (PDEs), which are converted to ordinary differential equations (ODEs) by means of similarity variables. Both the IRPSM-Padé and DTM-Padé methods are tested at the [3,3] and [6,6] Padé approximants. Tables and figures are used to examine the outcomes and show the consistency and accuracy of both approaches. The outcomes of IRPSM-Padé [3,3] and [6,6] closely match those of DTM-Padé [3,3] and [6,6] at the same order of approximation. This strong agreement indicates that the two methods handle the fluid flow problem in a comparable manner and supports the accuracy and dependability of the more recent technique (IRPSM-Padé). The recorded CPU times show that DTM consistently performs better than IRPSM in terms of computational efficiency: the total CPU time for IRPSM is nearly three times that of DTM, indicating that IRPSM demands more computational effort. These times accurately reflect the computational efficiency of IRPSM and DTM, because the Padé approximation merely rationalizes the series results and has no influence on CPU time. The residual error analysis demonstrates that the IRPSM-Padé technique produces exceptionally precise approximations, with errors decreasing as the Padé order increases. Furthermore, the numerical assessment demonstrates that higher Padé orders improve the accuracy and stability of IRPSM-Padé.
Computational Implementation:
Mathematica 14.1 was used to carry out the numerical simulations, the DTM-Padé method, and the IRPSM-Padé method. Mathematica’s integrated symbolic and numerical solvers, including the NDSolve function for numerical validation, were used to solve the governing equations. Additionally, plots such as mesh visualizations and absolute error graphs were created using Mathematica’s built-in plotting capabilities, without the use of third-party programs.
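Independent of the Mathematica implementation, the Padé step itself is easy to reproduce: given the Maclaurin coefficients of a truncated series solution, a [3,3] approximant is the ratio of two cubics that matches the series through order 6. A small Python check using the exp(x) series as a stand-in for an IRPSM/DTM series solution (SciPy's pade routine, not the authors' code):

```python
import math
from scipy.interpolate import pade

# Maclaurin coefficients of exp(x) through x^6 stand in for a series solution
coeffs = [1.0 / math.factorial(k) for k in range(7)]
p, q = pade(coeffs, 3)  # [3,3]: cubic numerator over cubic denominator

x = 2.0
print(p(x) / q(x))      # ~7.400, Padé-rationalized value
print(math.exp(x))      # ~7.389, exact; the raw degree-6 truncation gives ~7.356
```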
{"title":"An in-depth analysis of the IRPSM-Padé algorithm for solving three-dimensional fluid flow problems","authors":"Abdullah Dawar , Hamid Khan , Muhammad Ullah","doi":"10.1016/j.jcmds.2025.100123","DOIUrl":"10.1016/j.jcmds.2025.100123","url":null,"abstract":"<div><div>In this article, a comparative analysis of the IRPSM-Padé and DTM-Padé methods has been conducted by solving the fluid flow problem over a bi-directional extending sheet. The fluid flow is expressed by the partial differential equations (PDEs) which are then converted to ordinary differential equations (ODEs) by mean of similarity variables. Both the IRPSM-Padé and DTM-Padé methods are tested at [3,3] and [6,6] Padé approximants. Tables and Figures are used to examine the outcomes and show the consistency and accuracy of both approaches. The outcomes of IRPSM-Padé [3,3] and [6,6] with the same order of approximations closely match the outcomes of DTM-Padé [3,3] and [6,6] using Padé approximants. The significant degree of agreement between the two methods indicates that IRPSM-Padé and DTM-Padé handle the fluid flow problem in a comparable manner. The findings of the IRPSM-Padé and DTM-Padé methods show a strong degree of agreement, indicating the accuracy and dependability of the more recent technique (IRPSM-Padé). The obtained CPU time shows that the DTM consistently perform better that IRPSM in terms of computational efficiency. The total CPU time for IRPSM is nearly three-times greater than that of DTM, indicating that IRPSM demands more computational effort. The recorded times accurately reflect the computational efficiency of IRPSM and DTM because the Padé approximation simply improves the results rationalization and has no influence on CPU time. The residual errors analysis demonstrates that the IRPSM-Padé technique produces exceptionally precise approximations, with errors decreasing as the Padé order increases. Furthermore, the numerical assessment demonstrates that higher Padé orders improve the accuracy and stability of the IRPSM-Padé.</div></div><div><h3>Computational Implementation:</h3><div>Mathematica 14.1 was used to carry out numerical simulations, the DTM-Padé method, and the IRPSM-Padé method. Mathematica’s integrated symbolic and numerical solvers, including the ND Solve function for numerical validation, were used to solve the governing equations. Additionally, plots, such as mesh visualizations and absolute error graphs, were created using Mathematica’s built-in plotting capabilities without the usage of third-party programs.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"16 ","pages":"Article 100123"},"PeriodicalIF":0.0,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144491666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On exact line search method for a polynomial matrix equation
Pub Date: 2025-06-01 | DOI: 10.1016/j.jcmds.2025.100120
Chacha Stephen Chacha
In this work, we investigate the elementwise minimal non-negative (EMN) solution of a matrix polynomial equation using an exact line search (ELS) technique to enhance the convergence of the Newton method. Non-negative solutions to matrix equations are essential in engineering, optimization, signal processing, and data mining, driving advancements and improving efficiency in these fields. While recent advancements in solving matrix equations with non-negative constraints have emphasized iterative methods, optimization strategies, and theoretical developments, efficiently finding the EMN solution remains a significant challenge. The proposed method integrates the Newton method with an exact line search strategy to accelerate convergence and improve solution accuracy. Numerical experiments demonstrate that this approach requires fewer iterations to reach the EMN solution than the standard Newton method. Moreover, the method shows improved stability, particularly when dealing with ill-conditioned input matrices and very small error tolerances.
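The abstract does not reproduce the specific polynomial matrix equation, so the sketch below uses the simple quadratic matrix equation F(X) = X^2 - A = 0 as a stand-in to show the ELS mechanics: the Newton direction D solves a Sylvester equation, and because F(X + tD) = (1 - t)F(X) + t^2 D^2 along that direction, the squared Frobenius residual is an explicit quartic in t that can be minimized exactly. This is an illustration of the exact-line-search idea only; it does not enforce non-negativity and is not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def newton_els(A, X0, tol=1e-10, max_iter=50):
    """Newton with exact line search for the stand-in equation X @ X = A."""
    X = X0.copy()
    for _ in range(max_iter):
        F = X @ X - A
        if np.linalg.norm(F) < tol:
            break
        D = solve_sylvester(X, X, -F)     # Newton step: X D + D X = -F
        S = D @ D
        a, b, c = np.sum(S * S), np.sum(F * S), np.sum(F * F)
        # g(t) = c (1-t)^2 + 2 b (1-t) t^2 + a t^4; its derivative is the cubic
        # 4a t^3 - 6b t^2 + (4b + 2c) t - 2c, minimized over its real roots
        roots = np.roots([4 * a, -6 * b, 4 * b + 2 * c, -2 * c])
        real = [r.real for r in roots if abs(r.imag) < 1e-8]
        g = lambda t: c * (1 - t) ** 2 + 2 * b * (1 - t) * t ** 2 + a * t ** 4
        t = min(real, key=g) if real else 1.0   # fall back to the full step
        X = X + t * D
    return X

# toy check on a symmetric positive definite A, starting from the identity
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
X = newton_els(A, np.eye(5))
print(np.linalg.norm(X @ X - A))   # small residual
```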
{"title":"On exact line search method for a polynomial matrix equation","authors":"Chacha Stephen Chacha","doi":"10.1016/j.jcmds.2025.100120","DOIUrl":"10.1016/j.jcmds.2025.100120","url":null,"abstract":"<div><div>In this work, we investigate the elementwise minimal non-negative (EMN) solution of the matrix polynomial equation using an exact line search (ELS) technique to enhance the convergence of the Newton method. Nonnegative solutions to matrix equations are essential in engineering, optimization, signal processing, and data mining, driving advancements and improving efficiency in these fields. While recent advancements in solving matrix equations with nonnegative constraints have emphasized iterative methods, optimization strategies, and theoretical developments, efficiently finding the EMN solution remains a significant challenge. The proposed method integrates the Newton method with an exact line search (ELS) strategy to accelerate convergence and improve solution accuracy. Numerical experiments demonstrate that this approach requires fewer iterations to reach the EMN solution compared to the standard Newton method. Moreover, the method shows improved stability, particularly when dealing with ill-conditioned input matrices and very small tolerance errors.</div></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"15 ","pages":"Article 100120"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144189659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}