
Latest articles from Applied Stochastic Models in Business and Industry

Optimal Transport Autoregression to Forecast High-Frequency Financial Data Distributions
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-12-30 | DOI: 10.1002/asmb.70067
Paolo Pagnottoni

In this paper, we study the properties and performance of optimal transport autoregression in modeling and forecasting high-frequency financial data distributions. We build on a class of univariate autoregressive transport models recently proposed in the literature (Zhu and Müller), where the distributional time series dynamics are modeled either through a single scalar, similarly to traditional Euclidean autoregressive models, or via a functional distribution-contraction coefficient. Properties and performance of the models are investigated through an empirical application to forecasting the distributions of high-frequency financial price returns and volatility of Bitcoin. Our results show that forecast errors are highly time- and quantile-dependent: while autoregressive transport models are generally able to predict return and volatility densities during “normal business” periods, forecast errors tend to rise in the proximity of extreme quantiles, though this increase is non-monotonic. We highlight the strengths and weaknesses of the method in modeling the distributional time series of high-frequency, noisy financial data, and suggest some potential directions for future research.
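The scalar variant described above can be sketched in a few lines: for one-dimensional distributions, the 2-Wasserstein geometry acts on quantile functions, so an autoregression that contracts each distribution toward the Wasserstein barycenter by a single coefficient can be estimated by least squares on gridded quantiles. This is an illustrative toy version on simulated data (the grid, the AR design, and names like `beta` are our choices), not the estimator of Zhu and Müller or of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
grid = np.linspace(0.05, 0.95, 19)   # quantile levels (tails trimmed)

# Toy distributional time series: each period yields a sample of "returns"
# whose dispersion mean-reverts; only the empirical quantile function is kept.
T = 200
sigma = np.empty(T)
sigma[0] = 1.0
for t in range(1, T):
    sigma[t] = 1.0 + 0.7 * (sigma[t - 1] - 1.0) + 0.05 * rng.standard_normal()
Q = np.array([np.quantile(rng.normal(0.0, s, 2000), grid) for s in sigma])

# Scalar transport autoregression: deviations of the quantile functions from
# the Wasserstein barycenter (pointwise mean of quantile functions for 1-D
# laws) are contracted by a single coefficient, estimated by least squares.
Qbar = Q.mean(axis=0)
X = (Q[:-1] - Qbar).ravel()
Y = (Q[1:] - Qbar).ravel()
beta = float(X @ Y / (X @ X))

# One-step-ahead forecast of the next quantile function
Q_hat = Qbar + beta * (Q[-1] - Qbar)
print(round(beta, 3))
```

For beta in [0, 1] the forecast is a convex combination of two monotone functions, so it remains a valid quantile function.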

Citations: 0
Measuring and Assessing the Healthcare Services Experience: A Proposal of a Synthetic Index
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-12-30 | DOI: 10.1002/asmb.70065
Leonardo Salvatore Alaimo, Filomena Maggino

The evaluation of healthcare services is a crucial aspect of public health management, as it provides insights into service effectiveness, efficiency, and user satisfaction. This paper proposes a multidimensional approach to measuring healthcare service experience by constructing a synthetic index that incorporates three key perceptual dimensions: cost, accessibility, and quality. These latent constructs are measured using a set of elementary indicators from the European Quality of Life Survey. To develop the dimensional synthetic indices—one for each experience dimension—as well as the overall experience index and the customer satisfaction index, we employ Partial Least Squares Path Modeling (PLS-PM). This approach not only enables the synthesis of latent variables but also allows for the analysis of structural relationships between them. The results provide a comprehensive framework for assessing healthcare service experiences and offer valuable insights for policymakers and service providers aiming to enhance healthcare quality and accessibility while improving user satisfaction.
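As a rough illustration of the measurement idea, not the paper's PLS-PM estimator (which alternates outer measurement and inner structural estimation), the sketch below builds one composite score per indicator block via its first principal component and then regresses a simulated satisfaction outcome on the three dimensional indices. All data and variable names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical survey indicators for three latent dimensions of the
# healthcare experience (cost, accessibility, quality), three items each.
latent = rng.normal(size=(n, 3))
blocks = {name: latent[:, [k]] + 0.6 * rng.normal(size=(n, 3))
          for k, name in enumerate(["cost", "access", "quality"])}

def composite(X):
    """First-principal-component scores, a common outer-estimation choice."""
    Z = (X - X.mean(0)) / X.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    w = Vt[0] if Vt[0].sum() >= 0 else -Vt[0]   # orient weights positively
    s = Z @ w
    return (s - s.mean()) / s.std()

scores = np.column_stack([composite(X) for X in blocks.values()])

# Structural part: regress a (simulated) satisfaction outcome on the
# three dimensional indices to weight them in an overall experience index.
satisfaction = scores @ np.array([0.2, 0.3, 0.5]) + 0.3 * rng.normal(size=n)
coef, *_ = np.linalg.lstsq(scores, satisfaction, rcond=None)
print(np.round(coef, 2))
```

The recovered structural weights rank the dimensions by their contribution to satisfaction, which is the interpretive payoff of the synthetic-index construction.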

Citations: 0
Extending Explainable Ensemble Trees to Regression Contexts
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-12-26 | DOI: 10.1002/asmb.70064
Massimo Aria, Agostino Gnasso, Carmela Iorio, Marjolein Fokkema

The advent of ensemble methods, such as Random Forest (RF), has led to a paradigm shift in supervised learning. These methods have achieved remarkable levels of prediction accuracy by aggregating multiple weak learners. However, a drawback of these methods is their lack of transparency, which often prevents users from understanding their prediction processes. In light of these challenges, Explainable Ensemble Trees (E2Tree) has recently been proposed, providing a graphical representation of the relationships between response variables and predictors in RFs for classification. E2Tree constructs a single decision tree based on (dis)similarities between observations. By summarizing a forest as a single decision tree built on predictor-based dissimilarities, E2Tree merges the strengths of decision trees and decision tree ensembles. In this paper, we propose to extend the E2Tree methodology to regression contexts. We investigate the performance of E2Tree for regression using real-world datasets. We use the Mantel test to assess the correlation between the dissimilarity structures of the RF and E2Tree.
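The Mantel step can be reproduced generically: correlate the upper triangles of two distance matrices and assess significance by permuting object labels in one of them. A minimal sketch on toy proximity matrices (standing in here for the RF and E2Tree dissimilarities, which the paper computes from the fitted models):

```python
import numpy as np

def mantel(D1, D2, n_perm=999, seed=0):
    """Permutation test for correlation between two distance matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)
    a = D1[iu]
    r_obs = np.corrcoef(a, D2[iu])[0, 1]
    count = 0
    n = D1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)                     # relabel objects of D2
        if np.corrcoef(a, D2[np.ix_(p, p)][iu])[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)       # one-sided p-value

# Toy example: two noisy versions of the same underlying distances
rng = np.random.default_rng(3)
x = rng.normal(size=30)
D = np.abs(x[:, None] - x[None, :])
D_rf = D + 0.1 * np.abs(rng.normal(size=D.shape))
D_e2t = D + 0.1 * np.abs(rng.normal(size=D.shape))
D_rf = (D_rf + D_rf.T) / 2
D_e2t = (D_e2t + D_e2t.T) / 2
np.fill_diagonal(D_rf, 0)
np.fill_diagonal(D_e2t, 0)
r, pval = mantel(D_rf, D_e2t)
print(round(r, 3), pval)
```

A high Mantel correlation with a small p-value indicates that the single explainer tree preserves the ensemble's dissimilarity structure.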

Citations: 0
GARCH With Intervention Analysis to Evaluate Short Selling Restrictions
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-12-22 | DOI: 10.1002/asmb.70063
Wilson Calmon, Gabriel Mizuno, Sara Paixão, Adrian Pizzinga

At a critical moment in the 2007–2009 financial crisis, financial authorities in the US, Japan, the United Kingdom, France, Canada, and Germany unanimously banned short sales in their respective markets. We estimate GARCH models with intervention analysis to assess the effect of such regulatory decisions on the unconditional or long-term stock market volatility, and we focus on trading days under short selling restrictions. Contrary to the conclusion reached by some important literature, our findings reveal that, for all six aforementioned markets, the volatility did not grow once the restrictions were imposed, and it began to decrease to the levels observed prior to the bankruptcy of Lehman Brothers.
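The mechanism being tested can be illustrated with a simulated GARCH(1,1) whose variance equation carries a level-shift intervention dummy. The parameter values below are arbitrary, chosen only to make the long-run variance drop visible; they are not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# GARCH(1,1) with a level-shift intervention dummy D_t in the variance
# equation:
#   sigma2_t = omega + delta * D_t + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}
# While the dummy is on, the unconditional variance becomes
# (omega + delta) / (1 - alpha - beta), so delta < 0 lowers long-run
# volatility.
omega, alpha, beta, delta = 0.10, 0.05, 0.85, -0.06
T = 20000
D = np.zeros(T)
D[T // 2:] = 1.0                       # "restriction" active in second half
eps = np.zeros(T)
sig2 = omega / (1 - alpha - beta)      # start at unconditional variance
for t in range(T):
    if t > 0:
        sig2 = omega + delta * D[t] + alpha * eps[t - 1] ** 2 + beta * sig2
    eps[t] = np.sqrt(sig2) * rng.standard_normal()

v_before = eps[: T // 2].var()         # theoretical value 0.10 / 0.10 = 1.0
v_after = eps[T // 2:].var()           # theoretical value 0.04 / 0.10 = 0.4
print(round(v_before, 2), round(v_after, 2))
```

In the paper's setting, the sign and size of the estimated dummy coefficient is what distinguishes "volatility grew under the ban" from "it decreased".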

Citations: 0
Kernel Principal Component Analysis for Uncertain Data Objects and Its Application in Classification
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-12-18 | DOI: 10.1002/asmb.70062
Changwan Ko, Behnam Tavakkol, Youngseon Jeong

Uncertain data mining has been a growing field of research in recent years. Numerous data mining techniques for performing tasks such as clustering, classification, anomaly detection, and so on have been developed for uncertain data. Principal component analysis (PCA) and its extension, kernel principal component analysis (KPCA), are two well-known techniques that are widely used for dimensionality reduction and feature extraction on traditional (certain) data. However, to the best of our knowledge, these techniques have not yet been developed for uncertain data. In this paper, uncertain principal component analysis (UPCA) and uncertain kernel principal component analysis (UKPCA) are developed. The proposed techniques consider the inherent uncertainty of the uncertain data, unlike the traditional techniques that ignore such uncertainty. In addition, in this paper, we propose a decision tree classification model combined with the developed UKPCA technique. The proposed model is capable of achieving high classification accuracy for both real-world and synthetic data, especially for cases that involve classes having nonlinear/arbitrary shapes.

Citations: 0
Ransomware Detection Using Sample Entropy and Graphical Models: A Methodology for Explainable Artificial Intelligence (XAI) in Cybersecurity
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-12-14 | DOI: 10.1002/asmb.70061
Danilo Bruschi, Marzio De Corato, Alfio Ferrara, Silvia Salini

Malware detection poses a critical challenge for both society and Business and Industry (B&I), particularly given the necessity for secure digital transformation. Among various cybersecurity threats, ransomware has emerged as especially disruptive, capable of halting operations, interrupting business continuity, and causing significant financial damage. Recent research has increasingly leveraged machine learning (ML) techniques to detect ransomware using Hardware Performance Counters (HPCs)—special CPU registers that track low-level hardware activities. In this study, we first propose a Sample Entropy (SampEn)-based method for compressing HPC time series data. This method effectively reduces dimensionality while preserving essential behavioral patterns, thus making it particularly suitable for practical B&I scenarios where accuracy and computational efficiency are crucial. Second, we investigate explainable algorithms for ransomware detection in B&I contexts, emphasizing transparency and interpretability. To achieve this goal, we focus on graphical models, specifically Markov Random Fields (MRFs) and Bayesian Networks. We evaluate the performance of these explainable methods against a baseline comprising Elastic Net, Support Vector Machines (SVM) with a radial kernel, XGBoost, and Autoencoder models. Our results demonstrate that these graphical models provide consistent and interpretable outcomes, closely aligned with known ransomware behaviors.
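Sample Entropy itself is standard and compact: count template matches of length m and m + 1 within tolerance r, and take the negative log of their ratio. A minimal implementation follows (tolerance as a fraction of the series' standard deviation, as is conventional; the toy traces are ours, not HPC data). A regular trace scores lower than an irregular one, which is what makes SampEn a useful compression of HPC time series:

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy of a 1-D series, tolerance r given as fraction of std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        # All length-mm templates, Chebyshev-distance matches within tol
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)
        n = d.shape[0]
        return (d[np.triu_indices(n, k=1)] <= tol).sum()
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(9)
t = np.arange(500)
regular = np.sin(0.2 * t)              # predictable, low-entropy trace
noisy = rng.standard_normal(500)       # irregular, high-entropy trace
print(round(sampen(regular), 3), round(sampen(noisy), 3))
```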

Citations: 0
Discussing Cascading Failures: The Bursting Point Processes Approach
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-12-09 | DOI: 10.1002/asmb.70060
Maxim Finkelstein, Na Young Yoo, Ji Hwan Cha

We discuss a new approach to modelling cascading failures from a probabilistic viewpoint based on exploding self-exciting point processes. It explains a possible mechanism of cascade development. From the practical point of view, this approach might be oversimplified for modelling the flow of events (failures) in, for example, real-life power grids (as, e.g., not considering the specific network topology). However, even in this general form, it can be useful for overall modelling of the converging process of cascading failures and understanding the probabilistic nature of this interesting phenomenon. Three baseline processes are considered: the geometric process, the geometric-type process with a decreasing threshold after each event and the extended generalised Polya process.
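The first baseline is easy to simulate: in a geometric process with ratio a < 1, the k-th inter-event time is a^k X_k, so event times accumulate at a finite limit, the "bursting point" of the cascade. A sketch with exponential X_k (parameter values arbitrary, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(11)

# Geometric process: inter-event times a^k * X_k with X_k i.i.d. Exp(lam)
# and a < 1, so event times accumulate at a finite bursting point --
# infinitely many failures in finite time, a stylised cascade.
a, lam, K = 0.7, 1.0, 200
gaps = a ** np.arange(K) * rng.exponential(1 / lam, size=K)
times = np.cumsum(gaps)

# Expected bursting point: E[sum_k a^k X_k] = (1 / lam) / (1 - a)
expected_limit = (1 / lam) / (1 - a)
print(round(times[-1], 3), round(expected_limit, 3))
```

After a couple of hundred events the remaining gaps are astronomically small, so the simulated trajectory is effectively at its accumulation point.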

Citations: 0
An Approach to Explore Consumer Behavior Patterns in Retail Markets Using Market Basket Analysis
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-11-24 | DOI: 10.1002/asmb.70057
Aarti Pardeshi, Yogesh Shahare, Kalpana Kumaran, Anand Muni Mishra, Piyush Kumar Shukla, Mohamed M. Hassan, Fayez Althobaiti

Big data is becoming an integral part of everyday life. Data can be generated through various sources, and analyzing this data to generate revenue is the biggest challenge. The growth of grocery stores in online and offline markets requires retailers to analyze customer purchase behavior. Effective analysis can improve service quality, profitability, and customer satisfaction. This paper focuses on Market Basket Analysis (MBA), an efficient technique that identifies customer purchase behaviors. AAGI, an Apriori-based algorithm, is developed and used to analyze categorical data collected from grocery stores. The dataset consists of 10,233 transactions collected from the Raigad region of Maharashtra, India, and includes 125 unique items across dairy products, fruits, processed food, and vegetables. Association rules were generated using support values of 2%, 3%, 3.5%, and 4% and confidence values of 20%, 30%, and 40%. The best results were observed with support of 3.5% and confidence of 30%, which produced 18 strong association rules. These findings inform targeted marketing strategies and cross-selling opportunities.
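The underlying support/confidence machinery fits in a few lines. This is plain Apriori-style rule mining over item pairs, not the AAGI algorithm itself, and the baskets below are hypothetical:

```python
from itertools import combinations

# Toy grocery baskets (hypothetical, not the paper's dataset)
baskets = [
    {"milk", "bread", "butter"}, {"milk", "bread"}, {"bread", "butter"},
    {"milk", "butter"}, {"milk", "bread", "butter"}, {"bread"},
    {"milk", "bread", "butter"}, {"milk", "bread"},
]
n = len(baskets)

def support(itemset):
    """Fraction of baskets containing every item of the itemset."""
    return sum(itemset <= b for b in baskets) / n

# Frequent pairs at min support 35%, rules kept at min confidence 60%
items = sorted({i for b in baskets for i in b})
rules = []
for x, y in combinations(items, 2):
    s = support({x, y})
    if s >= 0.35:
        for antecedent, consequent in ((x, y), (y, x)):
            conf = s / support({antecedent})
            if conf >= 0.6:
                rules.append((antecedent, consequent, round(s, 2), round(conf, 2)))
print(rules)
```

Each rule (A, C, support, confidence) reads "baskets containing A also tend to contain C", which is exactly the cross-selling signal the abstract describes.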

Citations: 0
Data Driven Investment Strategies Using Bayesian Inference in Regime-Switching Models
IF 1.5 | CAS Tier 4 (Mathematics) | Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-11-23 | DOI: 10.1002/asmb.70058
Eléonore Blanchard, Pierre-Olivier Goffard

This article presents the benefits of using Bayesian algorithms to fit regime-switching models to daily financial returns data in order to design trading strategies. Our study focuses on a Gaussian hidden Markov model (HMM). We show how the application of a simple smoothing technique preserves the hidden Markov structure and facilitates regime detection even in instances of highly volatile data. The effectiveness of a trading strategy, based on regime detection, may be hindered by a high rate of false signals, leading to numerous trades and, consequently, an escalation in transaction costs. By reducing variance through data smoothing, we enhance the persistence of regimes over time. We validate our statistical learning procedures using synthetic data prior to their application to real-world financial data.
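A compact sketch of both ingredients, with made-up parameters rather than anything estimated in the paper: forward filtering recovers the hidden regime of a two-state Gaussian HMM with known parameters, and a short moving average of squared returns cuts the number of spurious regime switches produced by naive thresholding:

```python
import numpy as np

rng = np.random.default_rng(13)

# Two-regime Gaussian returns: calm (sigma = 0.5) vs turbulent (sigma = 2.0)
T = 600
P = np.array([[0.98, 0.02], [0.03, 0.97]])     # persistent transition matrix
sig = np.array([0.5, 2.0])
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
ret = sig[states] * rng.standard_normal(T)

def filtered_states(x, sig, P):
    """Forward (filtering) recursion for a Gaussian HMM with known
    parameters; returns the most probable state at each time."""
    like = np.exp(-0.5 * (x[:, None] / sig) ** 2) / sig
    p = np.full(2, 0.5)
    path = np.empty(len(x), dtype=int)
    for t, l in enumerate(like):
        p = l * (P.T @ p)
        p /= p.sum()
        path[t] = p.argmax()
    return path

acc = (filtered_states(ret, sig, P) == states).mean()

# Smoothing squared returns before thresholding suppresses false regime
# switches relative to thresholding the raw squared returns.
raw_path = (ret ** 2 > 1.0).astype(int)
smooth = np.convolve(ret ** 2, np.ones(5) / 5, mode="same")
smooth_path = (smooth > 1.0).astype(int)
n_switch = lambda s: int(np.abs(np.diff(s)).sum())
print(round(acc, 2), n_switch(raw_path), n_switch(smooth_path))
```

Fewer false switches means fewer trades, which is the transaction-cost argument made in the abstract.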

引用次数: 0
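The abstract's pipeline — detect regimes with a Gaussian HMM, then smooth to suppress false switches that inflate transaction costs — can be sketched numerically. The snippet below is a minimal illustration, not the authors' method: all parameters are hypothetical, regime posteriors are computed by forward-backward smoothing with the *true* parameters (the paper fits them by Bayesian inference), and, for brevity, a majority-vote filter is applied to the decoded regime path rather than to the returns themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-regime Gaussian HMM for daily returns (hypothetical parameters).
P = np.array([[0.98, 0.02],      # calm regime is persistent
              [0.03, 0.97]])     # turbulent regime is persistent
mu = np.array([0.0005, -0.0010])  # regime-specific mean returns
sigma = np.array([0.005, 0.020])  # regime-specific volatilities

T = 2000
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
returns = rng.normal(mu[states], sigma[states])

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def regime_posteriors(x, P, mu, sigma):
    """Forward-backward smoothing: P(state_t = k | all observations)."""
    n = len(x)
    B = np.stack([gauss_pdf(x, mu[k], sigma[k]) for k in range(2)], axis=1)
    alpha = np.zeros((n, 2))
    beta = np.ones((n, 2))
    alpha[0] = 0.5 * B[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):
        alpha[t] = B[t] * (alpha[t - 1] @ P)
        alpha[t] /= alpha[t].sum()           # rescale to avoid underflow
    for t in range(n - 2, -1, -1):
        beta[t] = P @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

post = regime_posteriors(returns, P, mu, sigma)
decoded = post.argmax(axis=1)

# Majority-vote smoothing of the decoded path removes isolated false
# switches, echoing the paper's point that smoothing curbs trading costs.
k = 5
votes = np.convolve(decoded, np.ones(k), mode="same")
decoded_smooth = (votes > k / 2).astype(int)

accuracy = (decoded == states).mean()
switches = int(np.sum(decoded[1:] != decoded[:-1]))
switches_smooth = int(np.sum(decoded_smooth[1:] != decoded_smooth[:-1]))
print(f"regime accuracy: {accuracy:.3f}, switches: {switches} -> {switches_smooth}")
```

With persistent regimes and well-separated volatilities, posterior decoding recovers most of the hidden path, and the vote filter leaves at most as many regime switches as the raw decoding — fewer switches means fewer trades.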
Functional Data Regression on Distribution-Valued Data via Logarithm Derivative Quantile Transformation
IF 1.5 Tier 4 Mathematics Q3 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-11-23 DOI: 10.1002/asmb.70059
Gianmarco Borrata, Antonio Balzanella, Rosanna Verde

In this paper, we introduce a new regression method tailored for data presented as distributions. Building on the latest advancements in Distributional Data Analysis (DDA), we propose a new regression model based on a transformation of quantile functions using Logarithmic Derivative Quantile (LDQ) functions. For each distributional variable X_j (where j = 1, …, p), we model the LDQ functions as functional data by applying smoothing B-splines at the points corresponding to the distributions' quantiles. The main contribution is the development of a regression model that considers functional regression coefficients. This allows for the consideration of distribution characteristics such as position, variability, and shape. Another contribution is the development of a robust procedure based on trimming distributions to reduce the instability of the tails and make more effective predictions. The proposed approach is corroborated by real environmental data. Cross-validation and bootstrap techniques have been employed to assess the effectiveness of both the new regression model and its robust variant.

Applied Stochastic Models in Business and Industry, vol. 41, no. 6 (2025).
Citations: 0
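The core mechanics of the abstract — map each distribution to its quantile function, take the log of the quantile density (the LDQ transform), and trim the tails for robustness — can be sketched with plain numpy. This is a minimal sketch under illustrative assumptions (a single Gaussian sample, a fixed trimmed grid, finite-difference derivatives); the paper's B-spline smoothing and the functional regression itself are omitted. The inverse map, integrating exp(LDQ) back up, approximately recovers the quantile function.

```python
import numpy as np

rng = np.random.default_rng(1)
# One distribution-valued observation, represented by a sample (illustrative).
sample = rng.normal(loc=2.0, scale=1.5, size=50_000)

# Trimmed quantile grid: dropping the extreme tails mirrors the paper's
# trimming idea, which stabilises the transform where quantiles are noisy.
p = np.linspace(0.05, 0.95, 91)
Q = np.quantile(sample, p)          # empirical quantile function on the grid

# LDQ transform: log of the quantile density q(p) = dQ/dp.
# Empirical quantiles are nondecreasing, so the gradient is positive here.
ldq = np.log(np.gradient(Q, p))

# Inverse map: integrate exp(LDQ) (trapezoidal rule) to recover Q,
# anchored at the lowest grid quantile Q(0.05).
q_dens = np.exp(ldq)
Q_rec = Q[0] + np.concatenate(
    [[0.0], np.cumsum(0.5 * (q_dens[1:] + q_dens[:-1]) * np.diff(p))]
)

err = np.max(np.abs(Q_rec - Q))
print(f"max reconstruction error on [0.05, 0.95]: {err:.4f}")
```

Because the LDQ values are unconstrained real functions, they can be smoothed with B-splines and fed to an ordinary functional regression, which is precisely what makes the transformation convenient.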
Journal: Applied Stochastic Models in Business and Industry