Fernando Montoya, Hernán Astudillo, Daniela Díaz, Esteban Berríos
Conventional methods for process monitoring often fail to capture the causal relationships that drive outcomes, making it hard to distinguish causal anomalies from mere correlations in activity flows. Hence, there is a need for approaches that allow causal interpretation of atypical scenarios (anomalies) and identify the influence of operational variables on these anomalies. This article introduces CaProM, an innovative technique based on causality techniques, applied during the planning phase in business process environments. The technique combines two causal perspectives: anomaly attribution and distribution change attribution. It has three stages: (1) process events are collected and recorded, identifying flow instances; (2) causal learning of process activities, building directed acyclic graphs (DAGs) that represent dependencies among variables; and (3) use of the DAGs to monitor the process, detecting anomalies and critical nodes. The technique was validated with an industry dataset from the banking sector, comprising 562 activity flow plans. The study monitored causal structures during the planning and execution stages and identified the main factor behind a major deviation from planned values. This work contributes to business process monitoring by introducing a causal approach that enhances both the interpretability and explainability of anomalies. The technique makes it possible to understand which specific variables have caused an atypical scenario, providing a clear view of the causal relationships within processes and ensuring greater accuracy in decision-making. This causal analysis employs cross-sectional data, avoiding the need to average multiple time instances and reducing potential biases, and unlike time series methods, it preserves the relationships among variables.
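The anomaly-attribution idea can be sketched with a toy linear structural model (a hypothetical three-node DAG, not the paper's CaProM pipeline): fit each causal mechanism on historical flows, then attribute an anomalous instance to the node whose mechanism shows the largest residual.

```python
import numpy as np

# Hypothetical 3-node DAG: planned_hours -> executed_hours -> cost.
# Fit linear structural equations on historical flows, then attribute an
# anomalous observation to the node with the largest residual z-score.
rng = np.random.default_rng(0)
n = 500
planned = rng.normal(100, 10, n)
executed = 1.1 * planned + rng.normal(0, 5, n)
cost = 2.0 * executed + rng.normal(0, 8, n)

def fit_slope(x, y):
    # least-squares slope (no intercept) and residual std for y ~ a*x
    a = np.dot(x, y) / np.dot(x, x)
    return a, (y - a * x).std()

a1, s1 = fit_slope(planned, executed)
a2, s2 = fit_slope(executed, cost)

# Anomalous instance: execution blew up relative to plan, cost followed suit.
obs = {"planned": 100.0, "executed": 160.0, "cost": 2.0 * 160.0}
z_exec = abs(obs["executed"] - a1 * obs["planned"]) / s1
z_cost = abs(obs["cost"] - a2 * obs["executed"]) / s2
root = "executed" if z_exec > z_cost else "cost"
print(root)  # the node whose causal mechanism deviated most
```

Here the execution step is flagged as the root cause, because the cost mechanism behaved exactly as its parent predicted while the plan-to-execution mechanism did not.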
Causal Learning: Monitoring Business Processes Based on Causal Structures. Entropy 26(10), 2024-10-15. DOI: 10.3390/e26100867.
Mou Xu, Yuying Zhang, Liu Yang, Shining Yang, Jianbo Lu
The thermodynamics of black holes (BHs) and their corrections have become a hot topic in the study of gravitational physics, with significant progress made in recent decades. In this paper, we study the thermodynamics and corrections of spherically symmetric BHs in the models f(R)=R+αR² and f(R)=R+2γR+8Λ under f(R) theory, including the electrodynamic field and the cosmological constant. Considering thermal fluctuations around equilibrium states, we find that, for both f(R) models, the corrected entropy is meaningful in the case of a negative cosmological constant (anti-de Sitter-RN spacetime) with Λ=-1. It is shown that when the BHs' horizon radius is small, thermal fluctuations have a more significant effect on the corrected entropy. Using the corrected entropy, we derive expressions for the relevant corrected thermodynamic quantities (such as the Helmholtz free energy, internal energy, Gibbs free energy, and specific heat) and calculate the effects of the correction terms. The results indicate that the corrections to the Helmholtz free energy and Gibbs free energy caused by thermal fluctuations are remarkable for small BHs. In addition, we explore the stability of BHs using the specific heat. The study reveals that the corrected BH thermodynamics are locally stable for both models, and the corrected systems undergo a Hawking-Page phase transition. Considering the requirement that the volume of BHs be non-negative, we also investigate the constraint on the event horizon (EH) radius of BHs.
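For context, the leading-order correction to BH entropy from small (Gaussian) thermal fluctuations around equilibrium is commonly written in the literature as a logarithmic term (the model-specific coefficients in this paper may differ):

```latex
S = S_0 - \frac{1}{2}\ln\left(S_0 T^2\right) + \cdots
```

where \(S_0\) is the equilibrium (Bekenstein-Hawking) entropy and \(T\) the Hawking temperature; since \(S_0\) shrinks with the horizon radius, the relative size of the correction grows for small BHs, consistent with the abstract's claim.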
Corrected Thermodynamics of Black Holes in f(R) Gravity with Electrodynamic Field and Cosmological Constant. Entropy 26(10), 2024-10-15. DOI: 10.3390/e26100868.
Beam search is a commonly used algorithm in image captioning to improve the accuracy and robustness of generated captions by finding the optimal word sequence. However, it mainly focuses on the highest-scoring sequence at each step, often overlooking the broader image context, which can lead to suboptimal results. Additionally, beam search tends to select similar words across sequences, causing repetitive and less diverse output. These limitations suggest that, while effective, beam search can be further improved to better capture the richness and variety needed for high-quality captions. To address these issues, this paper presents meshed context-aware beam search (MCBS). In MCBS for image captioning, the generated caption context is dynamically used to influence the image attention mechanism at each decoding step, ensuring that the model focuses on different regions of the image to produce more coherent and contextually appropriate captions. Furthermore, a penalty coefficient is introduced to discourage the generation of repeated words. Through extensive testing and ablation studies across various models, our results show that MCBS significantly enhances overall model performance.
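The repetition-penalty idea can be illustrated with a toy beam search; the scorer and the additive penalty form below are illustrative assumptions, not MCBS itself, whose context-aware attention requires a full captioning model.

```python
import heapq

def beam_search(step_scores, vocab, beam_width=2, max_len=3, rep_penalty=1.0):
    # step_scores(seq, word) -> log-probability of `word` given `seq`.
    # A fixed penalty (an assumed form; MCBS's coefficient may act differently)
    # is subtracted whenever `word` already occurs in the sequence.
    beams = [((), 0.0)]
    for _ in range(max_len):
        cands = []
        for seq, lp in beams:
            for w in vocab:
                s = lp + step_scores(seq, w)
                if w in seq:
                    s -= rep_penalty
                cands.append((seq + (w,), s))
        beams = heapq.nlargest(beam_width, cands, key=lambda c: c[1])
    return beams[0][0]

# Toy scorer that always prefers "good"; the penalty forces variety.
def scorer(seq, w):
    return {"good": -0.1, "fine": -0.5, "end": -1.0}[w]

result = beam_search(scorer, ["good", "fine", "end"], rep_penalty=1.0)
print(result)
```

Without the penalty the search would emit "good" three times; with it, each word appears at most once.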
Meshed Context-Aware Beam Search for Image Captioning. Fengzhi Zhao, Zhezhou Yu, Tao Wang, He Zhao. Entropy 26(10), 2024-10-15. DOI: 10.3390/e26100866.
Mikhael T Sayat, Oliver Thearle, Biveen Shajilal, Sebastian P Kish, Ping Koy Lam, Nicholas J Rattenbury, John E Cater
The standard way to measure the performance of existing continuous variable quantum key distribution (CVQKD) protocols is by using the achievable secret key rate (SKR) with respect to one parameter while keeping all other parameters constant. However, this atomistic method requires many individual parameter analyses while overlooking the co-dependence of other parameters. In this work, a numerical tool is developed for comparing different CVQKD protocols while taking into account the simultaneous effects of multiple CVQKD parameters on the capability of protocols to produce positive SKRs. Using the transmittance, excess noise, and modulation amplitude parameter space, regions of positive SKR are identified to compare three discrete modulated (DM) CVQKD protocols. The results show that the M-QAM protocol outperforms the M-APSK and M-PSK protocols and that there is a non-linear increase in the capability to produce positive SKRs as the number of coherent states used for a protocol increases. The tool developed is beneficial for choosing the optimum protocol in unstable channels, such as free space, where the transmittance and excess noise fluctuate, providing a more holistic assessment of a protocol's capability to produce positive SKRs.
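The mapping idea reduces to a grid scan over the parameter space, marking where the SKR is positive. The `skr` function below is a deliberately simplified stand-in (an invented toy expression), since real DM-CVQKD rates require full channel and protocol calculations.

```python
import numpy as np

# Stand-in SKR model (hypothetical, for illustration only): key rate
# decreasing with channel loss and excess noise.
def skr(T, xi):
    return T * (1.0 - xi) - 0.3 * (1 - T) - 0.1 * xi  # toy expression

Ts = np.linspace(0.1, 1.0, 50)    # transmittance axis
xis = np.linspace(0.0, 0.5, 50)   # excess-noise axis
grid = np.array([[skr(T, xi) for xi in xis] for T in Ts])
positive = grid > 0               # boolean map of the positive-SKR region
print(positive.sum(), "of", positive.size, "grid points give SKR > 0")
```

Comparing such boolean maps across protocols (and modulation amplitudes, as a third axis) gives the holistic region-based comparison the paper describes.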
Mapping Guaranteed Positive Secret Key Rates for Continuous Variable Quantum Key Distribution. Entropy 26(10), 2024-10-15. DOI: 10.3390/e26100865.
The rapid development of 5G and B5G networks has posed higher demands on retransmission in certain scenarios. This article reviews classical finite-length coding performance prediction formulas and proposes rate prediction formulas for coded modulation retransmission scenarios. Specifically, we demonstrate that a recently proposed model for correcting these prediction formulas also exhibits high accuracy in coded modulation retransmissions. To enhance the generality of this model, we introduce a range variable Pfinal to unify the predictions under different SNRs. Finally, based on simulation results, the article puts forth recommendations specific to retransmission with high spectral efficiency.
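The classical finite-length prediction being reviewed is typically the normal approximation, R ≈ C − √(V/n)·Q⁻¹(ε) + log₂(n)/(2n). A minimal sketch for the real AWGN channel (the baseline formula, not the paper's corrected coded-modulation retransmission variant):

```python
import math
from statistics import NormalDist

def awgn_normal_approx(snr, n, eps):
    """Normal-approximation rate (bits/channel use) for the real AWGN channel
    at blocklength n and target error probability eps."""
    C = 0.5 * math.log2(1 + snr)                                   # capacity
    V = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * math.log2(math.e) ** 2  # dispersion
    qinv = -NormalDist().inv_cdf(eps)                              # Q^{-1}(eps)
    return C - math.sqrt(V / n) * qinv + math.log2(n) / (2 * n)

r = awgn_normal_approx(snr=1.0, n=2000, eps=1e-5)
print(round(r, 3))  # strictly below the capacity of 0.5 bit/use
```

The backoff from capacity shrinks as n grows, which is why retransmission (effectively enlarging the blocklength budget) changes the achievable-rate picture.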
Finite-Blocklength Analysis of Coded Modulation with Retransmission. Ming Jiang, Yi Wang, Fan Ding, Qiushi Xu. Entropy 26(10), 2024-10-14. DOI: 10.3390/e26100863.
Existing studies have demonstrated significant sex differences in the neural mechanisms of daily life and neuropsychiatric disorders. The hierarchical organization of the functional brain network is a critical feature for assessing these neural mechanisms. However, sex differences in hierarchical organization have not been fully investigated. Here, we explore whether the hierarchical structure of the brain network differs between females and males using resting-state fMRI data. We measure the hierarchical entropy and the maximum modularity of each individual, and identify a significant negative correlation between the complexity of hierarchy and modularity in brain networks. At the mean level, females show higher modularity, whereas males exhibit a more complex hierarchy. At the consensus level, we use a co-classification matrix to perform a detailed investigation of the differences in the hierarchical organization between sexes and observe that the female group and the male group exhibit different interaction patterns of brain regions in the dorsal attention network (DAN) and visual network (VIN). Our findings suggest that the brains of females and males employ different network topologies to carry out brain functions. In addition, the negative correlation between hierarchy and modularity implies a need to balance the complexity in the hierarchical organization of the brain network, which sheds light on future studies of brain functions.
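The modularity side of the analysis follows Newman's standard definition, Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m)·[cᵢ = cⱼ]. A minimal sketch on a toy two-community graph (the paper additionally uses hierarchical entropy and co-classification matrices, which are not shown here):

```python
import numpy as np

def modularity(A, labels):
    # Newman modularity Q of a hard partition of an undirected graph,
    # computed directly from the adjacency matrix A.
    k = A.sum(axis=1)                       # degrees
    two_m = k.sum()                         # 2m = total degree
    same = np.equal.outer(labels, labels)   # [c_i == c_j]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single bridge edge: a clear 2-community graph.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, 0, 0, 1, 1, 1])
print(round(modularity(A, labels), 3))  # 5/14 ≈ 0.357
```

Maximum modularity, as used in the abstract, is this quantity maximized over partitions; here the natural triangle partition already achieves it.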
Sex Differences in Hierarchical and Modular Organization of Functional Brain Networks: Insights from Hierarchical Entropy and Modularity Analysis. Wenyu Chen, Ling Zhan, Tao Jia. Entropy 26(10), 2024-10-14. DOI: 10.3390/e26100864.
With the development of financial technology, traditional experience-based and single-network credit default prediction models can no longer meet current needs. This manuscript proposes a credit default prediction model based on TabNet-Stacking. First, the PyTorch deep learning framework is used to construct an improved TabNet structure. A multi-population genetic algorithm is used to optimize the Attention Transformer automatic feature selection module, and a particle swarm algorithm is used to optimize hyperparameter selection, achieving automatic parameter search. Finally, Stacking ensemble learning is applied, with the improved TabNet used to extract features. XGBoost (eXtreme Gradient Boosting), LightGBM (Light Gradient Boosting Machine), CatBoost (Category Boosting), KNN (K-Nearest Neighbor), and SVM (Support Vector Machine) are selected as the first-layer base learners, and XGBoost is used as the second-layer meta-learner. The experimental results show that the proposed credit default prediction model outperforms the comparison models in terms of accuracy, precision, recall, F1 score, and AUC (Area Under the Curve).
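The two-layer stacking idea can be sketched in a few lines; toy threshold base learners and a least-squares meta-learner stand in for the paper's TabNet feature extractor and XGBoost/LightGBM/CatBoost/KNN/SVM stack.

```python
import numpy as np

# Minimal stacking sketch (not the paper's pipeline): base learners produce
# predictions that become features for a second-layer meta-learner.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # "default" iff feature sum positive

def base_a(X):  # weak learner: looks only at feature 0
    return (X[:, 0] > 0).astype(float)

def base_b(X):  # weak learner: looks only at feature 1
    return (X[:, 1] > 0).astype(float)

# First layer: stack base predictions as meta-features.
Z = np.column_stack([base_a(X), base_b(X), np.ones(len(X))])

# Second layer: least-squares "meta-learner" combines the base outputs.
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
meta_pred = (Z @ w > 0.5).astype(int)
acc = (meta_pred == y).mean()
print(f"stacked accuracy: {acc:.2f}")
```

The meta-learner beats either base learner alone because it exploits their agreement pattern, which is the core motivation for stacking.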
Research on Credit Default Prediction Model Based on TabNet-Stacking. Shijie Wang, Xueyong Zhang. Entropy 26(10), 2024-10-13. DOI: 10.3390/e26100861.
Alexander P Alodjants, Anna E Avdyushina, Dmitriy V Tsarev, Igor A Bessmertny, Andrey Yu Khrennikov
Quantum-inspired algorithms represent an important direction in modern software information technologies that use heuristic methods and approaches of quantum science. This work presents a quantum approach for document search, retrieval, and ranking based on the Bell-like test, which is well-known in quantum physics. We propose quantum probability theory in the hyperspace analog to language (HAL) framework, exploiting a Hilbert space for word and document vector specification. The quantum approach makes it possible to account for specific user preferences in different contexts. To verify the proposed algorithm, we use a dataset of synthetic advertising text documents from travel agencies generated by the OpenAI GPT-4 model. We show that the "entanglement" in two-word document search and retrieval can be recognized as the frequent occurrence of two words in incompatible query contexts. We have found that the user preferences and word ordering in the query play a significant role for relatively small HAL window sizes. The comparison with cosine similarity metrics demonstrates the key advantages of our approach, which is based on user-enforced contextual and semantic relationships between words rather than just their superficial occurrence in texts. Our approach to retrieving and ranking documents allows for the creation of new information search engines that require no resource-intensive deep machine learning algorithms.
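The HAL construction underlying the framework is simple: a sliding window accumulates distance-weighted co-occurrence counts for each word. The window size and linear weighting below follow the usual HAL conventions, assumed here rather than taken from the paper.

```python
from collections import defaultdict

def hal_matrix(tokens, window=3):
    # HAL co-occurrence matrix: each word accumulates weighted counts of the
    # words preceding it within the window; weight = window - distance + 1,
    # so closer words contribute more.
    M = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        for d in range(1, window + 1):
            if i - d >= 0:
                M[w][tokens[i - d]] += window - d + 1
    return M

tokens = "the quantum approach ranks the documents".split()
M = hal_matrix(tokens)
print(M["approach"]["quantum"])  # directly preceding word gets the top weight
```

Rows (and columns) of this matrix serve as the word vectors whose Hilbert-space treatment the quantum approach builds on.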
Quantum Approach for Contextual Search, Retrieval, and Ranking of Classical Information. Entropy 26(10), 2024-10-13. DOI: 10.3390/e26100862.
Samidha Shetty, Gordon Brittan, Prasanta S Bandyopadhyay
Empirical Bayes-based Methods (EBM) is an increasingly popular form of Objective Bayesianism (OB). It is identified in particular with the statistician Bradley Efron. The main aims of this paper are, first, to describe and illustrate its main features and, second, to locate its role by comparing it with two other statistical paradigms, Subjective Bayesianism (SB) and Evidentialism. EBM's main formal features are illustrated in some detail by schematic examples. The comparison between what Efron calls their underlying "philosophies" is by way of a distinction made between confirmation and evidence. Although this distinction is sometimes made in the statistical literature, it is relatively rare and never to the same point as here. That is, the distinction is invariably spelled out intra- and not inter-paradigmatically solely in terms of one or the other accounts. The distinction made in this paper between confirmation and evidence is illustrated by two well-known statistical paradoxes: the base-rate fallacy and Popper's paradox of ideal evidence. The general conclusion reached is that each of the paradigms has a basic role to play and all are required by an adequate account of statistical inference from a technically informed and fine-grained philosophical perspective.
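The core EBM move, estimating the prior from the data themselves and shrinking each estimate toward it, can be shown in the simplest Gaussian setting. This is a schematic example in the spirit of Efron's illustrations, not one taken from the paper.

```python
import numpy as np

# Empirical Bayes shrinkage: x_i ~ N(theta_i, 1), theta_i ~ N(mu, tau^2).
# The prior's mu and tau^2 are estimated from the observed x (the
# "empirical" step), then each estimate is shrunk toward mu.
rng = np.random.default_rng(7)
theta = rng.normal(5.0, 2.0, size=1000)       # true effects (unknown in practice)
x = theta + rng.normal(0.0, 1.0, size=1000)   # noisy observations

mu_hat = x.mean()
tau2_hat = max(x.var() - 1.0, 0.0)            # marginal variance = tau^2 + 1
shrink = tau2_hat / (tau2_hat + 1.0)
theta_eb = mu_hat + shrink * (x - mu_hat)     # posterior-mean estimates

mse_raw = ((x - theta) ** 2).mean()
mse_eb = ((theta_eb - theta) ** 2).mean()
print(mse_eb < mse_raw)  # shrinkage reduces mean squared error
```

No subjective prior is elicited; the ensemble of parallel cases supplies it, which is precisely what distinguishes EBM from SB in the paper's comparison.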
Samidha Shetty, Gordon Brittan, Prasanta S. Bandyopadhyay. "Empirical Bayes Methods, Evidentialism, and the Inferential Roles They Play." Entropy 26(10), published 2024-10-12. doi:10.3390/e26100859. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11507398/pdf/
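The abstract above notes that EBM's formal features are illustrated by schematic examples. A minimal sketch of the core EBM move, with invented data, is shrinkage of several binomial rates toward an ensemble estimate: the Beta prior's parameters are not chosen subjectively but fitted (here by method of moments) from the observed rates themselves, which is the hallmark that separates EBM from Subjective Bayesianism:

```python
# Hedged sketch of empirical Bayes shrinkage; the data are invented
# and method-of-moments fitting is one of several possible choices.
from statistics import mean, pvariance

successes = [3, 7, 2, 9, 5, 4]
trials    = [10, 10, 10, 10, 10, 10]

# Raw per-unit rates and their ensemble mean/variance.
raw = [s / n for s, n in zip(successes, trials)]
m, v = mean(raw), pvariance(raw)

# Method-of-moments fit of a Beta(a, b) prior to the observed rates --
# the prior is estimated from the data, not elicited from a subject.
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# Posterior means: each raw rate shrinks toward the grand mean.
shrunk = [(a + s) / (a + b + n) for s, n in zip(successes, trials)]
```

Each `shrunk` value sits between the corresponding raw rate and the ensemble mean, the schematic behaviour an EBM account trades on.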
The selection of suppliers represents a pivotal aspect of supply chain management and has a considerable impact on the success and competitiveness of the organization in question. The selection of a suitable supplier is a multi-criteria decision making (MCDM) problem based on a number of qualitative, quantitative, and even conflicting criteria. The aim of this paper is to propose a novel MCDM approach dedicated to the supplier evaluation problem using an ordered fuzzy decision making system. This study uses a fuzzy inference system based on IF-THEN rules with ordered fuzzy numbers (OFNs). The approach employs the concept of OFNs to account for potential uncertainty and subjectivity in the decision making process, and it also takes into account the trends of changes in assessment values and entropy in the final supplier evaluation. This paper's principal contribution is the development of a knowledge base and the demonstration of its application in an ordered fuzzy expert system for multi-criteria supplier evaluation in a dynamic and uncertain environment. The proposed system takes into account the dynamic changes in the value of assessment parameters in the overall supplier assessment, allowing for the differentiation of suppliers based on current and historical data. The utilization of OFNs in a fuzzy model allows for a reduction in the complexity of the knowledge base in comparison to a classical fuzzy system and makes it more accessible to users, as it requires only basic arithmetic operations in the inference process. This paper presents a comprehensive framework for the assessment of suppliers against a range of criteria, including local hiring, completeness, and defect factors. Furthermore, the potential to integrate sustainability and ESG (environmental, social, and corporate governance) criteria in the assessment process adds value to the decision making framework by adapting to current trends in supply chain management.
Katarzyna Rudnik, Anna Chwastyk, Iwona Pisz. "Approach Based on the Ordered Fuzzy Decision Making System Dedicated to Supplier Evaluation in Supply Chain Management." Entropy 26(10), published 2024-10-12. doi:10.3390/e26100860. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11507921/pdf/
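The abstract above stresses that OFN-based inference "requires only basic arithmetic operations." A minimal sketch of why, assuming the common trapezoidal representation of an ordered fuzzy number by four branch points (with the point ordering encoding the trend of the assessed value; conventions for orientation vary in the literature, and the criterion values below are invented):

```python
# Hedged sketch: trapezoidal ordered fuzzy numbers (OFNs) reduced to
# four branch points (f(0), f(1), g(1), g(0)). Arithmetic is
# component-wise, hence "basic arithmetic" suffices in inference.
from dataclasses import dataclass

@dataclass
class OFN:
    a: float  # f(0), start of the up-branch
    b: float  # f(1)
    c: float  # g(1)
    d: float  # g(0), end of the down-branch

    def __add__(self, other):
        # Component-wise addition of the branch points.
        return OFN(self.a + other.a, self.b + other.b,
                   self.c + other.c, self.d + other.d)

    @property
    def trend(self):
        # Positive orientation models a rising assessment; this sign
        # convention is an assumption for illustration.
        if self.a < self.d:
            return "positive"
        if self.a > self.d:
            return "negative"
        return "none"

quality  = OFN(0.5, 0.6, 0.7, 0.8)  # criterion assessed as improving
delivery = OFN(0.9, 0.8, 0.7, 0.6)  # criterion assessed as deteriorating
total = quality + delivery
```

The orientation survives the aggregation inputs individually (`quality` rising, `delivery` falling), so the final evaluation can weigh not just current scores but their direction of change, as the paper's dynamic assessment requires.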