Rafał Brociek, Agata Wajda, Christian Napoli, Giacomo Capizzi, Damian Słota
This article presents an algorithm for solving the direct and inverse problems for a model consisting of a fractional differential equation with non-integer order derivatives with respect to time and space. The Caputo derivative was taken as the fractional derivative with respect to time, and the Riemann-Liouville derivative in the case of space. On one of the boundaries of the considered domain, a fractional boundary condition of the third kind was adopted. In the case of the direct problem, a difference scheme was presented, and a metaheuristic optimization algorithm, namely the Group Teaching Optimization Algorithm (GTOA), was used to solve the inverse problem. The article presents numerical examples illustrating the operation of the proposed methods. In the case of the inverse problem, a function occurring in the fractional boundary condition was identified. The presented approach can be an effective tool for modeling the anomalous diffusion phenomenon.
{"title":"An Inverse Problem for a Fractional Space-Time Diffusion Equation with Fractional Boundary Condition.","authors":"Rafał Brociek, Agata Wajda, Christian Napoli, Giacomo Capizzi, Damian Słota","doi":"10.3390/e28010081","DOIUrl":"10.3390/e28010081","url":null,"abstract":"<p><p>This article presents an algorithm for solving the direct and inverse problem for a model consisting of a fractional differential equation with non-integer order derivatives with respect to time and space. The Caputo derivative was taken as the fractional derivative with respect to time, and the Riemann-Liouville derivative in the case of space. On one of the boundaries of the considered domain, a fractional boundary condition of the third kind was adopted. In the case of the direct problem, a differential scheme was presented, and a metaheuristic optimization algorithm, namely the Group Teaching Optimization Algorithm (GTOA), was used to solve the inverse problem. The article presents numerical examples illustrating the operation of the proposed methods. In the case of inverse problem, a function occurring in the fractional boundary condition was identified. The presented approach can be an effective tool for modeling the anomalous diffusion phenomenon.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840204/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work investigates the mechanisms of information transfer underlying causal relationships between brain regions during resting-state conditions in patients with schizophrenia (SCZ). A large fMRI dataset including healthy controls and SCZ patients was analyzed to estimate directed information flow using local Transfer Entropy (TE). Four functional interaction patterns, referred to as rules, were identified between brain regions: activation in the same state (ActS), activation in the opposite state (ActO), turn-off in the same state (TfS), and turn-off in the opposite state (TfO), indicating dynamics toward converging (ActS/TfS = S) and diverging (ActO/TfO = O) states of brain regions. These interactions were integrated within a multiplex network framework, in which each rule was represented as a directed network layer. Our results reveal widespread alterations in the functional architecture of SCZ brain networks, particularly affecting schizophrenia-related systems such as bottom-up sensory pathways and associative cortical dynamics. An imbalance between S and O rules was observed, leading to reduced network stability. This shift results in a more randomized functional network organization. These findings provide a mechanistic link between excitation/inhibition (E/I) imbalance and mesoscopic network dysconnectivity, in agreement with previous dynamic functional connectivity and Dynamic Causal Modeling (DCM) studies. Overall, our approach offers an integrated framework for characterizing directed brain communication patterns and psychiatric phenotypes. Future work will focus on systematic comparisons with DCM and other functional connectivity methods.
{"title":"Understanding Schizophrenia Pathophysiology via fMRI-Based Information Theory and Multiplex Network Analysis.","authors":"Fabrizio Parente","doi":"10.3390/e28010083","DOIUrl":"10.3390/e28010083","url":null,"abstract":"<p><p>This work investigates the mechanisms of information transfer underlying causal relationships between brain regions during resting-state conditions in patients with schizophrenia (SCZ). A large fMRI dataset including healthy controls and SCZ patients was analyzed to estimate directed information flow using local Transfer Entropy (TE). Four functional interaction patterns-referred to as rules-were identified between brain regions: activation in the same state (ActS), activation in the opposite state (ActO), turn-off in the same state (TfS), and turn-off in the opposite state (TfO), indicating a dynamics toward converging (Acts/Tfs = S) and diverging (ActO/TfO = O) states of brain regions. These interactions were integrated within a multiplex network framework, in which each rule was represented as a directed network layer. Our results reveal widespread alterations in the functional architecture of SCZ brain networks, particularly affecting schizophrenia-related systems such as bottom-up sensory pathways and associative cortical dynamics. An imbalance between S and O rules was observed, leading to reduced network stability. This shift results in a more randomized functional network organization. These findings provide a mechanistic link between excitation/inhibition (E/I) imbalance and mesoscopic network dysconnectivity, in agreement with previous dynamic functional connectivity and Dynamic Causal Modeling (DCM) studies. Overall, our approach offers an integrated framework for characterizing directed brain communication patterns and psychiatric phenotypes. Future work will focus on systematic comparisons with DCM and other functional connectivity methods.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839680/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) technology provides fine-grained access control capabilities for P2P networks. However, its long-term development has been constrained by three major challenges: the trade-off between computational efficiency and functional completeness, decentralized trust security issues, and the problems of attribute revocation and traceability. This paper proposes a decentralized CP-ABE scheme based on multiple authorities (R-T-D-ABE). By leveraging three core techniques, namely threshold distributed key generation, versioned attribute revocation, and identity-key binding verification, the scheme efficiently achieves both revocation and accountability while ensuring resistance against collusion attacks and forward/backward security. Security analysis demonstrates that the proposed scheme satisfies IND-CPA security under the Generic Group Model (GGM). Experimental results indicate that it not only guarantees efficient decentralized encryption and decryption but also realizes the dual functions of revocation and accountability, thereby providing a functionally complete and efficient access control solution for P2P networks.
{"title":"Revocable and Traceable Decentralized ABE for P2P Networks.","authors":"Dan Gao, Huanhuan Xu, Shuqu Qian","doi":"10.3390/e28010077","DOIUrl":"10.3390/e28010077","url":null,"abstract":"<p><p>Ciphertext-Policy Attribute-Based Encryption (CP-ABE) technology provides fine-grained access control capabilities for P2P networks. However, its long-term development has been constrained by three major challenges: the trade-off between computational efficiency and functional completeness, decentralized trust security issues, and the problems of attribute revocation and traceability. This paper proposes a decentralized CP-ABE scheme based on multiple authorities (R-T-D-ABE). By leveraging three core techniques, including threshold distributed key generation, versioned attribute revocation, and identity-key binding verification, the scheme efficiently achieves both revocation and accountability while ensuring resistance against collusion attacks and forward/backward security. Security analysis demonstrates that the proposed scheme satisfies IND-CPA security under the Generic Group Model (GGM). Experimental results indicate that it not only guarantees efficient decentralized encryption and decryption but also realizes the dual functions of revocation and accountability, thereby providing a functionally complete and efficient access control solution for P2P networks.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839963/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We derive a few extended versions of the Kraft inequality for lossy compression, which pave the way to the derivation of several refinements and extensions of the well-known Shannon lower bound in a variety of instances of rate-distortion coding. These refinements and extensions include sharper bounds for one-to-one codes and D-semifaithful codes, a Shannon lower bound for distortion measures based on sliding-window functions, and an individual-sequence counterpart of the Shannon lower bound.
{"title":"Refinements and Generalizations of the Shannon Lower Bound via Extensions of the Kraft Inequality.","authors":"Neri Merhav","doi":"10.3390/e28010076","DOIUrl":"10.3390/e28010076","url":null,"abstract":"<p><p>We derive a few extended versions of the Kraft inequality for lossy compression, which pave the way to the derivation of several refinements and extensions of the well-known Shannon lower bound in a variety of instances of rate-distortion coding. These refinements and extensions include sharper bounds for one-to-one codes and <i>D</i>-semifaithful codes, a Shannon lower bound for distortion measures based on sliding-window functions, and an individual-sequence counterpart of the Shannon lower bound.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840369/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ultimate objective of research on hypernetwork robustness is to enhance the capability of hypernetworks to withstand external attacks and natural disasters. For hypernetworks such as telecommunication networks, public safety networks, and military networks, where security requirements are extremely high, achieving higher communication robustness is essential. This study integrates the structural characteristics of hypernetworks with an optimization method for communication robustness by combining four key indicators: hyper-betweenness centrality, hyper-centrality of feature subgraph, hyper-centrality of Fiedler, and hyperdistance entropy. Using the best improvement performance (BIP_T) as the evaluation metric, simulation experiments were conducted to comparatively analyze the effectiveness of these four indicators in enhancing the communication robustness of Barabási-Albert (BA), Erdős-Rényi (ER), and Newman-Watts (NW) hypernetworks, and the hyperedge addition threshold θ was derived theoretically. The results show that all four indicators effectively improve the communication robustness of hypernetworks, although with varying degrees of optimization. Among them, hyper-betweenness centrality demonstrates the most significant optimization effect, followed by hyper-centrality of feature subgraph and hyper-centrality of Fiedler, while hyperdistance entropy exhibits a relatively weaker effect. Furthermore, these four indicators and the proposed communication robustness optimization method exhibit strong generalizability and have been effectively applied to the WIKI-VOTE social hypernetwork.
{"title":"Optimization Method for Robustness of Hypernetwork Communication with Integrated Structural Features.","authors":"Lei Chen, Xiujuan Ma, Fuxiang Ma","doi":"10.3390/e28010075","DOIUrl":"10.3390/e28010075","url":null,"abstract":"<p><p>The ultimate objective of research on hypernetwork robustness is to enhance its capability to withstand external attacks and natural disasters. For hypernetworks such as telecommunication networks, public safety networks, and military networks-where security requirements are extremely high-achieving higher communication robustness is essential. This study integrates the structural characteristics of hypernetworks with an optimization method for communication robustness by combining four key indicators: hyper-betweenness centrality, hyper-centrality of feature subgraph, hyper-centrality of Fiedler, and hyperdistance entropy. Using the best improvement performance (BIP_T) as the evaluation metric, simulation experiments were conducted to comparatively analyze the effectiveness of these four indicators in enhancing the communication robustness of Barabási-Albert (BA), Erdos-Renyi (ER), and Newman-Watts (NW) hypernetworks, and theoretically derives the hyperedge addition threshold θ. The results show that all four indicators effectively improve the communication robustness of hypernetworks, although with varying degrees of optimization. Among them, hyper-betweenness centrality demonstrates the most significant optimization effect, followed by hyper-centrality of feature subgraph and hyper-centrality of Fiedler, while hyperdistance entropy exhibits a relatively weaker effect. Furthermore, these four indicators and the proposed communication robustness optimization method exhibit strong generalizability and have been effectively applied to the WIKI-VOTE social hypernetwork.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840260/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kees Schouhamer Immink, Jos H Weber, Tuan Thanh Nguyen, Kui Cai
The design of low-complexity and efficient constrained codes has been a major research item for many years. This paper reports on a versatile method named concatenated constrained codes for designing efficient fixed-length constrained codes with small complexity. A concatenated constrained code comprises two (or more) cooperating constrained codes of low complexity, enabling long constrained codes that are not practically feasible with prior-art methods. We apply the concatenated coding approach to two case studies, namely the design of constant-weight and low-weight codes. In a binary constant-weight code, each codeword has the same number, w, of 1's, where w is called the weight of a codeword. We specifically focus on the trade-off between coder complexity and redundancy.
{"title":"Concatenated Constrained Coding: A New Approach to Efficient Constant-Weight Codes.","authors":"Kees Schouhamer Immink, Jos H Weber, Tuan Thanh Nguyen, Kui Cai","doi":"10.3390/e28010078","DOIUrl":"10.3390/e28010078","url":null,"abstract":"<p><p>The design of low-complexity and efficient constrained codes has been a major research item for many years. This paper reports on a versatile method named concatenated constrained codes for designing efficient fixed-length constrained codes with small complexity. A concatenated constrained code comprises two (or more) cooperating constrained codes of low complexity enabling long constrained codes that are not practically feasible with prior art methods. We apply the concatenated coding approach to two case studies, namely the design of constant-weight and low-weight codes. In a binary constant-weight code, each codeword has the same number, <i>w</i>, of 1's, where <i>w</i> is called the weight of a codeword. We specifically focus on the trading between coder complexity and redundancy.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839811/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146060989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Code coverage-guided unit test generation (CGTG) and large language model-based test generation (LLMTG) are two principal approaches for the generation of unit tests. Each of these approaches has its inherent advantages and drawbacks. Tests generated by CGTG have been shown to exhibit high code coverage and high executability. However, they lack the capacity to comprehend code intent, which results in an inability to identify deviations between code implementation and design intent (i.e., functional defects). Conversely, although LLMTG demonstrates an advantage in terms of code intent analysis, it is generally characterized by low executability and necessitates iterative debugging. In order to enhance the ability of unit test generation to identify functional defects, a novel framework is proposed, termed the intent analysis-guided unit test generation and refinement (IGTG&R) model. The IGTG&R model consists of a two-stage process for test generation. In the first stage, we introduce coverage path entropy to enhance CGTG and achieve high executability and code coverage of test cases. The second stage refines the test cases using LLMs to identify functional defects. We quantify and verify the interference of incorrect code implementation on intent analysis through conditional entropy. In order to reduce this interference, the focal method body is excluded from the code context information during intent analysis. Using this two-stage process, IGTG&R achieves a deeper comprehension of code intent and improved identification of functional defects. The IGTG&R model has been demonstrated to achieve an identification rate of functional defects ranging from 65% to 89%, with an execution success rate of 100% and a code coverage rate of 75.8%. This indicates that IGTG&R is superior to the CGTG and LLMTG approaches in multiple aspects.
{"title":"IGTG&R: An Intent Analysis-Guided Unit Test Generation and Refinement Framework.","authors":"Xiaojian Liu, Yangyang Zhang","doi":"10.3390/e28010074","DOIUrl":"10.3390/e28010074","url":null,"abstract":"<p><p>Code coverage-guided unit test generation (CGTG) and large language model-based test generation (LLMTG) are two principal approaches for the generation of unit tests. Each of these approaches has its inherent advantages and drawbacks. Tests generated by CGTG have been shown to exhibit high code coverage and high executability. However, they lack the capacity to comprehend code intent, which results in an inability to identify deviations between code implementation and design intent (i.e., functional defects). Conversely, although LLMTG demonstrates an advantage in terms of code intent analysis, it is generally characterized by low executability and necessitates iterative debugging. In order to enhance the ability of unit test generation to identify functional defects, a novel framework has been proposed, entitled the intent analysis-guided unit test generation and refinement (IGTG&R) model. The IGTG&R model consists of a two-stage process for test generation. In the first stage, we introduce coverage path entropy to enhance CGTG to achieve high executability and code coverage of test cases. The second stage refines the test cases using LLMs to identify functional defects. We quantify and verify the interference of incorrect code implementation on intent analysis through conditional entropy. In order to reduce this interference, the focal method body is excluded from the code context information during intent analysis. Using these two-stage process, IGTG&R achieves a more profound comprehension of the intent of the code and the identification of functional defects. The IGTG&R model has been demonstrated to achieve an identification rate of functional defects ranging from 65% to 89%, with an execution success rate of 100% and a code coverage rate of 75.8%. This indicates that IGTG&R is superior to the CGTG and LLMTG approaches in multiple aspects.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839985/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xin Zhao, Yan-Han Yang, Li-Ming Zhao, Ming-Xing Luo
We demonstrate a complete and experimentally validated self-testing protocol for two-qubit partially entangled states, which avoids the need for full tomographic reconstruction. Using a room-temperature type-II PPKTP polarization-entangled source and a free-space optical setup, we implement both quantum state tomography and optimal generalized Bell measurements within a single system. Our approach achieves high-fidelity self-testing of non-maximally entangled states under black-box assumptions, establishing a solid foundation for device-independent quantum information processing applications.
{"title":"Experimentally Self-Testing Partially Entangled Two-Qubit States on an Optical Platform.","authors":"Xin Zhao, Yan-Han Yang, Li-Ming Zhao, Ming-Xing Luo","doi":"10.3390/e28010079","DOIUrl":"10.3390/e28010079","url":null,"abstract":"<p><p>We demonstrate a complete and experimentally validated self-testing protocol for two-qubit partially entangled states, which avoids the need for full tomographic reconstruction. Using a room-temperature type-II PPKTP polarization-entangled source and a free-space optical setup, we implement both quantum state tomography and optimal generalized Bell measurements within a single system. Our approach achieves high-fidelity self-testing of non-maximally entangled states under black-box assumptions, establishing a solid foundation for device-independent quantum information processing applications.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839558/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E-commerce retailers bear substantial additional costs arising from high product return rates due to lenient return policies and consumers' impulsive purchasing. This study aims to accurately predict product return behavior before payment, supporting proactive return management and reducing potential losses. Based on the Graph Transformer, we propose a novel return prediction model, Returnformer, which focuses on capturing user-product connections represented in the topological structure of bipartite graphs. The Returnformer first integrates global topological embeddings into the original node features to alleviate the structural information loss caused by graph partitioning. It then employs a Graph Transformer to capture long-range user-item dependencies within local subgraphs. In addition, a graph-level attention mechanism is introduced to facilitate the propagation of global return patterns across different subgraphs. Experiments on a real-world e-commerce dataset show that the Returnformer outperforms four machine learning models in prediction accuracy, demonstrating superior performance over state-of-the-art models. The proposed model enables retailers to identify potential return risks prior to payment, thereby supporting timely and proactive preventive interventions.
{"title":"Returnformer: A Graph Transformer-Based Model for Predicting Product Returns in E-Commerce.","authors":"Qian Cao, Ning Zhang, Huiyong Li","doi":"10.3390/e28010072","DOIUrl":"10.3390/e28010072","url":null,"abstract":"<p><p>E-commerce retailers bear substantial additional costs arising from high product return rates due to lenient return policies and consumers' impulsive purchasing. This study aims to accurately predict product return behavior before payment, supporting proactive return management and reducing potential losses. Based on the Graph Transformer, we proposed a novel return prediction model, Returnformer, which focuses on capturing user-product connections represented in topological structures of bipartite graphs. The Returnformer first integrates global topological embeddings into original node features to alleviate structural information loss caused by graph partitioning. It then employs a Graph Transformer to capture long-range user-item dependencies within local subgraphs. In addition, a graph-level attention mechanism is introduced to facilitate the propagation of global return patterns across different subgraphs. Experiments on a real-world e-commerce dataset show that the Returnformer outperforms four machine learning models in terms of prediction accuracy, demonstrating superior performance compared to the state-of-the-art models. The proposed model enables retailers to identify potential return risks prior to payment, thereby supporting timely and proactive preventive interventions.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12839650/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guilherme I Correr, Pedro C Azado, Diogo O Soares-Pinto, Gabriel G Carlo
Parameterized quantum circuits are central to the development of variational quantum algorithms in the NISQ era. A key feature of these circuits is their ability to generate an expressive set of quantum states, enabling the approximation of solutions to diverse problems. The expressibility of such circuits can be assessed by analyzing the ensemble of states produced when their parameters are randomly sampled, a property closely tied to quantum complexity. In this work, we compare different classes of parameterized quantum circuits with a prototypical family of universal random circuits to investigate how rapidly they approach the asymptotic complexity defined by the Haar measure. We find that parameterized circuits exhibit faster convergence in terms of the number of gates required, as quantified through expressibility and majorization-based complexity measures. Moreover, the topology of qubit connections proves crucial, significantly affecting entanglement generation and, consequently, complexity growth. The majorization criterion emerges as a valuable complementary tool, offering distinct insights into the behavior of random state generation in the considered circuit families.
{"title":"Optimal Complexity of Parameterized Quantum Circuits.","authors":"Guilherme I Correr, Pedro C Azado, Diogo O Soares-Pinto, Gabriel G Carlo","doi":"10.3390/e28010073","DOIUrl":"10.3390/e28010073","url":null,"abstract":"<p><p>Parameterized quantum circuits are central to the development of variational quantum algorithms in the NISQ era. A key feature of these circuits is their ability to generate an expressive set of quantum states, enabling the approximation of solutions to diverse problems. The expressibility of such circuits can be assessed by analyzing the ensemble of states produced when their parameters are randomly sampled, a property closely tied to quantum complexity. In this work, we compare different classes of parameterized quantum circuits with a prototypical family of universal random circuits to investigate how rapidly they approach the asymptotic complexity defined by the Haar measure. We find that parameterized circuits exhibit faster convergence in terms of the number of gates required, as quantified through expressibility and majorization-based complexity measures. Moreover, the topology of qubit connections proves crucial, significantly affecting entanglement generation and, consequently, complexity growth. The majorization criterion emerges as a valuable complementary tool, offering distinct insights into the behavior of random state generation in the considered circuit families.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"28 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12840082/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146061037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}