Using matched data from the UN COMTRADE, China Customs, and China Industrial Enterprise databases from 2000 to 2013, we explore the relationship between product density and export technological complexity at the enterprise level. We find that product density at the firm level has a significant positive effect on the export technological complexity of Chinese manufacturing enterprises. The result remains robust after adopting an instrumental variable method, replacing the explanatory and explained variables, and substituting more detailed data for the proxy variables. Both specialized agglomeration and diversified agglomeration have significant positive effects on export technological complexity. The moderating effect tests indicate that specialized industry agglomeration significantly reinforces the abovementioned effect, whereas diversified agglomeration significantly inhibits the technological upgrading effect. Heterogeneity tests show that product density is more conducive to upgrading the export technological complexity of enterprises in the eastern region, as well as of foreign-funded, capital-intensive, and technology-intensive enterprises.
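A moderating-effect test of this kind typically amounts to a regression with an interaction term between the explanatory variable and the moderator. A minimal synthetic sketch (the data, coefficient values, and variable names are all illustrative, not the paper's):

```python
import numpy as np

# Synthetic moderation regression: outcome ~ density + agglomeration + interaction.
rng = np.random.default_rng(0)
n = 500
density = rng.normal(size=n)   # firm-level product density (standardized)
agglom = rng.normal(size=n)    # specialized-agglomeration index
noise = rng.normal(scale=0.5, size=n)
# Assumed data-generating process: positive main effect (0.5) reinforced by a
# positive interaction (0.3), mimicking a positive moderating effect.
complexity = 0.5 * density + 0.2 * agglom + 0.3 * density * agglom + noise

# OLS with an interaction term via least squares
X = np.column_stack([np.ones(n), density, agglom, density * agglom])
beta, *_ = np.linalg.lstsq(X, complexity, rcond=None)
# beta[1] estimates the main effect; beta[3] estimates the moderating effect
```

A significantly positive `beta[3]` is what "reinforces as a moderator" means operationally; a negative one would correspond to the inhibiting case.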
Changqing Lin and Yongcai Han, “The Effects of Product Density on China’s Export Technological Complexity Upgrading: The Role of Industrial Agglomeration,” Complexity, vol. 2025, no. 1, published 2025-03-31, DOI: 10.1155/cplx/4647996. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/4647996
To suppress the frequency oscillations that occur in the parallel control system of multiple virtual synchronous generators (multi-VSG) during load mutations, this paper proposes a multi-VSG parallel control strategy based on sliding mode linear active disturbance rejection (SM-LADRC). Initially, a mathematical model of the multi-VSG parallel control system is developed to analyze the mechanism by which load mutations affect frequency. Subsequently, based on the rotor motion equation of the VSG, linear active disturbance rejection control (LADRC) is applied to the angular frequency, and an extended state observer (ESO) is constructed to estimate and compensate for the system’s frequency state and load mutations in real time, thereby enhancing the system’s disturbance rejection capability. Concurrently, an integral sliding mode linear state error feedback (SM-LSEF) control law is formulated to rapidly adjust the frequency error control quantity, eliminating the reaching phase and accelerating the system’s response. Moreover, the integral sliding mode introduces an integral term that smooths the switching function and the sliding surface, effectively suppressing sliding mode chattering and improving the system’s robustness. Finally, simulation comparisons validate the correctness and effectiveness of the proposed control strategy, providing a theoretical and simulation basis for engineering applications.
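The ESO step can be illustrated with a toy single-loop simulation. Everything below is an assumed sketch, not the paper's implementation: a first-order frequency plant, a standard bandwidth parameterization of the observer gains, and a plain proportional feedback law standing in for the SM-LSEF controller.

```python
# Minimal linear ESO sketch for an angular-frequency loop (illustrative values).
# Assumed plant: domega/dt = b0*u + f, where f lumps the unknown load
# disturbance; the ESO estimates both omega (z1) and f (z2).
b0, wo = 10.0, 40.0            # control gain and observer bandwidth (assumed)
beta1, beta2 = 2 * wo, wo**2   # bandwidth parameterization of observer gains
kp = 20.0                      # proportional gain of the feedback law
dt, steps = 1e-3, 4000

omega, z1, z2, u = 0.0, 0.0, 0.0, 0.0
f_true = 0.0
for k in range(steps):
    if k == 1000:
        f_true = -5.0          # sudden load mutation at t = 1 s
    # plant
    omega += dt * (b0 * u + f_true)
    # extended state observer: z1 tracks omega, z2 tracks the disturbance
    e = omega - z1
    z1 += dt * (z2 + beta1 * e + b0 * u)
    z2 += dt * (beta2 * e)
    # disturbance-rejection law: cancel the estimated disturbance z2
    u = (kp * (0.0 - z1) - z2) / b0
```

After the transient, `z2` converges to the true disturbance and `omega` returns to its reference, which is the real-time estimate-and-compensate behavior the abstract describes.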
Fanxing Rao, Yupeng Xiang, Shuai Weng, Huimin Xiong, Xiaopin Yang, Jizheng Zhang, Cui Wang, and Yunchuan Ding, “Research on Multi-VSG Parallel Control Strategy Based on Sliding Mode Active Disturbance Rejection,” Complexity, vol. 2025, no. 1, published 2025-03-27, DOI: 10.1155/cplx/9646736. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/9646736
Robert U. Ayres, Jeroen van den Bergh, Gara Villalba
We discuss the relationship between environmental sustainability and system complexity. This is motivated by the fact that solutions to environmental challenges often create additional complexity in the overall socioeconomic system, at local to global levels. This increase in complexity might hamper the ultimate achievement of sustainability. The theme is of utmost importance but is overlooked in studies of environmental sustainability, environmental and climate policy, and sustainability transitions. It merits serious attention because it can provide a general basis for, and clarification of, related topics that are currently studied in isolation: energy rebound, carbon leakage, the green paradox (fossil fuel market responses to climate policy), the circular economy, and environmental problem shifting. The relationship between complexity and sustainability is examined from thermodynamic and systemic perspectives, resulting in the identification of a set of mechanisms of complexity increase and a clarification of how these potentially create barriers to meeting sustainability goals. While this issue is pertinent to all economies and countries, it is of high relevance to developing countries, as their economies are likely to undergo considerable complexity increases in the near future due to further development. The question is then whether countries will be able to steer their development in a sustainable direction while simultaneously limiting a more roundabout nature of their production structure. We contend that this may require “complexity policy” and outline ideas in this regard. An important role can be played by cap-and-trade, but this will work mainly for carbon emissions and not for other environmental pressures. Ultimately, a policy mix could guide different subsystem complexities in terms of environmental pressures and welfare impacts, optimizing system complexity for sustainability.
Robert U. Ayres, Jeroen van den Bergh, and Gara Villalba, “System Complexity Versus Environmental Sustainability: Theory and Policy,” Complexity, vol. 2025, no. 1, published 2025-03-26, DOI: 10.1155/cplx/1213388. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/1213388
In the context of the rapid development of social networks, users gather on social network platforms (SNPs) to form social circles, which become the main carriers through which users disseminate public opinion information and emotions; polarization of network public opinion therefore easily arises in social circles. To prevent such polarization, applying blockchain technology to SNPs can provide a credible database for tracing public opinion and avoiding the spread of false information in social circles. Based on blockchain traceability, this paper establishes a three-party evolutionary game model consisting of the government, SNPs, and social circle managers (SCMs). Through model solving and numerical simulation, the factors influencing each party’s strategy are analyzed. The findings suggest the following: (1) Although the rising cost of building blockchain platforms may undermine the government’s incentive to regulate, the government will still prioritize support for their construction if they can significantly enhance social welfare. (2) The government can boost SNPs’ willingness to adopt blockchain technology through subsidies, but increased privacy risks can undermine this willingness. (3) Blockchain technology makes public opinion traceable and strengthens the rational guidance tendency of SCMs. Therefore, to promote and strengthen the collaborative governance of network public opinion in social circles, SNPs should expand their use of blockchain technology, while the government should give full play to its regulatory and supervisory roles.
This study introduces blockchain technology for public opinion traceability; explores a collaborative network public opinion governance system based on strict regulation by the government, introduction of blockchain technology by SNPs, and rational guidance by SCMs; expands the related research on network public opinion governance; and provides a new idea for the study of network public opinion governance of social circles based on blockchain traceability.
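A three-party evolutionary game of this kind is usually analyzed with replicator dynamics, where each party's probability of playing its cooperative strategy grows in proportion to that strategy's payoff advantage. The sketch below uses made-up linear payoff differences purely to illustrate the mechanics, not the paper's model or calibration:

```python
# Replicator-dynamics sketch for a three-party game: x = P(government
# regulates strictly), y = P(SNP adopts blockchain), z = P(SCM guides
# rationally). All payoff coefficients below are illustrative assumptions.
def payoff_diffs(x, y, z):
    # net gain of the cooperative strategy for each party (assumed linear forms)
    dg = 2.0 + 1.0 * y - 0.5          # regulation pays more once platforms adopt
    dp = 1.0 + 2.0 * x * z - 1.5      # adoption pays under regulation + guidance
    dm = 0.5 + 1.0 * y                # guidance is helped by traceable platforms
    return dg, dp, dm

x = y = z = 0.3
dt = 0.01
for _ in range(5000):                 # forward-Euler integration of x' = x(1-x)*dg, etc.
    dg, dp, dm = payoff_diffs(x, y, z)
    x += dt * x * (1 - x) * dg
    y += dt * y * (1 - y) * dp
    z += dt * z * (1 - z) * dm
```

With these assumed payoffs the system converges to the fully cooperative state (regulate, adopt, guide), the kind of evolutionarily stable outcome the abstract's findings describe; changing the cost and subsidy terms shifts which equilibrium is reached.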
Zeguo Qiu, Yunhao Chen, Hao Han, and Yuchen Yin, “Analysis of Collaborative Governance of Network Public Opinion in Social Circles Based on Blockchain Traceability Strategy: An Evolutionary Game Theory Approach,” Complexity, vol. 2025, no. 1, published 2025-03-24, DOI: 10.1155/cplx/8884816. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/8884816
Correct detection of plant diseases is critical for enhancing crop yield and quality. Conventional methods, such as visual inspection and microscopic analysis, are typically labor-intensive, subjective, and vulnerable to human error, making them infeasible for extensive monitoring. In this study, we propose a novel technique to detect tomato leaf diseases effectively and efficiently through a four-stage pipeline. First, image enhancement techniques address illumination and noise problems to recover visual details as clearly and accurately as possible. Subsequently, regions of interest (ROIs) containing possible disease symptoms are captured. The ROIs are then fed into K-means clustering, which separates healthy from diseased leaf sections and allows the diagnosis of multiple diseases. After that, a hybrid feature extraction approach combining three methods is proposed. A discrete wavelet transform (DWT) extracts hidden and abstract textures in the diseased zones by decomposing the image pixel values into various frequency ranges. Through spatial relation analysis of pixels, the gray level co-occurrence matrix (GLCM) delivers texture patterns correlated with specific ailments. Principal component analysis (PCA) performs dimensionality reduction, feature selection, and redundancy elimination; the resulting features are classified with an artificial neural network (ANN). We collected 9014 samples from publicly available repositories, giving a diverse and representative collection of tomato leaf images. The study addresses four main diseases: curl virus, bacterial spot, late blight, and Septoria spot. To rigorously evaluate the model, the dataset is split into training, validation, and testing subsets of 70%, 10%, and 20%, respectively. The proposed technique achieved an accuracy of 99.97%, higher than current approaches.
The high precision achieved emphasizes the promising implications of incorporating DWT, PCA, GLCM, and ANN techniques in an automated system for plant diseases, offering a powerful solution for farmers in managing crop health efficiently.
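The K-means segmentation and GLCM texture steps can be sketched on a synthetic image. The toy "lesion" patch, the 8-level quantization, and the single horizontal-offset GLCM below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Synthetic "leaf": bright healthy tissue with a darker square lesion.
rng = np.random.default_rng(1)
img = np.full((64, 64), 200, dtype=np.uint8)        # healthy tissue ~ bright
img[20:40, 20:40] = rng.integers(40, 80, (20, 20))  # lesion patch ~ dark

# --- K-means (k = 2) on pixel intensities, a few Lloyd iterations ---
pix = img.reshape(-1, 1).astype(float)
centers = np.array([[60.0], [200.0]])               # init near the two modes
for _ in range(10):
    labels = np.argmin(np.abs(pix - centers.T), axis=1)  # nearest center
    centers = np.array([[pix[labels == k].mean()] for k in range(2)])
lesion_mask = (labels == 0).reshape(img.shape)      # cluster 0 = dark = lesion

# --- GLCM (horizontal neighbor, 8 gray levels), normalized to probabilities ---
q = (img // 32).astype(int)                         # quantize to 8 levels
glcm = np.zeros((8, 8))
for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
    glcm[i, j] += 1
glcm /= glcm.sum()
# contrast: a standard GLCM texture feature, sum of p(i,j) * (i - j)^2
contrast = sum(glcm[i, j] * (i - j) ** 2 for i in range(8) for j in range(8))
```

Here the clustering recovers the lesion region exactly, and the GLCM contrast is driven by the lesion's internal texture and its boundary against healthy tissue; features like this are what the pipeline feeds through PCA into the classifier.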
MD Jiabul Hoque, Md. Saiful Islam, and Md. Khaliluzzaman, “AI-Powered Precision in Diagnosing Tomato Leaf Diseases,” Complexity, vol. 2025, no. 1, published 2025-03-13, DOI: 10.1155/cplx/7838841. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/7838841
Some complex systems (e.g., an ecosystem) in direct contact with an environment can be assigned the temperature of the environment. Other complex systems, such as human beings, can maintain a core temperature of 36.5°C in environments with different temperatures, at least for a short period of time. Finally, for complex systems such as financial markets, whose environments we understand very little of, is there even a reasonable way to define a temperature? It is clear that human beings are almost never in thermal equilibrium with their surroundings, but can financial markets achieve detailed balance independently at all scales, or is information flow in such systems different at different scales? If we combine the information-theoretic picture with the thermodynamic picture of entropy, temperature is the driving force for changes in the information content of a system. From an interactions point of view, the information content of a financial market can be computed from the cross correlations between its stocks. Ye et al. (2015) constructed normalized graph Laplacians for different time periods based on strong cross correlations between stocks listed on the New York Stock Exchange. By writing the partition function in terms of polynomials of the normalized graph Laplacian, Ye et al. computed the average energy E, entropy S, and inverse temperature β = 1/kBT. This leads to an information-based definition of the inverse temperature. In this work, we investigated the inverse temperature β(ϵ, n) at different times n and scales ϵ for two mature financial markets, using the S&P 500 and Nikkei 225 cross sections of stocks from January 2007 to May 2023. In the dynamics of β, the most prominent features are peaks at various times. We identified five esoteric and seven characteristic peaks and studied how they change with scale ϵ.
This scale dependence consists of a negative power-law dip followed by a positive power-law rise, with exponents narrowly distributed between 0.3 and 0.4. In addition, we constructed heat maps of β that reveal positive-, negative-, and infinite-slope cascades hinting at possible exogenous and endogenous origins. Notably, the heat map of β confirmed that the 2007-2009 Global Financial Crisis was an endogenous crash in the US market, which in turn caused an exogenous crash in the Japanese stock market. To better understand the evolution of β, we analyzed ΔJ (the difference in the number of links) and ΔQ (the difference in the number of triangles) and found that they oscillate in time. Occasionally, very intense swings of ΔQ emerge over all scales, suggesting significant market-level reconstructions at these times.
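The chain from cross correlations to an inverse temperature can be sketched as follows. The toy correlation data, the 0.5 adjacency threshold, and the choice of target energy are all illustrative assumptions; the Gibbs weights over Laplacian eigenvalues stand in for the partition-function construction described above.

```python
import numpy as np

# Toy "market": 6 return series, one strongly coupled pair of stocks.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
X[:, 1] = X[:, 0] + 0.3 * rng.normal(size=300)
C = np.corrcoef(X, rowvar=False)

# Adjacency from strong cross correlations, then the normalized Laplacian.
A = (np.abs(C) > 0.5).astype(float)
np.fill_diagonal(A, 0.0)
deg = A.sum(axis=1)
d = np.where(deg > 0, deg, 1.0)                 # guard isolated nodes
L = np.eye(6) - A / np.sqrt(np.outer(d, d))

lam = np.linalg.eigvalsh(L)                     # Laplacian spectrum

def energy(beta):
    # average energy <E> under Gibbs weights exp(-beta * lambda)
    w = np.exp(-beta * lam)
    return float(lam @ w / w.sum())

# Invert <E>(beta) for a target energy by bisection (<E> is monotone in beta).
target, lo, hi = 0.5 * energy(0.0), 0.0, 50.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if energy(mid) > target else (lo, mid)
beta = 0.5 * (lo + hi)
```

Recomputing this β window by window over time, and varying the correlation threshold, is the kind of procedure that yields the β(ϵ, n) surfaces and heat maps discussed above.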
Peter Tsung-Wen Yen and Siew Ann Cheong, “Scale-Dependent Inverse Temperature Features Associated With Crashes in the US and Japanese Stock Markets,” Complexity, vol. 2025, no. 1, published 2025-03-10, DOI: 10.1155/cplx/9451788. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/9451788
Sooyoun Choi, Yunil Roh, Yong Dam Jeong, Il Hyo Jung
Cancer metastasis is one of the leading causes of death in cancer patients. Dysregulation of the WNT signaling pathway is known to increase the risk of cancer metastasis by leading to excessive accumulation of β-catenin, which activates epithelial–mesenchymal transition (EMT) mechanisms that induce cell motility. Although mono- and combination therapies are being developed to prevent metastasis by controlling abnormally elevated levels of β-catenin, the complexity of cell signaling pathways limits the comparison and prediction of treatment effects. In addition, uncertainty exists in determining the optimal combination ratio of each therapy in combination treatments. In this study, we address these challenges by investigating optimal modulation strategies to minimize β-catenin concentration, using optimal control theory together with a mathematical model that comprehensively describes the interactions between the WNT signaling pathway and transforming growth factor-β (TGF-β) involved in EMT. We analyze the efficacy of monotherapy strategies to prevent the hyperactivation of β-catenin and quantitatively determine the optimal combination ratio for preventing EMT, based on the E-cadherin biomarker as an indicator of EMT. Furthermore, we identify the optimal therapy protocol that minimizes patient burden while maximizing therapeutic efficacy by incorporating control sequences and delay times. Our findings are expected not only to enhance the understanding of the complex signaling pathways underlying cancer metastasis but also to contribute to the development of novel therapeutic approaches.
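The idea of choosing a combination ratio to minimize a target species' level can be sketched with a toy one-variable production/degradation model. This is not the paper's WNT/TGF-β system; all rates, the two control channels, and the dose budget are assumed for illustration:

```python
import numpy as np

# Toy model: db/dt = p*(1 - u1) - d*(1 + u2)*b, where b is the target species
# level, u1 blocks production, and u2 boosts degradation (all values assumed).
def simulate(u1, u2, dt=0.01, steps=5000):
    b = 1.0                             # initial level (arbitrary units)
    for _ in range(steps):
        production = 1.0 * (1 - u1)     # "therapy 1" reduces production
        degradation = 0.5 * (1 + u2)    # "therapy 2" increases degradation
        b += dt * (production - degradation * b)
    return b                            # approximate steady-state level

untreated = simulate(0.0, 0.0)          # steady state p/d = 2.0
combo = simulate(0.4, 0.4)              # mixed dosing lowers the level

# Crude "optimal combination ratio" search under a fixed dose budget
# u1 + u2 = 0.8, mimicking the ratio-selection question in spirit only.
best = min((simulate(a, 0.8 - a), a) for a in np.linspace(0, 0.8, 9))
```

In this toy model the best allocation puts the whole budget on the production blocker; the paper's point is precisely that in a realistic coupled pathway model, the optimal split must be computed rather than guessed, which is what optimal control theory provides.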
{"title":"Prevention of EMT-Mediated Metastasis via Optimal Modulation Strategies for the Dysregulated WNT Pathway Interacting With TGF-β","authors":"Sooyoun Choi, Yunil Roh, Yong Dam Jeong, Il Hyo Jung","doi":"10.1155/cplx/9007322","DOIUrl":"https://doi.org/10.1155/cplx/9007322","url":null,"abstract":"<div>\u0000 <p>Cancer metastasis is one of the leading causes of death in cancer patients. Dysregulation of the WNT signaling pathway is known to increase the risk of cancer metastasis by leading to excessive accumulation of β-catenin, which activates epithelial–mesenchymal transition (EMT) mechanisms that induce cell motility. Although mono and combination therapies are being developed to prevent metastasis by controlling the abnormally elevated levels of β-catenin, there are limitations in comparing and predicting the treatment effects due to the complexity of cell signaling pathways. In addition, uncertainty exists in determining the optimal combination ratio of each therapy in combination treatments. In this study, we aim to address these challenges by investigating optimal modulation strategies to minimize β-catenin concentration, using a mathematical model that comprehensively describes the interactions between the WNT signaling pathway and transforming growth factor-β (TGF-β) involved in EMT, along with optimal control theory. We analyze the efficacy of monotherapy strategies to prevent the hyperactivation of β-catenin and quantitatively determine the optimal combination ratio for preventing EMT, based on the E-cadherin biomarker as an indicator of EMT. Furthermore, we identify the optimal therapy protocol that minimizes patient burden while maximizing therapeutic efficacy by incorporating considerations of control sequences and delay times. 
Our findings are expected to not only enhance the understanding of the complex signaling pathways underlying cancer metastasis but also contribute to the development of novel therapeutic approaches.</p>\u0000 </div>","PeriodicalId":50653,"journal":{"name":"Complexity","volume":"2025 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/9007322","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143533324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
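The control-theoretic idea behind the abstract can be illustrated with a toy one-compartment model, not the paper's full WNT/TGF-β network: β-catenin accumulates at a constant production rate and is cleared at a basal degradation rate plus a controllable extra-degradation term (e.g., an inhibitor dose). All parameter values below are hypothetical.

```python
def simulate_beta_catenin(u, p=1.0, d=0.1, b0=0.0, dt=0.01, t_end=100.0):
    """Euler integration of db/dt = p - (d + u(t)) * b.

    p : production rate (hypothetical)
    d : basal degradation rate (hypothetical)
    u : time-dependent control adding degradation, u(t) >= 0
    Returns the β-catenin concentration at t_end.
    """
    b, t = b0, 0.0
    while t < t_end:
        b += dt * (p - (d + u(t)) * b)
        t += dt
    return b

# Without control, concentration settles near the steady state p/d = 10;
# a constant control u = 0.4 shifts the steady state to p/(d + u) = 2.
untreated = simulate_beta_catenin(lambda t: 0.0)
treated = simulate_beta_catenin(lambda t: 0.4)
```

An optimal-control formulation would replace the constant dose with a functional trading off β-catenin exposure against control effort; this sketch only shows the plant dynamics such a controller would act on.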
The Complex Social Internet of Things (CSIoT) integrates the connectivity of IoT with the relational dynamics of complex social networks, creating systems where devices autonomously form and manage relationships. Centrality measures characterize the topological position of each node in terms of local and global information about that node in the network. Detecting devices that are effective in disseminating information across CSIoT networks is critical for optimizing communication, improving network performance, and ensuring efficient resource utilization. In this paper, temporal centrality measures are used to identify influential devices in information dissemination. For this purpose, the centrality measures are first redefined for SIoT network devices, and then, using the SIR model, each measure is evaluated in terms of its success rate in identifying devices that are effective in information dissemination. The results show that in SIoT networks with a high clustering coefficient, the closeness and betweenness centrality measures perform better at identifying influential devices that spread information. For networks with a high degree of heterogeneity, the device coreness centrality and device Katz centrality measures perform better than the other measures. Finally, the results show that mobile devices play a more important role in disseminating information than static devices.
{"title":"Detection of Effective Devices in Information Dissemination on the Complex Social Internet of Things Networks Based on Device Centrality Measures","authors":"Wei Deng, Junqi Deng, Peyman Arebi","doi":"10.1155/cplx/2919169","DOIUrl":"https://doi.org/10.1155/cplx/2919169","url":null,"abstract":"<div>\u0000 <p>The Complex Social Internet of Things (CSIoT) integrates the connectivity of IoT with the relational dynamics of complex social networks, creating systems where devices autonomously form and manage relationships. The centrality measures specify the topological characteristics of each node in terms of local and global information of the node in the network. The detection of effective devices in disseminating information across CSIoT networks is critical for optimizing communication, improving network performance, and ensuring efficient resource utilization. In this paper, temporal centrality measures are used to identify influential devices in information dissemination. For this purpose, first, the centrality measures for SIoT network devices have been redefined, and then, using the SIR model, each of the measures has been evaluated in terms of the success rate in identifying effective devices in information dissemination. The results have shown that in SIoT networks that have a high clustering coefficient, the centrality measures of closeness and betweenness have a better performance in identifying influential devices that are effective in spreading information. Also, for networks that have a high degree of heterogeneity, the device coreness centrality and device Katz centrality measures perform better than other measures. 
Finally, the results show that mobile devices play a more important role in disseminating information than static devices.</p>\u0000 </div>","PeriodicalId":50653,"journal":{"name":"Complexity","volume":"2025 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/2919169","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143554689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
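The evaluation pipeline the abstract describes, ranking candidate seed devices by how far an SIR-style spread reaches, can be sketched deterministically by setting the transmission probability to 1 and limiting the number of contact rounds (the paper's actual SIR simulations are stochastic; the contact graph below is a made-up example).

```python
from collections import deque

def spread_within(adj, seed, rounds):
    """Number of devices reached from `seed` in at most `rounds` contact
    rounds of an SIR process with transmission probability 1 (i.e., a
    depth-limited breadth-first spread) -- a deterministic proxy for the
    influence of a seed device."""
    infected = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == rounds:
            continue
        for nbr in adj[node]:
            if nbr not in infected:
                infected.add(nbr)
                frontier.append((nbr, depth + 1))
    return len(infected)

# Hypothetical SIoT contact graph: hub 0 touches devices 1-4; 4-5-6 is a chain.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4, 6], 6: [5]}
print(spread_within(adj, seed=0, rounds=1))  # hub reaches 5 devices in one round
print(spread_within(adj, seed=6, rounds=1))  # peripheral device reaches only 2
```

Ranking devices by a centrality measure and comparing that ranking against the spread sizes from each seed is then a direct way to score the measure's success rate.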
Vahideh Sahargahi, Vahid Majidnezhad, Saeid Taghavi Afshord, Yasser Jafari
This study addresses influence maximization in complex networks, aiming to identify optimal seed nodes for maximal cascades. Greedy methods, though effective, prove inefficient for large-scale social networks. This article introduces a double-chromosome evolutionary algorithm to tackle this challenge efficiently. The method introduces a smart operator that initializes the primary solutions by stochastic selection based on node degree. A novel approach also improves the convergence of the proposed method by ranking the nodes in the current solution and using a blacklist to reduce the probability of selecting nodes that might already be influenced by the selected nodes. Moreover, an efficient local search operator is proposed to increase influence. To maintain solution diversity, a population diversity retention operator is integrated. Experimental evaluations on six real-world networks revealed the algorithm’s superiority in terms of influence rates, consistently outperforming the DPSO algorithm and ranking second to CELF by a minimal margin according to statistical analysis using the Friedman test. For runtime efficiency, the proposed method demonstrated significantly shorter execution times compared to CELF and DPSO, showcasing its scalability and robustness. These results underscore the method’s effectiveness for applications requiring accurate identification of influential nodes.
{"title":"EIM: A Novel Evolutionary Influence Maximizer in Complex Networks","authors":"Vahideh Sahargahi, Vahid Majidnezhad, Saeid Taghavi Afshord, Yasser Jafari","doi":"10.1155/cplx/9973872","DOIUrl":"https://doi.org/10.1155/cplx/9973872","url":null,"abstract":"<div>\u0000 <p>This study addresses influence maximization in complex networks, aiming to identify optimal seed nodes for maximal cascades. Greedy methods, though effective, prove inefficient for large-scale social networks. This article introduces a double-chromosome evolutionary algorithm to tackle this challenge efficiently. This method introduces a smart operator for stochastic selection based on the node degree to initialize the primary solutions. A novel smart approach was also employed to improve the convergence of the proposed method by ranking the nodes existing in the current solution and using a blacklist to reduce the probability of selecting the nodes that might be influenced by the selected nodes. Moreover, a novel local search operator with appropriate efficiency was proposed to increase influence. To maintain solution diversity, a population diversity retention operator is integrated. Experimental evaluations on six real-world networks revealed the algorithm’s superiority in terms of influence rates, consistently outperforming the DPSO algorithm and ranking second to CELF with minimal margin according to statistical analysis using the Friedman test. For runtime efficiency, the proposed method demonstrated significantly shorter execution times compared to CELF and DPSO, showcasing its scalability and robustness. 
These results underscore the method’s effectiveness for applications requiring accurate identification of influential nodes.</p>\u0000 </div>","PeriodicalId":50653,"journal":{"name":"Complexity","volume":"2025 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/9973872","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143533341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
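The degree-biased initializer with a blacklist that the abstract mentions can be sketched as follows; this is an illustrative reading of the operator, not the paper's exact implementation, and the graph is a made-up example.

```python
import random

def init_seeds(adj, k, rng):
    """Pick up to k seed nodes with probability proportional to degree,
    blacklisting the neighbors of each chosen seed so that later picks
    avoid nodes the current seeds are likely to influence anyway."""
    blacklist, seeds = set(), []
    while len(seeds) < k:
        candidates = [v for v in adj if v not in blacklist and v not in seeds]
        if not candidates:
            break  # every remaining node neighbors a chosen seed
        weights = [len(adj[v]) for v in candidates]  # degree-biased selection
        choice = rng.choices(candidates, weights=weights, k=1)[0]
        seeds.append(choice)
        blacklist.update(adj[choice])
    return seeds

# Hypothetical undirected network as an adjacency dict.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3, 5], 5: [4]}
seeds = init_seeds(adj, k=2, rng=random.Random(7))
```

Because every neighbor of a chosen seed is blacklisted before the next draw, the returned seeds are guaranteed to be pairwise non-adjacent, which spreads the initial population across the network.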
Hongliang Li, Fan Liu, Yilin Liu, Wuneng Zhou, Kaili Liao
In this paper, the fixed-time consensus problem for leader–follower multiagent systems with directed graphs is discussed. First, a new fixed-time state observer of the multiagent system is designed. In the new state observer, an auxiliary matrix is introduced that can be obtained theoretically by solving a linear matrix inequality. The important role of the auxiliary matrix is that it makes the upper bound of the fixed time for state observation of the multiagent system solvable and more accurate, which enables the observation error system to converge faster. Based on the new fixed-time state observer, a sufficient condition is given under which the state of the multiagent system can be observed in fixed time. Second, a new fixed-time control protocol is designed for the leader–follower multiagent system. In the new controller, another auxiliary matrix is introduced that can also be obtained theoretically from a linear matrix inequality. With the new control protocol, it is shown theoretically that the closed-loop leader–follower multiagent system reaches consensus in fixed time, by means of the concept of fixed-time stability, Lyapunov stability theory, and LaSalle’s invariance principle. The important role of the new control protocol is that it makes the fixed time for consensus of the multiagent system solvable, so the closed-loop multiagent system converges faster. Finally, some numerical simulations are presented to convincingly demonstrate the superiority of the method and results obtained in this paper. From the simulations, it can be seen that in comparison with some existing works, the estimate of the fixed time for the consensus problem may be more accurate and the convergence faster.
{"title":"On Novel Design Methods of Fixed-Time State Observation and Consensus Control for Linear Leader–Follower Multiagent System","authors":"Hongliang Li, Fan Liu, Yilin Liu, Wuneng Zhou, Kaili Liao","doi":"10.1155/cplx/6615172","DOIUrl":"https://doi.org/10.1155/cplx/6615172","url":null,"abstract":"<div>\u0000 <p>In this paper, the fixed-time consensus problem for leader–follower multiagent systems with directed graphs is discussed. First, a new fixed-time state observer of the multiagent system is designed. In the new state observer, an auxiliary matrix is introduced which can be theoretically obtained by solving a linear matrix inequality. The important role of the auxiliary matrix is that it makes the upper bound of the fixed time for the state observation of multiagent system solvable and more accurate, which enable the observation error system converges faster. Based on the new fixed-time state observer, a sufficient condition is given with which the observation of the state of multiagent system can be reached in fixed time. Second, a new fixed-time control protocol is designed for the leader–follower multiagent system. In the new controller, another auxiliary matrix is introduced which can also be theoretically obtained with linear matrix inequality. With the new control protocol, the closed-loop leader–follower multiagent system is theoretically shown that the fixed-time consensus can be reached in fixed time by means of the concept of fixed-time stability, Lyapunov stability theory, and LaSalle’s invariance principle. The important role of the new control protocol is that it also makes the fixed time for the consensus multiagent system solvable and then the closed-loop multiagent system converges faster. Finally, some numerical simulations are presented to demonstrate convincingly the superiority of the method and results obtained in this paper. 
From the simulations, it can be seen that in comparison with some existing works, the estimate of fixed time for the consensus problem may be more accurate and faster.</p>\u0000 </div>","PeriodicalId":50653,"journal":{"name":"Complexity","volume":"2025 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/cplx/6615172","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143466218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
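The initial-condition-independent settling-time bound that fixed-time results of this kind rest on is typically a Polyakov-type Lyapunov condition; a standard statement (generic, not the paper's specific observer or controller matrices) is:

```latex
% If a Lyapunov function V satisfies, for constants a, b > 0 and
% exponents 0 < p < 1 < q,
\dot{V}(x) \le -a\,V(x)^{p} - b\,V(x)^{q},
% then the origin is fixed-time stable and the settling time obeys
T(x_0) \le T_{\max} = \frac{1}{a\,(1-p)} + \frac{1}{b\,(q-1)},
% a bound that holds uniformly over all initial conditions x_0.
```

Tightening the constants $a$, $b$ (here, via the auxiliary matrices found from the linear matrix inequalities) directly tightens $T_{\max}$, which is why a sharper bound also implies faster guaranteed convergence.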