Panfeng Liu, Guoliang Qiu, Biaoshuai Tao, Kuan Yang
We study cascades in social networks with the independent cascade (IC) model and the Susceptible-Infected-Recovered (SIR) model. The well-studied IC model fails to capture the feature of node recovery, while the SIR model is a variant of the IC model with node recovery. In the SIR model, by computing the probability that a node successfully infects another before its recovery and viewing this probability as the corresponding IC parameter, the SIR model becomes an "out-going-edge-correlated" version of the IC model: the events of infection along different out-going edges of a node are dependent in the SIR model, whereas they are independent in the IC model. In this paper, we thoroughly compare the two models and examine the effect of this extra dependency in the SIR model. By a carefully designed coupling argument, we show that the seeds in the IC model have a stronger influence spread than their counterparts in the SIR model, and sometimes it is significantly stronger. Specifically, we prove that, given the same network and the same seed sets, with the parameters of the two models set according to the above-mentioned equivalence, the expected number of infected nodes at the end of the cascade in the IC model is weakly larger than in the SIR model, and there are instances where this dominance is significant. We also study the influence maximization problem under the SIR model. We show that the above-mentioned difference between the two models yields different seed-selection strategies, which motivates the design of influence maximization algorithms specifically for the SIR model. We design efficient approximation algorithms with theoretical guarantees by adapting the reverse-reachable-set-based algorithms, commonly used for the IC model, to the SIR model.
{"title":"A Thorough Comparison Between Independent Cascade and Susceptible-Infected-Recovered Models","authors":"Panfeng Liu, Guoliang Qiu, Biaoshuai Tao, Kuan Yang","doi":"arxiv-2408.11470","DOIUrl":"https://doi.org/arxiv-2408.11470","url":null,"abstract":"We study cascades in social networks with the independent cascade (IC) model\u0000and the Susceptible-Infected-recovered (SIR) model. The well-studied IC model\u0000fails to capture the feature of node recovery, and the SIR model is a variant\u0000of the IC model with the node recovery feature. In the SIR model, by computing\u0000the probability that a node successfully infects another before its recovery\u0000and viewing this probability as the corresponding IC parameter, the SIR model\u0000becomes an \"out-going-edge-correlated\" version of the IC model: the events of\u0000the infections along different out-going edges of a node become dependent in\u0000the SIR model, whereas these events are independent in the IC model. In this\u0000paper, we thoroughly compare the two models and examine the effect of this\u0000extra dependency in the SIR model. By a carefully designed coupling argument,\u0000we show that the seeds in the IC model have a stronger influence spread than\u0000their counterparts in the SIR model, and sometimes it can be significantly\u0000stronger. Specifically, we prove that, given the same network, the same seed\u0000sets, and the parameters of the two models being set based on the\u0000above-mentioned equivalence, the expected number of infected nodes at the end\u0000of the cascade for the IC model is weakly larger than that for the SIR model,\u0000and there are instances where this dominance is significant. We also study the\u0000influence maximization problem with the SIR model. We show that the\u0000above-mentioned difference in the two models yields different seed-selection\u0000strategies, which motivates the design of influence maximization algorithms\u0000specifically for the SIR model. We design efficient approximation algorithms\u0000with theoretical guarantees by adapting the reverse-reachable-set-based\u0000algorithms, commonly used for the IC model, to the SIR model.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142220825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Fukushima-Daiichi release of radioactivity is a relevant event for studying the atmospheric dispersion modelling of radionuclides. In particular, atmospheric deposition onto the ground can be studied through the map of measured Cs-137 established in the aftermath of the accident. The limits of detection were low enough to make measurements possible as far as 250 km from the nuclear power plant. This large-scale deposition has been modelled with the Eulerian model ldX. However, several weeks of emissions under varied weather conditions make it a real challenge. Moreover, these measurements represent the accumulated deposition of Cs-137 over the whole period and give no information on the deposition mechanisms involved: in-cloud, below-cloud, or dry deposition. In a previous study (Quérel et al., 2016), a comprehensive sensitivity analysis was performed to understand the wet deposition mechanisms. It showed that the choice of wet deposition scheme has a strong impact on the assessment of deposition patterns. Nevertheless, no single "best" scheme could be identified, since the ranking depends on the statistical indicator considered (correlation, figure of merit in space, and factor 2). One possible explanation for the difficulty in discriminating between schemes is uncertainty in the modelling, resulting for instance from the meteorological data: if the movement of the plume is not properly modelled, the deposition processes are applied to an inaccurate activity concentration in the air. In the framework of the SAKURA project, an MRI-IRSN collaboration, new meteorological fields at higher resolution (Sekiyama et al., 2013) were provided, allowing the previous study to be revisited. An update including these new meteorological data is presented, with a focus on the deposition schemes commonly used in the nuclear emergency context.
{"title":"Impact of changing the wet deposition schemes in ldx on 137-cs atmosperic deposits after the fukushima accident","authors":"Arnaud QuérelIRSN, IRSN/PSE-SANTE/SESUC/BMCA, Denis QuéloIRSN/PSE-SANTE/SESUC/BMCA, IRSN, Yelva RoustanCEREA, Anne MathieuIRSN, IRSN/PSE-SANTE/SESUC/BMCA, Mizuo KajinoMRI, Thomas SekiyamaMRI, Kouji AdachiMRI, Damien DidierIRSN, IRSN/PSE-SANTE/SESUC/BMCA, Yasuhito IgarashiMRI, Takashi MakiMRI","doi":"arxiv-2408.11460","DOIUrl":"https://doi.org/arxiv-2408.11460","url":null,"abstract":"The Fukushima-Daiichi release of radioactivity is a relevant event to study\u0000the atmospheric dispersion modelling of radionuclides. Actually, the\u0000atmospheric deposition onto the ground may be studied through the map of\u0000measured Cs-137 established consecutively to the accident. The limits of\u0000detection were low enough to make the measurements possible as far as 250km\u0000from the nuclear power plant. This large scale deposition has been modelled\u0000with the Eulerian model ldX. However, several weeks of emissions in multiple\u0000weather conditions make it a real challenge. Besides, these measurements are\u0000accumulated deposition of Cs-137 over the whole period and do not inform of\u0000deposition mechanisms involved: in-cloud, below-cloud, dry deposition. In a\u0000previous study (Qu{'e}rel et al., 2016), a comprehensive sensitivity analysis\u0000was performed in order to understand wet deposition mechanisms. It has been\u0000shown that the choice of the wet deposition scheme has a strong impact on\u0000assessment of deposition patterns. Nevertheless, a ``best'' scheme could not be\u0000highlighted as it depends on the selected criteria: the ranking differs\u0000according to the statistical indicators considered (correlation, figure of\u0000merit in space and factor 2). A possibility to explain the difficulty to\u0000discriminate between several schemes was the uncertainties in the modelling,\u0000resulting from the meteorological data for instance. Since the move of the\u0000plume is not properly modelled, the deposition processes are applied with an\u0000inaccurate activity concentration in the air. In the framework of the SAKURA\u0000project, an MRI-IRSN collaboration, new meteorological fields at higher\u0000resolution (Sekiyama et al., 2013) were provided and allow to reconsider the\u0000previous study. An update including these new meteorology data is presented. In\u0000addition, the focus is put on the deposition schemes commonly used in nuclear\u0000emergency context.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142220826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The advent of computational and numerical methods has provided new avenues for analyzing art-historiographical narratives and tracing the evolution of art styles. Here, we investigate an evolutionary process underpinning the emergence and stylization of contemporary user-generated visual art styles using the complexity-entropy (C-H) plane, which quantifies local structures in paintings. Processing 149,780 images curated on the DeviantArt and Behance platforms from 2010 to 2020, we analyze the relationship between local information in the C-H space and multi-level image features generated by a deep neural network and a feature extraction algorithm. The results reveal significant statistical relationships between the C-H information of visual artistic styles and the dissimilarities of the multi-level image features over time within groups of artworks. By identifying a particular C-H region where the diversity of image representations is noticeably manifested, our analyses reveal an empirical condition for emerging styles that are both novel in the C-H plane and characterized by greater stylistic diversity. Our research shows that visual art analysis, combined with physics-inspired methodologies and machine learning, can provide macroscopic insights into quantitatively mapping relevant characteristics of the evolutionary process underpinning the creative stylization of uncharted visual arts for given groups and times.
{"title":"Diversity and stylization of the contemporary user-generated visual arts in the complexity-entropy plane","authors":"Seunghwan Kim, Byunghwee Lee, Wonjae Lee","doi":"arxiv-2408.10356","DOIUrl":"https://doi.org/arxiv-2408.10356","url":null,"abstract":"The advent of computational and numerical methods in recent times has\u0000provided new avenues for analyzing art historiographical narratives and tracing\u0000the evolution of art styles therein. Here, we investigate an evolutionary\u0000process underpinning the emergence and stylization of contemporary\u0000user-generated visual art styles using the complexity-entropy (C-H) plane,\u0000which quantifies local structures in paintings. Informatizing 149,780 images\u0000curated in DeviantArt and Behance platforms from 2010 to 2020, we analyze the\u0000relationship between local information of the C-H space and multi-level image\u0000features generated by a deep neural network and a feature extraction algorithm.\u0000The results reveal significant statistical relationships between the C-H\u0000information of visual artistic styles and the dissimilarities of the\u0000multi-level image features over time within groups of artworks. By disclosing a\u0000particular C-H region where the diversity of image representations is\u0000noticeably manifested, our analyses reveal an empirical condition of emerging\u0000styles that are both novel in the C-H plane and characterized by greater\u0000stylistic diversity. Our research shows that visual art analyses combined with\u0000physics-inspired methodologies and machine learning, can provide macroscopic\u0000insights into quantitatively mapping relevant characteristics of an\u0000evolutionary process underpinning the creative stylization of uncharted visual\u0000arts of given groups and time.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142220828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of the economy and global trade, flying has increasingly become the main way for people to travel, and taxis are the main transfer mode at airports, especially at airports with wide passenger catchments and large passenger flows. At present, however, many large airports suffer from low taxi load rates, unbalanced driver incomes, and long passenger waiting times. How drivers can make decisions that maximize their benefits, and how management departments can allocate resources and formulate management systems that improve ride efficiency and balance drivers' benefits, have therefore become urgent problems. This paper addresses these problems by establishing a dual-sensitive taxi-driver decision model, a longitudinal taxi queuing model, and a forced M/M/1 queuing model for drivers returning to the queue after short-distance trips.
{"title":"Application of dual-sensitive decision mechanism based on queuing theory in airport taxi management","authors":"Fang He","doi":"arxiv-2408.11867","DOIUrl":"https://doi.org/arxiv-2408.11867","url":null,"abstract":"With the rapid development of economy and global trade, flying has\u0000increasingly become the main way for people to travel, and taxi is the main\u0000transfer tool of the airport, especially in the airport with wide passenger\u0000sources and large passenger flow. However, at present, many large airports have\u0000the phenomenon of low taxi load rate, unbalanced income and long waiting time\u0000for passengers. Therefore, how can drivers make decisions to maximize their\u0000benefits, and how can management departments allocate resources and formulate\u0000management systems to improve ride efficiency and balance drivers' benefits\u0000have become urgent problems to be solved. This paper solves the above problems\u0000by establishing the taxi driver dual-sensitive decision model, the longitudinal\u0000taxi queuing model and the short-distance passenger re-return forced M/M/1\u0000queuing model.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142220824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consensus formation is a complex process, particularly in networked groups. When individuals are incentivized to dig in and refuse to compromise, leaders may be essential to guiding the group to consensus. In particular, the relative geodesic position of leaders (which we use as a proxy for ease of communication between leaders) could be important for reaching consensus. Additionally, groups searching for consensus can be confounded by noisy signals, in which individuals are given false information about the actions of their fellow group members. We tested the effects of the geodesic distance between leaders (ranging from 1 to 4) and of noise (at levels of 0%, 5%, and 10%) by recruiting participants (N=3,456) for a set of experiments (n=216 groups). We find that noise makes groups less likely to reach consensus, and the groups that do reach consensus take longer to find it. We find that leadership changes the behavior of both leaders and followers in important ways (for instance, being labeled a leader makes people more likely to 'go with the flow'). However, we find no evidence that the distance between leaders is a significant factor in the probability of reaching consensus. While other network properties of leaders undoubtedly impact consensus formation, the distance between leaders in network sub-groups appears not to matter.
{"title":"Bringing Leaders of Network Sub-Groups Closer Together Does Not Facilitate Consensus","authors":"Matthew I. Jones, Nicholas A. Christakis","doi":"arxiv-2408.09309","DOIUrl":"https://doi.org/arxiv-2408.09309","url":null,"abstract":"Consensus formation is a complex process, particularly in networked groups.\u0000When individuals are incentivized to dig in and refuse to compromise, leaders\u0000may be essential to guiding the group to consensus. Specifically, the relative\u0000geodesic position of leaders (which we use as a proxy for ease of communication\u0000between leaders) could be important for reaching consensus. Additionally,\u0000groups searching for consensus can be confounded by noisy signals in which\u0000individuals are given false information about the actions of their fellow group\u0000members. We tested the effects of the geodesic distance between leaders\u0000(geodesic distance ranging from 1-4) and of noise (noise levels at 0%, 5%, and\u000010%) by recruiting participants (N=3,456) for a set of experiments (n=216\u0000groups). We find that noise makes groups less likely to reach consensus, and\u0000the groups that do reach consensus take longer to find it. We find that\u0000leadership changes the behavior of both leaders and followers in important ways\u0000(for instance, being labeled a leader makes people more likely to 'go with the\u0000flow'). However, we find no evidence that the distance between leaders is a\u0000significant factor in the probability of reaching consensus. While other\u0000network properties of leaders undoubtedly impact consensus formation, the\u0000distance between leaders in network sub-groups appears not to matter.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142220829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Threshold-driven models and game theory are two fundamental paradigms for describing human interactions in social systems. However, in mimicking social contagion processes, models that simultaneously incorporate these two mechanisms have been largely overlooked. Here, we study a general model that integrates hybrid interaction forms by assuming that a fraction of the nodes in a network is driven by the threshold mechanism, while the remaining nodes exhibit imitation behavior governed by their rationality (within a game-theoretic framework). Our results reveal that the spreading dynamics are determined by the payoff of adoption. For positive payoffs, increasing the density of highly rational nodes can promote the adoption process, accompanied by a hybrid phase transition. The degree of rationality can regulate the spreading speed, with less rational imitators slowing down the spread. We further find that the results are the opposite for negative payoffs of adoption. This model may provide valuable insights into understanding the complex dynamics of social contagion phenomena in real-world social networks.
{"title":"Social contagion under hybrid interactions","authors":"Xincheng Shu, Man Yang, Zhongyuan Ruan, Qi Xuan","doi":"arxiv-2408.05050","DOIUrl":"https://doi.org/arxiv-2408.05050","url":null,"abstract":"Threshold-driven models and game theory are two fundamental paradigms for\u0000describing human interactions in social systems. However, in mimicking social\u0000contagion processes, models that simultaneously incorporate these two\u0000mechanisms have been largely overlooked. Here, we study a general model that\u0000integrates hybrid interaction forms by assuming that a part of nodes in a\u0000network are driven by the threshold mechanism, while the remaining nodes\u0000exhibit imitation behavior governed by their rationality (under the\u0000game-theoretic framework). Our results reveal that the spreading dynamics are\u0000determined by the payoff of adoption. For positive payoffs, increasing the\u0000density of highly rational nodes can promote the adoption process, accompanied\u0000by a hybrid phase transition. The degree of rationality can regulate the\u0000spreading speed, with less rational imitators slowing down the spread. We\u0000further find that the results are opposite for negative payoffs of adoption.\u0000This model may provide valuable insights into understanding the complex\u0000dynamics of social contagion phenomena in real-world social networks.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141946039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A social interaction (a so-called higher-order event or interaction) can be regarded as the activation of the hyperlink among the corresponding individuals. Social interactions can thus be represented as higher-order temporal networks, which record the higher-order events occurring at each time step. The prediction of higher-order interactions is usually overlooked in traditional temporal network prediction methods, where a higher-order interaction is regarded as a set of pairwise interactions. Predicting future higher-order interactions is crucial for forecasting and mitigating the spread of information, epidemics, and opinions on higher-order social contact networks. In this paper, we propose novel memory-based models for higher-order temporal network prediction. Using these models, we aim to predict the higher-order temporal network one time step ahead, based on the network observed in the past. Importantly, we also intend to understand which network properties and which types of previous interactions enable the prediction. The design and performance analysis of these models are supported by our analysis of the memory properties of networks, e.g., the similarity of the network over time and the activity of a hyperlink over time. Our models assume that a target hyperlink's future activity (active or not) depends on the past activity of the target link and of all, or selected types of, hyperlinks that overlap with the target. We then compare the performance of both models with a baseline utilizing a pairwise temporal network prediction method. In eight real-world networks, we find that both models consistently outperform the baseline, and the refined model tends to perform best. Our models also reveal how past interactions of the target hyperlink, and of different types of hyperlinks that overlap with the target, contribute to the prediction of the target's future activity.
{"title":"Higher-Order Temporal Network Prediction and Interpretation","authors":"H. A. Bart Peters, Alberto Ceria, Huijuan Wang","doi":"arxiv-2408.05165","DOIUrl":"https://doi.org/arxiv-2408.05165","url":null,"abstract":"A social interaction (so-called higher-order event/interaction) can be\u0000regarded as the activation of the hyperlink among the corresponding\u0000individuals. Social interactions can be, thus, represented as higher-order\u0000temporal networks, that record the higher-order events occurring at each time\u0000step over time. The prediction of higher-order interactions is usually\u0000overlooked in traditional temporal network prediction methods, where a\u0000higher-order interaction is regarded as a set of pairwise interactions. The\u0000prediction of future higher-order interactions is crucial to forecast and\u0000mitigate the spread the information, epidemics and opinion on higher-order\u0000social contact networks. In this paper, we propose novel memory-based models\u0000for higher-order temporal network prediction. By using these models, we aim to\u0000predict the higher-order temporal network one time step ahead, based on the\u0000network observed in the past. Importantly, we also intent to understand what\u0000network properties and which types of previous interactions enable the\u0000prediction. The design and performance analysis of these models are supported\u0000by our analysis of the memory property of networks, e.g., similarity of the\u0000network and activity of a hyperlink over time respectively. Our models assume\u0000that a target hyperlink's future activity (active or not) depends the past\u0000activity of the target link and of all or selected types of hyperlinks that\u0000overlap with the target. We then compare the performance of both models with a\u0000baseline utilizing a pairwise temporal network prediction method. In eight\u0000real-world networks, we find that both models consistently outperform the\u0000baseline and the refined model tends to perform the best. Our models also\u0000reveal how past interactions of the target hyperlink and different types of\u0000hyperlinks that overlap with the target contribute to the prediction of the\u0000target's future activity.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141946040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we investigate the transition between layer-localized and delocalized regimes in a general contact-based social contagion model on multiplex networks. We begin by analyzing the layer-localization to delocalization transition through the inverse participation ratio (IPR). Utilizing perturbation analysis, we derive a new analytical approximation for the transition point and an expression for the IPR in the non-dominant layer within the localized regime. Additionally, we examine the transition from a non-dominant to a dominant regime, providing an analytical expression for the transition point. These transitions are further explored and validated through dynamical simulations.
{"title":"Eigenvector Localization and Universal Regime Transitions in Multiplex Networks: A Perturbative Approach","authors":"Joan Hernàndez Tey, Emanuele Cozzo","doi":"arxiv-2408.04784","DOIUrl":"https://doi.org/arxiv-2408.04784","url":null,"abstract":"In this work, we investigate the transition between layer-localized and\u0000delocalized regimes in a general contact-based social contagion model on\u0000multiplex networks. We begin by analyzing the layer-localization to\u0000delocalization transition through the inverse participation ratio (IPR).\u0000Utilizing perturbation analysis, we derive a new analytical approximation for\u0000the transition point and an expression for the IPR in the non-dominant layer\u0000within the localized regime. Additionally, we examine the transition from a\u0000non-dominant to a dominant regime, providing an analytical expression for the\u0000transition point. These transitions are further explored and validated through\u0000dynamical simulations.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141946041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Here we show that "exposure segregation" - the degree to which individuals of one group are exposed to individuals of another in day-to-day mobility - is dependent on the structure of cities, and the importance of downtowns in particular. Recent work uses aggregated data to claim that the location of amenities can inhibit or facilitate interactions between groups: if a city is residentially segregated, as many American cities are, then amenities between segregated communities should encourage them to mix. We show that the relationship between "bridging" amenities and socio-economic mixing breaks down when we examine the amenities themselves, rather than the urban aggregates. For example, restaurants with locations that suggest low expected mixing do not, much of the time, have low mixing: there is only a weak correlation between bridging and mixing at the level of the restaurant, despite a strong correlation at the level of the supermarket. This is because downtowns - and the bundle of amenities that define them - tend not to be situated in bridge areas but play an important role in drawing diverse groups together.
{"title":"The role of central places in exposure segregation","authors":"Andrew Renninger, Mateo Neira, Elsa Arcaute","doi":"arxiv-2408.04373","DOIUrl":"https://doi.org/arxiv-2408.04373","url":null,"abstract":"Here we show that \"exposure segregation\" - the degree to which individuals of\u0000one group are exposed to individuals of another in day-to-day mobility - is\u0000dependent on the structure of cities, and the importance of downtowns in\u0000particular. Recent work uses aggregated data to claim that the location of\u0000amenities can inhibit or facilitate interactions between groups: if a city is\u0000residentially segregated, as many American cities are, then amenities between\u0000segregated communities should encourage them to mix. We show that the\u0000relationship between \"bridging\" amenities and socio-economic mixing breaks down\u0000when we examine the amenities themselves, rather than the urban aggregates. For\u0000example, restaurants with locations that suggest low expected mixing do not,\u0000much of the time, have low mixing: there is only a weak correlation between\u0000bridging and mixing at the level of the restaurant, despite a strong\u0000correlation at the level of the supermarket. This is because downtowns - and\u0000the bundle of amenities that define them - tend not to be situated in bridge\u0000areas but play an important role in drawing diverse groups together.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141946042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anna Kravchenko, Andrey A. Bagrov, Mikhail I. Katsnelson, Veronica Dudarev
While intuitive for humans, the concept of visual complexity is hard to define and quantify formally. We suggest adopting the multi-scale structural complexity (MSSC) measure, an approach that defines the structural complexity of an object as the degree of dissimilarity between distinct scales in its hierarchical organization. In this work, we apply MSSC to the case of visual stimuli, using an open dataset of images with subjective complexity scores obtained from human participants (SAVOIAS). We demonstrate that MSSC correlates with subjective complexity on par with other computational complexity measures, while being more intuitive by definition, consistent across categories of images, and easier to compute. We discuss the objective and subjective elements inherently present in human perception of complexity and the domains where the two are more likely to diverge. We show how the multi-scale nature of MSSC allows further investigation of complexity as it is perceived by humans.
{"title":"Multi-scale structural complexity as a quantitative measure of visual complexity","authors":"Anna Kravchenko, Andrey A. Bagrov, Mikhail I. Katsnelson, Veronica Dudarev","doi":"arxiv-2408.04076","DOIUrl":"https://doi.org/arxiv-2408.04076","url":null,"abstract":"While intuitive for humans, the concept of visual complexity is hard to\u0000define and quantify formally. We suggest adopting the multi-scale structural\u0000complexity (MSSC) measure, an approach that defines structural complexity of an\u0000object as the amount of dissimilarities between distinct scales in its\u0000hierarchical organization. In this work, we apply MSSC to the case of visual\u0000stimuli, using an open dataset of images with subjective complexity scores\u0000obtained from human participants (SAVOIAS). We demonstrate that MSSC correlates\u0000with subjective complexity on par with other computational complexity measures,\u0000while being more intuitive by definition, consistent across categories of\u0000images, and easier to compute. We discuss objective and subjective elements\u0000inherently present in human perception of complexity and the domains where the\u0000two are more likely to diverge. We show how the multi-scale nature of MSSC\u0000allows further investigation of complexity as it is perceived by humans.","PeriodicalId":501043,"journal":{"name":"arXiv - PHYS - Physics and Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141945871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}