Overcoming vaccine hesitancy by multiplex social network targeting: an analysis of targeting algorithms and implications
Pub Date: 2023-01-01 | Epub Date: 2023-09-21 | DOI: 10.1007/s41109-023-00595-y | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10514145/pdf/
Marzena Fügenschuh, Feng Fu
Incorporating social factors into disease prevention and control efforts is an important undertaking of behavioral epidemiology. The interplay between disease transmission and human health behaviors, such as vaccine uptake, results in complex dynamics of biological and social contagions. Maximizing the adoption of interventions via network-based targeting algorithms that harness the power of social contagion for behavior and attitude change remains largely a challenge. Here we address this issue by considering a multiplex network setting. Individuals are situated on two network layers: the disease transmission layer and the peer influence layer. The disease spreads through direct close contacts, while vaccine views and uptake behaviors spread interpersonally within a potentially virtual network. The results of our comprehensive simulations show that network-based targeting with pro-vaccine supporters as initial seeds significantly influences vaccine adoption rates and reduces the extent of an epidemic outbreak. Network-targeting interventions are much more effective when they select individuals who hold central positions in the opinion network rather than individuals grouped in a community or connected professionally. Our findings provide insight into network-based interventions to increase vaccine confidence and demand during an ongoing epidemic.
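As a rough illustration of the targeting idea described above, the sketch below (not the authors' code) builds two network layers over the same node set and seeds pro-vaccine supporters at the most degree-central nodes of the opinion layer; the graph generators, the seeding budget, and the use of degree centrality are all assumptions made for illustration.

```python
# Minimal sketch: two-layer (multiplex) setup with centrality-based seeding
# in the opinion layer. All parameters are illustrative assumptions.
import networkx as nx

n_nodes = 1000
contact_layer = nx.watts_strogatz_graph(n_nodes, k=10, p=0.1, seed=1)   # disease transmission layer
opinion_layer = nx.barabasi_albert_graph(n_nodes, m=5, seed=2)          # peer influence layer

n_seeds = 50  # hypothetical seeding budget
centrality = nx.degree_centrality(opinion_layer)
seeds = sorted(centrality, key=centrality.get, reverse=True)[:n_seeds]  # pro-vaccine seeds
print(f"Top-centrality seeds in the opinion layer: {seeds[:10]}")
```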
A methodology framework for bipartite network modeling
Chin Ying Liew, Jane Labadin, Woon Chee Kok, Monday Okpoto Eze
Pub Date: 2023-01-01 | DOI: 10.1007/s41109-023-00533-y | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9844172/pdf/
Graph-theoretic studies employing the bipartite network approach mostly focus on surveying the statistical properties of the structure and behavior of network systems within the domain of complex network analysis. They aim to provide big-picture insights into a networked system by examining the dynamic interactions and relationships among its vertices. Nonetheless, such studies rarely incorporate the features of individual vertices or capture the dynamic interplay of the heterogeneous local rules governing each of them, and a methodology for achieving this is hard to find. Consequently, this study proposes a methodology framework that accounts for the influence of each node's heterogeneous features on overall network behavior when modeling real-world bipartite network systems. The proposed framework consists of three main stages, with principal processes detailed in each stage, and three libraries of techniques to guide the modeling activities. It is iterative and process-oriented in nature and allows future network expansion. Two case studies employing this framework, one on communicable disease in epidemiology and one on habitat suitability in ecology, are also presented. The results suggest that the methodology could serve as a generic framework for advancing the state of the art of the bipartite network approach.
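For readers unfamiliar with the object being modeled, the minimal sketch below shows a bipartite network whose two vertex classes carry heterogeneous attributes, which is the kind of structure the proposed framework addresses; the node names and attributes are hypothetical and not taken from the paper's case studies.

```python
# Minimal sketch: a bipartite graph with heterogeneous node attributes.
# Names and attribute values are purely illustrative.
import networkx as nx

B = nx.Graph()
B.add_nodes_from(["h1", "h2"], bipartite=0, species="rodent", density=0.4)     # e.g., hosts
B.add_nodes_from(["p1", "p2", "p3"], bipartite=1, landcover="wetland")          # e.g., habitat patches
B.add_edges_from([("h1", "p1"), ("h1", "p2"), ("h2", "p3")])

hosts = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
print(nx.algorithms.bipartite.density(B, hosts))  # density of the bipartite graph
```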
Integrated Twitter analysis to distinguish systems thinkers at various levels: a case study of COVID-19
Pub Date: 2023-01-01 | DOI: 10.1007/s41109-022-00520-9 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9936930/pdf/
Harun Pirim, Morteza Nagahi, Oumaima Larif, Mohammad Nagahisarchoghaei, Raed Jaradat
Systems thinking (ST) has become essential for practitioners and experts dealing with turbulent and complex environments. The Twitter medium harbors social capital, including systems thinkers; however, few studies in the extant literature investigate how experts' systems thinking skills, if possible at all, can be revealed through Twitter analysis. This study aims to reveal the systems thinking levels of experts from their Twitter accounts represented as a network. Latent Twitter network clusters are unraveled following a centrality analysis of the experts' follower networks, interpreted in terms of systems thinking dimensions. COVID-19 emerges as a relevant case study for investigating the relationship between COVID-19 experts' Twitter network and their systems thinking capabilities. A sample of 55 trusted expert Twitter accounts related to COVID-19 was selected for the current study based on lists from Forbes, Fortune, and Bustle. The Twitter network was constructed from features extracted from these accounts. Community detection reveals three distinct groups of experts. To relate systems thinking qualities to each group, systems thinking dimensions are matched with follower network characteristics such as node-level metrics and centrality measures, including degree, betweenness, closeness, and eigenvector centrality. Comparison of the 55 expert follower network characteristics elucidates three clusters with significant differences in centrality scores and node-level metrics. The clusters with higher, medium, and lower scores can be classified as the Twitter accounts of holistic thinkers, middle thinkers, and reductionist thinkers, respectively. In conclusion, systems thinking capabilities can be traced through unique network patterns in relation to the follower network characteristics associated with systems thinking dimensions.
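A minimal sketch of the kind of analysis described, assuming an undirected follower graph is already in hand: compute the centrality measures named above for each account and group the accounts into three clusters. The example graph, feature set, and use of k-means are illustrative stand-ins, not the paper's pipeline.

```python
# Minimal sketch: node-level centrality features plus a 3-way clustering.
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

G = nx.karate_club_graph()  # placeholder for the 55-account follower network

features = np.column_stack([
    list(nx.degree_centrality(G).values()),
    list(nx.betweenness_centrality(G).values()),
    list(nx.closeness_centrality(G).values()),
    list(nx.eigenvector_centrality(G, max_iter=1000).values()),
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(dict(zip(G.nodes(), labels)))  # account -> cluster label
```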
Surrogate explanations for role discovery on graphs
Pub Date: 2023-01-01 | Epub Date: 2023-05-26 | DOI: 10.1007/s41109-023-00551-w | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10219885/pdf/
Eoghan Cunningham, Derek Greene
Role discovery is the task of dividing the set of nodes of a graph into classes of structurally similar roles. Modern strategies for role discovery typically rely on graph embedding techniques, which are capable of recognising complex graph structures when reducing nodes to dense vector representations. However, when working with large, real-world networks, it is difficult to interpret or validate a set of roles identified by these methods. In this work, motivated by advancements in the field of explainable artificial intelligence, we propose surrogate explanation for role discovery, a new framework for interpreting role assignments on large graphs using small subgraph structures known as graphlets. We demonstrate our framework on a small synthetic graph with prescribed structure before applying it to a larger real-world network. In the second case, a large, multidisciplinary citation network, we successfully identify a number of important citation patterns or structures which reflect interdisciplinary research.
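The following sketch illustrates the surrogate-explanation pattern in a simplified form: roles are assigned from node descriptors, and a shallow decision tree is then fitted to explain the assignment. In the paper, roles come from graph embeddings and the explanatory features are graphlet counts; here both are replaced by simple structural stand-ins (degree, clustering coefficient, triangle count).

```python
# Minimal sketch: role assignment followed by an interpretable surrogate model.
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

G = nx.les_miserables_graph()
nodes = list(G)

# Stand-in "embedding": a few structural descriptors per node.
emb = np.column_stack([
    [G.degree(v) for v in nodes],
    list(nx.clustering(G).values()),
    list(nx.triangles(G).values()),
])
roles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(emb)

# Surrogate: a shallow tree mapping graphlet-like features to the roles.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(emb, roles)
print(export_text(surrogate, feature_names=["degree", "clustering", "triangles"]))
```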
BuB: a builder-booster model for link prediction on knowledge graphs
Pub Date: 2023-01-01 | Epub Date: 2023-05-23 | DOI: 10.1007/s41109-023-00549-4 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10204686/pdf/
Mohammad Ali Soltanshahi, Babak Teimourpour, Hadi Zare
Link prediction (LP) has many applications in various fields. Much research has been carried out in the LP field, and one of the most critical problems in LP models is handling one-to-many and many-to-many relationships. To the best of our knowledge, there is no research on discriminative fine-tuning (DFT) in this context. DFT means assigning different learning rates to different parts of the model. We introduce the BuB model, which has two parts: the Relationship Builder and the Relationship Booster. The Relationship Builder is responsible for building the relationship, and the Relationship Booster is responsible for strengthening it. By writing the ranking function in polar coordinates and using the nth root, our proposed method provides solutions for handling one-to-many and many-to-many relationships and enlarges the space of optimal solutions. We try to increase the importance of the Builder part by controlling its learning rate using the DFT concept. The experimental results show that the proposed method outperforms state-of-the-art methods on benchmark datasets.
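The DFT idea as described, different learning rates for different parts of the model, maps directly onto optimizer parameter groups. The PyTorch sketch below shows that mechanism only; the module shapes and learning rates are illustrative and not taken from the paper.

```python
# Minimal sketch of discriminative fine-tuning via optimizer parameter groups.
import torch
import torch.nn as nn

class BuB(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.builder = nn.Linear(dim, dim)   # builds the relationship
        self.booster = nn.Linear(dim, dim)   # strengthens the relationship

    def forward(self, x):
        return self.booster(torch.relu(self.builder(x)))

model = BuB()
optimizer = torch.optim.Adam([
    {"params": model.builder.parameters(), "lr": 1e-3},  # higher rate emphasizes the Builder
    {"params": model.booster.parameters(), "lr": 1e-4},
])
```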
Source identification via contact tracing in the presence of asymptomatic patients
Pub Date: 2023-01-01 | Epub Date: 2023-08-21 | DOI: 10.1007/s41109-023-00566-3 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10442312/pdf/
Gergely Ódor, Jana Vuckovic, Miguel-Angel Sanchez Ndoye, Patrick Thiran
Inferring the source of a diffusion in a large network of agents is a difficult but feasible task if a few agents act as sensors revealing the time at which they were hit by the diffusion. One of the main limitations of current source identification algorithms is that they assume full knowledge of the contact network, which is rarely the case, especially for epidemics, where the source is called patient zero. Inspired by recent implementations of contact tracing algorithms, we propose a new framework, which we call the Source Identification via Contact Tracing Framework (SICTF). In the SICTF, the source identification task starts at the time of the first hospitalization, and initially we have no knowledge of the contact network other than the identity of the first hospitalized agent. We may then explore the network by contact queries and obtain symptom onset times by test queries in an adaptive way, i.e., both contact and test queries can depend on the outcomes of previous queries. We also assume that some of the agents may be asymptomatic and therefore cannot reveal their symptom onset time. Our goal is to find patient zero with as few contact and test queries as possible. We implement two local search algorithms for the SICTF: the LS algorithm, recently proposed by Waniek et al. in a similar framework, is more data-efficient but can fail to find the true source if many asymptomatic agents are present, whereas the LS+ algorithm is more robust to asymptomatic agents. Through simulations we show that both LS and LS+ outperform previously proposed adaptive and non-adaptive source identification algorithms adapted to the SICTF, even though these baseline algorithms have full access to the contact network. Extending the theory of random exponential trees, we analytically approximate the source identification probability of the LS/LS+ algorithms and show that our analytic results match the simulations. Finally, we benchmark our algorithms on the Data-driven COVID-19 Simulator (DCS) developed by Lorch et al., marking the first time source identification algorithms have been tested on such a complex dataset.
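A heavily simplified sketch of the local-search idea behind LS (ignoring asymptomatic agents and query budgets): starting from the first hospitalized agent, repeatedly reveal the current candidate's contacts, test them for symptom onset times, and move to the neighbor with the earliest onset until no earlier neighbor exists. This is an assumption-laden reading of the approach, not the authors' implementation.

```python
# Minimal sketch: greedy local search toward earlier symptom-onset times.
import networkx as nx

def local_search_source(contacts: nx.Graph, onset_time: dict, first_hospitalized):
    current = first_hospitalized
    while True:
        # contact query: reveal neighbors; test query: reveal their onset times
        neighbors = list(contacts.neighbors(current))
        earlier = [v for v in neighbors
                   if onset_time.get(v, float("inf")) < onset_time[current]]
        if not earlier:
            return current  # local minimum of onset time = estimated patient zero
        current = min(earlier, key=onset_time.get)

# Toy example on a path graph where node 0 is the true source.
G = nx.path_graph(6)
onsets = {v: v for v in G}          # onset time grows with distance from node 0
print(local_search_source(G, onsets, first_hospitalized=4))  # -> 0
```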
Operationalizing anthropological theory: four techniques to simplify networks of co-occurring ethnographic codes
Pub Date: 2023-01-01 | Epub Date: 2023-05-05 | DOI: 10.1007/s41109-023-00547-6 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10161994/pdf/
Alberto Cottica, Veronica Davidov, Magdalena Góralska, Jan Kubik, Guy Melançon, Richard Mole, Bruno Pinaud, Wojciech Szymański
The use of data and algorithms in the social sciences allows for exciting progress, but also poses epistemological challenges. Operations that appear innocent and purely technical may profoundly influence final results. Researchers working with data can make their process less arbitrary and more accountable by making theoretically grounded methodological choices. We apply this approach to the problem of simplifying networks representing ethnographic corpora, in the interest of visual interpretation. Network nodes represent ethnographic codes, and their edges the co-occurrence of codes in a corpus. We introduce and discuss four techniques to simplify such networks and facilitate visual analysis. We show how the mathematical characteristics of each one are aligned with an identifiable approach in sociology or anthropology: structuralism and post-structuralism; identifying the central concepts in a discourse; and discovering hegemonic and counter-hegemonic clusters of meaning. We then provide an example of how the four techniques complement each other in ethnographic analysis.
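As a concrete, if crude, example of the kind of network being simplified, the sketch below builds a co-occurrence network of ethnographic codes from hypothetical coded interviews and drops low-weight edges; this plain count threshold is only a stand-in for the four theoretically grounded techniques the paper develops.

```python
# Minimal sketch: code co-occurrence network plus a naive edge-weight filter.
import networkx as nx
from itertools import combinations

coded_interviews = [               # hypothetical ethnographic codes per document
    {"trust", "state", "family"},
    {"trust", "state", "market"},
    {"family", "ritual"},
    {"trust", "market"},
]

G = nx.Graph()
for codes in coded_interviews:
    for a, b in combinations(sorted(codes), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

min_weight = 2  # keep only code pairs that co-occur at least twice
H = G.edge_subgraph([(u, v) for u, v, d in G.edges(data=True) if d["weight"] >= min_weight])
print(list(H.edges(data=True)))
```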
The role of luck in the success of social media influencers
Pub Date: 2023-01-01 | Epub Date: 2023-07-25 | DOI: 10.1007/s41109-023-00573-4 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10368581/pdf/
Stefania Ionescu, Anikó Hannák, Nicolò Pagan
Motivation: Social media platforms centered around content creators (CCs) have seen rapid growth in the past decade. Currently, millions of CCs make livable incomes through platforms such as YouTube, TikTok, and Instagram. As such, similarly to the job market, it is important to ensure that the success and income of CCs (usually related to their follower counts) reflect the quality of their work. Since quality cannot be observed directly, two other factors govern the network-formation process: (a) the visibility of CCs (resulting from, e.g., recommender systems and moderation processes) and (b) the decision-making process of seekers (i.e., of users focused on finding CCs). Prior virtual experiments and empirical work seem contradictory regarding fairness: while the former suggest that the expected number of followers of CCs reflects their quality, the latter shows that quality does not perfectly predict success.
Results: Our paper extends prior models in order to bridge this gap between theoretical and empirical work. We (a) define a parameterized recommendation process that allocates visibility based on popularity biases, (b) define two metrics of individual fairness (ex-ante and ex-post), and (c) define a metric for seeker satisfaction. Through an analytical approach, we show that our process is an absorbing Markov chain in which exploring only the most popular CCs leads to lower expected times to absorption but higher chances of unfairness for CCs. While increasing exploration helps, doing so only guarantees fair outcomes for the highest- (and lowest-) quality CC. Simulations revealed that CCs and seekers prefer different algorithmic designs: CCs generally have higher chances of fairness with anti-popularity-biased recommendation processes, while seekers are more satisfied with popularity-biased recommendations. Altogether, our results suggest that while the exploration of low-popularity CCs is needed to improve fairness, platforms might not have an incentive to do so, and such interventions do not entirely prevent unfair outcomes.
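The absorbing-Markov-chain claim can be made concrete with the standard fundamental-matrix calculation: if Q collects the transition probabilities among transient states, then N = (I - Q)^{-1} gives expected visit counts and the row sums of N give the expected number of steps to absorption. The sketch below uses a purely illustrative two-state Q, not parameters from the paper.

```python
# Worked example: expected time to absorption from the fundamental matrix.
import numpy as np

Q = np.array([[0.6, 0.2],    # transient -> transient transition probabilities
              [0.3, 0.4]])
N = np.linalg.inv(np.eye(2) - Q)          # fundamental matrix N = (I - Q)^(-1)
expected_steps = N @ np.ones(2)           # expected steps to absorption per start state
print(expected_steps)                     # approx. [4.44, 3.89]
```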
GRANDPA: GeneRAtive network sampling using degree and property augmentation applied to the analysis of partially confidential healthcare networks
Pub Date: 2023-01-01 | Epub Date: 2023-05-11 | DOI: 10.1007/s41109-023-00548-5 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10173245/pdf/
Carly A Bobak, Yifan Zhao, Joshua J Levy, A James O'Malley
Protecting medical privacy can create obstacles to the analysis and distribution of healthcare graphs and the statistical inferences accompanying them. We propose a graph simulation model that generates networks using degree and property augmentation, and provide a flexible R package that allows users to create graphs that preserve vertex attribute relationships and approximately retain the topological properties observed in the original graph (e.g., community structure). We illustrate our proposed algorithm using a case study based on Zachary's karate network and a patient-sharing graph generated from Medicare claims data in 2019. In both cases, we find that community structure is preserved and that the normalized root mean square error between the cumulative degree distributions of the generated and original graphs is low (0.0508 and 0.0514, respectively).
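A small sketch of the reported comparison metric, under the assumption that "normalized RMSE between cumulative degree distributions" means the RMSE of the two degree CDFs divided by the range of the original CDF; this is not the package's code, and the generated graph here is just a degree-preserving random stand-in.

```python
# Minimal sketch: normalized RMSE between the degree CDFs of two graphs.
import networkx as nx
import numpy as np

def degree_cdf(G, max_degree):
    degs = np.array([d for _, d in G.degree()])
    return np.array([(degs <= k).mean() for k in range(max_degree + 1)])

original = nx.karate_club_graph()
generated = nx.expected_degree_graph([d for _, d in original.degree()], selfloops=False)

kmax = max(max(dict(original.degree()).values()), max(dict(generated.degree()).values()))
cdf_o, cdf_g = degree_cdf(original, kmax), degree_cdf(generated, kmax)
nrmse = np.sqrt(np.mean((cdf_o - cdf_g) ** 2)) / (cdf_o.max() - cdf_o.min())
print(round(float(nrmse), 4))
```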
Nudging cooperation among agents in an experimental social network
Pub Date: 2023-01-01 | Epub Date: 2023-09-12 | DOI: 10.1007/s41109-023-00588-x | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10497665/pdf/
Gorm Gruner Jensen, Martin Benedikt Busch, Marco Piovesan, Jan O Haerter
We investigate the development of cooperative behavior in networks over time. In our controlled laboratory experiment, subjects can cooperate by sending costly messages that contain valuable information for the receiver or other subjects in the network. Any message sent can increase the chance that subjects find the information they are looking for and consequently their profit. We find that cooperation emerges spontaneously and remains stable over time. In an additional treatment, we provide a non-binding suggestion about who to contact at the beginning of the experiment. We find that subjects partially follow our recommendation, and this increases their own and others' profit. Despite the removal of suggestions, subjects build long-lasting relationships with the suggested contacts.
Supplementary information: The online version contains supplementary material available at 10.1007/s41109-023-00588-x.