Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.009
Lanchao Liu, Zhu Han, H. Poor, Shuguang Cui
Big data processing for smart grid security
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.004
Mingyi Hong, Wei-Cheng Liao, Ruoyu Sun, Z. Luo
This chapter proposes the use of modern first-order large-scale optimization techniques to manage a cloud-based, densely deployed next-generation wireless network. In the first part of the chapter we survey a few popular first-order methods for large-scale optimization, including the block coordinate descent (BCD) method, the block successive upper-bound minimization (BSUM) method, and the alternating direction method of multipliers (ADMM). In the second part of the chapter, we show that many difficult problems in managing large wireless networks can be solved efficiently, and in a parallel manner, by modern first-order optimization methods. Extensive numerical results are provided to demonstrate the benefit of the proposed approach.
Disciplines: Signal Processing | Systems and Communications | Systems Engineering
Comments: This is a chapter published as Mingyi Hong, Wei-Cheng Liao, Ruoyu Sun, and Zhi-Quan Luo, "Optimization Algorithms for Big Data with Application in Wireless Networks," in Big Data over Networks, ed. Shuguang Cui, Alfred O. Hero III, Zhi-Quan Luo, and Jose M. F. Moura (Cambridge: Cambridge University Press, 2016), pp. 66–100. Posted with permission. This book chapter is available at the Iowa State University Digital Repository: https://lib.dr.iastate.edu/imse_pubs/171
Optimization algorithms for big data with application in wireless networks
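As a concrete illustration of the block-structured methods the chapter surveys, here is a minimal sketch of block coordinate descent (BCD) on a least-squares problem. The block partition, iteration count, and problem sizes are illustrative choices, not drawn from the chapter.

```python
import numpy as np

def block_coordinate_descent(A, b, n_blocks=4, n_iters=100):
    """Minimize ||Ax - b||^2 by cyclically updating one block of
    coordinates at a time while holding the other blocks fixed."""
    n = A.shape[1]
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    for _ in range(n_iters):
        for idx in blocks:
            A_i = A[:, idx]
            # Residual with the current block's own contribution removed.
            r = b - A @ x + A_i @ x[idx]
            # Exact minimization over this block: a small least-squares solve.
            x[idx] = np.linalg.lstsq(A_i, r, rcond=None)[0]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
x_true = rng.standard_normal(8)
b = A @ x_true
x_hat = block_coordinate_descent(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

Each block update only needs the data touching that block, which is what makes BCD-style methods amenable to the parallel, large-scale setting the chapter targets.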
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.014
Zhe-Hong Gan, Xin Yuan, Ricardo Henao, E. Tsalik, L. Carin
Inspired by the problem of inferring gene networks associated with the host response to infectious diseases, a new framework for discriminative factor models is developed. Bayesian shrinkage priors are employed to impose (near) sparsity on the factor loadings, while non-parametric techniques are used to infer the number of factors needed to represent the data. Two discriminative Bayesian loss functions are investigated: the logistic log-loss and the max-margin hinge loss. Efficient mean-field variational Bayesian inference and Gibbs sampling are implemented. To address large-scale datasets, an online version of variational Bayes is also developed. Experimental results on two real-world microarray-based gene expression datasets show that the proposed framework achieves comparatively superior classification performance, with model interpretation delivered via pathway association analysis.
Inference of gene networks associated with the host response to infectious disease
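The (near) sparsity that shrinkage priors impose on factor loadings can be illustrated with a simplified stand-in: the MAP estimate under a Laplace prior, which reduces to elementwise soft-thresholding. This is a toy analogue, not the chapter's actual inference scheme, and the matrix and threshold are made-up values.

```python
import numpy as np

def soft_threshold(W, lam):
    """MAP update for loadings under a Laplace shrinkage prior:
    every entry is shrunk toward zero, and small entries become exactly zero."""
    return np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)

# Toy 2x2 loading matrix: two entries are strong, two are near-zero noise.
W = np.array([[0.9, -0.05],
              [0.02, -1.3]])
W_sparse = soft_threshold(W, lam=0.1)
```

The strong loadings survive (slightly shrunk) while the weak ones are zeroed, which is the qualitative behavior that makes shrinkage-based factor loadings interpretable as sparse gene networks.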
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.016
A. Hero, B. Rajaratnam
Continuing advances in high-throughput mRNA probing, gene sequencing, and microscopic imaging technology are producing a wealth of biomarker data on many different living organisms and conditions. Scientists hope that increasing amounts of relevant data will eventually lead to a better understanding of the network of interactions among the thousands of molecules that regulate these organisms. Progress in understanding the biological science has thus become increasingly dependent on progress in understanding the data science. Data mining tools have been of particular relevance, since they can sometimes be used to effectively separate the “wheat” from the “chaff”, winnowing the massive amount of data down to a few important data dimensions. Correlation mining is a data mining tool that is particularly useful for probing statistical correlations between biomarkers and recovering properties of their correlation networks. However, since the number of correlations between biomarkers is quadratically larger than the number of biomarkers, the scalability of correlation mining becomes an issue in the big data setting. Furthermore, there are phase transitions governing correlation mining discoveries that must be understood in order for those discoveries to be reliable and of high confidence. This is especially important at big data scales, where the number of samples is fixed and the number of biomarkers becomes unbounded, a sampling regime referred to as the “purely high-dimensional setting.” In this chapter, we discuss some of the main advances and challenges in correlation mining in the context of large-scale biomolecular networks, with a focus on medicine. A new correlation mining application is introduced: discovery of correlation sign flips between edges in a pair of correlation or partial correlation networks, where the two networks could respectively correspond to a disease (or treatment) group and a control group.
This paper appears as a chapter in the book Big Data over Networks from Cambridge University Press (ISBN: 9781107099005).
Large-scale correlation mining for biomolecular network discovery
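The sign-flip discovery described above can be sketched on toy data: build a thresholded sample correlation network for each group, then flag edges whose correlation sign differs between the two networks. The threshold, sample size, and three-"gene" setup below are illustrative assumptions, not the chapter's procedure.

```python
import numpy as np

def correlation_network(X, thresh=0.5):
    """Signed adjacency from a sample correlation matrix: +1 / -1 for
    strong positive / negative correlation, 0 otherwise (diagonal zeroed)."""
    R = np.corrcoef(X, rowvar=False)
    A = np.where(np.abs(R) >= thresh, np.sign(R), 0.0)
    np.fill_diagonal(A, 0.0)
    return A

def sign_flips(A_disease, A_control):
    """Edges present with opposite sign in the two networks."""
    return np.argwhere(A_disease * A_control == -1)

rng = np.random.default_rng(1)
z = rng.standard_normal(200)
# Toy groups: gene 1 tracks gene 0 in controls but anti-tracks it in disease;
# gene 2 is independent in both groups.
control = np.column_stack([z, z + 0.1 * rng.standard_normal(200),
                           rng.standard_normal(200)])
disease = np.column_stack([z, -z + 0.1 * rng.standard_normal(200),
                           rng.standard_normal(200)])
flips = sign_flips(correlation_network(disease), correlation_network(control))
```

Here `flips` recovers the gene-0/gene-1 edge as the sign-flipped pair; at biomolecular scale the same comparison must cope with the quadratic number of candidate edges, which is exactly the scalability issue the chapter addresses.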
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.008
Suzhi Bi, Rui Zhang, Z. Ding, Shuguang Cui
The fast-growing wireless data service is pushing our communication network's processing power to its limit. The ever-increasing data traffic poses imminent challenges to all aspects of wireless system design, such as spectrum efficiency, computing capabilities, and backhaul link capacity. At the same time, the massive amount of mobile data traffic may also lead to potential system performance gains that are otherwise not achievable with conventional wireless signal processing models. In this chapter, we investigate the challenges and opportunities in the design of scalable wireless systems to embrace such a “big data” era. We review state-of-the-art techniques in wireless big data processing and study the potential implementations of key technologies in future wireless systems. We show that proper wireless system designs could harness, and in fact take advantage of, the mobile big data traffic.
Introduction
After decades of rapid growth in data services, modern society has entered the so-called “big data” era, in which the mobile network is a major contributor. As of 2013, the global penetration of mobile subscribers had reached 92%, producing a staggering 6,800 petabytes (6.8 × 10^18 bytes) of mobile data worldwide [1]. The surge of mobile data traffic in recent years is mainly attributed to the popularity of smartphones, mobile tablets, and other smart mobile devices. Mobile broadband applications such as web surfing, social networking, and online video are now ubiquitously accessible from these devices, without limitations of time or location. A recent survey shows that smartphone users currently account for only 25%–30% of all mobile subscribers. However, that figure is expected to double in the next three years and continue to grow, given the considerable room for further uptake in the smartphone market. With a compound annual growth rate of 45%, mobile data traffic is expected to increase tenfold from 2013 to 2019. In addition to the vast amount of wireless data, wireless signal processing often amplifies the system's big data effect in the pursuit of higher performance gains. To combat the fading channel, diversity schemes, especially MIMO antenna technologies, are extensively used in both mobile terminals (MTs) and base stations (BSs). Numerous schemes with co-located and distributed antennas have been proposed over the years to increase the data rate or extend the cellular coverage.
Big data aware wireless communication: challenges and opportunities
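The tenfold projection is consistent with compounding the quoted 45% annual growth rate over the six years from 2013 to 2019, as a quick check shows:

```python
# 45% compound annual growth rate over 2019 - 2013 = 6 years.
cagr = 0.45
years = 2019 - 2013
growth_factor = (1 + cagr) ** years
print(round(growth_factor, 1))  # ≈ 9.3, i.e. roughly a tenfold increase
```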
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.005
J. Pang, Meisam Razaviyayn
This chapter presents a unified framework for the design and analysis of distributed algorithms for computing first-order stationary solutions of non-cooperative games with non-differentiable player objective functions. These games are closely associated with multi-agent optimization, wherein a large number of selfish players compete non-cooperatively to optimize their individual objectives under various constraints. Unlike centralized algorithms, which require a certain system mechanism to coordinate the players’ actions, distributed algorithms have the advantage that the players, either individually or in subgroups, can each make their best responses without full information of their rivals’ actions. These distributed algorithms are by nature particularly suited for solving huge-size games, where the large number of players makes coordinating them almost impossible. The distributed algorithms are distinguished by several features: parallel versus sequential implementations, scheduled versus randomized player selections, synchronized versus asynchronous transfer of information, and individual versus multiple player updates. Covering many variations of distributed algorithms, the unified algorithm employs convex surrogate functions to handle nonsmooth nonconvex functions, and a (possibly multi-valued) choice function to dictate the players’ turns to update their strategies. There are two general approaches to establishing the convergence of such algorithms, contraction-based versus potential-based, each requiring different properties of the players’ objective functions. We present the details of the convergence analysis based on these two approaches and discuss randomized extensions of the algorithms that require less coordination and hence are more suitable for big data problems.
Introduction
Introduced by John von Neumann [1], modern-day game theory has developed into a very fruitful research discipline with applications in many fields. There are two major classifications of a game: cooperative versus non-cooperative. This chapter pertains to one aspect of non-cooperative games with potential applications to big data, namely, the computation of a “solution” to such a game by a distributed algorithm. In a (basic) non-cooperative game, there are finitely many selfish players/agents, each optimizing a rival-dependent objective by choosing feasible strategies satisfying certain private constraints. Providing a solution concept for such a game, a Nash equilibrium (NE) [2, 3] is by definition a tuple of strategies, one for each player, such that no player will be better off by unilaterally deviating from his/her equilibrium strategy while the rivals keep executing their equilibrium strategies.
A unified distributed algorithm for non-cooperative games
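The best-response behavior underlying such distributed algorithms can be sketched with a two-player quadratic game, where sequential (Gauss–Seidel) best responses converge to the Nash equilibrium. The payoff form and coupling constant below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def best_response_dynamics(a, b=0.5, n_iters=100):
    """Player i minimizes (x_i - a_i)^2 + b * x_i * x_j given the rival's
    current strategy, so BR_i(x_j) = a_i - (b / 2) * x_j.  Sequential
    best responses converge to the Nash equilibrium when |b / 2| < 1."""
    x = np.zeros(2)
    for _ in range(n_iters):
        # Player 0, then player 1, best-responds to the rival's current play.
        x[0] = a[0] - (b / 2.0) * x[1]
        x[1] = a[1] - (b / 2.0) * x[0]
    return x

x_ne = best_response_dynamics(np.array([1.0, 2.0]))
# At equilibrium neither player gains by deviating unilaterally:
# x_ne satisfies x_i = a_i - 0.25 * x_j for both players.
```

Note that each update uses only the rival's current strategy, not any central coordination, which is the property that lets such schemes scale to games with very many players.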
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.003
S. Chouvardas, Y. Kopsinis, S. Theodoridis
Sparsity-aware distributed learning
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.007
Chen Gong, Zhengyuan Xu, Xiaodong Wang
Distributed big data storage in optical wireless networks
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.013
Xiaoning Qian, Byung-Jun Yoon, E. Dougherty
A fundamental problem of biology is to construct gene regulatory networks that characterize the operational interaction among genes. The term “gene” is used generically because such networks could involve gene products. Numerous inference algorithms have been proposed, and the validity, or accuracy, of such algorithms is of central concern. Given data generated by a ground-truth network, how well does a model network inferred from the data match the data-generating network? This chapter discusses a general paradigm for inference validation based on defining a distance between networks and judging validity according to the distance between the original network and the inferred network. Such a distance will typically be based on some network characteristic, such as connectivity, rule structure, or steady-state distribution. It can also be based on some objective for which the model network is being employed, such as deriving an intervention strategy to apply to the original network with the aim of correcting aberrant behavior. Rather than assuming that a single network is inferred, one can take the perspective that the inference procedure leads to an “uncertainty class” of networks, to which the ground-truth network belongs. In this case, we define a measure of uncertainty in terms of the cost that uncertainty imposes on the objective for which the model network is to be employed; the example discussed in this chapter involves intervention in the yeast cell cycle network.
Inference of gene regulatory networks: validation and uncertainty
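A minimal instance of the distance-based validation paradigm is the normalized Hamming distance between adjacency matrices: the fraction of gene pairs whose edge status differs between the ground-truth and inferred networks. The two 3-gene networks below are toy examples for illustration.

```python
import numpy as np

def hamming_network_distance(A, B):
    """Fraction of off-diagonal (ordered) gene pairs whose edge
    differs between the two networks: a connectivity-based distance."""
    n = A.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    return np.mean(A[off_diag] != B[off_diag])

A_true = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]])
A_inferred = np.array([[0, 1, 1],
                       [0, 0, 1],
                       [0, 0, 0]])
d = hamming_network_distance(A_true, A_inferred)  # 2 of the 6 possible edges differ
```

The same template accommodates the chapter's other distances, e.g. replacing edge disagreement with a distance between the two networks' steady-state distributions or between the costs of derived intervention strategies.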
Pub Date: 1900-01-01 | DOI: 10.1017/CBO9781316162750.006
G. Ananthanarayanan, Ishai Menache
Big data analytics systems