A method to find the volume of a sphere in the Lee metric, and its applications
Sagnik Bhattacharya, Adrish Banerjee
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9244935
We develop general techniques to bound the size of balls of a given radius r for q-ary discrete metrics, using the generating function of the metric and Sanov's theorem; the bound reduces to the known bound in the case of the Hamming metric and gives a new bound in the case of the Lee metric. We use these techniques to derive Hamming, Elias-Bassalygo and Gilbert-Varshamov bounds for the Lee metric.

Random access channel assignment on a collision erasure channel
Abhinanda Dutta, S. Weber
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9244960
Channel assignment for wireless radios employing random access arises in several contexts, including low-power wide area network (LPWAN) protocols such as LoRaWAN. This paper considers the assignment of a set of N radios to M available channels with the objective of maximizing the sum throughput. The difficulty lies in two facts: i) the radios connect to the access point (or gateway) over independent erasure channels, and ii) the radios are subject to collision, i.e., if two or more packets arrive at the access point on the same channel then all such packets "collide" and are lost. The problem is approached by defining lower and upper bounds on the throughput and then extremizing those bounds. Initial numerical results for M = 2 channels suggest that i) there is notable variation in sum throughput across problem instances, but ii) the impact of scheduling on the throughput for a given problem instance is relatively small.

Improve Robustness of Deep Neural Networks by Coding
Kunping Huang, Netanel Raviv, Siddhartha Jain, Pulakesh Upadhyaya, Jehoshua Bruck, P. Siegel, Anxiao Andrew Jiang
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9244998
Deep neural networks (DNNs) typically have many weights. When errors appear in their weights, which are usually stored in non-volatile memories, their performance can degrade significantly. We review two recently presented approaches that improve the robustness of DNNs in complementary ways. In the first approach, we use error-correcting codes as external redundancy to protect the weights from errors. A deep reinforcement learning algorithm is used to optimize the redundancy-performance tradeoff. In the second approach, internal redundancy is added to neurons via coding. It enables neurons to perform robust inference in noisy environments.

Spatial Correlation in Single-Carrier Massive MIMO Systems
Nader Beigiparast, G. M. Guvensen, E. Ayanoglu
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9244975
We present an analysis of a single-carrier massive MIMO system over the frequency-selective Gaussian multi-user channel, in both the uplink and downlink directions. We develop expressions for the achievable sum rate when there is spatial correlation among the antennas at the base station. It is known that the channel matched filter precoder (CMFP) performs best over a spatially uncorrelated downlink channel. However, we show that, over a spatially correlated downlink channel with two different correlation patterns and at high long-term average power, two other precoders perform better. For the uplink channel, part of the equivalent noise in the channel vanishes, and two conventional equalizers outperform the channel matched filter equalizer (CMFE). These results are verified for uniform linear and uniform planar arrays. For the latter, owing to the stronger correlation, the performance drop under a spatially correlated channel is larger, but so is the gain over the channel matched filter precoder or equalizer. In highly correlated cases, the achievable performance can be a significant multiple of that of the channel matched filter precoder or equalizer.

Age of Information in Multiple Sensing
Alireza Javani, Marwen Zorgui, Zhiying Wang
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9244999
Having timely and fresh knowledge about the current state of information sources is critical in a variety of applications. In particular, a status update may arrive at the destination much later than its generation time due to processing and communication delays. The freshness of the status update at the destination is captured by the notion of age of information. In this study, we first analyze a network with a single source, n servers, and a monitor (destination). The servers independently sense the source of information and send status updates to the monitor. We then extend our results to multiple independent sources of information in the presence of n servers. We assume that updates arrive at the servers according to Poisson processes. Each server sends its update to the monitor through a direct link, which is modeled as a queue, and the service time to transmit an update is an exponential random variable. We examine both homogeneous and heterogeneous service and arrival rates for the single-source case, and homogeneous arrival and service rates only for the multiple-source case. We derive a closed-form expression for the average age of information under a last-come first-served (LCFS) queue for a single source and an arbitrary number n of homogeneous servers. For n = 2, 3, we derive the explicit average age of information for arbitrary sources and homogeneous servers, and for a single source and heterogeneous servers. For n = 2, we find the optimal arrival rates given a fixed sum arrival rate and fixed service rates.

New Results on The Rate-Equivocation Region of The Optical Wiretap Channel with Input-Dependent Gaussian Noise with an Average-Intensity Constraint
Morteza Soltani, Z. Rezki
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9245013
This paper studies the degraded optical wiretap channel with input-dependent Gaussian noise (OWC-IDGN) when the channel input is constrained only by nonnegativity and average-intensity constraints. We consider the rate-equivocation region of this wiretap channel and, by solving a convex optimization problem, establish that discrete input distributions with an infinite number of mass points exhaust the entire rate-equivocation region of the degraded OWC-IDGN under nonnegativity and average-intensity constraints. This result implies that when nonnegativity and average-intensity constraints are imposed on the channel input: 1) the secrecy-capacity-achieving input distribution of the degraded OWC-IDGN is discrete with an unbounded support, i.e., the support set of the optimal distribution is countably infinite; 2) the channel capacity (the case with no secrecy constraint) is also achieved by a discrete distribution with an unbounded support set.

On Nonnegative CP Tensor Decomposition Robustness to Noise
Jamie Haddock, Lara Kassab, Alona Kryshchenko, D. Needell
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9244932
In today's data-driven world, there is an unprecedented demand for large-scale temporal data analysis. Dynamic topic modeling has been widely used in the social and data sciences with the goal of learning latent topics that emerge, evolve, and fade over time. Previous work on dynamic topic modeling primarily employs nonnegative matrix factorization (NMF), in which slices of the data tensor are each factorized into the product of lower-dimensional nonnegative matrices. With this approach, however, noise can have devastating effects on the learned latent topics and obscure the true topics in the data. To overcome this issue, we propose instead the method of nonnegative CANDECOMP/PARAFAC (CP) tensor decomposition (NNCPD), in which the data tensor is directly decomposed into a minimal sum of outer products of nonnegative vectors. We present experimental evidence suggesting that NNCPD is robust to noise in the data when the CP rank of the tensor is overestimated.

Efficient Nested Key Equation Solver Architectures for Generalized Integrated Interleaved Codes
Xinmiao Zhang
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9245015
Generalized Integrated Interleaved (GII) codes nest short Reed-Solomon (RS)/BCH sub-codewords to generate codewords of stronger RS/BCH codes. They achieve hyper-speed decoding and good error-correcting performance with low complexity, and hence are among the best candidates for next-generation terabit/s communications and storage. The key-equation solver (KES) used in the nested decoding for correcting extra errors limits the achievable clock frequency and accounts for a significant portion of the decoder area. This paper summarizes our recent work on hardware architecture design for the nested KES. The clock-frequency bottleneck is first eliminated by reformulating the nested KES and exploiting architectural transformations. The complexity of each processing element (PE) in the nested KES architecture is then reduced by a scaled nested KES algorithm. Furthermore, the number of PEs is reduced by exploiting the data dependency and analyzing the minimum number of coefficients that must be kept for the involved polynomials.

New perspectives on MAC feedback capacity using decentralized sequential active hypothesis testing paradigm
A. Anastasopoulos, S. Pradhan
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9244995
The capacity of the MAC with feedback has been characterized through a multi-letter expression based on the work of Kramer. Except for the two-user Gaussian channel, this expression has resisted simplification; as a result, there is no single-letter characterization of the capacity of the general discrete memoryless MAC (DM-MAC). In this paper, we investigate connections between this problem and the problem of decentralized sequential active hypothesis testing (DSAHT). In the DSAHT problem, two transmitting agents, each possessing a private message, actively help a third agent, and each other, to learn the message pair over a DM-MAC. The third agent (the receiver) observes the noisy channel output, which is also available to the transmitting agents via noiseless feedback. We characterize the optimal transmission scheme for the DSAHT problem in terms of an appropriately defined sufficient statistic. Returning to the problem of simplifying the multi-letter expression for the DM-MAC feedback capacity, we show that restricting attention to distributions induced by optimal transmission schemes for the DSAHT problem, without loss of optimality, transforms the capacity expression so that it can be viewed as the average reward of an appropriately defined stochastic dynamical system with a time-invariant state space.

Natural Language Analysis and Generation by Deep Learning and the Bias Problem
Seungshik Kang, W. Jung
Pub Date: 2020-02-02. DOI: 10.1109/ITA50056.2020.9245006