Information Design in Allocation with Costly Verification
Pub Date: 2022-10-28, DOI: 10.2139/ssrn.4245445
Yi-Chun Chen, Gaoji Hu, Xiangqian Yang
A principal who values an object allocates it to one or more agents. Agents learn private information (signals) from an information designer about the allocation payoff to the principal. Monetary transfers are not available, but the principal can verify agents' private signals at a cost. The information designer can influence the agents' signal distributions, based upon which the principal maximizes the allocation surplus. An agent's utility is simply the probability of obtaining the good. With a single agent, we characterize (i) the agent-optimal information, (ii) the principal-worst information, and (iii) the principal-optimal information. Even though the objectives of the principal and the agent are not directly comparable, we find that any agent-optimal information is principal-worst. Moreover, there exists a robust mechanism that achieves the principal's payoff under (ii), which is therefore an optimal robust mechanism. Many of our results extend to the multiple-agent case; where they do not, we provide counterexamples.
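To make the costly-verification tradeoff concrete, here is a minimal Monte Carlo sketch (not the paper's mechanism; the uniform payoff distribution, the cost value, and all function names are illustrative assumptions): a principal facing a single agent compares never allocating, allocating blindly, and paying a verification cost c to learn the payoff before allocating.

```python
import random

def principal_surplus(n_draws=100_000, c=0.2, seed=0):
    """Compare three naive single-agent strategies when the principal's
    allocation payoff v is drawn uniformly from [-1, 1] and the agent
    always wants the good. (Toy model, not the paper's mechanism.)"""
    rng = random.Random(seed)
    blind = verify = 0.0
    for _ in range(n_draws):
        v = rng.uniform(-1.0, 1.0)        # payoff to the principal if she allocates
        blind += v                        # allocate without checking anything
        verify += max(v, 0.0) - c         # pay c to verify, then allocate iff v > 0
    return {"never": 0.0, "blind": blind / n_draws, "verify": verify / n_draws}

# Verification is worthwhile iff E[max(v, 0)] - c exceeds max(E[v], 0); here 0.25 - 0.2 > 0.
print(principal_surplus())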
Beyond the Worst Case: Semi-Random Complexity Analysis of Winner Determination
Pub Date: 2022-10-15, DOI: 10.48550/arXiv.2210.08173
Lirong Xia, Weiqiang Zheng
The computational complexity of winner determination is a classical and important problem in computational social choice. Previous work based on worst-case analysis has established NP-hardness of winner determination for some classic voting rules, such as Kemeny, Dodgson, and Young. In this paper, we revisit the classical problem of winner determination through the lens of semi-random analysis, which is a worst average-case analysis where the preferences are generated from a distribution chosen by the adversary. Under a natural class of semi-random models that are inspired by recommender systems, we prove that winner determination remains hard for Dodgson, Young, and some multi-winner rules such as the Chamberlin-Courant rule and the Monroe rule. Under another natural class of semi-random models that are extensions of the Impartial Culture, we show that winner determination is hard for Kemeny, but is easy for Dodgson. This illustrates an interesting separation between Kemeny and Dodgson.
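To see why winner determination can be computationally demanding, consider Kemeny: the consensus ranking minimizes the total Kendall-tau distance to the votes, and the only obvious exact method searches all $m!$ rankings. The sketch below (Python, illustrative function names) does exactly that for tiny profiles.

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs ordered differently by rankings r1 and r2."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    cands = list(r1)
    return sum(
        1
        for i in range(len(cands))
        for j in range(i + 1, len(cands))
        if (pos1[cands[i]] - pos1[cands[j]]) * (pos2[cands[i]] - pos2[cands[j]]) < 0
    )

def kemeny_winner(profile):
    """Exhaustive Kemeny: O(m! * n * m^2) time, fine for <= 8 candidates."""
    candidates = profile[0]
    best = min(permutations(candidates),
               key=lambda r: sum(kendall_tau(r, vote) for vote in profile))
    return best[0]  # top choice of the consensus ranking

votes = [("a", "b", "c"), ("b", "c", "a"), ("a", "c", "b")]
print(kemeny_winner(votes))  # 'a'
```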
Better Approximation for Interdependent SOS Valuations
Pub Date: 2022-10-12, DOI: 10.48550/arXiv.2210.06507
P. Lu, Enze Sun, Chenghan Zhou
Submodular over signals (SOS) defines a family of interesting valuation functions for which there exist truthful mechanisms with constant approximation to the social welfare for agents with interdependent valuations. The best-known truthful auction achieves a $4$-approximation, and a lower bound of $2$ has been proved. We propose a new and simple truthful mechanism that achieves an approximation ratio of $3.315$.
Online Team Formation under Different Synergies
Pub Date: 2022-10-11, DOI: 10.48550/arXiv.2210.05795
Matthew Eichhorn, Siddhartha Banerjee, D. Kempe
Team formation is ubiquitous in many sectors: education, labor markets, sports, etc. A team’s success depends on its members’ latent types, which are not directly observable but can be (partially) inferred from past performances. From the viewpoint of a principal trying to select teams, this leads to a natural exploration-exploitation trade-off: retain successful teams that are discovered early, or reassign agents to learn more about their types? We study a natural model for online team formation, where a principal repeatedly partitions a group of agents into teams. Agents have binary latent types, each team comprises two members, and a team’s performance is a symmetric function of its members’ types. Over multiple rounds, the principal selects matchings over agents and incurs regret equal to the deficit in the number of successful teams versus the optimal matching for the given function. Our work provides a complete characterization of the regret landscape for all symmetric functions of two binary inputs. In particular, we develop team-selection policies that, despite being agnostic of model parameters, achieve optimal or near-optimal regret against an adaptive adversary.
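For intuition about the model, the sketch below (an illustrative toy, not the paper's policies) brute-forces the optimal matching for one symmetric synergy, AND, under which a team succeeds iff both members have type 1, and estimates the one-round regret of a uniformly random matching.

```python
import random
from statistics import mean

def pairings(agents):
    """Yield every perfect matching of an even-sized list of agents."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def successes(matching, types, synergy):
    """Number of successful teams under the given symmetric synergy."""
    return sum(synergy(types[a], types[b]) for a, b in matching)

AND = lambda x, y: int(x and y)   # a team succeeds iff both members are type 1

types = [1, 1, 0, 1, 0, 0]        # latent binary types, unknown to the principal
agents = list(range(len(types)))
opt = max(successes(m, types, AND) for m in pairings(agents))

rng = random.Random(0)
def random_matching():
    order = agents[:]
    rng.shuffle(order)
    return list(zip(order[::2], order[1::2]))

regret = mean(opt - successes(random_matching(), types, AND) for _ in range(10_000))
print(f"opt={opt}, average one-round regret of a random matching={regret:.2f}")
```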
Auditing for Core Stability in Participatory Budgeting
Pub Date: 2022-09-28, DOI: 10.48550/arXiv.2209.14468
Kamesh Munagala, Yiheng Shen, Kangning Wang
We consider the participatory budgeting problem where each of $n$ voters specifies additive utilities over $m$ candidate projects with given sizes, and the goal is to choose a subset of projects (i.e., a committee) with total size at most $k$. Participatory budgeting mathematically generalizes multiwinner elections, and both have received great attention in computational social choice recently. A well-studied notion of group fairness in this setting is core stability: each voter is assigned an "entitlement" of $\frac{k}{n}$, so that a subset $S$ of voters can pay for a committee of size at most $|S| \cdot \frac{k}{n}$. A given committee is in the core if no subset of voters can pay for another committee that provides each of them strictly larger utility. This provides proportional representation to all voters in a strong sense. In this paper, we study the following auditing question: given a committee computed by some preference aggregation method, how close is it to the core? Concretely, how much does the entitlement of each voter need to be scaled down, so that the core property subsequently holds? As our main contribution, we present computational hardness results for this problem, as well as a logarithmic approximation algorithm via linear program rounding. We show that our analysis is tight against the linear programming bound. Additionally, we consider two related notions of group fairness that have similar audit properties. The first is Lindahl priceability, which audits the closeness of a committee to a market-clearing solution. We show that this is related to the linear programming relaxation of auditing the core, leading to efficient exact and approximation algorithms for auditing. The second is a novel weakening of the core that we term the sub-core, and we present computational results for auditing this notion as well.
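The core condition in the abstract can be audited by brute force on toy instances. The sketch below is a hedged illustration (it assumes unit-size projects and explicitly enumerated additive utilities): it checks whether any coalition $S$ can afford an alternative committee of size at most $\theta \cdot |S| \cdot \frac{k}{n}$ that every member strictly prefers, so lowering $\theta$ mirrors the scaled-down entitlements in the auditing question.

```python
from itertools import chain, combinations

def subsets(items):
    """All nonempty subsets of an iterable."""
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(1, len(items) + 1))

def in_scaled_core(utils, committee, k, theta=1.0):
    """utils[v][p] = voter v's (additive) utility for unit-size project p.
    Returns True iff no coalition S can afford an alternative committee T with
    |T| <= theta * |S| * k / n that gives every member of S strictly more utility."""
    n = len(utils)
    projects = range(len(utils[0]))
    value = lambda v, T: sum(utils[v][p] for p in T)
    for S in subsets(range(n)):
        budget = int(theta * len(S) * k / n)   # scaled-down coalition entitlement
        for T in subsets(projects):
            if len(T) <= budget and all(value(v, T) > value(v, committee) for v in S):
                return False                    # S blocks the committee via T
    return True

utils = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 1, 1]]                          # 4 voters, 4 unit-size projects
print(in_scaled_core(utils, committee=(2, 3), k=2))             # False: {0, 1} blocks via project 0
print(in_scaled_core(utils, committee=(2, 3), k=2, theta=0.5))  # True once entitlements are halved
```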
Truthful Generalized Linear Models
Pub Date: 2022-09-16, DOI: 10.48550/arXiv.2209.07815
Yuan Qiu, Jinyan Liu, Di Wang
In this paper we study estimating Generalized Linear Models (GLMs) in the case where the agents (individuals) are strategic or self-interested and are concerned about their privacy when reporting data. Compared with the classical setting, here we aim to design mechanisms that can both incentivize most agents to truthfully report their data and preserve the privacy of individuals' reports, while the outputs should also be close to the underlying parameter. In the first part of the paper, we consider the case where the covariates are sub-Gaussian and the responses are heavy-tailed, having only finite fourth moments. First, motivated by the stationary condition of the maximizer of the likelihood function, we derive a novel private and closed-form estimator. Based on this estimator, we propose a mechanism which, via an appropriate design of the computation and payment scheme, has the following properties for several canonical models such as linear regression, logistic regression, and Poisson regression: (1) the mechanism is $o(1)$-jointly differentially private (with probability at least $1-o(1)$); (2) it is an $o(\frac{1}{n})$-approximate Bayes Nash equilibrium for a $(1-o(1))$-fraction of agents to truthfully report their data, where $n$ is the number of agents; (3) the output achieves an error of $o(1)$ with respect to the underlying parameter; (4) the mechanism is individually rational for a $(1-o(1))$-fraction of agents; (5) the payment budget required from the analyst to run the mechanism is $o(1)$. In the second part, we consider the linear regression model under a more general setting where both covariates and responses are heavy-tailed and have only finite fourth moments. Using an $\ell_4$-norm shrinkage operator, we propose a private estimator and payment scheme with similar properties as in the sub-Gaussian case.
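The estimators above are specific to the paper; as a generic stand-in only, the sketch below illustrates the common pattern of shrinking heavy-tailed responses and then perturbing a closed-form regression estimate. The clipping level and noise scale are illustrative placeholders, not a calibrated privacy guarantee.

```python
import numpy as np

def private_shrunk_ols(X, y, tau, noise_scale, seed=0):
    """Illustrative stand-in only, NOT the paper's estimator: clip (shrink)
    heavy-tailed responses to [-tau, tau], take the closed-form least-squares
    solution, then perturb it with Gaussian noise. In a real mechanism the
    noise scale must be calibrated to the estimator's sensitivity to obtain
    a differential-privacy guarantee."""
    rng = np.random.default_rng(seed)
    y_shrunk = np.clip(y, -tau, tau)                      # tame the heavy tails
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y_shrunk)   # closed-form stationary point
    return beta_hat + rng.normal(0.0, noise_scale, size=beta_hat.shape)

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.standard_t(df=4.5, size=5000)          # noise with finite fourth moment
print(private_shrunk_ols(X, y, tau=10.0, noise_scale=0.05))
```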
Exploring the Tradeoff between Competitive Ratio and Variance in Online-Matching Markets
Pub Date: 2022-09-15, DOI: 10.48550/arXiv.2209.07580
Pan Xu
In this paper, we propose an online-matching-based model to study the assignment problems arising in a wide range of online-matching markets, including online recommendations, ride-hailing platforms, and crowdsourcing markets. A feature of the model is that each assignment can request a random set of resources and yield a random utility, and the two (cost and utility) can be arbitrarily correlated with each other. We present two linear-programming-based parameterized policies to study the tradeoff between the competitive ratio (CR) on the total utilities and the variance on the total number of matches (unweighted version). The first policy (SAMP) simply samples an edge according to the distribution extracted from the clairvoyant optimal, while the second (ATT) features a time-adaptive attenuation framework that leads to an improvement over the state-of-the-art competitive-ratio result. We also consider the problem under a large-budget assumption and show that SAMP achieves asymptotically optimal performance in terms of competitive ratio.
Nash Welfare Guarantees for Fair and Efficient Coverage
Pub Date: 2022-07-05, DOI: 10.48550/arXiv.2207.01970
Siddharth Barman, Anand Krishna, Y. Narahari, Soumya Sadhukhan
We study coverage problems in which, for a set of agents and a given threshold $T$, the goal is to select $T$ subsets (of the agents) that, while satisfying combinatorial constraints, achieve fair and efficient coverage among the agents. In this setting, the valuation of each agent is equated to the number of selected subsets that contain it, plus one. The current work utilizes the Nash social welfare function to quantify the extent of fairness and collective efficiency. We develop a polynomial-time $\left(18 + o(1)\right)$-approximation algorithm for maximizing Nash social welfare in coverage instances. Our algorithm applies to all instances wherein, for the underlying combinatorial constraints, there exists an FPTAS for weight maximization. We complement the algorithmic result by proving that Nash social welfare maximization is APX-hard in coverage instances.
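On small instances the objective can be evaluated exactly. The sketch below (illustrative names; the only constraint imposed is selecting exactly $T$ of the candidate subsets) computes each agent's valuation as the number of selected subsets containing it plus one, and picks the selection maximizing the Nash social welfare, i.e., the product of valuations.

```python
from itertools import combinations
from math import prod

def nsw(selection, n_agents):
    """Nash social welfare: product over agents of
    (# selected subsets containing the agent) + 1."""
    return prod(sum(a in s for s in selection) + 1 for a in range(n_agents))

def best_coverage(candidate_subsets, n_agents, T):
    """Exhaustively pick the T subsets maximizing Nash social welfare."""
    return max(combinations(candidate_subsets, T),
               key=lambda sel: nsw(sel, n_agents))

candidates = [{0, 1}, {0, 2}, {1, 2}, {0}]   # candidate subsets over 3 agents
sel = best_coverage(candidates, n_agents=3, T=2)
print(sel, nsw(sel, 3))                      # balanced coverage wins: NSW = 12
```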
Improved Approximation to First-Best Gains-from-Trade
Pub Date: 2022-04-30, DOI: 10.48550/arXiv.2205.00140
Yu Fei
We study the two-agent, single-item bilateral trade problem. Ideally, trade should happen whenever the buyer's value for the item exceeds the seller's cost. However, the classical result of Myerson and Satterthwaite shows that no mechanism can achieve this without violating at least one of Bayesian incentive compatibility, individual rationality, and weak budget balance. This motivates the study of approximating the trade-whenever-socially-beneficial mechanism in terms of the expected gains-from-trade. Recently, Deng, Mao, Sivan, and Wang showed that the random-offerer mechanism achieves at least a 1/8.23-approximation. We improve this lower bound to 1/3.15 in this paper. We also determine the exact worst-case approximation ratio of the seller-pricing mechanism, assuming the distribution of the buyer's value satisfies the monotone hazard rate property.
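The gains-from-trade benchmark is straightforward to estimate numerically. The sketch below (illustrative uniform distributions and a simple fixed-price rule, not the paper's random-offerer or seller-pricing mechanisms) compares a mechanism's expected gains-from-trade against the first-best $E[(v-c)^+]$, the denominator in the approximation ratios above.

```python
import random

def gft_ratio(price, n=200_000, seed=0):
    """Monte Carlo estimate of (fixed-price GFT) / (first-best GFT) when the
    seller's cost c and buyer's value v are i.i.d. uniform on [0, 1].
    Trade at posted price p happens iff c <= p <= v (both sides agree)."""
    rng = random.Random(seed)
    mech = first_best = 0.0
    for _ in range(n):
        c, v = rng.random(), rng.random()
        first_best += max(v - c, 0.0)      # first best: trade whenever v > c
        if c <= price <= v:                # posted-price trade condition
            mech += v - c
    return mech / first_best

print(gft_ratio(0.5))  # a fixed price of 1/2 recovers ~3/4 of first-best GFT here
```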