We propose a deep reinforcement learning algorithm that uses an adversarial training strategy to adhere to implicit human norms while optimizing for a narrow goal objective. Previous methods that incorporate human values into reinforcement learning either scale poorly or assume hand-crafted state features. Our algorithm drops these assumptions: it automatically infers norms from human demonstrations and can be integrated into existing agents as a multi-objective optimization problem. We benchmark our approach in a search-and-rescue grid world and show that, conditioned on respecting human norms, our agent maintains optimal performance with respect to the predefined goal.
{"title":"Training for Implicit Norms in Deep Reinforcement Learning Agents through Adversarial Multi-Objective Reward Optimization","authors":"M. Peschl","doi":"10.1145/3461702.3462473","DOIUrl":"https://doi.org/10.1145/3461702.3462473","url":null,"abstract":"We propose a deep reinforcement learning algorithm that employs an adversarial training strategy for adhering to implicit human norms alongside optimizing for a narrow goal objective. Previous methods which incorporate human values into reinforcement learning algorithms either scale poorly or assume hand-crafted state features. Our algorithm drops these assumptions and is able to automatically infer norms from human demonstrations, which allows for integrating it into existing agents in the form of multi-objective optimization. We benchmark our approach in a search-and-rescue grid world and show that, conditioned on respecting human norms, our agent maintains optimal performance with respect to the predefined goal.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120943512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated decisions based on trained algorithms influence human life in increasingly far-reaching ways. In recent years, it has become clear that these decisions are often accompanied by bias and unfair treatment of different subpopulations. Meanwhile, several notions of fairness circulate in the scientific literature, with trade-offs both between profit and fairness and among the fairness metrics themselves. Based on both analytical calculations and numerical simulations, we show in this study that some profit-fairness and fairness-fairness trade-offs depend substantially on the underlying score distributions of the subpopulations, and we present two complementary perspectives to visualize this influence. We further show that greater symmetry in the subpopulations' scores can significantly reduce the trade-offs between fairness notions within a given acceptable strictness, even when sacrificing expressiveness. Our exploratory study may help to understand how to overcome the strict mathematical statements about the statistical incompatibility of certain fairness notions.
{"title":"How Do the Score Distributions of Subpopulations Influence Fairness Notions?","authors":"Carmen Mazijn, J. Danckaert, V. Ginis","doi":"10.1145/3461702.3462601","DOIUrl":"https://doi.org/10.1145/3461702.3462601","url":null,"abstract":"Automated decisions based on trained algorithms influence human life in an increasingly far-reaching way. In recent years, it has become clear that these decisions are often accompanied by bias and unfair treatment of different subpopulations.Meanwhile, several notions of fairness circulate in the scientific literature, with trade-offs between profit and fairness and between fairness metrics among themselves. Based on both analytical calculations and numerical simulations, we show in this study that some profit-fairness trade-offs and fairness-fairness trade-offs depend substantially on the underlying score distributions given to subpopulations and we present two complementary perspectives to visualize this influence. We further show that higher symmetry in scores of subpopulations can significantly reduce the trade-offs between fairness notions within a given acceptable strictness, even when sacrificing expressiveness. Our exploratory study may help to understand how to overcome the strict mathematical statements about the statistical incompatibility of certain fairness notions.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122035692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seth Lazar, Taina Bucher, A. Korolova, Cailin O’Connor, Nicolas Suzor
This panel brings together experts from law, philosophy, computer science, and media studies to explore how digital platforms exercise power over which content is visible online and which content is promoted to users, with a special focus on the use of algorithmic systems to achieve these ends.
{"title":"Platform Power and AI: The Case of Content","authors":"Seth Lazar, Taina Bucher, A. Korolova, Cailin O’Connor, Nicolas Suzor","doi":"10.1145/3461702.3462443","DOIUrl":"https://doi.org/10.1145/3461702.3462443","url":null,"abstract":"This panel brings experts together from law, philosophy, computer science and media studies to explore how digital platforms exercise power over which content is visible online, and which content is promoted to users, with a special focus on the use of algorithmic systems to achieve these ends.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122514295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of data-driven Automated Decision Making (ADM) to determine access to products or services in competitive markets can enhance or limit equality and fair treatment. Where essential services such as housing, energy, and telecommunications are accessed through a competitive market, consumers who are denied one or more of these services may be unable to find a suitable alternative that matches their needs, budget, and unique circumstances. Being denied access to an essential service such as electricity or housing can be a matter of life or death. Competitive essential-services markets therefore illuminate how using ADM to determine access to products or services, if not balanced by appropriate consumer protections, can cause significant harm. My research explores existing and emerging consumer protections that are effective in preventing consumers from being harmed by ADM-facilitated decisions in essential services markets.
{"title":"Designing Effective and Accessible Consumer Protections against Unfair Treatment in Markets where Automated Decision Making is used to Determine Access to Essential Services: A Case Study in Australia's Housing Market","authors":"Linda Przhedetsky","doi":"10.1145/3461702.3462468","DOIUrl":"https://doi.org/10.1145/3461702.3462468","url":null,"abstract":"The use of data-driven Automated Decision Making (ADM) to determine access to products or services in competitive markets can enhance or limit access to equality and fair treatment. In cases where essential services such housing, energy and telecommunications, are accessed through a competitive market, consumers who are denied access to one or more of these services may not be able to access a suitable alternative if there are none available to match their needs, budget, and unique circumstances. Being denied access to an essential service such as electricity or housing can be an issue of life or death. Competitive essential services markets therefore illuminate the ways that using ADM to determine access to products or services, if not balanced by appropriate consumer protections, can cause significant harm. My research explores existing and emerging consumer protections that are effective in preventing consumers being harmed by ADM-facilitated decisions in essential services markets.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115300662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scientists and philosophers have warned of the possibility that humans might, in the future, create a 'superintelligent' machine that could, in some scenarios, pose an existential threat to humanity. This paper argues that such a machine may already exist and that, if so, it does indeed represent such a threat.
{"title":"Feeding the Beast: Superintelligence, Corporate Capitalism and the End of Humanity","authors":"Dominic Leggett","doi":"10.1145/3461702.3462581","DOIUrl":"https://doi.org/10.1145/3461702.3462581","url":null,"abstract":"Scientists and philosophers have warned of the possibility that humans, in the future, might create a 'superintelligent' machine that could, in some scenarios, form an existential threat to humanity. This paper argues that such a machine may already exist, and that, if so, it does, in fact, represent such a threat.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122801511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of AI-enabled hiring software raises questions about how Human Resource (HR) professionals use the software in practice and about its consequences. We interviewed 15 recruiters and HR professionals about their experiences with two decision-making processes during hiring: sourcing and assessment. For both, AI-enabled software allowed the efficient processing of candidate data, thus providing the ability to introduce or advance candidates from broader and more diverse pools. For sourcing, it can serve as a useful learning resource for finding candidates, though a lack of trust in data accuracy and an inadequate level of control over algorithmic candidate matches can create reluctance to embrace it. For assessment, its implementation varied across companies depending on the industry and the hiring scenario. Its inclusion may redefine HR professionals' job content as it automates or augments pieces of the existing hiring process. Finally, we discuss how the candidate roles that recruiters and HR professionals support drive the use of algorithmic hiring software.
{"title":"Algorithmic Hiring in Practice: Recruiter and HR Professional's Perspectives on AI Use in Hiring","authors":"Lan Li, T. Lassiter, Joohee Oh, Min Kyung Lee","doi":"10.1145/3461702.3462531","DOIUrl":"https://doi.org/10.1145/3461702.3462531","url":null,"abstract":"The use of AI-enabled hiring software raises questions about the practice of Human Resource (HR) professionals' use of the software and its consequences. We interviewed 15 recruiters and HR professionals about their experiences around two decision-making processes during hiring: sourcing and assessment. For both, AI-enabled software allowed the efficient processing of candidate data, thus providing the ability to introduce or advance candidates from broader and more diverse pools. For sourcing, it can serve as a useful learning resource to find candidates. Though, a lack of trust in data accuracy and an inadequate level of control over algorithmic candidate matches can create reluctance to embrace it. For assessment, its implementation varied across companies depending on the industry and the hiring scenario. Its inclusion may redefine HR professionals' job content as it automates or augments pieces of the existing hiring process. Finally, we discuss how candidate roles that recruiters and HR professionals support drive the use of algorithmic hiring software.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125118770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christopher Flathmann, Beau G. Schelble, Rui Zhang, Nathan J. Mcneese
As artificial intelligence continues to advance, so too do the ethical concerns that can negatively impact humans and society at large. When these systems begin to interact with humans, these concerns become much more complex and much more important. The field of human-AI teaming provides a relevant example of how AI ethics can have significant and continued effects on humans. This paper reviews research in ethical artificial intelligence, as well as ethical teamwork, through the lens of the rapidly advancing field of human-AI teaming, resulting in a model that demonstrates the requirements and outcomes of building ethical human-AI teams. The model is created to guide the prioritization of ethics in human-AI teaming by outlining the ethical teaming process, the outcomes of ethical teams, and the external requirements necessary to ensure ethical human-AI teams. A final discussion is presented on how the developed model will influence the implementation of AI teammates, as well as the development of policy and regulation surrounding the domain in the coming years.
{"title":"Modeling and Guiding the Creation of Ethical Human-AI Teams","authors":"Christopher Flathmann, Beau G. Schelble, Rui Zhang, Nathan J. Mcneese","doi":"10.1145/3461702.3462573","DOIUrl":"https://doi.org/10.1145/3461702.3462573","url":null,"abstract":"With artificial intelligence continuing to advance, so too do the ethical concerns that can potentially negatively impact humans and the greater society. When these systems begin to interact with humans, these concerns become much more complex and much more important. The field of human-AI teaming provides a relevant example of how AI ethics can have significant and continued effects on humans. This paper reviews research in ethical artificial intelligence, as well as ethical teamwork through the lens of the rapidly advancing field of human-AI teaming, resulting in a model demonstrating the requirements and outcomes of building ethical human-AI teams. The model is created to guide the prioritization of ethics in human-AI teaming by outlining the ethical teaming process, outcomes of ethical teams, and external requirements necessary to ensure ethical human-AI teams. A final discussion is presented on how the developed model will influence the implementation of AI teammates, as well as the development of policy and regulation surrounding the domain in the coming years.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128727299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Severin Engelmann, Mo Chen, Lorenz Dang, Jens Grossklags
The Chinese Social Credit System (SCS) is a novel digital socio-technical credit system. The SCS aims to regulate societal behavior by reputational and material devices. Scholarship on the SCS has offered a variety of legal and theoretical perspectives. However, little is known about its actual implementation. Here, we provide the first comprehensive empirical study of digital blacklists (listing "bad" behavior) and redlists (listing "good" behavior) in the Chinese SCS. Based on a unique data set of reputational blacklists and redlists in 30 Chinese provincial-level administrative divisions (ADs), we show the diversity, flexibility, and comprehensiveness of the SCS listing infrastructure. First, our results demonstrate that the Chinese SCS unfolds in a highly diversified manner: we find differences in accessibility, interface design and credit information across provincial-level SCS blacklists and redlists. Second, SCS listings are flexible. During the COVID-19 outbreak, we observe a swift addition of blacklists and redlists that helps strengthen the compliance with coronavirus-related norms and regulations. Third, the SCS listing infrastructure is comprehensive. Overall, we identify 273 blacklists and 154 redlists across provincial-level ADs. Our blacklist and redlist taxonomy highlights that the SCS listing infrastructure prioritizes law enforcement and industry regulations. We also identify redlists that reward political and moral behavior. Our study substantiates the enormous scale and diversity of the Chinese SCS and puts the debate on its reach and societal impact on firmer ground. Finally, we initiate a discussion on the ethical dimensions of data-driven research on the SCS.
{"title":"Blacklists and Redlists in the Chinese Social Credit System: Diversity, Flexibility, and Comprehensiveness","authors":"Severin Engelmann, Mo Chen, Lorenz Dang, Jens Grossklags","doi":"10.1145/3461702.3462535","DOIUrl":"https://doi.org/10.1145/3461702.3462535","url":null,"abstract":"The Chinese Social Credit System (SCS) is a novel digital socio-technical credit system. The SCS aims to regulate societal behavior by reputational and material devices. Scholarship on the SCS has offered a variety of legal and theoretical perspectives. However, little is known about its actual implementation. Here, we provide the first comprehensive empirical study of digital blacklists (listing \"bad\" behavior) and redlists (listing \"good\" behavior) in the Chinese SCS. Based on a unique data set of reputational blacklists and redlists in 30 Chinese provincial-level administrative divisions (ADs), we show the diversity, flexibility, and comprehensiveness of the SCS listing infrastructure. First, our results demonstrate that the Chinese SCS unfolds in a highly diversified manner: we find differences in accessibility, interface design and credit information across provincial-level SCS blacklists and redlists. Second, SCS listings are flexible. During the COVID-19 outbreak, we observe a swift addition of blacklists and redlists that helps strengthen the compliance with coronavirus-related norms and regulations. Third, the SCS listing infrastructure is comprehensive. Overall, we identify 273 blacklists and 154 redlists across provincial-level ADs. Our blacklist and redlist taxonomy highlights that the SCS listing infrastructure prioritizes law enforcement and industry regulations. We also identify redlists that reward political and moral behavior. Our study substantiates the enormous scale and diversity of the Chinese SCS and puts the debate on its reach and societal impact on firmer ground. Finally, we initiate a discussion on the ethical dimensions of data-driven research on the SCS.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126984380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the biggest reasons artificial intelligence (AI) faces a backlash is the presence of inherent biases in AI software. Deep learning algorithms find patterns in the data fed into them and use those patterns to draw the conclusions on which application decisions are based. These patterns have revealed that AI software decisions have biases embedded within them. Algorithmic audits can certify that the software is making responsible decisions. These audits verify standards centered on AI principles such as explainability, accountability, and human-centered values such as fairness and transparency, in order to increase trust in the algorithm and in the software systems that implement it.
{"title":"Examining Religion Bias in AI Text Generators","authors":"D. Muralidhar","doi":"10.1145/3461702.3462469","DOIUrl":"https://doi.org/10.1145/3461702.3462469","url":null,"abstract":"One of the biggest reasons artificial intelligence (AI) gets a backlash is because of inherent biases in AI software. Deep learning algorithms use data fed into the systems to find patterns to draw conclusions used to make application decisions. Patterns in data fed into machine learning algorithms have revealed that the AI software decisions have biases embedded within them. Algorithmic audits can certify that the software is making responsible decisions. These audits verify the standards centered around the various AI principles such as explainability, accountability, human-centered values, such as, fairness and transparency, to increase the trust in the algorithm and the software systems that implement AI algorithms.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129018281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Kim, De'Aira G. Bryant, Deepak Srikanth, A. Howard
The growing potential of facial emotion recognition (FER) technology has encouraged expedited development at the cost of rigorous validation. Many of its use cases may also impact the diverse global community as FER becomes embedded into domains ranging from education to security to healthcare. Yet prior work has highlighted that FER can exhibit both gender and racial biases, like other facial analysis techniques. As a result, bias-mitigation research efforts have mainly focused on tackling gender and racial disparities, while other demographic-related biases, such as age, have seen less progress. This work examines the performance of state-of-the-art commercial FER technology on expressive images of men and women from three distinct age groups. We use four different commercial FER systems in a black-box methodology to evaluate how well six emotions - anger, disgust, fear, happiness, neutrality, and sadness - are correctly detected for each age group. We further investigate how algorithmic changes over the last year have affected system performance. Our results show that all four commercial FER systems perceived emotion most accurately in images of young adults and least accurately in images of older adults. This trend was observed for analyses conducted in both 2019 and 2020. However, little to no gender disparity was observed in either year. While older adults may not have been the initial target consumers of FER technology, statistics show that this demographic is quickly becoming more receptive to applications that use such systems. Our results demonstrate the importance of considering various demographic subgroups during FER system validation and the need for inclusive, intersectional algorithmic development practices.
{"title":"Age Bias in Emotion Detection: An Analysis of Facial Emotion Recognition Performance on Young, Middle-Aged, and Older Adults","authors":"E. Kim, De'Aira G. Bryant, Deepak Srikanth, A. Howard","doi":"10.1145/3461702.3462609","DOIUrl":"https://doi.org/10.1145/3461702.3462609","url":null,"abstract":"The growing potential for facial emotion recognition (FER) technology has encouraged expedited development at the cost of rigorous validation. Many of its use-cases may also impact the diverse global community as FER becomes embedded into domains ranging from education to security to healthcare. Yet, prior work has highlighted that FER can exhibit both gender and racial biases like other facial analysis techniques. As a result, bias-mitigation research efforts have mainly focused on tackling gender and racial disparities, while other demographic related biases, such as age, have seen less progress. This work seeks to examine the performance of state of the art commercial FER technology on expressive images of men and women from three distinct age groups. We utilize four different commercial FER systems in a black box methodology to evaluate how six emotions - anger, disgust, fear, happiness, neutrality, and sadness - are correctly detected by age group. We further investigate how algorithmic changes over the last year have affected system performance. Our results found that all four commercial FER systems most accurately perceived emotion in images of young adults and least accurately in images of older adults. This trend was observed for analyses conducted in 2019 and 2020. However, little to no gender disparities were observed in either year. While older adults may not have been the initial target consumer of FER technology, statistics show the demographic is quickly growing more keen to applications that use such systems. Our results demonstrate the importance of considering various demographic subgroups during FER system validation and the need for inclusive, intersectional algorithmic developmental practices.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134368709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}