ECS: an interactive tool for data quality assurance
Pub Date: 2024-01-08 | DOI: 10.1007/s43681-023-00393-3
Christian Sieberichs, Simon Geerkens, Alexander Braun, Thomas Waschulzik
With the increasing capabilities of machine learning systems and their potential use in safety-critical systems, ensuring high-quality data is becoming increasingly important. In this paper, we present a novel approach for the assurance of data quality. For this purpose, the mathematical basics are first discussed, and the approach is then presented using multiple examples. This results in the detection of data points with properties that are potentially harmful for use in safety-critical systems.
AI and Ethics 4(1), 131–139. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-023-00393-3.pdf
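A minimal illustrative sketch of one family of data-quality checks in the spirit of the abstract above: flagging data points whose labels disagree with their local neighbourhood. This is not the ECS method itself; the function name, the choice of k, the agreement threshold, and the toy data are assumptions introduced here.

```python
# Illustrative sketch only: a k-nearest-neighbour label-consistency check.
# This is NOT the ECS approach from the paper; the function name, k and the
# agreement threshold are assumptions made for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_inconsistent_points(X, y, k=10, min_agreement=0.5):
    """Return indices of samples whose label disagrees with most of their k neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                 # idx[:, 0] is each point itself
    neighbour_labels = y[idx[:, 1:]]          # labels of the k true neighbours
    agreement = (neighbour_labels == y[:, None]).mean(axis=1)
    return np.where(agreement < min_agreement)[0]

# Toy example: two well-separated clusters with a few injected label errors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(5.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
y[:5] = 1                                     # label noise
print(flag_inconsistent_points(X, y))         # typically reports indices 0..4
```

Points flagged this way are one example of the kind of "data points with potentially harmful properties" that data-quality tooling aims to surface for review before safety-critical use.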
Should we develop AGI? Artificial suffering and the moral development of humans
Pub Date: 2024-01-08 | DOI: 10.1007/s43681-023-00411-4
Oliver Li
Recent research papers and real-world tests point in the direction that machines may, in the future, develop some form of possibly rudimentary inner life. Philosophers have warned and emphasized that the possibility of artificial suffering, or of machines as moral patients, should not be ruled out. In this paper, I reflect on the consequences for moral development of striving for artificial general intelligence (AGI). In the introduction, I present examples that point toward the future possibility of artificial suffering and highlight the increasing similarity between, for example, machine–human and human–human interaction. Next, I present and discuss responses to the possibility of artificial suffering that support a cautious attitude for the sake of the machines. From a virtue-ethical perspective on the development of human virtues, I subsequently argue that humans should not pursue the path of developing and creating AGI, not merely for the sake of possible suffering in machines, but also because machine–human interaction is becoming more and more like human–human interaction, and for the sake of humans' own moral development. Thus, for several reasons, humanity as a whole should be extremely cautious about pursuing the path of developing AGI.
AI and Ethics 5(1), 641–651. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-023-00411-4.pdf
Toward a safe MLOps process for the continuous development and safety assurance of ML-based systems in the railway domain
Pub Date: 2024-01-03 | DOI: 10.1007/s43681-023-00392-4
Marc Zeller, Thomas Waschulzik, Reiner Schmid, Claus Bahlmann
Traditional automation technologies alone are not sufficient to enable driverless operation of trains (called Grade of Automation (GoA) 4) on non-restricted infrastructure. The required perception tasks are nowadays realized using Machine Learning (ML) and thus need to be developed and deployed reliably and efficiently. One important aspect of achieving this is to use an MLOps process to improve reproducibility, traceability, collaboration, and continuous adaptation of a driverless operation to changing conditions. MLOps combines ML application development and operation (Ops) and enables high-frequency software releases and continuous innovation based on feedback from operations. In this paper, we outline a safe MLOps process for the continuous development and safety assurance of ML-based systems in the railway domain. It integrates system engineering, safety assurance, and the ML life-cycle in a comprehensive workflow. We present the individual stages of the process and their interactions. Moreover, we describe relevant challenges in automating the different stages of the safe MLOps process.
AI and Ethics 4(1), 123–130.
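The abstract describes a staged workflow but gives no implementation. The sketch below is a hedged illustration of how such stages could be chained behind an explicit safety gate; the stage names, the metric, and the threshold are assumptions, not the concrete process defined in the paper.

```python
# Minimal sketch of an MLOps-style staged pipeline with a safety gate.
# Stage names, metrics and thresholds are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class StageResult:
    passed: bool
    details: str

def run_pipeline(stages: List[Tuple[str, Callable[[], StageResult]]]) -> bool:
    """Run stages in order; stop at the first stage whose gate fails."""
    for name, stage in stages:
        result = stage()
        print(f"[{name}] passed={result.passed} ({result.details})")
        if not result.passed:
            return False
    return True

# Dummy stages standing in for data validation, training, safety evaluation
# and deployment in a real (audited) railway tool chain.
def validate_data() -> StageResult:
    return StageResult(True, "schema and distribution checks OK")

def train_model() -> StageResult:
    return StageResult(True, "model trained, validation accuracy 0.97")

def safety_evaluation() -> StageResult:
    false_negative_rate = 0.002               # would come from a test campaign
    return StageResult(false_negative_rate < 0.01, f"FNR={false_negative_rate}")

def deploy() -> StageResult:
    return StageResult(True, "model promoted to shadow operation")

if __name__ == "__main__":
    run_pipeline([("data", validate_data), ("train", train_model),
                  ("safety", safety_evaluation), ("deploy", deploy)])
```

In an actual GoA 4 setting, the acceptance criterion behind the safety gate would be derived from the applicable safety case and standards rather than a hard-coded threshold, and each stage would be traceable for audit.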
To be forgotten or to be fair: unveiling fairness implications of machine unlearning methods
Pub Date: 2024-01-03 | DOI: 10.1007/s43681-023-00398-y
Dawen Zhang, Shidong Pan, Thong Hoang, Zhenchang Xing, Mark Staples, Xiwei Xu, Lina Yao, Qinghua Lu, Liming Zhu
The right to be forgotten (RTBF) allows individuals to request the removal of personal information from online platforms. Researchers have proposed machine unlearning algorithms as a solution for erasing specific data from trained models to support RTBF. However, these methods modify how data are fed into the model and how training is done, which may subsequently compromise AI ethics from the fairness perspective. To help AI practitioners make responsible decisions when adopting these unlearning methods, we present the first study on machine unlearning methods to reveal their fairness implications. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML) along with a retraining method (ORTR) as a baseline, using three fairness datasets under three different deletion strategies. Results show that non-uniform data deletion with the variant of SISA leads to better fairness compared to ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods. This research can help practitioners make informed decisions when implementing RTBF solutions that consider potential trade-offs on fairness.
AI and Ethics 4(1), 83–93. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-023-00398-y.pdf
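SISA and AmnesiacML are named but not described above. As a rough, assumption-laden sketch of the core idea behind SISA-style sharded training (shard count, base model and deletion logic are illustrative, and the slicing/checkpointing of full SISA is omitted), deleting a record only requires retraining the shard that contained it:

```python
# Rough sketch of SISA-style sharded training and unlearning: on deletion,
# only the shard that held the record is retrained. Shard count, base model
# and toy data are illustrative assumptions, not the study's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedEnsemble:
    def __init__(self, n_shards=4, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)
        self.shards, self.models = [], []

    def fit(self, X, y):
        order = self.rng.permutation(len(X))
        for part in np.array_split(order, self.n_shards):
            self.shards.append({"X": X[part], "y": y[part]})
            self.models.append(LogisticRegression().fit(X[part], y[part]))

    def unlearn(self, x_row):
        """Remove a record and retrain only the shard that contained it."""
        for i, shard in enumerate(self.shards):
            keep = ~(shard["X"] == x_row).all(axis=1)
            if not keep.all():                           # record found here
                shard["X"], shard["y"] = shard["X"][keep], shard["y"][keep]
                self.models[i] = LogisticRegression().fit(shard["X"], shard["y"])
                return i
        return None

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])
        return np.round(votes.mean(axis=0)).astype(int)  # simple majority vote

# Usage with toy data.
X = np.random.default_rng(1).normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
ens = ShardedEnsemble()
ens.fit(X, y)
print("retrained shard:", ens.unlearn(X[10]))
```

Because each shard sees only a slice of the data, the way records are deleted across shards can shift the ensemble's behaviour unevenly across groups, which is one intuition for why deletion strategies can interact with fairness as measured in the study.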
The more they think, the less they want: studying people’s attitudes about autonomous vehicles could also contribute to shaping them
Pub Date: 2024-01-03 | DOI: 10.1007/s43681-023-00385-3
Hubert Etienne, Florian Cova
In recent years, many studies have surveyed people’s intuitions about moral dilemmas involving autonomous vehicles (AVs). One widespread rationale for this line of research has been that understanding people’s attitudes about such dilemmas might help increase the pace of the adoption of autonomous vehicles—a goal that certain researchers consider a pressing moral imperative. However, surveying people is not a neutral process that is independent of respondents’ opinions and responses: in fact, respondents’ opinions can be influenced merely by taking part in a survey. In this paper, we present the results of three studies suggesting that participating in such surveys affects participants’ willingness to acquire AVs. In our studies, we find that reflecting on AV dilemmas negatively impacted participants’ willingness to acquire one. Based on these results, we argue that prompting the general population to focus on AV dilemmas might highlight aspects of AVs that discourage their adoption. This results in a tension between the main rationale for empirical research on AV dilemmas and the impact of this research on the public at large.
AI and Ethics 5(1), 633–640.
Ethical considerations and policy interventions concerning the impact of generative AI tools in the economy and in society
Pub Date: 2024-01-03 | DOI: 10.1007/s43681-023-00405-2
Mirko Farina, Xiao Yu, A. Lavazza
AI and Ethics, 1–9.
AI risk assessment using ethical dimensions
Pub Date: 2024-01-03 | DOI: 10.1007/s43681-023-00401-6
Alessio Tartaro, Enrico Panai, Mariangela Zoe Cocchiaro
In the design, development, and use of artificial intelligence systems, it is important to ensure that they are safe and trustworthy. This requires a systematic approach to identifying, analyzing, evaluating, mitigating, and monitoring risks throughout the entire lifecycle of an AI system. While standardized risk management processes are being developed, organizations may struggle to implement AI risk management effectively and efficiently due to various implementation gaps. This paper discusses the main gaps in AI risk management and describes a tool that can be used to support organizations in AI risk assessment. The tool consists of a structured process for identifying, analyzing, and evaluating risks in the context of specific AI applications and environments. The tool accounts for the multidimensionality and context-sensitivity of AI risks. It provides a visualization and quantification of AI risks and can inform strategies to mitigate and minimize those risks.
AI and Ethics 4(1), 105–112.
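The abstract mentions quantifying and visualizing multidimensional AI risks without giving the scheme. The sketch below shows one common pattern, a likelihood × severity score per ethical dimension aggregated conservatively; the dimensions, scales, thresholds and aggregation rule are assumptions for illustration, not the tool described in the paper.

```python
# Illustrative per-dimension risk scoring (likelihood x severity, both on 1-5
# scales). Dimensions, scales and the aggregation rule are assumptions made
# for illustration, not the assessment tool described in the paper.
DIMENSIONS = ["fairness", "privacy", "safety", "transparency", "accountability"]

def risk_profile(assessment):
    """Map each dimension's (likelihood, severity) pair to a 1-25 risk score."""
    profile = {}
    for dim in DIMENSIONS:
        likelihood, severity = assessment[dim]
        if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
            raise ValueError(f"{dim}: likelihood and severity must be on a 1-5 scale")
        profile[dim] = likelihood * severity
    return profile

def overall_risk(profile):
    worst = max(profile.values())     # conservative: driven by the worst dimension
    return "high" if worst >= 15 else "medium" if worst >= 8 else "low"

# Example assessment for a hypothetical CV-screening system.
example = {"fairness": (4, 4), "privacy": (3, 3), "safety": (1, 2),
           "transparency": (3, 2), "accountability": (2, 3)}
profile = risk_profile(example)
print(profile, "->", overall_risk(profile))   # fairness scores 16 -> "high"
```

A radar or bar chart over such a per-dimension profile would give one kind of visualization along the lines the abstract mentions.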
Conformity assessment under the EU AI Act general approach
Pub Date: 2024-01-03 | DOI: 10.1007/s43681-023-00402-5
Eva Thelisson, Himanshu Verma
The European Commission proposed harmonised rules on artificial intelligence (AI) on the 21st of April 2021 (namely the EU AI Act). Following a consultative process with the European Council and many amendments, a General Approach of the EU AI Act was published on the 25th of November 2022. The EU Parliament approved the initial draft in May 2023. Trilogue meetings took place in June, July, September and October 2023, with the aim for the European Parliament, the Council of the European Union and the European Commission to adopt a final version in early 2024. This is the first attempt to build a legally binding instrument on artificial intelligence in the European Union (EU). In a similar way to the General Data Protection Regulation (GDPR), the EU AI Act has an extraterritorial effect. It therefore has the potential to become a global gold standard for AI regulation. It may also contribute to developing a global consensus on AI trustworthiness, because AI providers must conduct conformity assessments for high-risk AI systems prior to entry into the EU market. As the AI Act contains limited guidelines on how to conduct conformity assessments and ex-post monitoring in practice, there is a need for consensus building on this topic. This paper studies the governance structure proposed by the EU AI Act, as approved by the European Council in November 2022, and proposes tools to conduct conformity assessments of AI systems.
AI and Ethics 4(1), 113–121.
Assessing deep learning: a work program for the humanities in the age of artificial intelligence
Pub Date: 2023-12-21 | DOI: 10.1007/s43681-023-00408-z
Jan Segessenmann, Thilo Stadelmann, Andrew Davison, Oliver Dürr
Following the success of deep learning (DL) in research, we are now witnessing the fast and widespread adoption of artificial intelligence (AI) in daily life, influencing the way we act, think, and organize our lives. However, much remains a mystery when it comes to how these systems achieve such high performance and why they reach the outputs they do. This presents us with an unusual combination: technical mastery on the one hand, and a striking degree of mystery on the other. This conjunction is not only fascinating, but also poses considerable risks, which urgently require our attention. Awareness of the need to analyze ethical implications, such as fairness, equality, and sustainability, is growing. However, other dimensions of inquiry receive less attention, including the subtle but pervasive ways in which our dealings with AI shape our way of living and thinking, transforming our culture and human self-understanding. If we want to deploy AI positively in the long term, a broader and more holistic assessment of the technology is vital, involving not only scientific and technical perspectives, but also those from the humanities. To this end, we present the outlines of a work program for the humanities that aims to contribute to assessing and guiding the potential, opportunities, and risks of further developing and deploying DL systems. This paper contains a thematic introduction (Sect. 1), an introduction to the workings of DL for non-technical readers (Sect. 2), and a main part containing the outlines of a work program for the humanities (Sect. 3). Readers familiar with DL may want to skip Sect. 2 and read Sect. 3 directly after Sect. 1.
AI and Ethics 5(1), 1–32. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-023-00408-z.pdf