Pub Date: 2022-03-01 | DOI: 10.1007/s11023-021-09583-6
Carlos A. Zednik, Hannes Boelsen
{"title":"Scientific Exploration and Explainable Artificial Intelligence","authors":"Carlos A. Zednik, Hannes Boelsen","doi":"10.1007/s11023-021-09583-6","DOIUrl":"https://doi.org/10.1007/s11023-021-09583-6","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 1","pages":"219 - 239"},"PeriodicalIF":7.4,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"52620398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-02-28 | eCollection Date: 2022-01-01 | DOI: 10.1080/20008198.2021.2002572
Andrew A Nicholson, Magdalena Siegel, Jakub Wolf, Sandhya Narikuzhy, Sophia L Roth, Taylor Hatchard, Ruth A Lanius, Maiko Schneider, Chantelle S Lloyd, Margaret C McKinnon, Alexandra Heber, Patrick Smith, Brigitte Lueger-Schuster
Background: Systemic oppression, particularly towards sexual minorities, continues to be deeply rooted in the bedrock of many societies globally. Experiences with minority stressors (e.g. discrimination, hate crimes, internalized homonegativity, rejection sensitivity, and microaggressions or everyday indignities) have been consistently linked to adverse mental health outcomes. Elucidating the neural adaptations associated with minority stress exposure will be critical for furthering our understanding of how sexual minorities become disproportionately affected by mental health burdens.
Methods: Following PRISMA guidelines, we systematically reviewed published neuroimaging studies that compared neural dynamics among sexual minority and heterosexual populations, aggregating information pertaining to any measurement of minority stress and relevant clinical phenomena.
Results: Only 1 of 13 studies eligible for inclusion examined minority stress directly; all other studies focused on investigating the neurobiological basis of sexual orientation. In our narrative synthesis, we highlight important themes suggesting that minority stress exposure may be associated with decreased activation and functional connectivity within the default-mode network (related to the sense of self and social cognition), and summarize preliminary evidence of aberrant neural dynamics within the salience network (involved in threat detection and fear processing) and the central executive network (involved in executive functioning and emotion regulation). Importantly, this parallels neural adaptations commonly observed among individuals with posttraumatic stress disorder (PTSD) in the aftermath of trauma and supports the inclusion of insidious forms of trauma related to minority stress within models of PTSD.
Conclusions: Taken together, minority stress may share several neuropsychological pathways with PTSD and stress-related disorders. Here, we outline a detailed research agenda that provides an overview of the literature linking sexual minority stress to PTSD and insidious trauma, moral affect (including shame and guilt), and mental health risk/resiliency, in addition to racial, ethnic, and gender-related minority stress. Finally, we propose a novel minority mosaic framework designed to inform future directions of minority stress neuroimaging research through an intersectional lens.
{"title":"A systematic review of the neural correlates of sexual minority stress: towards an intersectional minority mosaic framework with implications for a future research agenda.","authors":"Andrew A Nicholson, Magdalena Siegel, Jakub Wolf, Sandhya Narikuzhy, Sophia L Roth, Taylor Hatchard, Ruth A Lanius, Maiko Schneider, Chantelle S Lloyd, Margaret C McKinnon, Alexandra Heber, Patrick Smith, Brigitte Lueger-Schuster","doi":"10.1080/20008198.2021.2002572","DOIUrl":"10.1080/20008198.2021.2002572","url":null,"abstract":"<p><strong>Background: </strong>Systemic oppression, particularly towards sexual minorities, continues to be deeply rooted in the bedrock of many societies globally. Experiences with minority stressors (e.g. discrimination, hate-crimes, internalized homonegativity, rejection sensitivity, and microaggressions or everyday indignities) have been consistently linked to adverse mental health outcomes. Elucidating the neural adaptations associated with minority stress exposure will be critical for furthering our understanding of how sexual minorities become disproportionately affected by mental health burdens.</p><p><strong>Methods: </strong></p><p><p>Following PRISMA-guidelines, we systematically reviewed published neuroimaging studies that compared neural dynamics among sexual minority and heterosexual populations, aggregating information pertaining to any measurement of minority stress and relevant clinical phenomena.</p><p><strong>Results: </strong>Only 1 of 13 studies eligible for inclusion examined minority stress directly, where all other studies focused on investigating the neurobiological basis of sexual orientation. In our narrative synthesis, we highlight important themes that suggest minority stress exposure may be associated with decreased activation and functional connectivity within the default-mode network (related to the sense-of-self and social cognition), and summarize preliminary evidence related to aberrant neural dynamics within the salience network (involved in threat detection and fear processing) and the central executive network (involved in executive functioning and emotion regulation). Importantly, this parallels neural adaptations commonly observed among individuals with posttraumatic stress disorder (PTSD) in the aftermath of trauma and supports the inclusion of insidious forms of trauma related to minority stress within models of PTSD.</p><p><strong>Conclusions: </strong>Taken together, minority stress may have several shared neuropsychological pathways with PTSD and stress-related disorders. Here, we outline a detailed research agenda that provides an overview of literature linking sexual minority stress to PTSD and insidious trauma, moral affect (including shame and guilt), and mental health risk/resiliency, in addition to racial, ethnic, and gender related minority stress. 
Finally, we propose a novel <b>minority mosaic framework</b> designed to inform future directions of minority stress neuroimaging research from an intersectional lens.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"15 1","pages":"2002572"},"PeriodicalIF":4.2,"publicationDate":"2022-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8890555/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74237131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
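The review above summarizes its findings in terms of functional connectivity within large-scale brain networks. As a purely illustrative aside, not drawn from any of the reviewed studies, seed-based functional connectivity between two regions of interest is commonly quantified as the Pearson correlation of their mean BOLD time series; the array names, shapes, and ROI labels in the sketch below are assumptions made only for the example.

```python
# Illustrative sketch (not from the reviewed studies): seed-based functional
# connectivity summarized as the Pearson correlation between the average BOLD
# time series of two regions of interest (ROIs). Data and ROI labels are toy
# assumptions for the example.
import numpy as np

def roi_time_series(bold, roi_mask):
    """Average the BOLD signal over all voxels in a binary ROI mask.
    bold: array (n_voxels, n_timepoints); roi_mask: boolean array (n_voxels,)."""
    return bold[roi_mask].mean(axis=0)

def functional_connectivity(ts_a, ts_b):
    """Pearson correlation between two ROI time series."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

# Toy data: 500 voxels, 200 timepoints of simulated BOLD signal.
rng = np.random.default_rng(0)
bold = rng.standard_normal((500, 200))
dmn_seed = np.zeros(500, dtype=bool); dmn_seed[:50] = True        # e.g. posterior cingulate
dmn_target = np.zeros(500, dtype=bool); dmn_target[50:100] = True  # e.g. medial prefrontal

fc = functional_connectivity(roi_time_series(bold, dmn_seed),
                             roi_time_series(bold, dmn_target))
print(f"Seed-to-target functional connectivity (r): {fc:.3f}")
```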
Pub Date: 2022-02-25 | DOI: 10.1007/s11023-022-09595-w
Nina Poth
{"title":"Schema-Centred Unity and Process-Centred Pluralism of the Predictive Mind","authors":"Nina Poth","doi":"10.1007/s11023-022-09595-w","DOIUrl":"https://doi.org/10.1007/s11023-022-09595-w","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"1 1","pages":"1-27"},"PeriodicalIF":7.4,"publicationDate":"2022-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47686760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-27 | DOI: 10.1007/s11023-022-09589-8
M. Green, Jan G. Michel
{"title":"What Might Machines Mean?","authors":"M. Green, Jan G. Michel","doi":"10.1007/s11023-022-09589-8","DOIUrl":"https://doi.org/10.1007/s11023-022-09589-8","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 1","pages":"323 - 338"},"PeriodicalIF":7.4,"publicationDate":"2022-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41941927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-21 | DOI: 10.1007/s11023-022-09591-0
Stefan Buijsman, Herman Veluwenkamp
{"title":"Spotting When Algorithms Are Wrong","authors":"Stefan Buijsman, Herman Veluwenkamp","doi":"10.1007/s11023-022-09591-0","DOIUrl":"https://doi.org/10.1007/s11023-022-09591-0","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"1 1","pages":"1-22"},"PeriodicalIF":7.4,"publicationDate":"2022-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43000962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-01 | Epub Date: 2021-11-05 | DOI: 10.1007/s11023-021-09577-4
Jakob Mökander, Maria Axente, Federico Casolari, Luciano Floridi
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit not in those words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from the existing literature on AI auditing, we help providers of AI systems understand how they can demonstrate adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to further refine the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
{"title":"Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation.","authors":"Jakob Mökander, Maria Axente, Federico Casolari, Luciano Floridi","doi":"10.1007/s11023-021-09577-4","DOIUrl":"https://doi.org/10.1007/s11023-021-09577-4","url":null,"abstract":"<p><p>The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the <i>conformity assessments</i> that providers of high-risk AI systems are expected to conduct, and the <i>post-market monitoring plans</i> that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 2","pages":"241-268"},"PeriodicalIF":7.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8569069/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39604812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-01 | Epub Date: 2021-09-23 | DOI: 10.1007/s11023-021-09574-7
Abel Wajnerman Paz
It has been argued that neural data (ND) are an especially sensitive kind of personal information that could be used to undermine the control we should have over access to our mental states (i.e. our mental privacy), and therefore need stronger legal protection than other kinds of personal data. The Morningside Group, a global consortium of interdisciplinary experts advocating for the ethical use of neurotechnology, suggests achieving this by legally treating ND as a body organ (i.e. protecting them through bodily integrity). Although the proposal is currently shaping ND-related policies (most notably, a Neuroprotection Bill of Law being discussed by the Chilean Senate), it is not clear what its conceptual and legal basis is. Legally treating something as something else requires some kind of analogical reasoning, which is not provided by the authors of the proposal. In this paper, I try to fill this gap by addressing ontological issues related to neurocognitive processes. The substantial differences between ND and body organs or organic tissue cast doubt on the idea that the former should be covered by bodily integrity. Crucially, ND are not constituted by organic material. Nevertheless, I argue that the ND of a subject s are analogous to neurocognitive properties of her brain. I claim that (i) s' ND are a 'medium-independent' property that can be characterized as natural semantic personal information about her brain and that (ii) s' brain not only instantiates this property but also has an exclusive ontological relationship with it: this information constitutes a domain that is unique to her neurocognitive architecture.
{"title":"Is Your Neural Data Part of Your Mind? Exploring the Conceptual Basis of Mental Privacy.","authors":"Abel Wajnerman Paz","doi":"10.1007/s11023-021-09574-7","DOIUrl":"https://doi.org/10.1007/s11023-021-09574-7","url":null,"abstract":"<p><p>It has been argued that neural data (ND) are an especially sensitive kind of personal information that could be used to undermine the control we should have over access to our mental states (i.e. our mental privacy), and therefore need a stronger legal protection than other kinds of personal data. The Morningside Group, a global consortium of interdisciplinary experts advocating for the ethical use of neurotechnology, suggests achieving this by treating legally ND as a body organ (i.e. protecting them through bodily integrity). Although the proposal is currently shaping ND-related policies (most notably, a Neuroprotection Bill of Law being discussed by the Chilean Senate), it is not clear what its conceptual and legal basis is. Treating legally something as something else requires some kind of analogical reasoning, which is not provided by the authors of the proposal. In this paper, I will try to fill this gap by addressing ontological issues related to neurocognitive processes. The substantial differences between ND and body organs or organic tissue cast doubt on the idea that the former should be covered by bodily integrity. Crucially, ND are not constituted by organic material. Nevertheless, I argue that the ND of a subject <i>s</i> are analogous to neurocognitive properties of her brain. I claim that (i) <i>s</i>' ND are a 'medium independent' property that can be characterized as natural semantic personal information about her brain and that (ii) <i>s</i>' brain not only instantiates this property but also has an exclusive ontological relationship with it: This information constitutes a domain that is unique to her neurocognitive architecture.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 2","pages":"395-415"},"PeriodicalIF":7.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8460199/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39467273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-01 | Epub Date: 2022-08-25 | DOI: 10.1007/s11023-022-09610-0
Lidia Flores, Sean D Young
The COVID-19 pandemic and its related policies (e.g., stay-at-home and social-distancing orders) have increased people's use of digital technology, such as social media. Researchers have, in turn, utilized artificial intelligence to analyze social media data for public health surveillance. For example, through machine learning and natural language processing, they have monitored social media data to examine public knowledge and behavior. This paper explores the ethical considerations of using artificial intelligence to monitor social media to understand the public's perspectives and behaviors surrounding COVID-19, including potential risks and benefits of an AI-driven approach. Importantly, investigators and ethics committees have a role in ensuring that researchers adhere to ethical principles of respect for persons, beneficence, and justice in a way that moves science forward while ensuring public safety and confidence in the process.
{"title":"Ethical Considerations in the Application of Artificial Intelligence to Monitor Social Media for COVID-19 Data.","authors":"Lidia Flores, Sean D Young","doi":"10.1007/s11023-022-09610-0","DOIUrl":"https://doi.org/10.1007/s11023-022-09610-0","url":null,"abstract":"<p><p>The COVID-19 pandemic and its related policies (e.g., stay at home and social distancing orders) have increased people's use of digital technology, such as social media. Researchers have, in turn, utilized artificial intelligence to analyze social media data for public health surveillance. For example, through machine learning and natural language processing, they have monitored social media data to examine public knowledge and behavior. This paper explores the ethical considerations of using artificial intelligence to monitor social media to understand the public's perspectives and behaviors surrounding COVID-19, including potential risks and benefits of an AI-driven approach. Importantly, investigators and ethics committees have a role in ensuring that researchers adhere to ethical principles of respect for persons, beneficence, and justice in a way that moves science forward while ensuring public safety and confidence in the process.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 4","pages":"759-768"},"PeriodicalIF":7.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9406274/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40330436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}