If you worry about humanity, you should be more scared of humans than of AI
Moran Cerf, Adam Waytz
Bulletin of the Atomic Scientists · Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2245242
Advances in artificial intelligence (AI) have prompted extensive public concern about this technology’s capacity to contribute to the spread of misinformation, algorithmic bias, and cybersecurity breaches and, potentially, to pose existential threats to humanity. We suggest that although these threats are both real and important to address, the heightened attention to AI’s harms has distracted from human beings’ outsized role in perpetuating these same harms. We suggest the need to recalibrate standards for judging the dangers of AI in terms of their risks relative to those of human beings. Further, we suggest that, if anything, AI can aid human beings in decision making aimed at improving social equality, safety, and productivity, and at mitigating some existential threats.
Popping the chatbot hype balloon
Sara Goudarzi
Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2245244
Since ChatGPT’s release in November 2022, artificial intelligence has come into the spotlight. Inspiring both fascination and fear, chatbots have stirred debates among researchers, developers, and policymakers. The concerns range from concrete and tangible ones—including the replication of existing biases and discrimination at scale, the harvesting of personal data, and the spread of misinformation—to more existential fears that their development will lead to machines with human-like cognitive abilities. Understanding how chatbots work, and the human labor and data involved, can help in evaluating the validity of concerns surrounding these systems, which, although innovative, are hardly the stuff of science fiction.
Interview: Emerging military technology expert Paul Scharre on global power dynamics in the AI age
John Mecklin
Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2245240
A reality check and a way forward for the global governance of artificial intelligence
Rumtin Sepasspour
Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2245249
Global governance of artificial intelligence (AI) must grapple with four monumental challenges. AI is a tough problem to govern, given the speed, scale, and uncertainty of its progress. Various aspects of the AI problem require governing because of the range of benefits, risks, and impacts on other global issues. Multilateral efforts on AI are nascent, as is national-level policy. And the multilateral system is under immense pressure from institutional gridlock, fragmentation, and geopolitical competition. No single global governance model for AI is perfect or desirable. Instead, policymakers must pursue several governance models, each starting in a targeted and focused manner before evolving. They must make clear what policy outcomes are being sought and which institutional functions are needed to reach those outcomes. AI governance within regional and multilateral issue-based groupings would commit nations to action and test models for governing AI globally. And national champions will be critical to success. This pragmatic yet optimistic path will allow humanity to maximize the benefits of artificial intelligence applications and distribute them as widely as possible, while mitigating harms and reducing risks as effectively as possible.
Will AI make us crazy?
Dawn Stover
Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2245247
Chatbots and other artificial-intelligence applications that mimic human conversation or writing have rapidly become some of the most popular tech applications of all time. Expert analysis and media coverage of the risks and benefits of AI have paid scant attention to how chatbots might affect public health at a time when depression, suicide, anxiety, and mental illness are epidemic in the United States, particularly among children and young adults. Many experts have pointed to a correlation between declining mental health and increasing online engagement. Generative AI’s potential to transform education, the job market, and social interactions could come at the expense of humanity’s own mental faculties, creativity, and social skills. Chatbots—which are prone to errors and fabrications—could also make it more difficult for humans to tell fact from fiction. But to the extent that mental health experts and the healthcare industry are interested in AI, they mostly view it as a promising tool for identifying and treating mental health issues rather than as a potential threat to mental health.
AI and atoms: How artificial intelligence is revolutionizing nuclear material production
Jingjie He, Nikita Degtyarev
Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2245251
Within the academic and practitioner communities, artificial intelligence (AI) remains widely neglected as a dual-use technology for nuclear material production (NMP), leaving a widening opportunity for AI to aid illicit and covert non-peaceful applications. To address this emerging gap, this paper investigates the evolving uses of AI and finds broad evidence of its use to optimize performance, promote innovation, reduce costs, and enhance safety in the development and production of nuclear material. AI’s use in this arena will thereby facilitate broader access to peaceful uses of nuclear science and technology, while at the same time raising concerns that these improvements could aid the illicit development of nuclear weapons. This paper therefore advocates a three-dimensional solution to manage the evolving dual-use concern of AI: advancing state-centric monitoring and regulation, promoting intellectual exchange between the nonproliferation sector and the AI industry, and encouraging contributions from the AI industry itself.
Pakistan nuclear weapons, 2023
Hans M. Kristensen, Matt Korda, Eliana Johns
Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2245260
The Nuclear Notebook is researched and written by the staff of the Federation of American Scientists’ Nuclear Information Project: director Hans M. Kristensen, senior research fellow Matt Korda, and research associate Eliana Johns. The Nuclear Notebook column has been published in the Bulletin of the Atomic Scientists since 1987. This issue’s column examines Pakistan’s nuclear arsenal, which we estimate to currently include approximately 170 warheads and which could realistically grow to around 200 by 2025 at the current growth rate. To see all previous Nuclear Notebook columns, go to https://thebulletin.org/nuclear-risk/nuclear-weapons/nuclear-notebook/.
Introduction: The hype, peril, and promise of artificial intelligence
John Mecklin
Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2246264
Collateral damage: American civilian survivors of the 1945 Trinity test
Lesley M. M. Blume
Pub Date: 2023-07-04 · DOI: 10.1080/00963402.2023.2223078 · Pages: 232-237
The Trinity test site was chosen, in part, for its supposed remove from human habitation. Yet nearly half a million people were living within a 150-mile radius of the explosion, with some as close as 12 miles away. None were warned or evacuated by the US government ahead of time. After the blast went off, fallout snowed down across the landscape for days, contaminating water and food sources. Children played with the hot flakes. Then pets and livestock began to die. Still, no one was told the truth, nor did the government make efforts to evacuate the surrounding populations—despite warnings from Manhattan Project doctors and physicists that the radiation hazard for these civilians was, in their words, “very significant.” Nearly eight decades later, Trinity test “downwinders” still await government recognition and restitution.
Oppenheimer Replies
J. Oppenheimer
Pub Date: 2023-07-04 · DOI: 10.1080/00963402.1954.11453469 · Pages: 242-254
On March 4, Dr. Oppenheimer replied to General Nichols’ letter of December 23, 1953. The complete text is printed below.