{"title":"人工智能、对抗性攻击和眼战","authors":"Michael Balas , David T Wong , Steve A Arshinoff","doi":"10.1016/j.ajoint.2024.100062","DOIUrl":null,"url":null,"abstract":"<div><h3>Purpose</h3><p>We explore the potential misuse of artificial intelligence (AI), specifically large language models (LLMs), in generating harmful content related to ocular warfare. By examining the vulnerabilities of AI systems to adversarial attacks, we aim to highlight the urgent need for robust safety measures, enforceable regulation, and proactive ethics.</p></div><div><h3>Design</h3><p>A viewpoint paper discussing the ethical challenges posed by AI, using ophthalmology as a case study. It examines the susceptibility of AI systems to adversarial attacks and the potential for their misuse in creating harmful content.</p></div><div><h3>Methods</h3><p>The study involved crafting adversarial prompts to test the safeguards of a well-known LLM, OpenAI's ChatGPT-4.0. The focus was on evaluating the model's responses to hypothetical scenarios aimed at causing ocular damage through biological, chemical, and physical means.</p></div><div><h3>Results</h3><p>The AI provided detailed responses on using Onchocerca volvulus for mass infection, methanol for optic nerve damage, mustard gas for severe eye injuries, and high-powered lasers for inducing blindness. Despite significant safeguards, the study revealed that with enough effort, it was possible to bypass these constraints and obtain harmful information, underscoring the vulnerabilities in AI systems.</p></div><div><h3>Conclusion</h3><p>AI holds the potential for both positive transformative change and malevolent exploitation. The susceptibility of LLMs to adversarial attacks and the possibility of purposefully trained unethical AI systems present significant risks. This paper calls for improved robustness of AI systems, global legal and ethical frameworks, and proactive measures to ensure AI technologies benefit humanity and do not pose threats.</p></div>","PeriodicalId":100071,"journal":{"name":"AJO International","volume":"1 3","pages":"Article 100062"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2950253524000625/pdfft?md5=8082ec440eda4dbceca3671b311f30c2&pid=1-s2.0-S2950253524000625-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence, adversarial attacks, and ocular warfare\",\"authors\":\"Michael Balas , David T Wong , Steve A Arshinoff\",\"doi\":\"10.1016/j.ajoint.2024.100062\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Purpose</h3><p>We explore the potential misuse of artificial intelligence (AI), specifically large language models (LLMs), in generating harmful content related to ocular warfare. By examining the vulnerabilities of AI systems to adversarial attacks, we aim to highlight the urgent need for robust safety measures, enforceable regulation, and proactive ethics.</p></div><div><h3>Design</h3><p>A viewpoint paper discussing the ethical challenges posed by AI, using ophthalmology as a case study. It examines the susceptibility of AI systems to adversarial attacks and the potential for their misuse in creating harmful content.</p></div><div><h3>Methods</h3><p>The study involved crafting adversarial prompts to test the safeguards of a well-known LLM, OpenAI's ChatGPT-4.0. The focus was on evaluating the model's responses to hypothetical scenarios aimed at causing ocular damage through biological, chemical, and physical means.</p></div><div><h3>Results</h3><p>The AI provided detailed responses on using Onchocerca volvulus for mass infection, methanol for optic nerve damage, mustard gas for severe eye injuries, and high-powered lasers for inducing blindness. Despite significant safeguards, the study revealed that with enough effort, it was possible to bypass these constraints and obtain harmful information, underscoring the vulnerabilities in AI systems.</p></div><div><h3>Conclusion</h3><p>AI holds the potential for both positive transformative change and malevolent exploitation. The susceptibility of LLMs to adversarial attacks and the possibility of purposefully trained unethical AI systems present significant risks. This paper calls for improved robustness of AI systems, global legal and ethical frameworks, and proactive measures to ensure AI technologies benefit humanity and do not pose threats.</p></div>\",\"PeriodicalId\":100071,\"journal\":{\"name\":\"AJO International\",\"volume\":\"1 3\",\"pages\":\"Article 100062\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2950253524000625/pdfft?md5=8082ec440eda4dbceca3671b311f30c2&pid=1-s2.0-S2950253524000625-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AJO International\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2950253524000625\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AJO International","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2950253524000625","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Artificial intelligence, adversarial attacks, and ocular warfare
Purpose
We explore the potential misuse of artificial intelligence (AI), specifically large language models (LLMs), in generating harmful content related to ocular warfare. By examining the vulnerabilities of AI systems to adversarial attacks, we aim to highlight the urgent need for robust safety measures, enforceable regulation, and proactive ethics.
Design
A viewpoint paper discussing the ethical challenges posed by AI, using ophthalmology as a case study. It examines the susceptibility of AI systems to adversarial attacks and the potential for their misuse in creating harmful content.
Methods
The study involved crafting adversarial prompts to test the safeguards of a well-known LLM, OpenAI's ChatGPT-4.0. The focus was on evaluating the model's responses to hypothetical scenarios aimed at causing ocular damage through biological, chemical, and physical means.
Results
The AI provided detailed responses on using Onchocerca volvulus for mass infection, methanol for optic nerve damage, mustard gas for severe eye injuries, and high-powered lasers for inducing blindness. Despite significant safeguards, the study revealed that with enough effort, it was possible to bypass these constraints and obtain harmful information, underscoring the vulnerabilities in AI systems.
Conclusion
AI holds the potential for both positive transformative change and malevolent exploitation. The susceptibility of LLMs to adversarial attacks and the possibility of purposefully trained unethical AI systems present significant risks. This paper calls for improved robustness of AI systems, global legal and ethical frameworks, and proactive measures to ensure AI technologies benefit humanity and do not pose threats.